DWP CAUGHT Deliberately Setting Benefit Claimants Up For A Fall?


Right, so this ought to be a massive scandal, but once again it has largely been played down in the UK media, because who cares about benefit claimants, right? They're there to be sneered at, looked down upon, accused of being feckless lead-swingers feigning long-term illness or disability, aren't they? That's been the craic for the last 14 years of Tory rule, when the situation was sold to the populace as shirkers versus workers. Now, under Starmer's Labour government, the approach is being described as 'hurt first, fix later', because who cares if people suffer before mistakes get corrected? Especially since, unless people are talking about said mistakes, they aren't likely to get fixed at all.
The situation involves the growing use of Artificial Intelligence in government department decision making, specifically its use by the DWP, because, shock horror, colour me surprised, the AI algorithm has in-built biases against certain demographics. No safeguarding and fairness analysis was undertaken before the system was rolled out, and now the government have been caught with their pants down over it.
Right, so this is a massive story, a huge government scandal, once again targeting those with the least, those who need their incomes supplemented, whether they are in work or not. But funnily enough, guess who has turned out to be most at the mercy of a computer program with an in-built bias over decisions affecting them?
Well, first let's go back to August, because that is when this problem was first flagged by the Public Law Project, following the publication of the Department for Work and Pensions' annual report and accounts for 2023/24. Obviously most of that period was under the Tories, but the report was published on 22nd July, so it had been signed off for publication by the new Labour government, who had by that point been in power for about a fortnight. Here is some of what the Public Law Project had to say about this back in August:
‘DWP’s annual report and accounts for 2023-24, published on 22 July 2024, has provided more insight into DWP’s desire to increase reliance on AI and automation, including for detecting fraud and error in social security claims and payments. The report speaks a lot in general terms about the DWP’s use of data analytics and machine learning, including, in equally general terms, of the safeguards in place.
However, the report provides no meaningful information about what technologies DWP is piloting or actually using nor how they are, or will be, deployed. It similarly provides very little information about DWP’s own assessment of the potentially discriminatory impact that the technologies might have on protected groups or vulnerable claimants.
This weak compliance with the PAC’s recommendation to report to Parliament on an annual basis on the impact of data analytics on protected grounds and vulnerable claimants is very disappointing, especially since:
• There is also scant detail on the safeguards that DWP is purportedly adopting to mitigate harm; and
• The DWP has provided no real detail to allow meaningful scrutiny of the impact of the tools it is adopting. There is a real risk that (as identified by the National Audit Office) technologies of this nature may lead to discriminatory outcomes by unfairly targeting particular claimants, including because of their protected characteristics.
Although the DWP’s report does refer to a fairness analysis it has carried out in relation to the Universal Credit (UC) Advances model, the results have not been published. The report simply states that the results ‘do not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers’.
The DWP has not provided any details of its assessment due to concerns that publication would allegedly ‘allow fraudsters to understand how the model operates’.’
So the DWP wants to use AI a lot more, but has provided very little detail, especially when it comes to safeguarding and making sure the system is fair. It mentions a fairness analysis but doesn't publish the details, and now it's being revealed that this system is anything but fair. In fact, it's being accused of being downright discriminatory, which would be yet one more thing that hasn't changed from the Tories under Starmer's Labour, wouldn't it?
A Freedom of Information request by the Public Law Project for access to internal DWP documents relating to the AI being used has demonstrated in-built bias based on race, age, disability and marital status. Your knee-jerk reaction might be that we're talking about single people from abroad and those with disabilities being unfairly found against by a computer program designed to single them out, in whole or in part. But that is conjecture, because those exact details were redacted. Nothing quite says 'nothing to see here, guv' like hiding certain information, but the excuse is that if it were revealed, those who would game the system would know how. It's not other people gaming the system that should concern us most of all; it is those in government gaming the system against certain demographics. The benefit bill is to be reduced, and this may, in part at least, be how they're doing it: by again seeking to throw people off the support they should be entitled to, without doing the proper checks. And when the program, which in itself cannot lie, is built with a bias, this isn't an accident, this is deliberate.
The excuse for using this system is to crack down on some £8bn of benefit fraud each year, but if you are deliberately profiling people based on race, then it is racist; on disability, it is ableist; and it is ageist too, and presumably has an issue with people who are unmarried. The DWP claims that human checks would mitigate any such mistakes, but how about you dump the AI until it's fixed and actually impartial, and then you wouldn't need them? Besides, trust in the DWP is rock bottom. Under the Tories there were pay incentives for getting people off claims, and to all intents and purposes it looks like that culture is continuing under Starmer's Labour. These are protected characteristics we're talking about. I can't wait to see a day in court over this, and absolutely that should be the case, given an attitude that has been described as 'hurt first, fix later'.
The Public Law Project has accused Starmer's government of failing to assess whether its automated processes risk unfairly targeting marginalised groups, and says it should stop rolling out tools when it doesn't understand the damage they can cause to people's lives. I'd raise you somewhat there and say what is obvious when we have, in reality, just swapped one bunch of Tories for another: they simply don't care. The DWP has run comparisons between the groups the algorithm targets, such as disabled people, and those it doesn't, such as non-disabled people, but the findings have been redacted and not published, hiding the information we need to know.
You know, Labour have only been in power for five minutes, really, and already AI has been rolled out with a chronic lack of transparency and a lack of rigorous testing, because there hasn't been time for it. The system continues to shift the burden of proving mistakes onto claimants, who may not be able to provide such information, which is chronically unfair when they aren't to blame for the wrong decisions of a computer program, and they certainly don't have the same resources as the DWP to challenge them.
As for the fairness analyses, according to the Public Law Project these are completely absent when it comes to factors such as race, sex, religion, sexual orientation and gender reassignment, and that lack of basic due diligence is how artificial intelligence systems end up with in-built biases. Just a total rush job, bereft of testing and checking, determined to get it rolled out and deliver on Labour's promises to cut the welfare bill, regardless of the harm it may cause to individuals in the meantime. Callous and cruel, all in the name of meeting their own targets.
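To give a sense of what even a basic fairness analysis of the kind the Public Law Project says is missing might look like, here is a minimal sketch of a disparate-impact check in Python. The data, group labels and thresholds are entirely hypothetical, invented for illustration; this is not the DWP's actual model or methodology, simply the sort of per-group comparison a due-diligence exercise would include.

```python
# Illustrative disparate-impact check on a fraud-flagging model's outputs.
# All data below is made up; 1 = claim flagged for fraud review, 0 = not flagged.

def flag_rate(decisions):
    """Fraction of claims in a group that the model flagged for review."""
    return sum(decisions) / len(decisions)

def disparate_impact(rates, reference_group):
    """Ratio of each group's flag rate to a reference group's rate.
    Ratios well above 1.0 indicate a group is flagged disproportionately often,
    which a fairness analysis would have to investigate and explain."""
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

# Hypothetical model outputs for two demographic groups of ten claims each
flags_by_group = {
    "group_a": [0, 1, 0, 0, 1, 0, 0, 0, 0, 0],  # 2 of 10 flagged
    "group_b": [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # 6 of 10 flagged
}

rates = {g: flag_rate(d) for g, d in flags_by_group.items()}
ratios = disparate_impact(rates, "group_a")
print(ratios)  # group_b is flagged roughly three times as often as group_a
```

In practice such a check would run over every protected characteristic and publish the results, which is exactly the transparency the Public Law Project says is absent here.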
But this is a scandal that could be even bigger yet, many times bigger, because if you thought AI being rolled out in the Department for Work and Pensions is the only case of AI being used in government right now, you are so wrong.
Independent reports suggest that across all public authorities in the UK, not just central government, though the governmental hierarchy would obviously oversee more localised uses of AI, there are some 55 separate Artificial Intelligence systems in use today. These are being used to oversee decisions on housing, welfare, policing, healthcare and more. But the scandal here? The government's official register of AI systems mentions only nine, so the lack of accountability around AI use at local or national government level is off the scale, and massive questions need asking, especially since public bodies are still awarding contracts for AI systems with minimal oversight and no way of vetting the AI first to ensure it works exactly as it should. Last month a £20m contract for a facial recognition system was awarded by a police procurement body.
It has also come to light that no Whitehall department has registered any of the AI programs it is using, despite an apparent mandate that this be done, so who is in charge here? Peter Kyle is the Minister for Science & Technology; perhaps he needs to pull his finger out. As for the DWP, the dreadful Blairite Liz Kendall wants hauling over the coals as well, as Labour seems set on damaging the lives of those on the lowest earnings, those who already get discriminated against based on race or disability. No change for those who needed it most with a change of government, and no surprise to those of us who knew Starmer represented nothing of the sort anyway.
He has of course just the other week referred to that welfare bill as a blight on society, the implication being that anyone in receipt of welfare is a blight on society. Instead of dealing with the root cause of that high bill, which includes, in no small part, wages that are far too low to live on and so end up being subsidised, Starmer attacks the low paid and not the low paying. He does seem to love to be hated; he's so good at giving people reasons to, isn't he? Get all the details of his shocking admission and what triggered it in this video recommendation here as your suggested next watch, and please do support the channel with a like, share and subscribe if you haven't already, that is very much appreciated, and I'll hopefully catch you on the next vid. Cheers folks.
