AI was supposed to make the UK benefits system more efficient. Instead, it has brought bias and hunger.
A freedom of information request has revealed that an AI system used by the UK government to assess benefits cases is apparently getting it wrong by a "statistically significant" amount. The admission, made to journalists at the Guardian, followed a February 2024 fairness analysis of universal credit claimants. It confirmed that the very tools intended to ensure equity and efficiency may in fact be discriminating against marginalized communities.
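The reporting doesn't spell out the statistics behind "statistically significant," but the kind of check a fairness analysis typically runs is straightforward to illustrate. The sketch below tests whether two groups are flagged for review at significantly different rates; the counts and group labels are invented for illustration and are not the DWP's data.

```python
# Minimal sketch of the kind of disparity check a fairness analysis might run.
# The counts below are invented for illustration; they are NOT the DWP's data.
from scipy.stats import chi2_contingency

# Rows: demographic group (hypothetical); columns: [flagged for review, not flagged]
counts = [
    [120, 880],  # group A
    [190, 810],  # group B
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Flag rates differ by a statistically significant amount.")
```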
Local governments are using AI without clear rules or policies, and the public has no idea
In 2017, the city of Rotterdam in the Netherlands deployed an artificial intelligence (AI) system to determine how likely welfare recipients were to commit fraud. After analyzing the data, the system developed biases: it flagged as "high risk" people who were female, young, had kids, or had limited proficiency in the Dutch language....
Umm, here's one reason; there are many others!
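The Rotterdam reporting describes a model that learned to associate demographic traits with risk, though none of the city's actual code or data appears here. The mechanism is easy to reproduce on synthetic data: if historical fraud labels reflect who was investigated rather than who actually committed fraud, a model trained on them will flag the over-investigated group more often. A minimal sketch, with invented features and labels:

```python
# Sketch: how a fraud-risk model can inherit bias from skewed training labels.
# All data here is synthetic; the features and groups are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (e.g., a language-proficiency group).
group = rng.integers(0, 2, n)              # 0 = majority, 1 = minority
income = rng.normal(30 - 5 * group, 8, n)  # a proxy feature correlated with group

# Historical "fraud" labels skewed by past enforcement, not true behaviour:
# the minority group was investigated (and hence labelled) more often.
label = (rng.random(n) < 0.05 + 0.10 * group).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, label)

risk = model.predict_proba(X)[:, 1]
flagged = risk > np.quantile(risk, 0.9)  # flag the top 10% as "high risk"
for g in (0, 1):
    print(f"group {g}: flagged rate = {flagged[group == g].mean():.1%}")
```

Running this, nearly all of the "high risk" flags land on the minority group, even though the disparity came entirely from who was labelled, not from any real difference in behaviour.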
New technique reduces bias in AI models while preserving or improving accuracy
Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on.
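That failure mode is easy to demonstrate. In the sketch below (synthetic data, invented group structure), a classifier is trained on a dataset where one group vastly outnumbers another and the two groups follow different feature-outcome relationships; accuracy collapses for the underrepresented group.

```python
# Sketch: a model trained on data where one group is underrepresented
# tends to be less accurate for that group. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Each group has a different relationship between features and outcome.
    x = rng.normal(0, 1, (n, 2))
    y = ((x[:, 0] + shift * x[:, 1]) > 0).astype(int)
    return x, y

# Group A dominates training; group B is underrepresented.
xa, ya = make_group(5000, shift=+1.0)
xb, yb = make_group(100, shift=-1.0)

model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
for name, shift in [("A (well represented)", +1.0), ("B (underrepresented)", -1.0)]:
    xt, yt = make_group(2000, shift)
    print(f"group {name}: accuracy = {model.score(xt, yt):.1%}")
```

Because the model fits the majority group's decision boundary, it scores well on group A and near chance on group B, which is the disparity that debiasing techniques like the one above aim to close without sacrificing overall accuracy.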