Killing welfare algorithmically
BAPPA SINHA assesses the way AI is being used to undermine social security rights in India

SOME of the most prominent AI startups, tech companies, their executives, researchers and engineers would have us believe that artificial intelligence (AI) poses an existential risk to humanity and should be considered a societal risk on par with pandemics and nuclear wars.
Beneath the self-serving hype about rogue super-intelligent AI (current models are nowhere close to human-like intelligence), the AI systems deployed by governments pursuing aggressive neoliberal agendas and by monopoly corporate interests are harming society in far more mundane ways, often targeting the poor and ethnic and religious minorities.
Two bad AI welfare models