Some of the most prominent AI startups, tech companies, and their executives, researchers, and engineers would have us believe that artificial intelligence (AI) poses an existential risk to humanity and should be considered a societal risk on par with pandemics and nuclear war.
Beneath the self-serving hype about rogue super-intelligent AI (current models are nowhere close to human-like intelligence), the AI systems pushed by governments pursuing aggressive neoliberal agendas and by monopoly corporate interests are harming society in far more mundane ways, often targeting the poor and ethnic and religious minorities.
Two bad AI welfare models
Two recent news stories from India, published in Al Jazeera, vividly illustrate the above. The first story is about an AI-based system called “Samagra Vedika,” used in the state of Telangana ostensibly to sift out ineligible welfare recipients, which has erroneously disqualified thousands of eligible individuals.
Initially introduced by the previous Bharat Rashtra Samithi (BRS) government, this system claims to consolidate citizens’ data from several government databases to construct comprehensive digital profiles, dubbed “360-degree views.”
Government officials were mandated to consult the algorithm-based system to determine the eligibility of welfare applicants.
Originally deployed by the state police to identify criminals, the system is now widely used by the state government to verify the eligibility of welfare claimants and to detect “fraud” across multiple welfare schemes, including the food security initiative. Between 2014 and 2019, Telangana cancelled more than 1.86 million existing food security cards and rejected 142,086 fresh applications without any notice.
The government initially attributed these actions to fraudulent subsidy claims, asserting significant cost savings from the exclusion of ineligible beneficiaries. In reality, several thousand of these exclusions were wrongful, owing to faulty data and bad algorithmic decisions by Samagra Vedika. Once excluded, beneficiaries bear the onus of proving that they are entitled to subsidised food and other welfare programmes. Even when they do so, officials often defer to the algorithm’s decision. This has caused untold hardship and denied basic sustenance to the poorest and most vulnerable sections of society.
The same authors documented the travails of old-age pension recipients in another state, Haryana. In 2020, the Haryana government introduced an algorithmic system – the Family Identity Data Repository or the Parivar Pehchan Patra (PPP) database – to determine the eligibility of welfare claimants.
The PPP is a unique eight-digit ID issued to each family in the state; the underlying database holds details of births, deaths, marriages, employment, property, and income tax of family members, among other data.
It maps every family’s demographic and socio-economic information by linking several government databases to check eligibility for welfare schemes. The state said that the PPP created “authentic, verified and reliable data of all families” and made it mandatory for accessing all welfare schemes.
But in practice, the PPP has been a disaster for many welfare recipients. Thousands of beneficiaries have been wrongfully declared dead, either because of incorrect data fed into the PPP database or because of wrong predictions made by the algorithm. According to government data presented in the state assembly last August, the government stopped the pensions of 277,115 elderly citizens and 52,479 widows over a span of three years because they were “dead.” Such anomalies were not restricted to old-age pensions.
Beneficiaries of disability and widow pensions and of other welfare schemes, such as subsidised food, have also been excluded because the PPP algorithm made wrong predictions about their income or employment, pushing them outside the eligibility criteria. When those wrongfully erased by the algorithm approached government officials to have the records corrected, they faced red tape. Many were shunted from one office to another and made to file endless applications to prove that they were, in fact, alive!
The ordeal faced by hundreds of thousands of citizens in getting their data corrected has made the PPP one of the Haryana government’s most controversial programmes in recent years. The opposition party has labelled it “Permanent Pareshani Patra” (“permanent trouble document”).
Nonetheless, the state government continues to defend and expand the programme, asserting that the “PPP was easing and improving the delivery of services to the right beneficiaries and preventing leakages through the use of artificial intelligence and machine learning. The interlinking of different databases was done to get an integrated database which was the ‘single source of truth’.”
Welfare denial agenda
These stories clearly illustrate the real agenda behind deploying such systems: cutting welfare programmes and culling eligible recipients, leaving the poorest and most vulnerable at the mercy of opaque systems, all while hiding behind the mask of “high technology to improve welfare delivery.”
We see the damaging, yet increasing, use of such technology-based solutions in welfare delivery all over the country, including in Ministry of Rural Development wage payments, food security schemes, public health systems, and a whole host of other welfare schemes.
There is, therefore, a pressing need to raise public awareness of the pernicious nature of such moves and the real agenda behind them, and to start mobilising people’s movements to demand the rollback of these inhumane systems.
Where progressive governments do come to power, they must be wary of embracing such solutions, promoted as they are by bureaucrats captivated by slick technological presentations and swayed by neoliberal ideologies.
This article appeared in People’s Democracy.