A COMPELLING feature of science is its ability to predict the future: theories ought to generate testable predictions that can be borne out.
To claim that an analysis is “scientific” — as historical materialism did in its original form — is to say that data and theory from the past are being used to make claims about the future.
It is exactly this same predictive capacity that drives the contemporary fascination for data, algorithms and AI.
Unfortunately, algorithms are most successful when asked to reproduce the well-documented past. A striking example was the algorithm used to predict and assign A-level grades to students who hadn’t sat exams during the pandemic in 2020.
The algorithm successfully reproduced the annual inequality between rich and poor schools, and rightly caused uproar.
As well as the ability to predict the future, science was important to leftists and revolutionaries in the 19th and 20th centuries for its association with technology and progress.
Back then, the left imagined the ways in which technology would enable a higher quality of life for all.
Soviet science is remembered for its spectacular investment in scientific progress and major successes, as well as the tragedy of Lysenkoism.
In contemporary Britain, the alignment of science and progress has largely been absorbed into the dreams of capitalists, leaving a gap in the imaginary futures of the left.
The most extreme recent changes in our lives have been produced by internet services and tech companies. As our personal investment in lifestyles enabled by technological progress deepens, so does an irreconcilable antipathy to the systems that currently enable them.
The challenge is to understand how we imagine a future that is both liberated and able to make use of science and technology.
Rethinking science from the left requires understanding its relationship to progress, its role in producing the future and how it might fit into a better world.
In the last few years, research institutes focused on the future have proliferated like mushrooms. The Future of Humanity Institute (2005) and the Global Priorities Institute (2018) at the University of Oxford, the Centre for the Study of Existential Risk (2012) and the Leverhulme Centre for the Future of Intelligence (2016) at the University of Cambridge, the Lifeboat Foundation (2009), the Global Catastrophic Risk Institute (2011), the Global Challenges Foundation (2012), the Future of Life Institute (2014), the Centre for Long-Term Risk (2016). Many of these are located in powerful institutions and backed by huge amounts of wealth.
What links many of them is their association not only with concerns for humanity, but also with the movement known as “effective altruism” (EA).
Over the last decade, EA has grown dramatically and gained significant influence, particularly in Britain, but also in the United States, through incredibly wealthy donors.
Unlike many movements with which it might easily be compared, EA is not traditionally religious, but instead claims to be based on rationalism.
It makes explicit claims to be based on science, data and moral philosophy. The movement originated around 2010 and focused on maximising the good an individual can do in their lifetime.
Early organisations arising from the movement focused on charitable giving and career planning.
Giving What We Can encourages people to pledge 10 per cent of their income to charities with the highest efficiency in terms of quality-adjusted life years saved per dollar.
80,000 Hours offers career guidance to help adherents understand what job they should do to maximise their utility — with recommendations such as becoming a hedge fund manager and donating money rather than becoming a doctor.
It was criticised for its strong focus on individual rather than collective action, taking the status quo as a fixed set of conditions to be optimised within.
Although initially motivated by a deeply critical approach to existing philanthropy, EA has gained hugely wealthy financial backers from the tech industry.
The movement has become more and more future-focused, concentrating attention on what it calls “long-termism,” a view of risks to humanity in the long term. In comparison with short-termist politics, which looks no further ahead than four years, long-termism sounds like an admirable alternative.
What’s surprising is that the long-termists are interested in the future thousands, even tens or hundreds of thousands, of years from now.
What’s even more surprising is that they believe that on these timescales, problems such as global hunger or global warming are short-term blips.
These research institutes are intently focused on risks posed specifically by future artificial intelligence, in a hypothetical future where computers become “more intelligent” than humans and cause the end of humanity itself.
If you are surprised by this, you are not alone. When the unmitigated disaster of anthropogenic climate change threatens to produce misery, starvation, war and ecosystem liquidation on an unimaginable scale, why would a moral philosophy movement choose to focus on computers going rogue?
The answer perhaps lies in the identities of the donors. Many of the major EA backers are tech billionaires: Peter Thiel (PayPal), Jed McCaleb (Mt. Gox bitcoin exchange), Elon Musk (Tesla, Twitter), Dustin Moskovitz (Facebook), Vitalik Buterin (Ethereum cryptocurrency), Jaan Tallinn (Skype).
All of these men have lived lives dominated by the wealth they have accumulated through algorithmic capitalism.
Now they are pouring that money back into their own obsessive concern, enabled by moral philosophers who have constructed the argument that doing so is morally essential.
The first element of this reasoning says that more good can be done by focusing on things that don’t currently have much attention.
The other part says that although many lives may be lost or immiserated through climate change, there exist a vast number of potential future people whose lives and happiness may be preserved, provided the given risk is not “existential” — that is, it does not kill every last human.
The view is obviously abhorrent, but it is easy to understand given the material concerns of the people who finance it.
Effective altruism is a value system based on philanthropy, which is a capitalist response to the misery induced by capitalism itself.
Charitable redistribution is a sticking-plaster: it doesn’t fix the problem. It is tediously unimaginative to assume that the best we can do is mitigation within current arbitrary constraints.
The tech billionaires are right that we should be concerned about their use of algorithms, and demand control ourselves. They are wrong that thinking more about them will save us.
If you would like to join us to discuss science, the left and the future, we will be hosting a series of three online discussions with excellent thinkers on science and society, hosted by the Marx Memorial Library.
The first is tonight at 7pm, sign up here: www.marx-memorial-library.org.uk/event/397.