Does widespread and uncontrolled use of AI change our relationship with scientific meaning? Or with each other? ask ROX MIDDLETON, LIAM SHAW and MIRIAM GAUNTLETT
IN FEBRUARY 2026, a team of four theoretical physicists published a paper on the non-peer-reviewed repository site arXiv, with Kevin Weil as a co-author.
Weil is a product manager at OpenAI and was included as an author “on behalf of” ChatGPT 5.2, a paid-for generative artificial intelligence (AI) tool that the physicists used extensively in developing their discovery.
It has been claimed as a scientific first: the first time an AI tool has been listed "as an author." Except that it actually isn't listed as an author. arXiv, in common with peer-reviewed journals, does not accept authorship by a large language model, because a computer program can't take responsibility for the contents of a paper.
The authors have been public about their decision to credit the computer program with authorship. They feel that their paper’s key intellectual breakthrough was made by ChatGPT, as they fed it questions and discussed the research question with it. The mathematical question they picked involved a feature of gluons, a type of subatomic particle.
It was a particle physics problem expressed mathematically, which those outside the field generally don’t have the skills or knowledge to assess. Certainly we, the Science and Society team, don’t!
The scientists say that they were surprised and impressed by the computer’s breakthrough, and then spent a week checking its solution and found it to be correct. For this we have to trust them. We can also apply some healthy scepticism.
OpenAI has two parts: a not-for-profit foundation, and a private company, which is currently the highest valued company in the world. As a product manager, Weil has been heavily promoting the AI tool “for science,” promising big breakthroughs.
As part of this promotion, he brought these physicists in with the specific aim of producing the first AI-co-authored paper.
If you haven’t used ChatGPT, and experienced the uncanny feeling of an interlocutor who can respond to your prompts with what feel like genuinely novel ideas, you might be sceptical. We have tried it, and can confirm the feeling is bizarre. That it can produce novel mathematical ideas doesn’t seem far-fetched.
We live in a world now saturated by generative AI. Even if we don’t use it directly, anything uploaded to the internet (such as the article you’re currently reading or Instagram and Facebook posts) can be used to train models such as ChatGPT.
When ChatGPT was first released in 2022, many felt that it changed everything. Four years later, everything is still changing fast. But what might be most uncanny is the way outputs from generative AI can be almost exactly the same as those created or written by humans.
Across scientific disciplines, a bundle of publications has shown that reviewers can't tell the difference between AI-generated abstracts (the preamble introducing the main conclusions of a scientific paper) and those written by humans. This isn't hugely surprising, as scientific abstracts must be among the most restricted and ritualised writing in human history.
It’s extraordinary to live (and to write) at a moment when waves of computer-generated creativity wash over us, often unseen. It has become difficult to know for sure whether any piece of writing is AI-assisted, and to what extent.
The technology is there at everyone’s fingertips and for most of us, whether we know how to use AI or not, it is now baked into our phones and the software we’ve habitually used for decades, such as Google search.
You'd expect us to point out that the important thing here is who controls this technology: which megacompanies own these tools and sell the sum content of humanity's digitised cultural and scientific commons back to us, at a price they hope soon to be free to name. That is of course true. In an ideal world, AI ownership would look very different.
It is also true that AI, generative or otherwise, is simply a tool.
However, this tech promotes “unintentional cognitive offloading.” This is what we experience when we no longer have to worry where we are on the map thanks to GPS, and we don’t have to hold even the simplest sum or spelling in our head.
How far are we ready to let it go this time?
It doesn’t have to be all or nothing; indeed, there isn’t much point in disavowing AI entirely. Instead, we need to work out how to use it. People have different opinions about what’s acceptable.
Generative AI can be used for tasks in the creative process ranging from initial ideas generation, to first drafts, to rephrasing and neatening a final draft of writing up so it flows better.
These are all important parts of scientific writing too, and AI tools are speeding up the workflow of many scientists.
In the co-authorship case, the scientists extended this to the research part too, using it to explore which maths technique in the great archive of historical maths techniques might be worth trying on their problem.
If this AI-aided exploration is part of a useful and meaningful creative process, why would we be anything less than celebratory that this new technology allows it to happen faster, better, at greater volume and perhaps even more imaginatively than before?
By using tools to avoid unpleasant and draining tasks, can we enhance what we are all capable of and the pleasure we can take in our work? One possibility is that AI tools democratise access to creativity, removing some of the limits on who can contribute culturally and how.
But there is more to communication than simply writing and constructing sentences that flow coherently. Take this article as an example. The writing of it is a communication from us, real people writing, to you, a real person reading, of ideas and thoughts that we want to share. The important thing, we hope, is the content of the message.
Another example is scientific research. When a scientific problem is selected and solved, linking together theory and practical experiment, and then shared with others, it is another attempt at communication. Via scientific publications or presentations, researchers communicate their new understanding of the world with other people.
The reason that a computer tool cannot be an author is this: its personhood is missing. It is not a person trying to connect with other people through communication. It cannot take responsibility for an intent to construct new understandings of the physical world and then share them.
This is why it is important to take care in how much we choose to cognitively offload to AI. The line between what is helpful in a task and what might end up reducing the meaning of what we are attempting to do is a thin one.
It’s also crucial to bear in mind that generative AI is a technology most of us don’t understand, with huge environmental costs, and one that is changing faster than we can imagine under the less-than-careful eye of some of the most morally compromised (and richest) people on the planet.
How much each of us allows our own cognitive load to be lifted by these tools is perhaps between each of us and our gods. Perhaps the degree of AI use will simply be a marker of personal style for everyone who carries out scientific research through the communication of ideas.
Perhaps we will come to believe this is a limited and parochial definition of science. For now, we hope these computational tools could help us return to the fundamentals of scientific exploration and expression. Avoiding giving up our own critical thinking in the process will mean engaging with AI with our whole human intent and capacity, rather than using it shamefully and furtively to cut corners in the moments when we all want to think less, feel less, mean less.
This piece, in common with all Science and Society pieces (as of February 24 2026), was written without the use of generative AI tools, other than those now baked into, eg, Google search.


