AI is developing fast, but regulators must be faster | Letters

The recent open letter regarding AI consciousness on which you report (AI systems could be ‘caused to suffer’ if consciousness achieved, says research, 3 February) highlights a genuine moral problem: if we create conscious AI (whether deliberately or inadvertently) then we would have a duty not to cause it to suffer. What the letter fails to do, however, is to capture what a big “if” this is. Some promising theories of consciousness do indeed open the door to AI consciousness. But other equally promising theories suggest that being conscious requires being an organism. Although we can look for indicators of consciousness in AI, it is very difficult – perhaps impossible – to know whether an AI is actually conscious or merely presenting the outward signs of consciousness.

Given how deep these problems run, the only reasonable stance to take on artificial consciousness is an agnostic one. Does that mean we can ignore the moral problem? Far from it. If there’s a genuine chance of developing conscious AI then we have to act responsibly. However, acting responsibly in such uncertain territory is easier said than done. The open letter recommends that “organisations should prioritise research on understanding and assessing AI consciousness”.

But existing methods for testing AI consciousness are highly disputed, so they can only deliver contentious results. Although the goal of avoiding artificial suffering is a noble one, it’s worth noting how casual we are about suffering in many organisms. A growing body of evidence suggests that prawns could be capable of suffering, yet the prawn industry kills around half a trillion prawns every year. Testing for consciousness in prawns is hard, but it’s nothing like as hard as testing for consciousness in AI. So while it’s right to take our possible duties to future AI seriously, we mustn’t lose sight of the duties we might already have to our biological cousins.

Dr Tom McClelland
Lecturer in philosophy of science, University of Cambridge

Regarding your editorial (The Guardian view on AI and copyright law: big tech must pay, 31 January), I agree that AI regulation needs a balance so that we all benefit. However, the focus is perhaps too much on the training of AI models and not enough on the processing of creative works by AI models. To use a metaphor – imagine I photocopied 100,000 books, read them, and could then string together plausible sentences on topics in the books. Clearly, I shouldn’t have photocopied them, but I can’t reproduce any content from any single book, as it’s too much to remember. At best, I can broadly mimic the style of some of the more prolific authors.

This is like AI training. I then use my newfound skill to take an article, paraphrase it, and present it as my own. What’s more, I find I can do this with pictures, too, as many of the books were illustrated. Give me a picture and I can create five more in a similar style, even though I’ve never seen a picture like this before. I can do this for every piece of creative work I come across, not just things I was trained on.

This is like processing by AI. The debate at the moment seems to be focusing wholly on training. This is understandable, as the difference between training and processing by a pre-trained model isn’t that obvious from a user perspective. While we need a fair economic model for training data – and I believe it’s morally correct that creators can choose whether their work is used in this way and be paid fairly – we need to focus much more on processing rather than training in order to protect creative industries.

Michael Webb
Director of AI, Jisc

We are writing this letter on behalf of a group of members of the UN high-level advisory body for AI.

The release of DeepSeek’s R1 model, a state-of-the-art AI system developed in China, highlights the urgent need for global AI governance. Even though DeepSeek is not an intelligence breakthrough, its efficiency highlights that cutting-edge AI is no longer confined to a few corporations. Its open-source nature, like Meta’s Llama and Mistral, raises complex questions: while transparency fosters innovation and oversight, it also enables AI-driven misinformation, cyber-attacks and deepfake propaganda. Existing governance mechanisms are inadequate. National policies, such as the EU AI Act or the UK’s AI regulation framework, vary widely, creating regulatory fragmentation.

Unilateral initiatives like next week’s Paris AI Action Summit may fail to provide comprehensive enforcement, leaving loopholes for misuse. A robust international framework is essential to ensure AI development aligns with global stability and ethical principles. The UN’s recent Governing AI for Humanity report underscores the dangers of an unregulated AI race – deepening inequalities, entrenching biases and enabling AI weaponisation. AI’s risks transcend borders; fragmented approaches only exacerbate vulnerabilities. We need binding international agreements that cover transparency, accountability, liability and enforcement.

AI’s trajectory must be guided by collective responsibility, not dictated by market forces or geopolitical competition. The financial world is already reacting to AI’s rapid evolution. Nvidia’s $600bn market loss after DeepSeek’s release signals growing uncertainty. However, history shows that efficiency drives demand, reinforcing the need for oversight. Without a global regulatory framework, AI’s evolution could be dominated by the fastest movers rather than the most responsible actors.

The time for decisive, coordinated global governance is now – before unchecked efficiency spirals into chaos. We believe that the UN remains the best hope for establishing a unified framework that ensures AI serves humanity, safeguards rights and prevents instability before unchecked progress leads to irreversible consequences.

Virginia Dignum
Wallenberg professor of responsible AI, Umeå University
Wendy Hall
Regius professor of computer science, University of Southampton