Former OpenAI safety researcher brands pace of AI development ‘terrifying’

A former safety researcher at OpenAI says he is “pretty terrified” about the pace of development in artificial intelligence, warning the industry is taking a “very risky gamble” on the technology.

Steven Adler expressed concerns about companies seeking to rapidly develop artificial general intelligence (AGI), a theoretical term referring to systems that match or exceed humans at any intellectual task.

Adler, who left OpenAI in November, said in a series of posts on X that he had had a “wild ride” at the US company and would miss “many parts of it”. However, he said the technology was developing so quickly it raised doubts about the future of humanity.

“I’m pretty terrified by the pace of AI development these days,” he said.

“When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?”

Some experts, such as the Nobel prize winner Geoffrey Hinton, fear that powerful AI systems could evade human control with potentially catastrophic consequences. Others, such as Meta’s chief AI scientist, Yann LeCun, have played down the existential threat, saying AI “could actually save humanity from extinction”.

According to Adler’s LinkedIn profile, he led safety-related research for “first-time product launches” and “more speculative long-term AI systems” in a four-year career at OpenAI.

Referring to the development of AGI, OpenAI’s core goal, Adler added: “An AGI race is a very risky gamble, with huge downside.” Adler said no research lab had a solution to AI alignment – the process of ensuring that systems adhere to a set of human values – and that the industry might be moving too fast to find one.

“The faster we race, the less likely that anyone finds one in time,” he said.

Adler’s X posts came as China’s DeepSeek, which is also seeking to develop AGI, rattled the US tech industry by unveiling a model that rivalled OpenAI’s technology despite apparently being developed with fewer resources.

Warning that the industry appeared to be “stuck in a really bad equilibrium”, Adler said “real safety regs” were needed.

“Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously,” he added.

Adler and OpenAI have been contacted for comment.

Microsoft reports strong fourth-quarter earnings amid uproar over DeepSeek’s AI

Microsoft reported its second-quarter earnings for fiscal year 2025 on Wednesday, beating market expectations even as questions over multibillion-dollar spending on artificial intelligence continue to mount, spurred by DeepSeek’s shock to the US stock market just days ago.

The tech giant reported earnings per share of $3.23, an increase of 10% on a year earlier, and revenue of $69.6bn, an increase of 12%. Wall Street had expected $3

DeepSeek blocked from some app stores in Italy amid questions on data use

The Chinese AI platform DeepSeek has become unavailable for download from some app stores in Italy as regulators in Rome and in Ireland demanded answers from the company about its handling of citizens’ data.

Amid growing concern on Wednesday about how data harvested by the new chatbot could be used by the Chinese government, the app disappeared from the Apple and Google app stores in Italy, with customers seeing messages saying it was “currently not available in the country or area you are in” on Apple and that the download “was not supported” on Google, Reuters reported.

The Guardian confirmed it was not available in the Google app store, but it was available in the Apple store for at least one user. Both Google and Apple have been approached for comment.

After the Chinese chatbot was released last week, close to $1tn (£804bn) was wiped off the leading US tech stock index

OpenAI ‘reviewing’ allegations that its AI models were used to make DeepSeek

OpenAI has warned that Chinese startups are “constantly” using its technology to develop competing products, and said it is “reviewing” allegations that DeepSeek used the ChatGPT maker’s AI models to create a rival chatbot.

OpenAI and its partner Microsoft – which has invested $13bn in the San Francisco-based AI developer – have been investigating whether proprietary technology had been obtained in an unauthorised manner through a technique known as “distillation”.

The launch of DeepSeek’s latest chatbot sent markets into a spin on Monday after it topped Apple’s free app store, wiping $1tn from the market value of AI-linked US tech stocks. The impact came from its claim that the model underpinning its AI was trained with a fraction of the cost and hardware used by rivals such as OpenAI and Google.

Sam Altman, the chief executive of OpenAI, initially said that he was impressed with DeepSeek and that it was “legitimately invigorating to have a new competitor”

Richard Bleasdale obituary

My friend and collaborator Richard Bleasdale, who has died unexpectedly aged 59, was a true innovator whose software design helped to revolutionise the live entertainment industry.

In the 1990s, Richard wrote the program for the first media server, Catalyst, which came to market in 2001. A media server is a computer system that manages video and audio files for live show control programming. Today, media servers control virtual environments across the world, such as the Sphere in Las Vegas, digital cinematography in film production, theatre and dance shows, broadcast events such as the Eurovision song contest and Strictly Come Dancing, and immersive art installations.

It was while working as a lighting console technician in the mid-90s that Richard became frustrated by an industry problem that nobody else could solve

What International AI Safety report says on jobs, climate, cyberwar and more

The International AI Safety report is a wide-ranging document that acknowledges an array of challenges posed by a technology that is advancing at dizzying speed.

The document, commissioned after the 2023 global AI safety summit, covers numerous threats, from deepfakes to aiding cyberattacks and the use of biological weapons, as well as the impact on jobs and the environment.

Here are some of the key points from the report, chaired by Yoshua Bengio, a world-leading computer scientist.

In a section on “labour market risks”, the report warns that the impact on jobs will “likely be profound”, particularly if AI agents – tools that can carry out tasks without human intervention – become highly capable.

“General-purpose AI, especially if it continues to advance rapidly, has the potential to automate a very wide range of tasks, which could have a significant effect on the labour market

DeepSeek advances could heighten safety risk, says ‘godfather’ of AI

The potential for artificial intelligence systems to be used for malicious acts is increasing, according to a landmark report by AI experts, with the study’s lead author warning that DeepSeek and other disruptors could heighten the safety risk.

Yoshua Bengio, regarded as one of the godfathers of modern AI, said advances by the Chinese startup DeepSeek could be a worrying development in a field that has been dominated by the US in recent years.

“It’s going to mean a closer race, which usually is not a good thing from the point of view of AI safety,” he said.

Bengio said American firms and other rivals to DeepSeek could focus on regaining their lead instead of on safety. OpenAI, the developer of ChatGPT, which DeepSeek has challenged with the launch of its own virtual assistant, pledged this week to accelerate product releases as a result.