
Technology

White nationalist talking points and racial pseudoscience: welcome to Elon Musk’s Grokipedia

Entries in Elon Musk’s new online encyclopedia variously promote white nationalist talking points, praise neo-Nazis and other far-right figures, promote racist ideologies and white supremacist regimes, and attempt to revive concepts and approaches historically associated with scientific racism, a Guardian analysis has found.

The tech billionaire and Donald Trump ally recently launched xAI’s AI-generated Grokipedia with a promise that it would “purge out the propaganda” he claims infests Wikipedia, the free online encyclopedia that Musk has often attacked but that has long been a key feature of the internet.

Grokipedia, now with more than 800,000 entries, is generated and, according to a note on each entry, “factchecked” by Grok, xAI’s large language model.

The Guardian contacted xAI for comment. Seconds after the request was sent, there was an apparently automated reply that said only: “Legacy Media Lies”.

AI firms must be clear on risks or repeat tobacco’s mistakes, says Anthropic chief

Artificial intelligence companies must be transparent about the risks posed by their products or be in danger of repeating the mistakes of tobacco and opioid firms, according to the chief executive of the AI startup Anthropic.

Dario Amodei, who runs the US company behind the Claude chatbot, said he believed AI would become smarter than “most or all humans in most or all ways” and urged his peers to “call it as you see it”.

Speaking to CBS News, Amodei said a lack of transparency about the impact of powerful AI would replay the errors of cigarette and opioid firms that failed to raise a red flag over the potential health damage of their own products.

“You could end up in the world of, like, the cigarette companies, or the opioid companies, where they knew there were dangers, and they didn’t talk about them, and certainly did not prevent them,” he said.

Amodei warned this year that AI could eliminate half of all entry-level white-collar jobs – office roles such as accountancy, law and banking – within five years.

How Google’s DeepMind tool is ‘more quickly’ forecasting hurricane behavior

When then Tropical Storm Melissa was churning south of Haiti, Philippe Papin, a National Hurricane Center (NHC) meteorologist, had confidence it was about to grow into a monster hurricane.

As the lead forecaster on duty, he predicted that in just 24 hours the storm would become a category 4 hurricane and begin a turn towards the coast of Jamaica. No NHC forecaster had ever issued such a bold forecast for rapid strengthening.

But Papin had an ace up his sleeve: artificial intelligence in the form of Google’s new DeepMind hurricane model – released for the first time in June. And, as predicted, Melissa did become a storm of astonishing strength that tore through Jamaica.

Father of teen whose death was linked to social media has ‘lost faith’ in Ofcom

The father of Molly Russell, a British teenager who killed herself after viewing harmful online content, has called for a change in leadership at the UK’s communications watchdog after losing faith in its ability to make the internet safer for children.

Ian Russell, whose 14-year-old daughter took her own life in 2017, said Ofcom had “repeatedly” demonstrated that it does not grasp the urgency of keeping under-18s safe online and was failing to implement new digital laws forcefully.

“I’ve lost confidence in the current leadership at Ofcom,” he told the Guardian. “They have repeatedly demonstrated that they don’t grasp the urgency of this task and they have shown that they don’t seem to be willing to use their powers to the extent that is required.”

Russell’s comments came in the same week the technology secretary, Liz Kendall, wrote to Ofcom saying she was “deeply concerned” about delays in rolling out parts of the Online Safety Act (OSA), a landmark piece of legislation laying down safety rules for social media, search and video platforms.

Personal details of Tate galleries job applicants leaked online

Personal details submitted by applicants for a job at Tate art galleries have been leaked online, exposing their addresses, salaries and the phone numbers of their referees, the Guardian has learned.

The records, running to hundreds of pages, appeared on a website unrelated to the government-sponsored organisation, which operates the Tate Modern and Tate Britain galleries in London, Tate St Ives in Cornwall and Tate Liverpool.

The data includes details of applicants’ current employers and education, and relates to the Tate’s hunt for a website developer in October 2023. Information about 111 individuals is included. They are not named but their referees are, sometimes with mobile numbers and personal email addresses.

AI firm claims it stopped Chinese state-sponsored cyber-attack campaign

A leading artificial intelligence company claims to have stopped a China-backed “cyber espionage” campaign that was able to infiltrate financial firms and government agencies with almost no human oversight.

The US-based Anthropic said its coding tool, Claude Code, was “manipulated” by a Chinese state-sponsored group to attack 30 entities around the world in September, achieving a “handful of successful intrusions”.

This was a “significant escalation” from previous AI-enabled attacks it monitored, it wrote in a blogpost on Thursday, because Claude acted largely independently: 80 to 90% of the operations involved in the attack were performed without a human in the loop.

“The actor achieved what we believe is the first documented case of a cyber-attack largely executed without human intervention at scale,” it wrote.

Anthropic did not clarify which financial institutions and government agencies had been targeted, or what exactly the hackers had achieved – although it did say they were able to access their targets’ internal data.