ChatGPT firm blames boy’s suicide on ‘misuse’ of its technology
The maker of ChatGPT has said the suicide of a 16-year-old was down to his “misuse” of its system and was “not caused” by the chatbot.

The comments came in OpenAI’s response to a lawsuit filed against the San Francisco company and its chief executive, Sam Altman, by the family of California teenager Adam Raine.

Raine killed himself in April after extensive conversations and “months of encouragement from ChatGPT”, the family’s lawyer has said.

The lawsuit alleges the teenager discussed a method of suicide with ChatGPT on several occasions, that it guided him on whether a suggested method would work, offered to help him write a suicide note to his parents, and that the version of the technology he used was “rushed to market … despite clear safety issues”.

According to filings at the superior court of the state of California on Tuesday, OpenAI said that “to the extent that any ‘cause’ can be attributed to this tragic event”, Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

Europe loosens reins on AI – and US takes them off
Hello, and welcome to TechScape. I’m your host, Blake Montgomery, writing to you from an American grocery store, where I’m planning my Thanksgiving pies.

In tech, the European Union is deregulating artificial intelligence; the United States is going even further. The AI bubble has not popped, thanks to Nvidia’s astronomical quarterly earnings, but fears persist. And Meta has avoided a breakup for a similar reason as Google.

Macquarie Dictionary announces ‘AI slop’ as its word of the year, beating out Ozempic face
AI slop is here, it’s ubiquitous, it’s being used by the US president, Donald Trump, and now, it’s the word of the year.

The Macquarie Dictionary dubbed the term the epitome of 2025 linguistics, with a committee of word experts saying the choice embodies the word of the year’s general theme of reflecting “a major aspect of society or societal change throughout the year”.

“We understand now in 2025 what we mean by slop – AI-generated slop, which lacks meaningful content or use,” the committee said in a statement announcing its decision.

“While in recent years we’ve learnt to become search engineers to find meaningful information, we now need to become prompt engineers in order to wade through the AI slop. Slop in this sense will be a robust addition to English for years to come.”

AI could replace 3m low-skilled jobs in the UK by 2035, research finds
Up to 3m low-skilled jobs could disappear in the UK by 2035 because of automation and AI, according to a report by a leading educational research charity.

The jobs most at risk are those in occupations such as trades, machine operations and administrative roles, the National Foundation for Educational Research (NFER) said.

Highly skilled professionals, on the other hand, were forecast to be more in demand as AI and technological advances increase workloads “at least in the short to medium term”.

Overall, the report expects the UK economy to add 2.3m jobs by 2035, but these gains will be unevenly distributed.

‘It’s hell for us here’: Mumbai families suffer as datacentres keep the city hooked on coal
As Mumbai sees increased energy demand from new datacentres, particularly from Amazon, the filthiest neighbourhood in one of India’s largest cities must keep its major coal plants running.

Each day, Kiran Kasbe drives a rickshaw taxi through his home neighbourhood of Mahul on Mumbai’s eastern seafront, down streets lined with stalls selling tomatoes, bottle gourds and aubergines – and, frequently, through thick smog.

Earlier this year, doctors found three tumours in his 54-year-old mother’s brain. It’s not clear exactly what caused her cancer. But people who live near coal plants are much more likely to develop the illness, studies show, and the residents of Mahul live a few hundred metres down the road from one.

Mahul’s air is famously dirty.

One in four unconcerned by sexual deepfakes created without consent, survey finds
One in four people think there is nothing wrong with creating and sharing sexual deepfakes, or they feel neutral about it, even when the person depicted has not consented, according to a police-commissioned survey.

The findings prompted a senior police officer to warn that the use of AI is accelerating an epidemic of violence against women and girls (VAWG), and that technology companies are complicit in this abuse.

The survey of 1,700 people, commissioned by the office of the police chief scientific adviser, found 13% felt there was nothing wrong with creating and sharing sexual or intimate deepfakes – digitally altered content made using AI without consent.

A further 12% felt neutral about the moral and legal acceptability of making and sharing such deepfakes.

Det Ch Supt Claire Hammond, from the national centre for VAWG and public protection, reminded the public that “sharing intimate images of someone without their consent, whether they are real images or not, is deeply violating”.
