
California attorney general investigates Musk’s Grok AI over lewd fake images
California authorities have announced an investigation into the output of Elon Musk’s Grok. The state’s top attorney said Grok, an AI tool and image generator made by Musk’s company xAI, appears to be making it easy to harass women and girls with deepfake images on X and elsewhere online. “The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” the California attorney general, Rob Bonta, said in a statement. “I urge xAI to take immediate action to ensure this goes no further.” Bonta’s office is investigating whether and how xAI violated state law

Elon Musk’s stubborn spin on Grok’s sexualized images controversy
Hello, and welcome to TechScape. I’m your host, Blake Montgomery, US tech editor for the Guardian. Today, we discuss Elon Musk’s rosy depiction of Grok’s image generation controversy; the seven-figure panic among Silicon Valley billionaires over a proposed wealth tax in California, though with one notable exception; and how AI and robotics have revitalized the Consumer Electronics Show. The firestorm over the Grok AI tool has been raging for more than a week now, and it shows no signs of dying down. Last week, I wrote about the rising backlash against Elon Musk’s Grok AI tool, which in recent weeks has allowed users to generate thousands of sexualized images of women

X ‘acting to comply with UK law’ after outcry over sexualised images
Elon Musk’s X is understood to have told the government it is acting to comply with UK law, after nearly a fortnight of public outcry at the use of its AI tool Grok to manipulate images of women and children by removing their clothes. Keir Starmer told the House of Commons on Wednesday that photographs generated by Grok were “disgusting” and “shameful”, but said he had been informed that X was “acting to ensure full compliance with UK law”. “If so, that is welcome,” the prime minister said. “But we are not going to back down. They must act

Young people, parents and teachers: share your views about Grok AI
Degrading images of real women and children with their clothes digitally removed by Elon Musk’s Grok tool continue to be shared online, despite widespread alarm and a pledge by the platform to suspend users who generate them. While some safeguards have been introduced, the ease with which the AI tool can be abused has raised urgent questions about consent, online safety and the ability of governments worldwide to regulate fast-moving AI technologies. Meanwhile, the misuse of AI to harass, humiliate and sexually exploit people – particularly women and girls – is rapidly escalating. We’d like to hear from young people, parents and teachers about how tools like Grok are affecting you. Are young people aware of how easily these images can be created? If you’re a parent, has this changed how you talk to your children about social media, consent or online safety? If you’re a teacher or work with young people, have you noticed an impact in classrooms or among students? Do you have concerns? You can share your views on Grok and other AI tools using this form

Use of AI to harm women has only just begun, experts warn
“Since discovering Grok AI, regular porn doesn’t do it for me anymore, it just sounds absurd now,” one enthusiast for the Elon Musk-owned AI chatbot wrote on Reddit. Another agreed: “If I want a really specific person, yes.” If those who have been horrified by the distribution of sexualised imagery on Grok hoped that last week’s belated safeguards could put the genie back in the bottle, there are many such posts on Reddit and elsewhere that tell a different story. And while Grok has undoubtedly transformed public understanding of the power of artificial intelligence, it has also pointed to a much wider problem: the growing availability of tools, and means of distribution, that present worldwide regulators with what many view as an impossible task. Even as the UK announces that creating non-consensual sexual and intimate images will soon be a criminal offence, experts say that the use of AI to harm women has only just begun

Crypto coin firm touted by Eric Adams denies allegations of ‘rug pull’ scam
The cryptocurrency launched by New York City’s former mayor Eric Adams is already in hot water, and now the company behind it is being forced to defend itself from accusations that it scammed people. Investors and cryptocurrency watchers say the asset, dubbed NYC Token, surged to about $580m shortly after it hit the market on Monday and then rapidly plummeted in value. Observers speculated that someone behind the scenes may have carried out what’s known in the crypto world as a “rug pull” – when the creators of the asset quickly sell their investments. The company behind the coin has denied any wrongdoing. In a statement posted on X, NYC Token said it was aware of the allegations but rejected claims of a rug pull

Woman pulled out of UK ultramarathon after death threats over Afghanistan fundraising

Circumcision kits found on sale on Amazon UK as concerns grow over harm to baby boys

One in four UK teenagers in care have attempted to end their lives, study says

Five minutes more exercise and 30 minutes less sitting could help millions live longer

NHS spending up to £19k a time treating people suffering after overseas surgery, research finds

LGB+ people in England and Wales ‘much’ more likely to die by suicide than straight people