California Investigates Grok Over AI Deepfakes
California’s top prosecutor has launched an investigation into the dissemination of sexualized AI deepfakes generated by Elon Musk’s AI model, Grok. Attorney General Rob Bonta expressed deep concern over reports of non-consensual, sexually explicit material produced by xAI, the company behind Grok. In his statement announcing the probe, Bonta emphasized:
– Shocking Reports: The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking.
– xAI’s Responsibility: xAI has stated that anyone using or prompting Grok to make illegal content will face the same consequences as if they had uploaded illegal content themselves.
The California inquiry coincides with British Prime Minister Sir Keir Starmer’s warnings of potential action against X. Bonta highlighted the serious implications of the content:
– Harmful Imagery: This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet.
Bonta has urged xAI to take immediate responsibility for its platform. Additionally, California Governor Gavin Newsom condemned xAI’s actions, declaring on X that the company’s decision to “create and host a breeding ground for predators… is vile.”
In a contrasting stance, Musk insisted on X that he is “not aware of any naked underage images generated by Grok. Literally zero.” He reiterated that Grok does not spontaneously generate images, stating, “It does so only according to user requests.” Musk, a prominent Republican donor, further suggested that critics leveraging the Grok controversy are politically motivated and using it as an excuse for censorship.
The discussion extends beyond California, as three U.S. Democratic Senators recently urged Apple and Google to remove X and Grok from their app stores. Following their request, X restricted its image generation tool to paying subscribers, while Grok remains available on both Apple’s App Store and Google Play.
The Debate on AI Accountability
Amid the ongoing investigations, an important debate is underway over whether U.S. tech companies can be held accountable for content generated on their AI platforms. Section 230 of the Communications Decency Act of 1996 provides online platforms with legal immunity for user-generated content. However, Professor James Grimmelmann of Cornell University highlighted:
– Clarifying Liability: This law only protects sites from liability for third-party content from users, not content the sites themselves produce.
– xAI’s Defense: Grimmelmann critiqued xAI’s attempts to deflect responsibility onto users, arguing that these defenses may not hold up in court since xAI itself is making the images.
Senator Ron Wyden of Oregon, a co-author of Section 230, asserted that the law does not cover AI-generated images and called for companies to be held fully accountable for such content. He remarked, “I’m glad to see states like California step up to investigate Elon Musk’s horrific child sexual abuse material generator.”
As the California investigation unfolds, the UK is also progressing toward legislation that would criminalize the creation of non-consensual intimate images. The UK watchdog, Ofcom, has initiated an investigation into Grok, with potential fines of up to 10% of xAI’s global revenue or £18 million, whichever is higher, if any violations are found.
On Monday, Sir Keir Starmer told Labour MPs that if Musk’s social media platform, X, fails to regulate Grok effectively, “we will.”