In a recent report, Elon Musk's AI video generator, Grok Imagine, has come under fire for allegedly creating sexually explicit content of pop star Taylor Swift without any user prompting. Clare McGlynn, a law professor and expert on online abuse, called the behavior a "deliberate choice" by the AI, describing its outputs as indicative of systemic misogyny woven into AI technology. According to the claims, the platform's so-called "spicy" mode quickly generated uncensored topless images of Swift, sparking outrage and raising critical questions about the platform's accountability.

The Verge reports that the Grok Imagine system lacked adequate age verification processes, which became mandatory as of July 2023. xAI, the company behind Grok, has yet to respond to requests for comment. McGlynn condemned the incident, asserting that the technology's failure to prevent such misuse reveals systemic biases in AI development. She stated, "This content being produced without prompting exemplifies the misogynistic tendencies of much AI technology."

Notably, this episode is not an isolated event; Taylor Swift's likeness was previously misused in viral sexually explicit deepfakes that amassed millions of views earlier this year. The generated content depicted Swift scantily clad in revealing scenarios, even though the original prompt merely suggested a celebration. Real-time tests conducted by media reviewers confirmed this alarming capability, with one account noting unexpectedly explicit outputs.

Current UK legislation prohibits the generation of pornographic deepfakes used for revenge purposes or involving children, but proposals to extend these restrictions to all non-consensual deepfakes are still pending. Baroness Owen, who has advocated for these amendments, emphasized the importance of consent in matters of image ownership, citing the current situation as a pressing reason to expedite the legislation's implementation.

The Ministry of Justice reiterated that the rise of such deepfake technology can cause serious harm, especially if it proliferates unchecked, and said it is committed to ensuring legislation against the creation of unauthorized deepfakes moves forward swiftly. Past responses by social media platforms to similar incidents suggest a recognition of the need for enhanced oversight. The team at The Verge chose to spotlight Taylor Swift in testing this new AI technology, anticipating that previous controversies would have resulted in stronger protective measures.

Swift's representatives have been contacted for a statement on the matter. The episode underscores the urgent need for reform as technology continues to evolve faster than legal protections can keep pace.