14 January 2025
AI Corporations Choosing Profit Over Safety as Abuse Images Proliferate
Grok, the AI tool embedded in X (formerly Twitter), is being weaponised to digitally remove clothing from photos of women and children. Users were found to be making up to 6,000 bikini-related requests per hour, with the generated images posted publicly on the platform.
When women criticise these deepfakes, they face retaliation through AI-generated sexually explicit material designed to humiliate and silence them.
Women whose intimate images have been shared online without consent report experiences of public shame and humiliation. This violation has serious mental health consequences, including heightened anxiety and suicidal ideation. Many survivors of image-based abuse also face discrimination from employers, as well as an increased risk of ongoing stalking and harassment throughout their lives.
The National Center for Missing and Exploited Children identified over 313,000 instances of child abuse on X in the last six months of 2024—all absorbed into Grok’s training data.
Grok trains on its own outputs and user prompts, meaning deepfake content becomes permanently embedded in the system, creating ongoing privacy risks for victims.
X has restricted AI image-generation to paying users, effectively monetising the creation of child sexual abuse material and non-consensual deepfakes rather than stopping it.
Quotes attributable to Tom Sulston, Head of Policy:
“While Prime Minister Anthony Albanese has called these practices ‘abhorrent’, no meaningful regulatory action has been introduced to protect those most affected. The solution requires regulating algorithms that produce abusive images, not banning platforms.”
“While users who wish to remain on X can opt out of having their data train Grok, and anyone can report AI-generated CSAM or unwanted nude images of themselves to the eSafety Commissioner, each of these responses places the burden back on individuals to identify harm, report it, navigate complaints processes, and potentially fund legal action. This is an exhausting and unequal model of accountability, particularly for women and children, who are already disproportionately targeted by image-based abuse.”
“There is nothing inevitable about this trend. The misuse of AI to generate exploitative and degrading material is not a natural consequence of technological progress. It is the result of design choices, commercial incentives, and regulatory failures. AI algorithms can and must be regulated. Safeguards can and must be built in. Platforms can and must be required to prevent the creation of this material rather than profiting from its circulation.”
Media contact for interview:
media@digitalrightswatch.org.au
Tom Sulston: +61 448335466