AI corporations creating abuse images is not an inevitability but a business choice

Posted on January 14, 2026 by Digital Rights Watch
Grok, an AI embedded within Twitter/X, is being used to digitally remove women’s and children’s clothing, and the results are being posted publicly.

Women’s images are manipulated by Grok in ways designed to humiliate them, placing them into degrading, violent, and racist scenarios. Grok does not exclude children from this abuse. It has altered photos of teenagers and children to remove their clothing, posting the images to X for public consumption.

The Guardian found that Grok was receiving as many as 6,000 "bikini" requests from users every hour.

When women criticised the creation of deepfake nudes, they were bombarded with AI-generated sexually explicit material designed to humiliate them. The content was used as a tool to punish women who spoke out and to intimidate others into silence.

RMIT’s Dr Caitlin McGrane, whose research extensively covers the online harassment of women, notes that ‘every time a new image- or text-generation product is released, we see it used to abuse women through deepfakes or nudification.’

Women whose intimate images have been shared online without consent report experiences of public shame and humiliation. This violation has serious mental health consequences, including increased anxiety and suicidal ideation. Many survivors of image-based abuse are discriminated against by employers. Survivors face an increased risk of ongoing stalking and further harassment throughout their lives.

There are hundreds of platforms designed to "nudify" images, all fraught with ethical, legal, and privacy issues. The ability of Grok to produce nude deepfakes is especially concerning given its widespread use. Users do not need to download a separate application or sign up for a new platform: the tool is embedded within a service they already use. This level of accessibility accelerates the normalisation of AI-generated content that degrades women and depicts child sexual abuse.

X is a popular channel for sex workers to attract clients and promote their business. In 2024, Elon Musk allowed pornographic material to be posted to X. Grok consumes all the content users upload, trains on it, and never forgets it. Elements of this content, such as faces or bodies, can then be reproduced without credit, compensation, or any regard for the poster’s privacy.

Grok also trains itself on the prompts it receives and on its own output. Deepfake content generated by users therefore becomes part of the system’s memory and cannot be erased, creating significant and ongoing privacy risks for the victims of AI-generated image-based abuse.

Other nudification services raise similar concerns. Generated images are frequently retained with little or no meaningful privacy protection. One company stored 93,485 AI-generated explicit deepfake images in a publicly accessible database.

In the last six months of 2024, the National Center for Missing and Exploited Children identified ‘313,917 instances of child abuse’ on the platform, all of which would have been hoovered up into Grok for training.

When prompted to create images of children, the model draws on all of its training data, including this material. The lack of guardrails means it can reproduce child sexual abuse material (CSAM) with relative accuracy. Even non-sexual prompts may be influenced by the presence of abusive material in the training dataset, resulting in sexualised outputs involving children. This re-victimises the individuals depicted and facilitates the consumption of CSAM, normalising abusive sexual interests.

Grok is not the only AI trained on CSAM. LAION-5B, the largest openly available collection of images scraped from the internet, is widely used to train AIs. In 2023, Stanford researchers found that this dataset included hundreds of items of known CSAM, as well as previously unidentified CSAM. Midjourney and Stability AI use the LAION-5B dataset to train their image-generating AIs.

The LAION-5B dataset also contains hundreds of images of Australian children. Some of these images included identifying details such as school locations, hospitals, full names, and addresses. Although these images were removed once identified, AI systems already trained on them are not capable of "forgetting" that data. This means that AI image generators continue to use Australian children’s faces and personal information.

Despite CSAM and photos of children being present in the LAION-5B dataset, Midjourney and Stability AI place guardrails on their models to prevent the generation of more CSAM. AI image generators creating CSAM is both foreseeable and preventable. When Elon Musk allows Grok to produce this material, he is not making a misstep, but an active design choice.

X has limited its AI image-generation feature to paying users. Rather than stopping the generation of CSAM and non-consensual sexual deepfakes, Elon Musk has chosen to monetise it.

X is not the only tech giant profiting from the creation of abusive images. Nudify apps are widely available on major app stores and discoverable through standard search engines. Platforms such as Apple’s App Store take a percentage of all in-app purchases, meaning they profit directly from the creation of abusive content. One AI nudification service paid for more than 87,000 Meta advertisements, with an estimated 90 per cent of its traffic coming from Instagram.

Dr Caitlin McGrane argues:

“This isn’t a problem of a few bad apples. It’s a systemic issue, with digital services being treated as though they are exempt from regulation when they are not. Regulation must be ongoing and responsive. When systems are not fit for purpose or require stronger intervention, we must act to protect all internet users, including women.”

As long as these AI systems remain unchecked, the safety and privacy of women and children will continue to be jeopardised. Prime Minister Anthony Albanese has described these practices as “abhorrent,” but has yet to introduce measures that meaningfully protect those most affected by abusive technology. Meaningful protection means regulating the algorithms that produce abusive images, rather than attempting to ban Australians from the platforms that use them.

While we wait for the Albanese government to regulate harmful algorithms, individuals will need to take action to protect themselves.

X users who choose to remain on the platform can and should opt out of having their data used to train Grok. Anyone can report AI-generated CSAM or unwanted nude images of themselves to the eSafety Commissioner, who has the power to investigate and order their removal. Victims may also have grounds to pursue legal action against the creator of the deepfake under the newly introduced privacy tort.

While these mechanisms are important, they are not a systemic solution. Each of these responses places the burden back on individuals to identify harm, report it, navigate complaints processes, and potentially fund legal action. This is an exhausting and unequal model of accountability, particularly for women and children who are already disproportionately targeted by image-based abuse.

There is nothing inevitable about this trend. The misuse of AI to generate exploitative and degrading material is not a natural consequence of technological progress. It is the result of design choices, commercial incentives, and regulatory failures. AI algorithms can and must be regulated. Safeguards can and must be built in. Platforms can and must be required to prevent the creation of this material rather than profiting from its circulation.

Until that happens, we will continue to treat mass-scale, automated abuse as an individual and intractable problem, when it is in fact a systemic and solvable one.