Media Release: National AI Plan

02 December 2025

The Federal Government’s thin National AI Plan folds to Big Tech: unleashing AI training on public and private data with no guardrails.

Today the Federal Government released its long-awaited National AI Plan. There are some positive elements in the plan, including increased powers for regulators, continued ambition for privacy reform, and a commitment to worker participation in the take-up of AI. However, these are future aspirations rather than concrete commitments. The substance of the AI Plan reflects a lack of desire to regulate Big Tech’s AI systems before they harm people.

Digital Rights Watch firmly believes in regulating AI through separate legislation to avoid accelerating the harms we already see in the workplace and in society. While we are pleased to see the Government state that it will “not hesitate to intervene” if regulation is needed to address harms from AI, a promise of future action is not good enough to keep Australians safe or to build trust in new technologies.

There are already concrete harms caused by AI: race and gender bias in AI systems used in healthcare; LLMs generating harmful content, such as material that promotes suicide or encourages delusion; non-consensual deepfake and nudify image and video generation; and the creation of mis- and disinformation.

The National AI Plan promotes Australia as a location for data centre development without taking into account the serious environmental impacts, most notably the amount of water used for cooling in a dry country that is already struggling to maintain its water resources.

Community trust will be fundamental to the AI Plan: we need regulators with the funding and powers to not only properly address harm when it arises, but also to proactively intervene before serious harm takes place.

Quotes attributable to Tom Sulston, Head of Policy:

“Australians consistently show that they’re sceptical of AI companies and would welcome strong regulation. The government took a great opportunity to regulate Big Tech but has flubbed it. Their AI Plan has sold us, and our data, out to the AI companies. We urgently need more guardrails in the deployment of AI, rather than opening the locks to let the world’s most rapacious companies invade our private data to grow their profits.”

“Mandatory guardrails, like risk-management plans, testing AI systems, and third-party transparency, are basic demands of any technology company. When the government denies us these straightforward protections, it neglects its duty to protect us from the excesses of Big Tech playing fast and loose with Australians’ safety.”

“A wait-and-see approach to AI regulation is insufficient when we know that there are AI harms happening right now that urgently require regulatory intervention. Punting regulation into the long grass to chase the fantasy of AI productivity gains harms us all.”

Media contact for interview:
media@digitalrightswatch.org.au
+61 448335466