
AI companies are strip-mining Australian books, songs, and artworks to build their models without consent, without compensation, and without accountability.
This explainer describes how AI companies like Google and Meta are not only dodging Australia’s copyright laws, but also lobbying to weaken them. As we look forward to reform, we can learn valuable lessons from other countries.
Finally, we’ll hear from people on all sides of the debate: authors whose work has been taken, lawyers demanding compensation, and developers insisting that AI isn’t breaking the law even as their own models say otherwise.
The Copyright Act 1968 & loopholes in Australian law
The Copyright Act 1968 (Cth) does not refer to artificial intelligence (AI). While the Act still applies to creative works regardless of whether AI was used in their creation, there are no amendments to clarify how the Act should be applied to AI-generated content.
Training AI on copyrighted material
Digitally reproducing works without permission or payment infringes section 31 of the Copyright Act. This is as true when using copyrighted material to train AI models as it would be for any other purpose.
Training an AI usually involves using crawlers to scrape material like novels, academic journals, or artwork, then using algorithms to train models that respond to user prompts. In general, there is no exemption for this use under copyright law, yet AI companies that work in this way rarely seek permission or pay to license works. If an individual trained a musical AI on someone else’s songs, they would legally need approval from the rights holders, and could be sued if they did not obtain it. Yet this is exactly what AI companies are doing, at industrial scale.
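To make the mechanics concrete, here is a minimal Python sketch of the scraping step described above. Everything in it is hypothetical: the crawler name, URL, and helper function are invented for illustration, and real pipelines run across millions of pages. Note that even the robots.txt check it performs is voluntary, and nothing in it seeks permission from, or compensates, the rights holder.

```python
# Minimal, hypothetical sketch of the scraping step of an AI training
# pipeline. The crawler name and URL are invented placeholders.
import urllib.error
import urllib.request
import urllib.robotparser

USER_AGENT = "HypotheticalAIBot"  # assumed crawler name, not a real bot

def fetch_if_allowed(url: str) -> str | None:
    """Download a page's raw text, but only if robots.txt permits it."""
    base = "/".join(url.split("/")[:3])  # keep scheme://host
    robots = urllib.robotparser.RobotFileParser(base + "/robots.txt")
    try:
        robots.read()
    except urllib.error.URLError:
        return None
    if not robots.can_fetch(USER_AGENT, url):
        return None  # the publisher has opted this crawler out
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    try:
        with urllib.request.urlopen(request) as response:
            return response.read().decode("utf-8", errors="replace")
    except urllib.error.URLError:
        return None

# Scraped pages would then be cleaned, tokenised, and fed to a trainer.
page = fetch_if_allowed("https://example.com/some-novel-chapter")
if page is not None:
    print(f"Fetched {len(page)} characters for the training corpus")
```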
Existing copyright exemptions still apply when AI is involved. The exemptions permit copyrighted work to be downloaded and reproduced:
- If the use is fair dealing for news reporting, research or study, criticism or review, or parody or satire.
- If a work is copied temporarily for technical or communication purposes.
It is not clear whether this second, technical exemption provides a legal out for AI developers using copyrighted material. Copyright holders are struggling to prove their work has been scraped or reproduced, because developers refuse to disclose their training data. The opacity of these systems shields them from accountability, and for many creatives the evidentiary burden is an insurmountable hurdle. Thomson Reuters, the parent company of the Westlaw legal database, was able to clear it only because the material the AI reproduced was uniquely and provably theirs.
The US copyright system provides a defence to infringement where ‘fair use’ can be proven. Australia has no equivalent legislation, meaning that while US decisions may be used for guidance, what is legal in the US may not be legal in Australia.
Copyrighting AI-generated works
The law is murky but not neutral. Copyright only protects material that involves independent intellectual effort, which means simply typing a prompt into an AI model probably won’t qualify. But if an artist trains a model on their own work and carefully prompts it to produce something specific, that might be different. Right now, the line is undefined, and that’s a problem.
No-one is required to disclose whether they used AI when applying for copyright. The government encourages transparency under its AI Ethics Framework, but there’s no legal obligation. This makes it easier for AI-generated content to pass as original, and harder for artists to defend their work.
Big Tech is pressuring our government to legalise industrialised copyright violation.
Google is demanding the Attorney-General create a new exemption in the Copyright Act to legalise the unauthorised use of Australian content to train AI. In its submission to the government’s copyright enforcement review, Google threatened that unless it can freely mine text and data, “Australia will not become a digital powerhouse.” The Business Council of Australia has made similarly ambitious claims.
In its submission to the Productivity Commission, Meta claimed that strong privacy laws are bad for business. It wants the right to use people’s data to train AI without asking, without paying, and without limits.
Privacy Commissioner Carly Kind pushed back against these statements. “The true consumer benefit,” she wrote in a public post on LinkedIn, “comes from strong laws that protect the privacy of Australians.”
At the same time, Anthropic, the owner of the Claude AI system, is attempting to settle a class-action lawsuit brought by American authors. Even though it would pay only a small amount of damages per infringed work, the settlement suggests that AI companies know what they are doing is illegal, and treat the consequences as merely a cost of doing business.
In Australia, the Productivity Commission appears to have swallowed Big Tech’s PR. In its recent interim report on data, it floated a ‘text and data mining’ exemption for big tech: a slap in the face to Australian creators.
Such an exemption would grant developers unrestricted access to copyrighted material, effectively legalising industrial-scale copyright infringement, supercharging data extraction, and entrenching corporate surveillance.
AI companies know there are copyright issues with their tools and are trying to shift the blame to users. Midjourney’s terms and conditions assert that it is the user who prompts the AI who is legally liable for its output.
AI developers claim that if they are required to pay for copyrighted material, their business models won’t be viable. This is a shakedown: code for ‘let us do what we want, or you won’t get our technology’. Instead of adopting ethical business practices that value creative works, they are pushing to have the law retrofitted to their data-extractive business models. Our laws should not be bent to the commercial whims of big tech companies, and such cynical self-interest should not be validated by government bodies such as the Productivity Commission.
The economic benefits promised by AI companies and anticipated by the Productivity Commission are far from guaranteed.
Most of the wealth generated by AI adoption will flow to the mostly American corporations that own the technology. Any concomitant benefits for Australian companies will come from cost-cutting measures such as replacing workers with AI. Economic gains won’t reach ordinary Australians; they will be siphoned offshore to the world’s richest companies.
These unproven claims of economic benefit appear to serve mainly as a carrot, leading governments away from regulating in the interests of ordinary Australians.
What other countries are doing
The US has taken an attitude of techno-libertarianism, with a ten-year moratorium on state AI regulation proposed in Congress. Much like the US, the UK is yet to create any AI-specific copyright laws.
The state of AI-based copyright abuse in the UK and US is evidence that existing legislation and common-law interpretations cannot be stretched to cover industrial-scale copyright infringement.
The EU is taking a different approach by passing legislation to directly address the issue. The EU AI Act sets out some clear obligations for AI developers to meet, including:
- Publishing a summary of the training data used (Recital 107)
- Adhering to EU copyright law (Recital 106)
- Allowing creators to refuse having their work used to train AI (Recitals 105, 106); one machine-readable way to signal that refusal is sketched after this list
- Meeting these standards regardless of where training occurs, if the AI is to operate in the EU (Recital 106)
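As promised above, here is a sketch of the kind of machine-readable refusal a publisher can already issue today via robots.txt. GPTBot (OpenAI) and Google-Extended (Google) are real crawler tokens; the sample file and URL are invented, and this mechanism is one convention among several, not the Act’s full legal test.

```python
# Sketch: robots.txt entries a publisher might use to refuse AI training
# crawlers while still allowing ordinary search indexing. GPTBot and
# Google-Extended are real crawler tokens; the sample URL is invented.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())
for bot in ("GPTBot", "Google-Extended", "Googlebot"):
    allowed = parser.can_fetch(bot, "https://example.com/my-novel")
    print(f"{bot}: {'allowed' if allowed else 'refused'}")
```

Whether such a signal is actually honoured is, of course, exactly the enforcement question the Act is trying to answer.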
The Act is only just coming into force: the institutional enforcement framework, including the European AI Office and national market surveillance authorities, formally came online in August 2025. AI companies are already adapting. Many have initiated compliance programmes, including transparency reporting and risk-management systems, and the EU has launched a voluntary Code of Practice for general-purpose AI to guide industry alignment with the Act, touching on safety, copyright, and transparency.
The human cost: Creators vs. big tech
Authors:
BookBub’s survey of 1,200 authors found that nearly half chose not to use AI, primarily for ethical reasons. The most frequently cited concern was AI’s use of copyrighted material with no compensation to copyright holders. One author said:
‘Many of my books have been stolen to train AI tools, without my permission. […] generative AI as it currently exists is unethical and destructive.’
Copyright Lawyers:
The Copyright Agency helps creators through copyright litigation and advocacy. Below is a quote from its submission to the Department of Industry, Science and Resources inquiry into AI and consumer law.
‘The government should introduce a new law that would compensate Australians whose works have been used for training AI models’.
AI companies:
‘Regurgitation is a rare bug that we are working to drive to zero’
AI companies such as OpenAI claim that training AI constitutes ‘fair use’ of copyrighted material. Yet they simultaneously acknowledge that ‘regurgitation’ remains a problem with their current models. ChatGPT has a publicly available form for lodging copyright disputes, which suggests a significant volume of them.
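‘Regurgitation’ means the model reproducing training text verbatim. A crude way to test for it, assuming you hold the original work, is to look for long word-for-word overlaps between a model’s output and the source. The n-gram check below is an invented illustration, not the method any particular company or court uses.

```python
# Sketch: flag verbatim "regurgitation" by finding long word-for-word
# overlaps between a model's output and a copyrighted source text.
# Both sample strings are invented stand-ins for real documents.

def shared_ngrams(source: str, output: str, n: int = 8) -> set[str]:
    """Return every run of n consecutive words that appears in both texts."""
    def ngrams(text: str) -> set[str]:
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)

source_text = "it was the best of times it was the worst of times it was the age of wisdom"
model_output = "as the saying goes it was the best of times it was the worst of times"

overlaps = shared_ngrams(source_text, model_output)
print(f"{len(overlaps)} overlapping 8-word passages")  # any hit is suspicious
```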
What’s next for Australia?
It is unclear whether Australia will respond to the AI threat through the courts, as the UK and US have, or opt for an EU-style reform-based approach. What we do know is that in December 2023 the Attorney-General’s Department established the Copyright and AI Reference Group (CAIRG) to consider and (hopefully) respond to the emerging challenges at the intersection of copyright and AI. So far, however, the CAIRG has yet to propose a single amendment.