The Internet Search Engine Services Online Safety Code is another tranche of policy that bars young people from accessing ‘harmful content’, this time through search engines. It is unlikely to succeed in its stated ambition of protecting children, and it will certainly cause harmful side-effects.
What this means
From 27 December 2025, search engines operating in Australia will be required to set the default search state for logged-out users to ‘safe’, and to verify that users are over 18 years old before allowing them to turn ‘safe’ search off.
This means the default state for search, whether you’re in incognito mode or not, will be child-safe settings. Users will need to be logged in to view results that the search engine company has flagged as potentially harmful.
‘Search engines’ here covers any technology, artificial intelligence or otherwise, that returns search results in response to input queries.
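As a minimal sketch of the gating behaviour the Code requires, the rules above reduce to a simple decision: logged-out (including incognito) sessions always get safe results, and only an age-verified, logged-in adult can opt out. The names and structure below are ours for illustration, not from the Code or any search engine’s actual API.

```python
# Hypothetical sketch of the Code's safe-search gating logic.
# All names here are illustrative assumptions, not a real API.

from dataclasses import dataclass


@dataclass
class User:
    logged_in: bool = False
    age_verified_18_plus: bool = False
    safe_search_disabled: bool = False  # user preference, only honoured if verified


def effective_safe_search(user: User) -> bool:
    """Return True if only 'safe' results may be served to this user."""
    # Logged-out sessions (incognito or otherwise) always default to safe.
    if not user.logged_in:
        return True
    # A logged-in user may only turn safe search off after age verification.
    if user.safe_search_disabled and user.age_verified_18_plus:
        return False
    return True
```

Under this reading, a preference to disable safe search is simply ignored until the user has both logged in and proven their age, which is precisely why the scheme forces identification on anyone who wants an unfiltered search experience.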
Why this is harmful
The Internet Search Engine Services Online Safety Code, like the Social Media Minimum Age Bans, is a regulation that will not merely fail to reduce online harms: it will increase the harms facing young and adult Australians alike.
When Anika Wells forces people to prove their age to access a normal search experience, she jeopardises everyone’s online safety and hurts our access to information, our privacy, and the protection of our data.
Privacy invasion
Any privacy invasion, including age-verification for search, must be demonstrably necessary and effective. However, there is no evidence that age verification is either of these things.
We have already seen in the social media ban the privacy problems of using biometric or government-issued ID to verify age online. Unlike passwords, biometric data like facial scans cannot be revoked or reissued once compromised.
Blocking non-harmful content
When we rely on search engines to determine what is “harmful content”, and threaten them with large fines if they publish any, they are incentivised to take a maximalist approach and censor non-harmful content too. This weakens everyone’s ability to use the internet to search and learn - cultural, medical, and socially relevant information will be hidden from us unless we submit to pervasive surveillance. Google and Bing image searches already do not return Courbet’s famous painting “L’origine du monde” when you search for it. It is not harmful, but part of the world’s art history.
Age verification doesn’t work
Age verification systems fail to meaningfully restrict young people’s access to harmful content, instead pushing them to use VPNs or unsafe alternatives.
Age verification does not reduce the consumption of pornography by young people. Instead, it creates significant risks to individual privacy, increases the likelihood of data exploitation, and harms young people. While regulators have identified multiple methods of verification, each has critical flaws that compromise privacy, allow easy circumvention, or both.
The UK’s Online Safety Act is failing in real time: widespread mockery, VPN usage up 6,430%, and half a million people petitioning for repeal. Age-gating simply doesn’t work, and Australia won’t be the first success story.
Where we should invest
Instead of masking the problems of search engines with an age ban, the Online Safety Codes should focus on stopping internet companies from profiling and monetising Australian internet users.
We need our government to act in the interests of Australians, rather than Big Tech. This means regulating the opaque algorithms that search corporations use to target advertising at us, rather than increasing the amount of our information that they have.
Young people deserve real protection online - not surveillance, exclusion, and false promises of safety.