Why the online safety inquiry falls short (and why it matters)

It is no surprise that Big Tech has become deeply unpopular. From the Cambridge Analytica scandal and the Facebook Papers to the dark side of TikTok and Google misleading people, the public is becoming increasingly aware of the invasive data collection, exploitative practices and harmful business models of the social media platforms that have come to dominate our lives.

In Australia, the Coalition government has eagerly seized on this anti-Big Tech sentiment and sought to score political points by taking on an unpopular adversary. After all, voters love seeing Zuckerberg taken down a notch. Yet the government has been happy to promote itself as “tough on Big Tech” while quietly expanding its own ability to exploit digital technology for surveillance and censorship.

Under the banner of ‘online safety’, we have seen the Coalition government repeatedly propose internet regulation ostensibly about reducing online harms for vulnerable people, all the while pushing an agenda of moralism, monitoring, censorship and control. These proposals—including many provisions in the Online Safety Act and the draft Online Privacy Bill—show very little regard for the harmful consequences of undermining encryption, threatening our right to be anonymous, and increasing censorship. The result is a dysfunctional, and sometimes outright hostile, digital infrastructure that ultimately puts our collective safety at risk.

Having clocked that ‘cracking down’ on Big Tech is politically popular, the Coalition announced its Parliamentary Inquiry into Social Media and Online Safety in late 2021. In the same breath, it proposed the controversial and flawed Social Media (Anti-Trolling) Bill.

Privacy is woefully overlooked as an essential component of online safety

The massive influence of social media companies is deserving of scrutiny. But a safer internet must be a rights-respecting internet. Creating a safe digital future means protecting and expanding our right to privacy, digital security, and democratic participation. In our submission to the Online Safety Inquiry, Digital Rights Watch urged the Committee not to conflate surveillance with safety. We emphasised the need to address the underlying business models of the major social media platforms that create and exacerbate online harm, rather than merely focusing on surface-level symptoms. One powerful way to address online harm at the source is to create meaningful privacy regulation that restricts what companies can do with our personal information.

Despite emphasising that the algorithms used by social media platforms can, and do, harm users, the recommendations in the final report fail to meaningfully address these harms. The potent algorithms that amplify content and even manipulate our moods are only possible because these platforms extract vast amounts of personal information. Put simply, the power of these algorithms is fundamentally fuelled by a huge invasion of our privacy. Robust privacy and data protection regulation is one of the key regulatory tools we have to protect ourselves against the algorithmic harms the report is concerned with. We were disappointed to see that our right to privacy was overlooked again.

The sole privacy-related recommendation in the report calls for the “implementation of a mandatory requirement for all digital services with a social networking component to set default privacy and safety settings at their highest form for all users under 18 (eighteen) years of age.” On the face of it, this may sound like a good thing. What it signals to us, though, is a fairly rudimentary understanding of privacy, and a lack of the boldness that will be required if we ever want to meaningfully challenge the business models of social media companies.

As far as we’re concerned, the highest privacy settings should be the default for everyone using social media, regardless of age. It’s not enough to simply make privacy settings available; social media companies are well aware that most people never change them from the default. So we agree with the sentiment behind this recommendation.

But there are two fundamental flaws in this recommendation.

First, limiting this additional protection to children means social media companies must first identify who is and is not under 18 on their platforms. This brings us back to the same debate surrounding age verification, and the plethora of privacy and security risks that come with it. The same flawed logic appears in the proposed Online Privacy Bill, which would also require social media companies to identify children in order to offer them higher privacy protections.

The simpler and more effective approach? Require increased privacy protections for everyone, regardless of age. After all, the harms caused by privacy invasion don’t cease when you tick over to 18. 

Second, changing privacy settings on social media generally does nothing to mitigate how the platform itself collects, uses, and discloses your personal information. These settings are about managing peer-to-peer or ‘horizontal’ information sharing, such as whether you allow anyone on the network to see your photos, or limit them to just your friends. Making these settings the highest by default is a positive step, and may reduce interpersonal harms that can occur when people accidentally or unwittingly make their personal details available to strangers or malicious actors online. But these settings do nothing for the larger, systemic issues tied up in the ‘vertical’ information sharing—the pervasive data collection and processing from platforms themselves. 
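To make that distinction concrete, here is a minimal, purely hypothetical sketch (in Python, and not modelled on any real platform’s code) of the point above: user-facing settings gate ‘horizontal’ visibility between users, while the platform’s own ‘vertical’ data collection proceeds regardless of what those settings say. All names and structures here are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # 'Horizontal' controls: who among other users can see your content.
    photos_visible_to: str = "friends"  # highest-privacy default, not "everyone"
    profile_searchable: bool = False

@dataclass
class Platform:
    event_log: list = field(default_factory=list)

    def can_view_photos(self, viewer_is_friend: bool, settings: PrivacySettings) -> bool:
        # Horizontal sharing respects the user's chosen settings...
        return settings.photos_visible_to == "everyone" or viewer_is_friend

    def record_activity(self, user_id: str, action: str) -> None:
        # ...but 'vertical' collection ignores them entirely: every action is
        # still logged by the platform for profiling, whatever the settings say.
        self.event_log.append((user_id, action))

platform = Platform()
settings = PrivacySettings()  # highest-privacy defaults

# A stranger cannot see the photos: the horizontal protection works.
print(platform.can_view_photos(viewer_is_friend=False, settings=settings))  # False

# Yet the platform still records every interaction: vertical collection is untouched.
platform.record_activity("user_1", "viewed_ad")
print(platform.event_log)  # [('user_1', 'viewed_ad')]
```

The design point of the sketch is that no value of PrivacySettings is ever consulted by record_activity, which is exactly why defaulting those settings to their highest level cannot, on its own, rein in platform-side data extraction.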

Rather than challenging the status quo, this shallow approach to privacy actually entrenches platform power by accepting the terms the platforms have presented us with. It provides an illusion of individual privacy without challenging the way privacy is undermined at a collective level. Overall, we were disappointed that the inquiry acknowledged but did not adequately explore how the business models of these platforms incentivise online harm. Without a proper understanding of the causes of online harms, the proposed solutions will always be inadequate and sometimes actively harmful.

But perhaps the most concerning element of the final report is that the government continues to frame online security and privacy as being in opposition to online safety, especially for children. This is a dangerous false dichotomy that risks undermining the very tools we need to create an internet that is not only safe, but vibrant, fun and supportive of our democracy.

Online safety need not be about virtuous fearmongering—it can and should be about promoting the autonomy, privacy and security of individuals and communities, against both Big Tech and the surveillance state.

Privacy is lacking, so what else is in the report?

Most of the remaining recommendations focus on increasing the power, responsibility and resources of the eSafety Commissioner, and on calling for further inquiries and reviews: into the role of social media in democratic health and social cohesion, into technology-facilitated abuse, and into the use of algorithms on digital platforms. The Committee also recommends that the eSafety Commissioner, the Department of Infrastructure, Transport, Regional Development and Communications, and the Department of Home Affairs “examine the need for potential regulation of end-to-end encryption technology in the context of harm prevention.” We already know what Australia’s law enforcement and intelligence agencies think about encryption (spoiler: they want to be able to spy on people more easily).

We were pleased to see that the report calls for increased transparency requirements for social media platforms. Transparency will never be enough on its own, but compelling platforms to show their cards is an important step toward holding them accountable. The report also proposes a “Digital Safety Review”, in acknowledgement of the broad, complex and sometimes conflicting range of laws regulating social media and the internet.

What happens now?

The Committee’s recommendations will now go to the federal government, which is looking to push its proposed ‘anti-trolling’ bill before the election, and has just announced plans to introduce legislation to “combat harmful disinformation and misinformation online”…provided they are elected.

Taken together, the questionable timing of this inquiry, its unreasonably short timeframe, broad terms of reference and relatively lacklustre outcome suggest that this was never about meaningfully grappling with the big, systemic issues wrapped up in online safety and social media. Instead, it was about attempting to manufacture the social licence to pass the deceptively named “anti-trolling” bill (happily, this has backfired), to enhance the Coalition’s appearance of being tough on Big Tech, and to promote a conservative view of what safety in the digital age means.