The Online Safety Bill was introduced in December with the aim to “improve and promote Australia’s online safety.”
The Bill contains six key priority areas:
- A cyber-bullying scheme, to remove material that is harmful to children,
- An adult cyber-abuse scheme, to remove material that seriously harms adults,
- An image-based abuse scheme, to remove intimate images that have been shared without consent,
- Basic online safety expectations (BOSE), allowing the eSafety Commissioner to hold services accountable,
- An online content scheme, for the removal of “harmful” material through take-down powers,
- An abhorrent violent material blocking scheme, to block websites hosting abhorrent violent material.
The first three areas focus on creating pathways of redress for children and adults suffering online bullying, abuse, and the non-consensual sharing of intimate images. These schemes are important, as such online harms can translate into significant real-life harm. While we believe there is some small room for improvement in these areas, they are not the parts of the Bill we are most concerned with.
The trouble is, alongside these important objectives, the Bill introduces provisions for powers that are likely to undermine digital rights and exacerbate harm for vulnerable groups. Let’s break it down…
The Online Content Scheme
Part 9 of the Bill gives the eSafety Commissioner expanded take-down powers over content on a ‘social media service,’ a ‘relevant electronic service,’ or a ‘designated internet service’ (broadly speaking, that covers internet platforms and messaging services). The Commissioner can issue removal notices for Class 1 and Class 2 material, as well as app removal notices and link deletion notices.
What is Class 1 and Class 2 Material?
The Online Safety Bill relies heavily on the National Classification Code to determine which content may be issued with a removal notice. The classification system in Australia has been criticised for being outdated and overly broad. Using it as the basis of this scheme can be seen as an application of moral panic to online spaces.
Class 1 aligns with content that would be deemed “Refused Classification” (RC). This includes content that deals with sex or “revolting or abhorrent phenomena” in a way that offends against the standards of “morality, decency and propriety generally accepted by reasonable adults.”
Class 2 material includes content that is likely to be classified as X18+ or R18+. This includes non-violent sexual activity, or anything that is “unsuitable for a minor to see.”
Taken together, Class 1 and Class 2 material captures all sexual content, violent or not. So if you watch porn, visit kinky websites, or maintain subscriptions to adult content, be advised that it is within the remit of the eSafety Commissioner to tell you what you can and cannot see.
Why are we concerned?
This scheme is likely to cause significant harm to those who work in the sex industry, including sex workers, pornography creators, online sex-positive educators, and activists. This is especially concerning because the pandemic forced many of them to move their work online last year, so the scheme risks undermining the livelihood, and ultimately the safety, of sex workers. Moreover, the controversial Stop Enabling Sex Traffickers Act (SESTA) and Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) legislation in the US has already shown that when sex workers are forced offline they are often pushed into unsafe working environments, creating direct harm.
The scheme also lacks an adequate appeals mechanism for individuals and companies who receive removal notices. While Section 220 of the Bill does allow people to challenge decisions through the Administrative Appeals Tribunal (AAT), there should be additional opportunities to challenge take-down notices without having to go through the court system. By the time someone completes the AAT process, the harm (and potential loss of income) associated with the removal has already occurred. The Commissioner should be able to provide an effective remedy, including the ability to reinstate content.
The Abhorrent Violent Material Blocking Scheme
This scheme is framed by the government as a response to the tragic mass shooting in Christchurch, which was live-streamed and went viral online. Part 8 of the Bill gives the eSafety Commissioner the power to issue a blocking request or notice to Internet Service Providers (ISPs) to block domain names, URLs, or IP addresses that provide access to such material. The Commissioner does not need to observe any requirements of procedural fairness for these requests. Under Section 100 of the Bill, blocking notices cannot last longer than 3 months; however, there is no limit on how many times the Commissioner can renew a blocking notice.
Why are we concerned?
While there is no doubt that we need mechanisms to deal with viral violent videos/content online and the harm they cause, the proposed scheme is overly simplistic and overlooks complex underlying issues.
Section 104 of the Bill places some limits on this power by exempting certain material, such as material necessary for conducting scientific, medical, academic or historical research, or material relating to a news report that is in the public interest. While we welcome these limitations, there remains a wide scope of discretion for the eSafety Commissioner to determine what is indeed in the public interest.
In some circumstances, violence captured and shared online can be of vital importance to hold those in power accountable, to shine a light on otherwise hidden human rights violations, and to catalyse social change. The virality of the video of the murder of George Floyd by a police officer in the US played a key role in the Black Lives Matter movement in 2020. Closer to home, a viral video of a NSW Police officer using excessive force against an Indigenous teenager prompted important discussions about racism in Australian law enforcement.
Simply blocking people from seeing violent material does not solve the underlying issues causing the violence in the first place. It can also push the continuation of violence behind closed doors, out of sight of those who might seek accountability. It is essential that this scheme not be used to hide state violence and abuses of human rights.
We are also concerned that there are no safeguards or limitations in place under Section 100, with regard to the renewal of blocking notices. As documented by our friends at Access Now, internet blocking is a serious human rights issue that has been abused as a mechanism to suppress and limit dissent and democratic debate around the world. We must tread very carefully when entering into this domain, to ensure that sites are only blocked in very limited circumstances, and never in a way that infringes upon the rights and freedoms guaranteed by international law.
Basic Online Safety Expectations
Part 4 of the Bill gives the Minister power to determine ‘basic online safety expectations’ for ‘social media services’, ‘relevant electronic services’, and ‘designated internet services.’
Section 46 of the Bill requires the expectations to specify that the service should:
- Minimise cyber-bullying or abuse material targeted at a child or adult, non-consensual intimate images, Class 1 material, and abhorrent violent material,
- Take reasonable steps to prevent children from accessing Class 2 material,
- Provide ways for people to make complaints about online content.
Why are we concerned?
When drafted so broadly, these expectations incentivise proactive monitoring and removal of content that falls under Class 1 and Class 2. Given the immense scale of online content, tech companies generally turn to automated processes (such as AI) to determine which content is or isn’t harmful, despite evidence that content moderation algorithms are inconsistent at identifying content correctly. This kind of content moderation has been shown to disproportionately remove certain kinds of content, penalising Black, Indigenous, fat, and LGBTQ+ people. As experience with the controversial SESTA/FOSTA in the US demonstrated, some platforms will default to blanket removal of all sexual content to avoid penalty rather than deal with the harder task of determining which content is actually harmful.
Automated processes have also proven less effective at detecting text-based content such as hate speech than at detecting visual content, and less effective still at identifying context-dependent forms of content like cyberbullying or abuse material. In 2018, Zuckerberg admitted it is “easier to detect a nipple than hate speech with AI.” We need to ensure that if automated decision-making is used for content moderation to comply with the provisions of this Bill, it is accompanied by requirements to use open source tools, transparent standards, and appropriate appeals mechanisms for cases of false positives.
The requirement under Section 46(d) of the Bill to take ‘reasonable steps’ to prevent children from accessing Class 2 content also raises concerns around the potential technological “solutions” that may come as a result. For example, you may remember the proposal from the Department of Home Affairs to use facial recognition technology for age verification to access porn sites. This would create significant privacy and data protection issues.
Information Gathering Powers, Investigative Powers, and Encryption
Part 13 provides that the Commissioner may obtain information about the identity of an end-user of a ‘social media service’, a ‘relevant electronic service’, or a ‘designated internet service.’ Part 14 also provides the Commissioner with investigative powers, including a requirement that a person provide “any documents in the possession of the person that may contain information relevant.”
Why are we concerned?
Given that ‘relevant electronic service’ includes email, instant messaging, SMS and chat, without mention of end-to-end encrypted messaging services, it is possible that the Commissioner’s information gathering and investigative powers would extend to encrypted services. We need additional clarification of the scope of these powers, and a clear indication in Section 194 of the Bill that a provider is not expected to comply with a notice if doing so would require them to decrypt private communications channels or build systemic weaknesses into their services.
The eSafety Commissioner has already argued against end-to-end encryption, saying that it “will make investigations into online child sexual abuse more difficult.” The claim that encryption exacerbates harm to children is unproven, and it strengthens a regressive surveillance agenda at the expense of our digital security. It is essential that compliance with this Bill does not create a way to compel providers to restrict or weaken their use of encryption across their platforms.
Overall, the Bill prompts overarching questions about how much power an unelected government official should have over what adults can and cannot access online. It also rests on the flawed (and outdated) assumption that sexual content and sex work are inherently harmful.
While the goal of minimising online harm to children is vital to our communities, we must acknowledge that policing the internet in such broad and simplistic ways will not guarantee us safety, and will have overbroad and lasting impacts across many different spaces.
Changes we want to see
- A sunset clause: We need the ability to review how and if these powers are working well, and decide if the legislation should be renewed or revisited. A sunset clause ensures such a process takes place.
- A multi-stakeholder oversight board: to review decisions made to remove and block content. The board should include sex workers and activists, and review decisions on a regular (at least annual) basis.
- Transparency over the categories of content take-downs, complaints, and blocking notices issued, including the reasoning. This will allow for public and Parliamentary scrutiny over the ultimate scope and impact of the Bill.
- A meaningful appeals process, so people can challenge removal notices in a timely manner, without having to go through the court system.
- Explicit assurance that ISPs and digital platforms will not be expected to weaken or undermine encryption in any way to comply with any parts of this Bill.