All this talk of protecting women online reminds me of a quote from renowned feminist and author Mona Eltahawy: “I don’t want to be protected. I want patriarchy to stop protecting men who are violent towards women.” I doubt she would approve of the measures in the new Online Safety Bill.
As 2021 is the year the Australian federal government has decided to start caring about women, or at least to appear that way, there has been quite a lot of talk about protecting women online. From legislating online safety and renewed interest in regulating access to porn, to discussions about bullying, abuse and reducing anonymity online—there is no shortage of ideas on how to keep women safe in digital spaces.
There is no denying that gendered abuse, harassment, and violence occur online, and these are issues that deserve attention. But we cannot just wave through tech-based solutions and despotic laws because they’ve been nicely packaged as measures to protect women.
The throughline of these proposals is the assumption that increasing surveillance, monitoring, and control will reduce gendered harm online. We know that increasing police powers in the name of public safety often puts marginalised communities at more risk. The same holds true for our digital spaces. More data extraction, collection, and surveillance by authorities will inevitably put marginalised communities at risk—including women.
Let’s start with the Online Safety Bill, which has been repeatedly framed as a “women’s issue”. In fact, Communications Minister Paul Fletcher wrote an entire op-ed criticising the Greens for being critical of the bill, with an argument as sophisticated as: well, I guess you must just hate women then? It’s exceptionally paternalistic to declare that this is for women, and then go on to ignore valid concerns from community groups and civil society about how many parts of the bill stand to harm sex workers, activists, and the LGBTQ+ community (many of whom are, shockingly, also women).
There are indeed some provisions in the bill which do stand to benefit women, such as strengthening a pre-existing scheme to take down intimate images shared without consent. However, it also makes way for active surveillance and removal of online content if it falls foul of Australia’s outdated National Classification Code. That will affect different women differently—and it will likely be especially difficult for sex workers, potentially putting their safety at risk, as well as for women who consume erotic content for education or pleasure. Let’s be clear: increased surveillance and the censorship that follows does not protect all women. You can read more about Digital Rights Watch’s concerns about the Online Safety Bill here.
Discussions about keeping women safe online tend to be contradictory when it comes to this government. In March, a number of female MPs spoke up about the extent of abuse and sexist trolling they experience online. This abhorrent behaviour is indeed a gendered issue that should not be excused. Notably, two days following that report, Andrew Laming MP issued an apology to the women he harassed online, with Prime Minister Scott Morrison indicating he’d take a “zero tolerance” approach to the behaviour. Given that Laming kept his position as MP, he must have found a way to tolerate it after all.
Hearing personal experiences of online abuse directly from women may have failed to prompt meaningful change in Parliament’s culture of misogyny. But it did bolster a conversation about the possibility of requiring 100 points of ID to verify your identity before using social media. There are serious, negative repercussions of such a policy, including the security risks of providing official identity documents to social media platforms, and the consequences of effectively removing the ability to be anonymous. The proposition arose from a 2021 report following an Inquiry into family, domestic and sexual violence, yet once again, such measures are actually likely to harm many women and other vulnerable groups—the exact opposite of the purported objective. It’s almost as though online abuse of women is an issue used by politicians to further an ideological objective, rather than to develop evidence-based policy focused on harm reduction.
Too often, conservative politicians link sexual content with the online harms experienced by women, which suits their political purposes more than a nuanced understanding and measured policy would. Following the trend, a Liberal backbencher justified regulating access to pornography by way of age verification by specifically relating porn to domestic violence. The proposal to use technology to verify the age of porn viewers isn’t new; you might remember that while Peter Dutton was Minister for Home Affairs, he suggested the use of facial recognition for the purposes of age verification. This proposal has been consistently criticised by digital rights advocates for attempting to slap a technical “solution” onto a societal issue, and in doing so creating an invasive surveillance-based system that raises a myriad of privacy and security concerns. It betrays conservative politicians’ preference for prudishness over women’s safety.
While we’re listing terrible tech-based ideas to “protect women”, let’s not forget the proposal from the NSW Police Commissioner for an app to verify sexual consent—an idea so blatantly counterproductive to actually protecting women that the Commissioner quickly walked back his comments. Yet, as highlighted by this blog post on the troubles of techno-solutionism, even when these ideas are obviously terrible, we would be remiss to just brush them aside when people in power continue to suggest them.
Given that it’s 2021, it pains me to even have to say it, but we need to question which women these people in power are talking about protecting. If your experience of police is that they are helpful rather than a threat, it might not occur to you that they often use their powers against vulnerable people rather than to protect them. Similarly, if you feel that being watched by surveillance agencies keeps you safer, you might be prepared to ignore all the ways in which this idea of safety comes at the cost of others. When they say this is for ‘women’, it too often means it’s for white, wealthy, heterosexual, cisgender women. Intersectionality isn’t high on the agenda of those suggesting mechanisms to protect women by way of bolstering monitoring, surveillance, and control of the spaces they occupy.
Two concerning things arise when we look at this collection of very recent examples of how this government proposes to “protect women”. One, they all rely on some form of control/surveillance, and two, there is a particular undertone of moral panic about sexual content. Taken together, it is clear that these proposals are less about minimising harm to women online, and more about pushing a regressive and punitive political agenda. If we want tech policy that genuinely benefits women, it cannot be grounded in control and surveillance.
One argument often made in favour of surveillance and data collection is that it facilitates investigations into gendered and sexual violence against women. Yet there are numerous examples of police abusing their powers, including a pattern of police protecting abusers within the police force itself, as shown by an investigation in 2020. If you need a recent example of how institutions are not built to serve and protect women, look no further than the police “accidentally” handing over 11 years’ worth of phone data to a woman’s abuser. Or the time a police officer hacked into a computer system, leaked the address of a victim-survivor to her violent former partner, and then had his conviction overturned. Why would women want more powers for police to collect and aggregate data about their lives when it is so often used to harm them?
If we want to give women the chance to protect themselves from technology-facilitated harm, there are many other policies the government could pursue. For example, authorities could crack down on the use of stalkerware, which is alarmingly still on the rise.
Stalkerware is a form of malware, generally installed on a victim’s device, which relays information back to the person who installed it, such as location, photos, texts, and call history. A 2019 study of the Australian landscape highlighted the acute risk of spyware in Australia for women in particular, and recommended that the Privacy Act be strengthened to better protect people’s personal information. Instead of monitoring women’s online activity, police resources could be used to create tools to help women identify whether stalkerware might have been used on them, and to provide additional funding and technical training for services that assist women in dealing with technology-facilitated abuse, such as those offered by WESNET.
As highlighted by online security expert Eva Galperin, “full access to a person’s phone is the next best thing to full access to a person’s mind”. This is why full access or ‘backdoors’ are so alluring for law enforcement, but also for those who seek to stalk, harass, or otherwise abuse women. Notably, stalkerware is often marketed as a way for parents to keep tabs on their children. Until recently, antivirus companies did not recognise stalkerware as malicious. Thankfully that is changing, due to the work of a coalition of human rights activists and antivirus companies. It’s imperative that we start building public policy that doesn’t marginalise, but rather empowers, vulnerable groups, including women.
Sanitising the internet of “inappropriate” content, removing the ability to be anonymous, undermining encryption, and relying on invasive data extraction and harmful technologies such as facial recognition will not change the underlying culture which enables gendered and sexual violence against women. It will, however, undermine our collective digital rights.
As we take stock of the current policy landscape, we look back on the time that Mona Eltahawy appeared on Australian television and made a bold, impassioned call to hold men who harm women accountable for their actions. The episode of Q&A was later removed from iView by the ABC because its coarse language was deemed “confronting and offensive”. The erasure of women’s voices—often angry, indignant, critical, or simply provocative—is a predictable and sad outcome of these draconian laws.
Samantha Floreani, Digital Rights Watch Program Lead