Techno-solutionism: very few things actually need to be an app.

Last week the Twittersphere and news outlets lit up with reports that the NSW Police Commissioner had suggested a mobile application be created to verify consent between two adults before they have sex.

If you follow us at DRW, you probably see this for what it is: an objectively terrible idea. So why waste breath on it? Well, when bad ideas come from people in power who have, at least to some extent, the resources to execute them, we tend to worry. It would be flippant to roll our eyes and move on. At the same time, it’s a frustrating cycle, constantly arguing against society’s love of techno-solutionism.

Techno-solutionism gained notoriety as a term around 2013, with Evgeny Morozov’s book To Save Everything, Click Here: The Folly of Technological Solutionism. It describes our urge to jump on technological solutions as a quick and flawless way to solve complex real-world problems. It seems beautiful in principle to solve complicated issues with data, an app, monitoring, remote portals… but technology isn’t magic. It’s made and programmed by humans, subject to the same design flaws, the same prejudice and bias, as the world that produces it. Unfortunately, most complex real-world problems require complex real-world solutions.

And that’s the core of the issue. Very few things need to be an app.

Amid fiery criticism, the Commissioner immediately began walking back his statement, clarifying that the app “isn’t meant to be an evidentiary standard” to adjudicate rape allegations, but a “communications tool” between two people. And “it’s not a perfect solution,” but perhaps a part of one? He wasn’t sure. Of course, the uncertainty didn’t prevent him, a very powerful man, from putting this thought out into the world.

This isn’t a post about how the app would obviously fail to actually record consent (an evolving and ongoing concept as far as sex goes), or how abusers could use it to clear their own consciences, manufacture proof of innocence, and dodge responsibility. This is about the overarching issue of turning to technology for a quick and easy fix, without a robust understanding of, and debate about, the problem we are trying to solve. Root-cause analysis, some of us would call it. And in stark contrast to the Commissioner, a quick media search on the topic will show you just how much effort and energy women spent last week enumerating the reasons why this won’t work.

Techno-solutionism is problematic because its roots run deep, and it’s incredibly hard to weed out of public discourse once it has been introduced. It’s often politically appealing to tackle an issue the electorate is buzzing about by “investing” money in it. Look at the millions we poured into this consultancy to build a fancy new thing that will solve our problems! Remember the money and time spent on the CovidSafe app? Or the harmful and inhumane automated welfare system now infamously known as robodebt? Across government, money is thrown at issues in completely irrational attempts to disrupt reality. Yet these technological solutions are too often designed around simplified use cases, not complex abuse cases. Without a clear grounding in reality, they cause harm to the very people they were naively intended to protect. In such a world, citizens become ever more visible to their governments, but not the other way around. In 2019, the UN Special Rapporteur on extreme poverty warned of the rise of a digital welfare dystopia, and that technology companies operate in a rights-free zone when developing technological solutions.

Many of these techno-solutionist technologies and systems used to be relegated to fiction, archived across pop culture, but now they are coming to life at the behest of law enforcement and intelligence agencies. The best-known fictional example may be Philip K. Dick’s The Minority Report, in which three mutants foresee all crime before it happens. We are also reminded of the episode of Futurama in which the Oracle robot predicted crime (very badly). Fiction no more: predictive policing algorithms are alive and well, and, just as one might expect, they are biased and racist, compounding harm to already marginalised communities. Yet they continue to be peddled, for all the reasons above: they take a complex problem and offer what seems like an easy fix, they tick all the right political boxes of being seen to be doing something about an issue, and they outsource liability and responsibility, sparing those in power from grappling with the complexity.

[Image: the “Law and Oracle” episode of Futurama]

This is precisely the case with the NSW Police Commissioner: a man powerless in the face of widespread gendered and sexual violence, and the complexity of protecting his community. We can empathise, but we have to stop letting those in power bully us with technology.

We need robust conversations about democratic participation, education, fairness, and freedom. What we don’t need is another damn app.

Lucie Krahulcova, Digital Rights Watch Partnerships and Programme Director