The power to misinform begins with an invasion of privacy

Concern about disinformation is higher than ever, especially as Australia faces an election. The allegations of Russian interference in the 2016 US election were a watershed moment in understanding how disinformation campaigns can be waged on social media. COVID-19 and vaccine disinformation have further increased attention on the issue, and the pressure on social media platforms to address it.

Facebook (now Meta) responded to the 2016 US election with a suite of changes to its advertising platforms. It added verification features to ensure that people running political ads live in the country they are targeting, as well as transparency features so that journalists and researchers can access a library of the political ads being run by candidates and lobby groups.

For this Australian election, Meta has partnered with RMIT’s FactLab to fact-check content being shared on the platform. Content deemed to be untrue or misleading will have a warning label attached to it, with a link to information debunking the claim. Meta has also said it will reduce the reach of content that has been flagged as disinformation across its platforms.

While these efforts to increase the transparency of ads on social media are welcome, on their own they are not enough. A global game of disinformation whack-a-mole is hardly a sustainable solution. To truly tackle online disinformation we need to understand how it became a problem in the first place, and for that we need to examine how these platforms work.

Lies have always been a part of election campaigns, and to some extent they probably always will be. But weaponised disinformation as we understand it today is a new phenomenon, unique to modern digital platforms. So what is it about these platforms that turns run-of-the-mill election falsehoods and distortions into weapons of cyber-war?

The power of digital platforms to target content at the audiences where it will resonate most creates new opportunities for those who seek to spread disinformation and foment division and hate. The amplification algorithms that decide what we see in our newsfeeds are trained on huge amounts of personal information extracted from our every move online, and increasingly offline too.

By tracking everything we do, digital platforms are able to tailor our experience. By knowing us intimately they can show us only content that is relevant to us. They promise us our own personal internet, curated just-so, without any of the tedious effort that manual curation requires. But in deciding what information we see, and how many people see what we post, these algorithms, and the companies that control them, have enormous power to influence our moods, our actions and, critically, our democracy.

In 2014 news broke about a controversial experiment that Facebook had conducted on almost 700,000 of its users. For one week in January 2012, the company skewed what those users saw when they logged in, to test whether it could manipulate their emotions. Some people were shown happier posts, others sadder ones, and the manipulated users turned out to be more likely to post correspondingly positive or negative content by the end of the week. There was an uproar at the news that Facebook was willing to toy with users’ emotions, but the reality is that emotional manipulation is a feature, not a bug, of its algorithm. The same study found that people shown less emotional newsfeeds are less likely to post anything at all, an outcome that doesn’t suit Meta’s business model. And in a bombshell leak of internal documents last year we learned that Meta had been deliberately amplifying posts that received “angry” reactions because they drove more engagement. In 2019 its own research team found that these posts were “disproportionately likely to include misinformation, toxicity and low-quality news”.
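To make that mechanism concrete, here is a minimal sketch, in Python, of how an engagement-weighted ranking of this kind could work. The weights, field names and structure are illustrative assumptions, not Meta’s actual code; the point is simply that any formula that counts “angry” reactions more heavily than ordinary likes will, by construction, push emotionally charged posts to the top of the feed.

```python
# Illustrative sketch only: hypothetical weights and post structure,
# not Meta's actual ranking system.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    angry_reactions: int
    shares: int


# Hypothetical weights: emotive "angry" reactions count for more than likes,
# so posts that provoke anger score higher and surface first.
WEIGHTS = {"likes": 1.0, "angry_reactions": 5.0, "shares": 3.0}


def engagement_score(post: Post) -> float:
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["angry_reactions"] * post.angry_reactions
            + WEIGHTS["shares"] * post.shares)


def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-scoring (most emotionally engaging) posts are shown first.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("Calm local news update", likes=120, angry_reactions=2, shares=10),
        Post("Outrage-bait conspiracy claim", likes=30, angry_reactions=60, shares=25),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):7.1f}  {post.text}")
```

Under these assumed weights the outrage-bait post outranks the calmer one despite attracting far fewer likes, which is the dynamic the leaked research described.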

Of course this problem goes well beyond Meta. The ABC’s Four Corners investigated TikTok’s recommendation algorithm last year and found that it was promoting eating disorder content to those most at risk, echoing leaked documents from Facebook which found the company makes body image issues worse for one in three teenage girls. Twitter’s research into its own amplification algorithm unearthed a political bias, with right-wing political content amplified more than left-wing political content.

The power of amplification algorithms isn’t limited to social media platforms either. Search engine ranking may seem relatively innocuous, but research into what has been dubbed the “search engine manipulation effect” found that simply changing the order in which search results are shown “can shift the voting preferences of undecided voters by 20% or more”, while keeping the manipulation hidden from users.

Amplification algorithms are the fuel on the fire of disinformation. They are only possible because they invade our privacy, collecting vast amounts of personal information that facilitates the microtargeting so valuable to advertisers and spreaders of disinformation alike. Limiting, through privacy reform, what personal information companies can collect is one of the best ways to address not only disinformation but also the other harms these amplification algorithms can cause.

Strengthening privacy protections is a crucial step towards reining in the power of digital platforms. Whoever wins this upcoming election will inherit a review of the Privacy Act that is now six months overdue. The review was launched in December 2019 in response to the ACCC’s inquiry into digital platforms, which made a number of recommendations for updating the Privacy Act to reflect changes in digital technologies. These include updating the definition of “personal information” to cover online identifiers such as your IP address, and ensuring that personal data is only collected with appropriate consent. While the government moved quickly to implement other ACCC recommendations, such as the News Media Bargaining Code, it opted for a full review of the Privacy Act. Of course, taking the time to get such significant reforms right matters, but while we wait Australians are left with remarkably weak privacy protections.

Enshrining our right to privacy in law and placing significant limits on how digital platforms can collect and process our personal information will blunt their ability to microtarget content at individual users. This won’t do away with disinformation entirely, of course, but it will reduce its potency. Australians have a reasonable expectation of privacy, and it’s time those community expectations were reflected in law.

It may still prove necessary to find other ways to regulate online platforms and their amplification algorithms. After WhatsApp became the centre of disinformation campaigns in India, the platform limited the number of chats a user could forward a message to, an intervention that acknowledges the key role platform design and amplification play in the spread of disinformation. Regulatory action that limits virality would be a good step.
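As a rough illustration of how a design-level limit on virality works, here is a short, hypothetical Python sketch of a forward cap: each forward action delivers a message to no more than a fixed number of chats. The cap value and function are assumptions for illustration, not WhatsApp’s actual implementation.

```python
# Hypothetical sketch of a forward cap; the limit and function are invented
# for illustration, not taken from WhatsApp's code.
FORWARD_LIMIT = 5  # maximum number of chats per forward action


def forward(message_id: str, requested_chats: list[str]) -> list[str]:
    """Deliver a forwarded message to at most FORWARD_LIMIT chats."""
    allowed = requested_chats[:FORWARD_LIMIT]
    for chat in allowed:
        print(f"forwarding {message_id} to {chat}")
    return allowed


if __name__ == "__main__":
    chats = [f"chat-{i}" for i in range(8)]
    delivered = forward("msg-123", chats)
    print(f"Delivered to {len(delivered)} of {len(chats)} requested chats")
```

The mechanism is trivial, which is the point: a small change to platform design can meaningfully slow how fast any single message spreads.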

Proposed laws to regulate amplification algorithms are progressing in the EU, similar legislation has been proposed in the US, and Australian regulators should watch closely. The US Federal Trade Commission recently went as far as to order Weight Watchers to destroy any algorithms derived from data it had illegally collected from children.

Any attempt to regulate these platforms, or to limit the collection of the personal data on which their algorithms depend, will of course face forceful opposition from the companies themselves. But privacy reform is imperative if we are to fight disinformation, expand our rights and defend our democracy.