The Fight for Digital Rights in the Age of AI

The promise of AI innovation has captured the attention of the technology industry and its associated policymakers. While we wait for the development of a National AI Capability Plan in Australia, companies are left with a set of voluntary guardrails to navigate the risks associated with these technologies.

In a world in which the tech industry has shown blatant disregard for laws and policies, against a backdrop of rampant authoritarianism and a global political trend of sunsetting “responsible AI” initiatives, we need an enforceable legislative framework that prioritises privacy and protects us from harm.

What do we mean when we talk about Artificial Intelligence?

“AI” can be a slippery concept that has different meanings and purposes depending on who is using it and why. In its broadest sense, “AI is the ability of a computer system to perform tasks that would normally require human intelligence, such as learning, reasoning, and making decisions.”

When discussing AI, many people mean generative AI tools, such as ChatGPT, Claude, or Stable Diffusion; tools that take a prompt (“Show me a video of Will Smith being eaten by spaghetti”) and generate a reply. This can be text- or image-based and is often a convincing approximation of human creativity. Of course, many more products use AI. Simpler automated decision-making systems (ADMS), for example, are also commonly understood as AI. While technical definitions exist, for the purposes of policy discussion it’s fair to say that most people use “AI” to refer to a wide array of technical tools.

Another way to think about AI is to imagine its material components, that is:

  1. large data sets;
  2. computing power; 
  3. software (for facilitating access to these components above); and 
  4. the people with the skills who can build and manage the technology. 

Each of these components is governed by rules and by a social licence to operate. Data is not just an inert set of information; it is often extracted from people, sometimes without their knowledge or consent. Computing power is not just about servers and chips, but about who owns them, who manages cloud storage, and the supply chain for the various parts. The software we use to access these systems can be easy or difficult to use, and the market will be constrained by the level of competition. How people work on these projects, and where they might choose to do so, is relevant to what gets built. For each of these components there are policy considerations to take into account and levers at the government’s disposal to make change, though Australia’s capacity to influence them may vary.

For a thorough illustration of the material components of AI, take a look at Anatomy of an AI System, created by Kate Crawford and Vladan Joler. The project is an anatomical map of the supply chains, materials, data and labour behind the production of an Amazon Echo, highlighting the human and environmental costs of creating each machine.

The point of using an expansive definition of AI is that it reflects how the term is used in public debate, and it also opens up the potential for many different kinds of policy interventions.

What we’ve heard so far

Until relatively recently, Australia was one of many places around the world grappling with how to make AI safe and responsible. The EU passed the AI Act, which regulates the development and use of AI, particularly in high-risk cases. The Bletchley Declaration in the UK was framed around the understanding that AI technologies present “enormous global opportunities” to enhance human wellbeing, if designed and developed responsibly. The US took similar steps, with President Biden signing an executive order in October 2023 which aimed to promote a competitive AI industry and protect civil rights and equity.

This approach to AI regulation was, in many ways, the blueprint for Australian policy, and explains some of the motivations for the steps taken by the government to date. But it has also left us without an enforceable legislative framework. In a world in which the tech industry has shown disrespect for laws and policies, there are good reasons to think this approach to AI regulation is insufficient for the purposes of protecting individuals from the real risks associated with these technologies.

This has been brought into sharp relief with the election of Donald Trump. The Trump White House quickly reversed Biden’s executive order, and signed several more that abandoned commitments to the safe and responsible development of these technologies. The UK followed suit, backing away from similar previous commitments. Shortly after entering office, Trump announced Stargate, a US$500 billion joint venture led by SoftBank, Oracle and OpenAI to accelerate AI infrastructure development in the United States. All technology is political, and in an environment of growing authoritarianism, we remain gravely concerned that AI is developing without adequate governance and protections.

Australia’s position

Our federal government appears to be preparing Australia’s industry for heavy investment in AI, following similar moves internationally, including building new data centre infrastructure to help “drive the AI revolution”. According to Treasurer Jim Chalmers in his address to the Business Council of Australia, “Australia is among the top 5 global destinations for the data centre infrastructure AI depends on.” In a world where climate change continues to be a very real threat to our collective future, this bullish approach to building infrastructure known to have devastating effects on the environment, including outsized water usage, is cause for concern.

Australia has not yet introduced any specific legislation regulating the development or deployment of AI technologies. The government is aiming to deliver a National AI Capability Plan by the end of 2025. In the meantime, we have the AI ethics principles published in 2019, a Voluntary AI Safety Standard published by the National AI Centre, and a set of proposed mandatory guardrails.

The AI ethics principles provide a high-level framework for engaging with technology. They promote: human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability. The principles are reasonable and arguably serve businesses well in circumstances where human rights and fairness are already embedded in the business’s operations. They are also a good guide for businesses seeking to mitigate risks associated with poorly designed or badly functioning AI products.

Similarly, the Voluntary AI Safety Standard introduces 10 guardrails to help businesses engage with AI technologies safely and responsibly:

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance
  2. Establish and implement a risk management process to identify and mitigate risks
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content
  7. Establish processes for people impacted by AI systems to challenge use or outcomes
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
  9. Keep and maintain records to allow third parties to assess compliance with guardrails
  10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness

These guardrails are a useful tool for business (and government), but there is also a need for government to provide leadership on the consequences of non-compliance.

Digital Rights Watch made a submission regarding the proposed mandatory guardrails last October, in which we called for a harm reduction approach that prioritised privacy reform. It remains difficult to assess the utility and value of the guardrails without a comprehensive strategy for harm reduction and a clear approach to enforcement. Guardrails alone are not sufficient as an AI governance strategy, and are not consistent with a human rights approach, which imposes specific limits and requires enforceability.

Some kinds of AI may be considered so high-risk and so dangerous that they should be prohibited. The European Union AI Act, which to date represents a legislative high-water mark in the field globally, introduced risk categories for certain developments and applications of AI. Some AI applications do not have a place in our society, such as:

  • AI use cases that pose a high risk to people’s human rights, such as in healthcare, education, and policing;
  • AI systems that deploy dark patterns, that is “subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making,” or exploit vulnerable people;
  • AI systems that infer sensitive characteristics such as someone’s political opinions or sexual orientation; 
  • Real-time facial recognition software in public places; and
  • The weaponisation of AI.

We agree with the idea that certain kinds of AI systems should be prohibited because they are too high risk. Ultimately, however, many of these policies tend to focus on the effects of AI at the final point of impact, rather than further up the pipeline, at the site of collection. Stronger data protections, grounded in our fundamental right to privacy, can go a long way to protecting us from the harms of emerging AI technologies. Luckily, many of these proposals have already been drafted in the Privacy Act Review Report (2022), and agreed to in principle by our federal government; they just require the political will to bring them through parliament and enshrine them in legislation.

The alternative is to focus on impact, rather than risk, in framing legislation. This is the approach taken by South Korea, for example, with a ‘high impact’ rather than ‘high risk’ test. We prefer regulatory approaches that prioritise catching harm before it occurs, rather than assessing impact after the fact.

Resisting hype and AI mythologies

There is a prevailing narrative that AI is coming and there’s nothing we can do about it. In the Australian government’s policy for the responsible use of AI in government, there is a suggestion that AI is already here and everyone is using it: “Development and deployment of AI is accelerating. It already permeates institutions, infrastructure, products and services, with this transformation occurring across the economy and in government.” 

In a press conference announcing his appointment as Australia’s new Chief Scientist, Professor Tony Haymet said, “The government is already doing well, but, you know, AI is going to happen, no matter what we do.” At the same event, Minister Ed Husic claimed, “The big thing is to release the handbrake, and that handbrake is constituted by a lack of confidence and trust in technology. And I think we’ve got to work on that,” as if the lack of trust in AI were a problem to be solved, and not a fundamental flaw of the technology.

It’s important to resist the idea that the future is inevitable. Technological determinism, the idea that any new technology is inevitable, is a thought-terminating cliché that prevents meaningful discussion of the kinds of futures we want to create in our society. It forecloses questions about who might have a say in how new technology develops. New technologies are born and die every day (remember when we were told the Metaverse was the future of our work and social lives? Or NFTs? REMEMBER NFTs??) and we ought to get to decide, collectively, what makes the cut.

As we crest the AI Hype Cycle through the Peak of Inflated Expectations, the promise of AI-driven digital innovation has well and truly steamrolled through the tech industry; it is impossible to have a conversation about the future of technology without someone mentioning generative AI. This has had a significant impact on tech policy. Instead of making meaningful inroads into privacy and data governance, we are forced to map post-hoc guardrails onto a fast-moving industry of LLMs in search of a business model, fuelled by content that has been taken from the public and private domains without consent, credit or compensation. While institutions like the Tech Council call for a “pro-innovation regulatory environment”, we cannot let imagined benefits outweigh real-world, present harms.

The big business of generative AI requires data, and lots of it. The rampant collection of data, its processing, and the application of predictive models back onto the public damage our already fraught right to privacy. The effectiveness of this Big Data-enabled training is still open to critique, and the “Garbage In, Garbage Out” mantra that has applied to business analytics for many decades continues to ring true. Nevertheless, the promise of AI-fuelled innovation, and the hypothetical jobs that come with it, has well and truly captured the industry.

There have been AI winters in the past, and there may be another winter to come. In the meantime, we need robust regulation that protects us from the present and emerging harms proliferating from unguarded AI implementation across our public and private lives; regulation that is grounded in human rights principles and underpinned by privacy. 

AI & personal information

Many of the harms that arise from AI and automated decision making stem from inappropriate collection and use of personal information. As highlighted in OVIC’s Artificial Intelligence and Privacy — Issues and Challenges paper, the OECD has established a set of information privacy principles that provide a foundation for the responsible handling of personal information, including:

Collection limitation: collection of personal information should be limited to only what is necessary; personal information should only be collected by lawful and fair means; and where appropriate, should be collected with the knowledge and/or consent of the individual.

Purpose specification: the purpose of collecting personal information should be specified to the individual at the time of collection.

Use limitation: personal information should only be used or disclosed for the purpose for which it was collected, unless there is consent or legal authority to do otherwise. 
Source: OVIC’s Artificial Intelligence and Privacy — Issues and Challenges paper

As one method of shoring up these principles of data collection in our own legislation, the Privacy Act Review Report proposed the introduction of a fair and reasonable test:

Amend the Act to introduce a requirement that the collection, use and disclosure of personal information must be fair and reasonable in the circumstances. It should be made clear that the fair and reasonable test is an objective test to be assessed from the perspective of a reasonable person.

Privacy Act Review Report

Such a test would consider whether an individual would reasonably expect the personal information to be collected, used or disclosed in the circumstances; the sensitivity and amount of data being collected; the risk of harm as a result of collection; whether the collection is reasonably necessary; whether the loss of privacy is proportionate; the transparency of the collection; and whether the information collected relates to a child. 

In the case of emerging AI technologies, many models are being trained on data that was collected for purposes other than machine learning. We ought to be able to trust that institutions will use our information only for the purposes for which it was collected, and breaking this social contract ought to attract appropriate penalties.

The Privacy Act Review Report also proposes an updated definition of personal information, including changing the definition in the Act by replacing the words ‘about’ an individual with ‘relates to’ an individual, which would recognise that technical information (such as IP addresses, device identifiers and location data) or inferred information can also be considered personal information.

AI systems have an enhanced ability to infer or generate new information about a person, by comparing an individual to a group of people with other known data points and filling in any gaps by making calculated assumptions. This inferred data, generated by AI, ought to fall under the definition of personal information and be regulated as such.
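To make this concrete, here is a minimal, illustrative sketch of how that kind of group-based inference can work. Nothing in it is drawn from any real system: the records, attributes, similarity measure and nearest-neighbour vote are all hypothetical.

```python
# Illustrative sketch only: inferring an undisclosed attribute by comparing
# a person to "similar" people in a dataset. All records are invented.
from collections import Counter

# (age, postcode, disclosed_attribute) for people who did share the attribute
known_people = [
    (34, "3000", "renter"),
    (36, "3000", "renter"),
    (35, "3001", "renter"),
    (62, "3142", "owner"),
    (58, "3142", "owner"),
]

def infer_attribute(age: int, postcode: str, k: int = 3) -> str:
    """Guess the attribute from the k most similar known records.

    The result is a statistical guess, yet once stored it can be used
    as though it were a fact the person disclosed about themselves.
    """
    def distance(record):
        other_age, other_postcode, _ = record
        return abs(age - other_age) + (0 if postcode == other_postcode else 10)

    nearest = sorted(known_people, key=distance)[:k]
    votes = Counter(attribute for _, _, attribute in nearest)
    return votes.most_common(1)[0][0]

# A person who never disclosed their housing status is nonetheless labelled.
print(infer_attribute(age=33, postcode="3000"))  # -> "renter"
```

The point of the sketch is simply that the output is a guess about a person, produced without their input, which can then be stored and acted on as if it were a fact.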

Automated Decision Making & Robodebt

AI is a technology of obfuscation and abstraction. The effect of AI businesses has been to create greater distance between the people using these platforms and both the sources of the underlying data and the costs of producing it.

Some systems require a human in the loop, both to ensure fairness and appropriate development, and to bear responsibility when things go wrong. Automated decision-making systems are not an excuse to avoid responsibility when data is mishandled, or when decisions are made that negatively affect people’s lives. A machine cannot be held responsible or brought to trial.

Robodebt, the unlawful automated debt recovery scheme introduced by the government in 2016, is a clear example of the devastating harms introduced by “automated technologies” in pursuit of “efficiency”, with flagrant disregard for human wellbeing. The system replaced a manual human review of individuals’ reported earnings, cross-checked against amounts reported by the individual’s employer. Before the Robodebt system began in 2016, there was an average of 20,000 interventions (or debt notices issued to welfare recipients) per year; with the introduction of the automated system, this number increased to 20,000 interventions per week. It resulted in individuals being assigned huge debts, often erroneously, supposedly accrued across decades, which they were unable to repay. For many, it has had long-lasting mental health impacts, including, tragically, suicide.
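To illustrate how an automated calculation of this kind can manufacture a debt, here is a minimal sketch of the income-averaging logic at the core of the scheme. It is a simplification for illustration rather than a reconstruction of the actual system, and the dollar figures, income-free threshold and taper rate are all hypothetical.

```python
# Illustrative sketch only: a simplified model of income averaging, the flawed
# calculation at the core of Robodebt. All dollar figures, the income-free
# threshold and the taper rate below are hypothetical.

FORTNIGHTS_PER_YEAR = 26

def raise_debt(annual_ato_income: float,
               payments_received: list[float],
               income_free_threshold: float = 150.0,
               taper_rate: float = 0.5) -> float:
    """Smear annual tax office income evenly across every fortnight, then
    claw back payments as if that average had been earned in each one."""
    averaged_fortnightly = annual_ato_income / FORTNIGHTS_PER_YEAR
    reduction = taper_rate * max(0.0, averaged_fortnightly - income_free_threshold)

    debt = 0.0
    for paid in payments_received:
        # The flaw: entitlement is recalculated from the *average*, ignoring
        # that the person may have earned nothing in this particular fortnight.
        recalculated_entitlement = max(0.0, paid - reduction)
        debt += paid - recalculated_entitlement
    return round(debt, 2)

# Someone unemployed for half the year (13 fortnights of payments at $700),
# who then earned $40,000 in the other half, is treated as if they earned
# roughly $1,538 every fortnight, producing a phantom debt of about $9,000
# for a period in which they had no income at all.
print(raise_debt(40_000, [700.0] * 13))
```

In this hypothetical, the person was entitled to every dollar they received, because they earned nothing during the fortnights they were paid; the debt exists only because the calculation ignores when the income was actually earned.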

This dysfunctional automated scheme was the result of a careless application of technology and a widespread disregard for the wellbeing of welfare recipients, and was facilitated by the insufficient privacy protections in place for data shared with the government. Data shared for one purpose (submitting a tax return) was repurposed to assess eligibility for welfare services. When fed into an automated process, the Robodebt system produced poor decisions with little recourse for the affected individual. With collection limitations and a fair and reasonable test in place, we hope these kinds of systems would never be possible.

Conclusion: What does good look like?

Broad risk-based AI legislation, such as the EU’s AI Act, in combination with targeted reform in affected areas such as education, workplace laws, and consumer law, would give us a foundation to deal with this rapidly growing sector. Increased transparency and provenance requirements would strengthen the accountability of businesses and, critically, the trust of consumers.

Privacy reform would take us further in shoring up the rules around data collection, and placing the onus back on entities to ensure that they are processing our data fairly and ethically. 

Ultimately, we need a regulatory environment that does not incentivise data hoarding, and companies should be forced to delete data when it is no longer being used for the purpose for which it was collected, or is at risk of being misused. This requires a culture shift, but a necessary one. Promises of productivity and capital gains from AI innovation are largely unrealised, certainly when the invisible costs of environmental devastation, unpaid labour, and failure to abide by copyright law are factored into the business model. The technology industry would be better served by weaning itself off data gluttony and investing in real innovation that measurably improves the wellbeing of Australians.

Without reform, the collection and use of personal information by industry to train AI models will continue to rest on an outdated notion of consent that does not align with community expectations. This is a problem not just for public trust in these models; it also significantly increases the risk of unintended outcomes. Without proper care and due diligence applied to the data sets used to train AI, there are significant risks of discriminatory and harmful outcomes, made all the more dangerous by the cover of supposedly neutral technology. Data sets need to be governed and managed, including by the people from whom the data originates. The government has a role to play in promoting good data governance, and in leading by example with respect to public data sets.

One approach to reframing the arms-race narrative of industry-led AI innovation is to consider the role of investment in publicly owned compute resources. The case for ‘public compute’ is a response to the risk of relying on funding and infrastructure owned by Silicon Valley corporations, especially in times of global political tension. Crucially, “governments cannot rely on the good grace of private companies to provide compute infrastructure for use cases that may be publicly beneficial but not commercially profitable.” If Australia is determined to invest public money in building new data centre infrastructure, we need to be clear about the proposed public benefit of AI.

There is a range of policy interventions available to the government. One obvious option, already significantly progressed and with an important impact to offer, is privacy reform.

Privacy law is a fundamental part of AI regulation and must be factored into any regulatory approach. The failure to implement full-scale privacy reform to date undermines our confidence that the guardrails will do the job we expect of them. The government must prioritise the remaining privacy reforms it has committed to implementing.