Submission: Safe and Responsible AI

On 1 June 2023 the Department of Industry, Science and Resources opened a public consultation on its Discussion Paper, ‘Safe and responsible AI in Australia’.

The Department was seeking views on how the Australian Government can mitigate potential risks of AI, with a particular focus on governance mechanisms, such as regulations, standards, tools, frameworks, principles and business practices.

Digital Rights Watch provided a submission, which you can read in full below or download as a PDF here.

The current phase of consultation is broad, and we plan to weigh in with further detail as the discourse develops. A summary of our current position is as follows:

  • Human rights must be placed at the centre of AI governance and regulation. Australia needs a comprehensive federal Human Rights Charter to support this.
  • Given that so much of AI relies on huge amounts of data – including personal information – privacy and data protection regulations play an essential role in AI governance. The Australian Government must prioritise meaningful reform of the Privacy Act so that it is fit for purpose in the digital economy, especially with regard to AI technologies.
  • We must not be distracted by far-future hypothetical scenarios. AI-related harms are already happening, and our focus is better placed there than on so-called existential threats.
  • Much of the AI hype (both negative and positive) serves the interests of the companies that stand to profit the most from the widespread adoption of their products in a low-regulation environment. We must be critical of the ways the current AI boom is consolidating power in a handful of companies, and avoid regulatory capture.
  • AI regulation in Australia will be far stronger and more effective if it is aligned and consistent with international frameworks.
  • We support a risk-based approach to regulation; however, we are concerned that the proposed framework lacks the requisite sophistication to be effective.
  • Generally speaking, we are not in favour of voluntary codes. We don’t trust AI companies to self-regulate.
  • Some technologies and applications should be ‘no-go zones’ or outright prohibited. One-to-many facial recognition technology, or FRT used in real time, should be banned.