Police forces are rapidly adopting AI, placing at risk the human rights they are meant to protect.
Victoria Police use generative AI on 20% of crime reports. When a contact centre employee files an online crime report, the employee uses generative AI to produce a summary of the form for police officers.
At Salesforce’s 2025 Agentforce conference, Inspector Matt Henderson spruiked the AI system and its hoped-for productivity gains. This reflects a concerning trend: businesses and governments promote the productivity benefits of AI while ignoring the human rights risks. That is never okay, least of all in policing, where the stakes are so high. The stated purpose of a police force is to protect the community, not to risk its rights chasing productivity.
The AI systems used by police are trained on vast datasets drawn largely from the internet, a space that reflects existing social biases and inequalities. As a result, they reproduce the same sexism, racism and homophobia that exist in the data they were trained on.
AI systems are not neutral translators. They predict language based on patterns, and in doing so, they fill in gaps, editorialise and make assumptions. An AI might describe one person as “claiming” something while another “states” it, subtly shifting credibility. When an AI introduces racialised or gendered assumptions, it affects the impression the police officer will form and their scope of investigation.
When police officers read AI-generated crime reports that misrepresent what actually happened, they are sent down the wrong investigative path. This leads to further overpolicing of marginalised communities, and any potential productivity gains are lost to police time wasted on dead ends.
Crime reports contain sensitive information about victims of crime who deserve the highest levels of security. This means that use of AI for crime reporting must be open to regulatory scrutiny.
However, Victoria Police is silent on key details of its use of AI, creating serious privacy concerns. We do not know how the police retain data, identify individuals, or train future models. It is also unclear whether Victoria Police or Salesforce is accountable for the AI system.
Australians need to trust that our crime reports will be confidential. If that trust is eroded we will be less likely to come forward.
Hearteningly, Victoria Police have an Artificial Intelligence Ethics Framework. It requires adherence to eight principles:
- Human rights
- Community benefit
- Fairness
- Privacy and security
- Transparency
- Accountability
- Human oversight
- Skills and knowledge
Unfortunately, Victoria Police have chosen to ignore their ethics framework.
Victoria Police waited months to alert the community that AI was rewriting their crime reports. Combined with the opaque manner in which user data is being handled, this debacle could not pass as ‘transparent’.
As governance institutions like the police integrate AI, they must respect human rights. Democratic institutions are bound by checks and balances to prevent the abuse of power. AI systems must also be subject to checks and balances in the form of oversight and accountability mechanisms. Transparency is the bare minimum towards achieving this.
Australian police have form for using AI to infringe Australians’ human rights. In 2021, the NSW Police Force used an AI-driven surveillance system called ‘Insight’. The NSW AI Review Committee found that Insight was neither fair nor accurate.
By relying on location data, the AI disproportionately implicated innocent individuals who lived in or moved through areas with high crime rates. It criminalised proximity rather than behaviour, targeting innocent people from marginalised communities.
The Insight system analysed a broad range of data sources, including police camera footage and public CCTV.
The NSW Police Force also used facial surveillance technology from Cognitec Systems, which wrongly identified black West African people seven times more frequently than white Europeans. Public outcry eventually forced NSW Police to scrap it.
The NSW Police Force used an AI-driven predictive-policing program to harass Indigenous children as young as 10. According to the Justice and Equity Centre, of the people the program identified as ‘suspect targets’, more than half of the adults and 71% of the children were Indigenous. The Aboriginal Legal Service led criticism of the program, forcing NSW Police to stop it.
If police technology cannot treat people of colour fairly, it is fundamentally flawed and inadequate for law enforcement use. The willingness of police forces to sacrifice the safety of marginalised communities in the name of productivity is unacceptable. Communities must be given real opportunities to scrutinise these systems before they are deployed: without criticism from independent bodies, NSW police would have continued to use this tech to harm already-marginalised communities.
Meanwhile, across the Pacific, AI corporations are creating ever-worse products for police use.
Axon Enterprise’s ‘Draft One’ is a generative AI product that writes police reports from audio captured by officers’ body-worn cameras. The technology is already producing wild inaccuracies, such as reports describing police officers turning into frogs.
Once the audio is converted into a report, police officers then edit the document. However, there is no way to track the edits they make, making it impossible to distinguish between what the AI generated and what the police officer wrote.
The result is a lack of accountability for police officers who write inaccurate or biased reports. The officers can blame the AI and there is no way to disprove the claim.
Given that edit-tracking technology is readily available, Draft One’s inability to track changes is egregious. Axon has confirmed that this was by design, to prevent auditability.
While the editing process for Axon-generated reports remains awful, the fact that we know about it at all demonstrates a minimal level of transparency. US police were required to inform the public of the technology’s use before its full implementation, giving the media and civil society space to scrutinise the system and demand safeguards. That is more than can be said for Victoria Police’s AI program.
Victoria Police’s lack of transparency limits public discourse, puts communities at risk, and sets a dangerous precedent.
Victoria Police has an opportunity to lead: to model responsible, rights-respecting AI use and set a high benchmark internationally. Instead, Australian police forces appear poised to squander that opportunity, choosing secrecy over leadership.
Without transparency in police AI use, there can be no trust, no accountability, and no effective oversight. Transparency enables civil society to scrutinise systems, demand safeguards, and ensure AI tools do not entrench bias. It is a precursor to all other essential safeguards: independent oversight, bias mitigation, data minimisation, informed consent, and secure storage protocols.
Transparency is not a luxury in AI governance; it is the absolute minimum requirement.