The Clearview AI data breach reveals that Australian law enforcement agencies are using the company’s facial recognition tools for identification purposes without any oversight or privacy protections, digital rights experts warned today.
“We should be deeply concerned that our police forces are using Clearview AI’s facial recognition technologies here in Australia without any accountability or oversight,” said Digital Rights Watch Chairperson Lizzie O’Shea.
“We need an inquiry into the data broking industry, and into the use of Clearview AI technology and other facial recognition surveillance by Australian law enforcement agencies.”
“Facial recognition technologies invade our privacy and can impact on our fundamental rights. There are huge concerns about their accuracy, and the potential for such data to be misused. There are currently no legal frameworks that govern these technologies.”
“Clearview AI scraped profile images of faces from social media platforms without the consent of users, breaching the terms of service of the platforms themselves, creating a database of billions of faces. The use of this database by Australian police raises a number of legal questions. We need transparent policies and regulatory frameworks that oversee the use of facial recognition technologies by government agencies and corporations,” said Ms O’Shea.
“We call for a moratorium on the deployment and use of facial recognition technologies until we fully understand their implications and there are strong regulations that govern their use.”
“This is a growing trend worldwide, with a number of cities banning the use of facial recognition technologies, including San Francisco, and the European Commission considering a five-year ban. Australia should follow suit,” she concluded.