Australia’s privacy watchdog, the Office of the Australian Information Commissioner (OAIC), has found that Clearview AI violated privacy law by harvesting individuals’ sensitive information without consent and by unfair means.
Clearview AI is a facial recognition platform that provides software to companies, law enforcement, universities, and individuals.
A joint investigation by the OAIC and the U.K. Information Commissioner’s Office (ICO) revealed that Clearview AI’s facial recognition tool had scraped and stored biometric information drawn from more than three billion images. The OAIC found that Clearview AI failed to comply with the Australian Privacy Principles (APPs) by not implementing the necessary security practices, procedures, and systems.
Clearview AI’s facial recognition tool allows users to upload a digital image of an individual’s face and run a search against the company’s database of more than three billion images. The tool displays likely matches and associated source information, enabling the user to identify the individual. It also cross-references photos scraped from several social media platforms against a database of billions of user profiles and images.
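Conceptually, this kind of matching reduces each face photo to a numeric embedding (an “image vector”) and ranks gallery images by how similar their vectors are to the vector of the uploaded probe image. The sketch below is a minimal illustration of that general idea using made-up data and cosine similarity; the embed_face function, vector size, and gallery contents are purely hypothetical assumptions for illustration, not Clearview AI’s actual implementation.

```python
import hashlib
import numpy as np

def embed_face(image_id: str, dim: int = 128) -> np.ndarray:
    """Stand-in face embedding: a deterministic pseudo-random unit vector per image ID.
    A real system would run the image through a trained face-recognition model."""
    seed = int(hashlib.sha256(image_id.encode()).hexdigest()[:8], 16)
    vec = np.random.default_rng(seed).normal(size=dim)
    return vec / np.linalg.norm(vec)

# "Scraped image vectors": embeddings of gallery photos plus their (fictional) source URLs.
gallery_sources = {
    "photo_001": "https://example.com/profile/alice",
    "photo_002": "https://example.com/profile/bob",
    "photo_003": "https://example.com/profile/carol",
}
gallery_vectors = {pid: embed_face(pid) for pid in gallery_sources}

# "Probe image vector": embedding of the uploaded query photo
# (here we pretend the upload is the same face as photo_002).
probe_vector = embed_face("photo_002")

# Rank gallery images by cosine similarity to the probe vector
# (vectors are unit-length, so the dot product equals cosine similarity).
scores = {pid: float(vec @ probe_vector) for pid, vec in gallery_vectors.items()}
for pid, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{pid}  similarity={score:.3f}  source={gallery_sources[pid]}")
```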
The OAIC ordered Clearview AI to:
- Avoid repeating practices that breach users’ data privacy and security
- Stop collecting scraped images, probe images, scraped image vectors, probe image vectors, and opt-out vectors from Australians
- Destroy all scraped images, probe images, scraped image vectors, probe image vectors, and opt-out vectors it has collected from individuals in Australia within 90 days
- Provide written confirmation to the OAIC within 90 days regarding the actions taken
Causing Severe Identity Threats
While Clearview AI claims that its facial recognition technology helps law enforcement agencies identify suspects, persons of interest, and victims, the OAIC’s findings reflect growing criticism of the controversial technology from privacy regulators. Clearview’s services have been used or trialed by law enforcement and police departments around the world.
Clearview AI provided free trials to the Australian Federal Police (AFP), Victoria Police, Queensland Police Service, and South Australia Police from October 2019 to March 2020.
Reports suggest that Clearview AI took no steps to stop collecting scraped images of Australians, generating image vectors from those images, or disclosing matched images of Australians to its registered users. Clearview’s website and its form for requesting access to the facial recognition tool remained accessible from Australian IP addresses even after the trial period ended.
The exposure of Clearview’s intrusive practices has raised privacy and security concerns among government officials.
“Consent may not be implied if an individual’s intent is ambiguous or there is reasonable doubt about the individual’s intention. I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes. In fact, this expectation is actively discouraged by many social media companies’ public-facing policies, which generally prohibit third parties from scraping their users’ data.
Consent also cannot be implied if individuals are not adequately informed about the implications of providing or withholding consent. This includes ensuring that an individual is properly and clearly informed about how their personal information will be handled, so they can decide whether to give consent,” the OAIC said.
Australia Imposes Stringent Rules on Social Media Platforms
The Australian government has proposed the Privacy Legislation Amendment (Enhancing Online Privacy and Other Measures) Bill 2021 to safeguard Australians against various online data threats. Attorney-General Michaelia Cash recently released a draft of the proposed Bill, which aims to create a compulsory online privacy code for social media companies, data brokers, and other organizations that operate by utilizing user data. The Bill would primarily require social media platforms to obtain parental consent for minors (users under the age of 16).