
Online tool identifies whether personal photos have been compromised


CHICAGO — Many of us may be familiar with the auto-tag feature on Facebook. You post a photo, and the platform can use facial recognition to identify who’s in the picture.

But privacy experts say millions of images posted to photo-sharing apps are being collected online to fuel A.I.-powered surveillance.

The day after the siege on Capitol Hill, facial recognition use spiked. Reportedly, the FBI and local law enforcement used the technology to identify rioters.

“This is a technology that enhances the abilities of police officers to the degree of almost superheroes,” said Liz O’Sullivan, the director of the Surveillance Technology Oversight Project, a New York-based civil rights and privacy group.

“Over the summer we saw protests over racial justice in the form of the Black Lives Matter protests, and FBI agents were able to identify people based off of other artifacts that they were leaving online, including articles of clothing that they had purchased on online retailers like Etsy.”

This week, researchers at S.T.O.P. launched Exposing.ai, a new facial recognition detection site, in collaboration with Adam Harvey, a Berlin-based researcher and artist, and his partner Jules LaPlace. The tool lets you search your images from the online photo-sharing site Flickr and see whether they have been swept into facial recognition training datasets. The creators say your search data will be deleted within 24 hours, and none of it is sold or shared.
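Conceptually, a check like this amounts to a membership lookup: many research datasets publish lists of the Flickr photos they include, and a tool can test whether your photo IDs appear on such a list. Below is a minimal, hypothetical sketch of that idea in Python; the manifest contents, URLs, and matching logic are illustrative assumptions, not Exposing.ai’s actual index or algorithm.

```python
# Minimal sketch of the kind of membership check a dataset-audit tool
# performs: do the photo IDs in your Flickr URLs appear in a training
# dataset's published list of included photos? All IDs, URLs, and the
# matching logic below are hypothetical illustrations, not Exposing.ai's
# actual index or algorithm.
import re

def flickr_photo_id(url: str) -> str | None:
    """Extract the numeric photo ID from a standard Flickr photo URL."""
    match = re.search(r"flickr\.com/photos/[^/]+/(\d+)", url)
    return match.group(1) if match else None

# In practice this set would be loaded from a dataset's published
# manifest; here it is a tiny hypothetical stand-in.
DATASET_MANIFEST = {"12345678901", "55555555555"}

if __name__ == "__main__":
    my_photos = [
        "https://www.flickr.com/photos/example_user/12345678901/",  # hypothetical
        "https://www.flickr.com/photos/example_user/98765432109/",  # hypothetical
    ]
    for url in my_photos:
        pid = flickr_photo_id(url)
        found = pid is not None and pid in DATASET_MANIFEST
        print(f"{url} -> {'found in dataset' if found else 'not found'}")
```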

“Flickr, while they were under Yahoo, created a database of more than 100 million different photos that had been posted on Flickr under a Creative Commons license,” said O’Sullivan. “They used this as a starter database for artificial intelligence.”

Those databases have been used by researchers, law enforcement and governments to enhance biometric identification technology.

“In fact, some of these databases and some of these data sets have been used by Chinese companies and are in some ways implicated in the human rights violations and the ongoing genocide of the Uyghur Muslims,” she said.

O’Sullivan says most people don’t even realize they’re contributing to A.I. training.

“Artificial intelligence researchers and developers are so starved for new data sources that they often resort to some unsavory practices, some of which involve scraping the internet, regardless of the terms of service that may exist to protect your privacy.”

In some cases, it is against the law.

Facebook is set to pay out $650 million in a landmark class action settlement to about 7 million of its users in Illinois for allegedly violating the state’s strict biometric privacy law. Facebook denies it violated any law.

“Facebook has elected, for now, not to sell their facial recognition tool to police officers or to the military or to the Chinese government, but there's absolutely nothing stopping them from doing it,” she said.

On Wednesday, Clearview A.I., a controversial startup, was found to have violated Canadian law when it “…collected highly sensitive biometric information without the knowledge or consent of individuals.” The government investigation deemed its conduct “illegal mass surveillance.”

The tech company, whose software is used by law enforcement, also faces multiple privacy lawsuits in the U.S. for scraping billions of photos from social media and other public sites.

“People were arrested and charged with crimes that they did not commit because some machine told the officers that they were the ones behind it,” said O’Sullivan.

O’Sullivan says it’s high time companies, governments and researchers were held to account. She says people deserve more control over their images, data and privacy.