The New AI That Knows Your Name Just by Looking at Your Photo

Technology has the power to change our lives, and advances in Artificial Intelligence (AI) offer us many benefits that would have been impossible just a few years ago.

Not all of its uses are popular, however. Sophisticated new facial recognition systems, built on billions of photos collected from websites, are set to become a powerful surveillance tool for authorities.

Companies such as Clearview AI have caused controversy by using these methods, and the practice could lead to serious problems in the future.

What is this software?

Identification from photos already exists, of course, and has done for decades. Traditionally, passports and driving licenses would get you entry into a country or a nightclub. Gambling websites that require proof of age, such as online casinos, are among the most common examples on the internet. But these sites require extra details to match against the photo, like your name, birthplace, and date of birth.

The new software doesn't need this information. Once it has a person's photo, it can use sophisticated web-crawling techniques to locate other images that match it, and those images will normally have personal details attached to them.

Take a Facebook profile photo, for example. Next to it, you will almost certainly have your name, unless you use an alias. If your profile is public, one click reveals your 'About' section, which may give personal details such as your date of birth or your hometown. The web-crawling software can now scan billions of images like this without human intervention, and not only on social media: it can draw on anywhere online where you have shared a photo.
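
How might the matching step work? Clearview's system is proprietary, but the general technique, reducing each face to a numeric "embedding" and treating two photos as a match when their embeddings are close, can be sketched with the open-source face_recognition library. The file names and URLs below are hypothetical placeholders, not anything from Clearview's pipeline.

```python
# Minimal sketch of face matching: each face becomes a numeric embedding,
# and two photos "match" when their embeddings are close together.
# Requires: pip install face_recognition
# File names and URLs are hypothetical; Clearview's system is proprietary.
import face_recognition

# The photo we want to identify (assumes one face is detected in it).
query_image = face_recognition.load_image_file("query_photo.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Photos gathered by a web crawler, each tagged with where it was found.
crawled = {
    "https://example.com/profile1": "crawled_1.jpg",
    "https://example.com/profile2": "crawled_2.jpg",
}

for source_url, path in crawled.items():
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        # A distance below ~0.6 is the library's conventional match threshold.
        distance = face_recognition.face_distance([encoding], query_encoding)[0]
        if distance < 0.6:
            print(f"Possible match at {source_url} (distance {distance:.2f})")
```

Once a match is found, whatever text surrounds the crawled image, a profile name, a caption, a dated post, becomes the identifying information.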

The tool, although powerful, is very easy to use, coming in the form of an app for the authorities that adopt it. Police and government officials have found they can quickly identify criminal suspects, which is generally a positive thing, but such technology raises other concerns.

The dangers of the new technology

The main concern about Clearview's tech is clear: personal privacy. It reflects a topical debate about what rights citizens have in the smartphone era: should tech firms like Clearview have the power to search through our personal images, even if they are publicly available online?

Not according to the American Civil Liberties Union (ACLU). They filed a lawsuit against Clearview last year, saying that the company is encroaching on personal privacy. 

Another complaint is the tech's accuracy. It's one thing to search for criminals, but what if innocent members of the public get caught up in an investigation? Cases of mistaken identity might not be common, but when they do occur, they often inflict harmful consequences on the unfortunate target. The recent case of a homeless man in Hawaii who was wrongfully committed to a psychiatric hospital after being mistaken for someone else is one example, and there are fears that misuse of the technology could put more people at similar risk.

There’s also the rapidly growing capability of the AI, so rapid that some of its features aren’t fully developed yet.

Two examples are its 'deblur' and 'mask removal' tools. The first takes a blurred image and sharpens it according to what the AI believes it should look like. The second takes a covered face and estimates what the part under the covering looks like. Both tools have attracted criticism for using what's known as a 'best guess' system: they use statistical patterns learned from other images to speculate about what the hidden or degraded parts should be.
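
The principle behind 'best guess' reconstruction is worth seeing concretely. Clearview's tools presumably use learned neural models, but even classical image inpainting shows the core issue: the covered region is filled in from patterns in the surrounding pixels, so the output is a plausible invention rather than recovered truth. A minimal sketch using OpenCV, with hypothetical file names and mask coordinates:

```python
# "Best guess" reconstruction in miniature: the masked region is filled
# from the statistics of surrounding pixels, producing a plausible guess,
# not the true hidden content. Classical inpainting stands in here for
# Clearview's (presumably neural) approach.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

image = cv2.imread("masked_face.jpg")  # hypothetical input photo

# Mask marking the covered region (white = pixels to reconstruct).
mask = np.zeros(image.shape[:2], dtype=np.uint8)
mask[200:320, 140:360] = 255  # e.g. the area hidden by a face mask

# Fill the masked area using information from the surrounding pixels.
guess = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("best_guess.jpg", guess)
```

The output always looks like a face; the question is whose.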

While the margin of error might be smaller than ever, it only takes a slight mistake to wrongly convict someone. 

Can governments control it?

Clearview says that its aim is for governments and police forces to use its software; it states that it wants to avoid the tool falling into private hands, where it could be abused.

Hundreds of US police departments have tested the tech, according to BuzzFeed, and Clearview has listed 11 federal agencies that have used it, covering areas such as immigration and customs.

Whether authorities can use it responsibly and prevent abuse, however, is a question whose answer might lie with social media. Facebook and Twitter have already demanded that Clearview stop scraping their websites for images. If they get their way, it will be a lot more difficult for the tech to function, as social media accounts for a large proportion of public images.

Demands alone, however, probably won't be enough. Some experts believe the social media giants should use protections they already have at their disposal: tools that include AI which modifies images to make them undetectable to Clearview's algorithms. The platforms are also capable of removing the metadata from images that shows where and when they were taken.
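
The metadata point is something individuals can act on today. As a rough illustration, here is a minimal sketch using the Pillow imaging library of stripping a photo's metadata (EXIF data can include GPS coordinates and timestamps) before sharing it; the file names are hypothetical.

```python
# Strip metadata (EXIF, including GPS location and timestamps) from a
# photo by rebuilding it from raw pixel data, so nothing else carries over.
# Requires: pip install Pillow
# File names are hypothetical.
from PIL import Image

original = Image.open("holiday_photo.jpg")

# A new image created from pixel data alone has no metadata attached.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("holiday_photo_clean.jpg")
```

This doesn't stop face matching itself, but it removes the where-and-when context that makes a matched photo far more revealing.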

While the argument for using the tech to find criminals is strong, such measures would protect personal privacy and limit the power of companies like Clearview.

Clearwater’s challenge 

Despite the bad press, the technology does have its supporters. Their argument that it helps to solve and prevent crimes is backed up by examples such as the identification of those responsible for the US Capitol insurrection, as well as several child abuse cases.

The challenge, as with all technology, will be to harness its power so that it's a force for good, rather than a sinister intrusion into the lives of innocent people.