
Facial recognition technology is being weaponised, needs guardrails: Kashmir Hill

The New York Times journalist offers a chilling narrative about the inner workings of a secretive startup in her book Your Face Belongs to Us

Divya J Shekhar
Published: Dec 1, 2023 02:48:17 PM IST
Updated: Dec 1, 2023 03:00:06 PM IST

Kashmir Hill, The New York Times journalist and author of Your Face Belongs to Us
 
There is a startup called Clearview AI, which counts billionaires Peter Thiel and Naval Ravikant as investors, and which uses facial recognition technology in dangerous ways that even big tech companies like Google and Facebook refused to adopt. The perils of using such technology unethically are many, but when such startups work with law enforcement, as Clearview AI is doing at present, it can even become a get-out-of-jail-free card, says journalist Kashmir Hill.
 
In her new book, Your Face Belongs to Us, Hill captures the making of this startup and its inner workings. She narrates how Clearview AI, which has a database of 30 billion faces, has been used by powerful people, like Sequoia Capital MD Doug Leone, football quarterback Joe Montana and actor Ashton Kutcher, and how indiscriminate use of such technology can mark the end of privacy as we know it.
 
She was on Forbes India’s From the Bookshelves podcast. Edited excerpts from the conversation:

Q. Your book is about this startup called Clearview AI that’s into facial recognition technology. You just have to upload a photograph of a person and it will show you results scraped from every presence that individual might have on the internet. Can you tell us more about this startup? Who are the people behind it, and what’s the scope and scale?
I was surprised to find that when they first started, they were a very tiny startup, really just a rag-tag crew of individuals interested in technology and business, and just trying to find a new angle to make money. The technical co-founder is this guy named Hoan Ton-That. He grew up in Australia, dropped out of college at 19 to move to San Francisco and try to make his way in the tech world. [He] eventually wound up in New York, where he met his co-founders for Clearview AI. They have scraped 30 billion faces from the internet. So they have quite a large database, which, of course, is more faces than there are people on the planet. So they have a number of photos of each person.
 
Q. Thirty billion is a lot…
The thing is, there are so many photos on the internet to be scraped. And I think most of us, when we are putting these photos online, on social media, on education websites, on sites like Flickr, we weren’t thinking that someone would come along and scrape them, and build this tool that would make the internet searchable by face.
 
Q. So they can only access publicly available photos, right, or can they strike up deals to get images from private social media accounts as well?
They say they can only scrape public images. What I’ve seen happen is that if your Instagram account was public at some point and they scraped it from there, and then you made it private, they would still have those images.


Q. Could you talk a little more about scraping? The method that they use to get information and photos of people from the internet?
Scraping has been going on since the very beginning of the internet. When the World Wide Web was first created, there were people who would send these, kind of, spiders or automated programmes out to collect information about other websites on the internet, back when it was very, very tiny. Scraping is just the act of sending an automated bot to a website and having it download information en masse.

Hoan Ton-That told me that one of the very first websites he scraped was a company called Venmo. They are a social payment network, and they had on their website the transactions that were happening on their network between individuals who had public accounts. It was a real-time feed of what was happening on the social network, and he would just send his bot to that website every few seconds and download the photos and links to the profile pages. In the book, I compare it to a slot machine in a casino, where you pull the lever and you just win every time, with faces spilling out.
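Mechanically, what she describes is simple to sketch. Below is a minimal, hypothetical version in Python of that kind of feed-polling scraper; the URL, the JSON field names and the five-second interval are illustrative assumptions, not Venmo’s actual API:

```python
import time
import requests

# Hypothetical public feed; the endpoint and field names are
# illustrative only, not any real company's API.
FEED_URL = "https://example.com/public/feed.json"

seen = set()  # IDs of transactions already downloaded

while True:
    resp = requests.get(FEED_URL, timeout=10)
    resp.raise_for_status()
    for item in resp.json().get("transactions", []):
        if item["id"] in seen:
            continue
        seen.add(item["id"])
        # Save the profile photo and note the link back to the profile page.
        photo = requests.get(item["photo_url"], timeout=10).content
        with open(f"{item['id']}.jpg", "wb") as f:
            f.write(photo)
        print(item["profile_url"])
    time.sleep(5)  # poll every few seconds, as Ton-That described
```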
 
Q. In fact, there’s an interesting chapter about how Hoan Ton-That taught himself everything by going online. He’s self-taught, and whatever he knows was acquired by standing on the shoulders of what large tech companies and open-source academics and researchers were putting out, right?
He benefitted from the open technology culture as a kid. He watched coding videos put out by MIT, the big American university. That’s kind of how he initially taught himself to code. And then, as he was working at Clearview AI, he told me he was just following machine learning experts on Twitter, now known as X, and looking on GitHub for resources about facial recognition algorithms. He described it to me as standing on the shoulders of giants. He was taking advantage of a lot of open-source code that is freely shared. And what really set him apart, what set Clearview AI apart, was not so much that they made a technological breakthrough, but that they were willing to do what other companies like Google and Facebook had not been willing to do—take all of these photos and apply a facial recognition algorithm to them.
 
Q. In fact, you mention in your book how tech companies like Facebook and Google have had this technology for years, but they’ve decided to draw the line. You also quote quantum physicist Hartmut Neven, who worked with Google on facial recognition technology. He says, “People are asking for facial recognition all the time, but as an established company like Google, you have to be way more conservative than a little startup that has nothing to lose.” That sent chills down my spine. ‘Startups with nothing to lose’… is that all it comes down to on the question of privacy?
One of the big lessons of this book, as we enter this new realm of generative AI and open source all of these models so that everybody can access them, because there’s a desire for a democratic approach to technology, is that this opens the door for companies like Clearview AI to be radical actors, to do what the big tech companies are not willing to do, and to define the limits of the technology.
 
Q. So Clearview AI worked in stealth mode and was a by-invite app, but I was surprised to see how many powerful people were attached to it. It counts Peter Thiel and Naval Ravikant as investors, and many people continued to use it for their own purposes but did not flag it. This includes Sequoia Capital MD Doug Leone, actor Ashton Kutcher, American football quarterback Joe Montana… what does this mean?
It really reminded me of this very famous quote by William Gibson, a science fiction writer. He said, “The future is here. It’s just not very evenly distributed.” I was seeing that Clearview AI was approaching rich and powerful people because that is initially to whom they wanted to sell their facial recognition technology. And the stories were really incredible. These people were using it at work conferences to remember people’s first names. One of the investors, a billionaire who lives in New York City and owns grocery stores... Clearview AI approached him about putting the technology in grocery stores to catch shoplifters. But he also got the app on his phone and he was telling me how he was out to dinner one night at an Italian restaurant in downtown Manhattan, and his daughter walked in with a man he didn’t recognise and they were on a date. So he had the waiter go over and take a photo of the couple and then ran the guy’s face through Clearview AI to find out who he was. He told me he wanted to make sure he wasn’t a charlatan.
 
Q. I mean, in the book you later speak with the daughter as well, and both of them actually make light of the whole situation…
But meanwhile, billions of people have no idea they are in the database… we didn’t even know it existed because Clearview AI was very much trying to keep what they had done a secret.
 
Q. In fact, you had a tough time getting through to them, and even finding out who owned this app…
Yes, exactly. Clearview AI eventually started selling the facial recognition tool to police officers, to police departments around the world. And when the company wouldn’t talk to me, I ended up approaching police officers to see if they would tell me about the technology and they were very enthusiastic about it. They said it worked like no other facial recognition tool they had had access to before and that it was very good at identifying people, and offered to demonstrate it to me. But each time that would happen, the officer would ghost me and stop talking to me.

Eventually, I found out that Clearview AI had put an alert on my face, so they were getting a notification every time this happened, and they were reaching out to police officers and telling them, ‘Don’t talk to her’. In one case, they deactivated a detective’s account, and this was chilling to me because it showed Clearview AI knows who officers are looking for. At one point, they blocked my face from returning results, so they could control whether I could be found. And it just shows how facial recognition technology can be deployed to track people that you think are pesky, annoying or, kind of, enemies.

We’re already seeing that play out in the real world, where facial recognition technology is kind of being weaponised. One of the prime examples of this is here in New York City, there’s a big venue called Madison Square Garden. It’s where big sports teams play and where all the big musicians have concerts. The owner installed facial recognition technology a few years ago to keep out security threats. Over the last year, he decided that this would also be a really powerful way to keep out his enemies—lawyers who worked at firms that sued him. And so the company scraped their photos from their law firm websites, and put thousands of lawyers on a ban list so that they could not come to events at the venue till they dropped their lawsuits.
 
This is such a good example of surveillance creep: when you set up this kind of surveillance infrastructure to address security threats, it often gets repurposed to address other concerns, such as tracking protestors, dissenters and rivals. And that’s the reason why a technology like this, while powerful, needs guardrails to protect the kind of world we want to live in.
 
Q. Why hasn’t Clearview AI been sued? In fact, very recently they won an appeal against a $9-million privacy fine in the UK.
Clearview AI has been sued in a few different states here in the US, and privacy regulators outside the United States, in Europe, Canada and Australia, have investigated them. They all found that what Clearview AI did was illegal and that it broke their privacy laws, which say that you cannot collect people’s sensitive biometric information without their consent. And Clearview AI stopped doing business in those countries. Meanwhile, in the United States, we just don’t have a privacy law like that at the federal level.

It’s more at the state level. But several of these regulators imposed significant fines on Clearview AI, up to 20 million euros per country, and the company has been fighting and appealing them. They just managed to overturn the fine in the UK because a higher court said that the UK’s privacy regulator doesn’t have the right to control foreign governments’ use of their citizens’ data. Because Clearview AI is working with law enforcement, they are, kind of, exempted from the UK privacy rules.
 
Q. What kind of precedent does that set?
It does show that if these kinds of privacy-invading companies primarily work with law enforcement, that can be a get-out-of-jail-free card for them. And I do suspect that may be the way this goes: we get greater controls and regulations over how companies and individuals use this technology, but governments may have much greater access and freer rein. We do need to address that and think about what guardrails we want for government, because there are such good use cases, like using facial recognition technology to identify a suspect in a murder, an assault or a home invasion. But this technology could also be rolled out on all surveillance cameras, all around a city or a country or the world. We’ve already seen that happen in Moscow, Russia, and in parts of China, and it can become very chilling when you are tracking people all of the time. It can quickly get very dystopian when you have surveillance built out to that degree.
 
Q. I read a 2022 study on a website called Comparitech, which tracked the extent of facial recognition use in countries around the world. Not surprisingly, India is among the top countries using facial recognition technology for various purposes. I was surprised that it’s even used in workplaces, schools and public transport. Where do we draw the line? How can countries make sure that their governments or officials use the technology ethically and responsibly?
There need to be rules set. Clearly, there are good use cases; at least we’re seeing that in the US, where a lot of police departments are starting to use this [facial recognition technology] to solve crimes. But even then you need rules, because we are not all unique snowflakes. We do look similar. So you need to make sure that you’re doing more investigating than just confirming somebody matches in a facial recognition search, because in the US, a handful of people have been falsely arrested for the crime of looking like someone else.
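The reason lookalikes get matched is that these systems reduce each face to a numeric embedding and call anything above a similarity threshold a “match”. A minimal sketch of that comparison, with made-up vectors and an arbitrary threshold (real systems use high-dimensional embeddings from a trained model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-dimensional embeddings for illustration; real systems
# use vectors with hundreds of dimensions from a trained model.
suspect = np.array([0.9, 0.1, 0.3, 0.4])
lookalike = np.array([0.88, 0.12, 0.28, 0.41])  # a different person

THRESHOLD = 0.95  # an arbitrary cut-off chosen by the operator

score = cosine_similarity(suspect, lookalike)
if score > THRESHOLD:
    # A "match" here is only a statistical resemblance, not proof of
    # identity -- hence the need for corroborating investigation.
    print(f"match ({score:.3f})")
```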

My great fear with facial recognition technology is that the kind of tracking that already happens to us online, where everything you search and everything you click is tracked as you go from website to website, will get transferred to the real world. And our faces will be this token that can be used to constantly track us, identify us and attach all of this information to us. Kind of like at Madison Square Garden, where they’re able to see, the moment you walk through the door, who you work for, and then discriminate against you.
 
Q. What’s the solution going forward?
I found in the reporting for the book that Google and Facebook could have released a Clearview AI-like power years ago, and they decided not to. So there are ethical choices that companies can make about what they want to put out into the world and whether they think it’s a good thing or a bad thing. And there are social norms, like what we think is acceptable. Clearview AI is restricted to police use, but there are other public face search engines on the internet right now that people can use. Part of it is us choosing: do we want to normalise the use of these things or not?

Then there are the policymakers and regulators: do they want to make those types of companies legal, or give us [citizens] more power? In Europe, they’ve decided that you as a citizen should have the right to decide whether you are in those databases. They have said that those databases are illegal if collected without consent. That is a decision other countries can make as well.
