The same week that the National Institute of Standards and Technology released its Privacy Framework [view related post], highlighting how privacy is basically a conundrum, news articles also highlighted a new technology, Clearview AI, that allows someone to snap a picture of anyone walking down the street and instantly find out that person’s name, address and “other details.” What exactly does that mean? Does it mean they automatically know my salary, my bank account balance, my prescription medications or health issues, my political affiliation, or what I buy at the drug store or grocery store? All of this information says a lot about me. Some people don’t care, but I am not sure why. There just does not seem to be any respect for, or interest in, the protection of individual privacy. It’s not that people have things to hide; it’s that this is reminiscent of some of humanity’s darker days, such as World War II era Germany.
It is comforting to see that privacy advocates are warning us about Clearview AI. Clearview AI has obtained its information, including individuals’ facial recognition data, by scraping common websites such as LinkedIn, Facebook, YouTube, and Venmo, and is storing that biometric information in its system and sharing it with others. According to Clearview AI, its database is for use only by law enforcement and security personnel, and it has assisted law enforcement in solving crimes. That is obviously very positive. However, privacy advocates point out that the app may return false matches, could be used by stalkers and other bad actors, and could enable mass surveillance of the U.S. population. That is obviously very negative.
There has always been a tension between using technology for law enforcement and national security, which, frankly, we all want, and using it for purposes that are less clear and may invite abuse, which we don’t want. Clearview AI is collecting the facial images of millions of people without their consent, and those images may be used for good or bad purposes. This is where public policy and data ethics must play a part. The NIST Privacy Framework can help in determining whether the on-the-spot collection, use and disclosure of facial recognition data protects the privacy and dignity of individuals. Technological capabilities must be used for good purposes, but technology is moving fast, and data ethics, privacy considerations and the potential for abuse are not always being weighed, including with facial recognition applications. Perhaps the Privacy Framework can help shape that discussion, which is why its release is so timely and important.