Peekaboo, I See You: An Argument For Legislation Concerning AI & Facial Recognition Technology



Let’s face it: people are sensitive when it comes to facial recognition technology. Having your photo in a database used to serve you tailored advertising is one thing; being part of a surveillance platform is another. Large photo databases are nothing new, but the advent of artificial intelligence (AI) has created new ways to analyze and use such images and related data. Any way you look at it, AI is changing, well, the face of facial recognition technology. The problem, however, is that the technology is outpacing the law’s ability to keep up. When it comes to this technology, let’s just say that there’s more to your face than meets the digital eye.

As I have written here before, your face may simply not be “yours” anymore. In a report from Georgetown Law’s Center on Privacy and Technology, the Center found that more than 117 million adults are part of a “virtual, perpetual lineup” accessible to law enforcement nationwide. Yep: even though you may never have gotten anything more than a speeding ticket, your photo may be part of a digital lineup of more than 3 billion faces. Think I am exaggerating? Think again. Back in 2011, Google itself admitted that it had built, then withheld, facial recognition technology because of its potential for abuse. More recently, a company called Clearview AI has made news for developing a facial recognition app that more than 600 law enforcement agencies are apparently using to solve cases ranging from shoplifting to child sexual exploitation. The point is that the concern is more than an academic one.

Although I applaud the evolution of this technology, I worry about its application under the existing legal framework. Why? Let me count (a few of) the ways:

  • Privacy. The right to privacy is not specifically enumerated in the U.S. Constitution; it is protected only through SCOTUS precedent, certain federal laws (covering, for example, personal health information and nonpublic personal financial information), and a patchwork of state laws. None of these laws specifically provides a “right to privacy” in one’s face. Although state privacy torts may provide recourse against the use of one’s likeness for commercial gain, this protection is far from perfect. Worse, the combination of AI and large photo databases is incompatible with the reasonable level of anonymity currently enjoyed by the general public as they go about their daily lives. Let’s “face” it: from doorbell cameras to security monitors to electronic toll booths, the gathering of photos and the databases holding them are only going to continue to grow and proliferate.
  • Copyright. Using the photos is one thing, but how the photos in these databases are acquired is another. As I have explained before, some of these photos are taken from surveillance cameras used by city governments across the country, while others seem to have been compiled from less obvious sources (such as IBM drawing on publicly available collections to “train” its facial recognition algorithms). Such uses may qualify as “fair use” under Section 107 of the Copyright Act when done for research purposes, but the same uses in the context of commercial apps are far less clearly protected (if not altogether improper). In some cases (such as with Clearview AI), the images appear to have been scraped from Facebook, YouTube, and many other sites, which raises questions about whether such acquisition is permissible under those sites’ terms of use, or whether it instead requires a sublicense from, or the consent of, the copyright owner.
  • Scope of Use. The possibility for abuse of such datasets using AI cannot be overstated. Depending upon the underlying rights in and to the images within the dataset, there may or may not be constraints on how the images are used. In its review of Clearview AI, for example, the New York Times reported that the company did not return the reporter’s telephone calls or emails, but instead called police officers who had run the reporter’s face through the Clearview AI app to ask whether they were talking to the media. Further, the technology may be prone to false positive matches, depending upon the programming, a troubling fact in the context of facial recognition.

Again, I applaud the evolution of this technology, but caution must be exercised. In much the same way that the privacy of personal health information (and the technology used in obtaining, storing, and transmitting it) prompted the passage of federal legislation and regulations to protect it (i.e., HIPAA and the HITECH Act), some form of federal legislation and regulation is required to address the collection of photos and data for facial recognition and to ensure that the technology is not misused. Needless to say, the continued evolution of AI and facial recognition technology will challenge current notions of privacy, but such challenges should not erode (or worse, eviscerate) our right to privacy as we know it. Face it: that would be something worth smiling about.


Tom Kulik is an Intellectual Property & Information Technology Partner at the Dallas-based law firm of Scheef & Stone, LLP. In private practice for over 20 years, Tom is a sought-after technology lawyer who uses his industry experience as a former computer systems engineer to creatively counsel and help his clients navigate the complexities of law and technology in their business. News outlets reach out to Tom for his insight, and he has been quoted by national media organizations. Get in touch with Tom on Twitter (@LegalIntangibls) or Facebook (www.facebook.com/technologylawyer), or contact him directly at tom.kulik@solidcounsel.com.




