The big-tech ban on police use of facial recognition software was strengthened in mid-March, when Amazon (NASDAQ: AMZN) extended what had initially been a one-year pause on the practice, begun last June.
Amazon has been recognized as a strong defender of facial recognition, making its decisions in this area all the more significant. Soon after Amazon announced its 2020 pause, Microsoft (NASDAQ: MSFT) said it would wait for federal regulation of the technology before considering offering its software to police. Meanwhile, IBM (NYSE: IBM) announced it would stop working on facial recognition altogether, and Google (NASDAQ: GOOGL) has never commercially offered its technology to anyone.
The initial ban on police use was announced at the peak of the 2020 Black Lives Matter protests that followed the murder of George Floyd. Facial recognition software has a well-documented history of misidentifying Black and brown faces.
Critics of the technology have also cited studies showing that Amazon's software, called Rekognition, struggles to differentiate gender among people with darker skin tones, though Amazon has contested those results.
Of course, a misidentification of this kind could lead to a wrongful arrest, a fact long recognized by civil liberties advocates. Advocates also foresee a loss of privacy and restrictions on freedom of expression should facial recognition software, in its current state, become widely used.
"Face recognition technology fuels the over-policing of Black and Brown communities, and has already led to the false arrests and wrongful incarcerations of multiple Black men," Nathan Freed Wessler, a deputy project director at the American Civil Liberties Union (ACLU), said in a statement.
Wessler further called upon lawmakers to ban the use of facial recognition software by police agencies.
Fears surrounding facial recognition technology extend not only to how it might fail but also to how it might be misused. As Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told The New York Times, "The weaponization possibilities of this are endless."
"Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail," Goldman said.
Not all critics of police facial recognition software want a total ban. For example, facial recognition software has shown promise in rescuing sex-trafficking victims. The companies using the software for this purpose will retain access despite the ban, according to Amazon.
While Amazon said last year that it hoped federal regulation would be put in place to guarantee that the technology is used ethically, the company declined to give a reason for this March's extension of the ban. By and large, laws regulating police use of facial recognition software have yet to materialize.
"We've advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested," the company said in a statement accompanying the 2020 ban.
While federal regulation hasn't been introduced, some state and local governments have developed their own laws limiting the use of facial recognition technology. For instance, San Francisco, Oakland, and Portland have all banned the technology, and Massachusetts and Virginia have restricted its use by law enforcement.
Still, the big-tech bans and attempts at regulation raise the question: did police precincts manage to maintain access to facial recognition software? The answer seems to be yes.
In Pittsburgh, officers used Clearview AI's facial recognition technology to conduct investigations during the Black Lives Matter protests, in direct violation of a citywide ban.
Prior to the recent establishment of Virginia's facial recognition laws, Clearview AI was reportedly used by a number of the state's police departments. At this time, most states do not have laws regulating this technology.
If the name "Clearview AI" sounds familiar to you, it might be because the company was recognized as an early developer of the facial recognition tech being used by the police. The company was later sued by the ACLU for allegedly stockpiling illegally sourced images accessed without users' permission.
"Laws have to determine what's legal, but you can't ban technology. Sure, that might lead to a dystopian future or something, but you can't ban it," Dan Scalzo, an early investor in Clearview AI, told The New York Times.
Critics of the ban on police use of facial recognition software include many law enforcement officers themselves. In Virginia, a law enacted in March of this year limits police use of facial recognition to instances in which the legislature has given explicit permission. Police officers have been among the loudest critics of this decision.
"I think a lot of people want to know what impact that is going to have on public safety and a lot of other industries if you do away with it," John Jones, executive director of the Virginia Sheriffs' Association, told reporters. "It is a way to catch bad guys - you can catch really bad actors - and that's always a good thing."
Unsurprisingly, critics of the technology primarily point to the possibility that an innocent person could be mistaken for a "really bad actor" as a reason for concern.