Michelle Bachelet, the chief of the United Nations' human rights office, is calling for an end to surveillance systems powered by artificial intelligence. Bachelet warns that AI tools, including face-scanning and "social scoring" systems, pose a serious threat to human rights.
"The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be," Bachelet said.
Bachelet's announcement accompanied the release of a report on the use of AI systems that directly affect citizens' lives without the proper safeguards in place.
"This is not about not having AI," UN spokesperson Peggy Hicks said when presenting the UN report. "It's about recognizing that if AI is going to be used in these human rights - very critical - function areas, that it's got to be done the right way. And we simply haven't yet put in place a framework that ensures that happens."
Bachelet and the UN aren't calling for a complete ban on facial recognition, but rather a moratorium on real-time facial scanning and other applications shown to be discriminatory. Bachelet says these practices should be halted until governments can prove their technology is accurate and will not discriminate. Specifically, she is asking countries to bar technologies that don't comply with human rights law.
While Bachelet acknowledged that AI-based technologies can be used for good, she warns that they "have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights."
Among those systems are the "social scoring" tools that group users based on characteristics like ethnicity and gender. One country notorious for social surveillance is China, where facial recognition software has been used to monitor the minority Uyghur population.
"In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that address particular communities," Hicks told journalists.
Along with tools used to judge ethnicity, the report also notes the danger posed by applications meant to infer a person's emotions from their face and behavior. According to the report, these applications often lack a scientific basis and produce biased results.
"The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial," the report reads.
The UN isn't alone in calling for the regulation of facial recognition and AI technologies. European Union officials are currently planning to introduce a ban on tools like real-time facial scanning, and those calling for caution range from President Joe Biden to Microsoft.