U.N. Urges Moratorium on Use of Face-Scanning Technology and AI That Threatens Human Rights

GENEVA — The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications that don’t comply with international human rights law.

Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior, as well as certain AI-based tools that sort people into groups by attributes such as ethnicity or gender.

AI-based technologies can be a force for good, but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.

Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”

Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards.

While countries weren’t mentioned by name in the report, China is among those that have rolled out facial recognition technology, particularly for surveillance in the western region of Xinjiang, where many members of its Uyghur minority live. The report’s key authors said naming specific countries wasn’t part of their mandate and doing so could even be counterproductive.


“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that address particular communities,” said Hicks.

She cited several court cases in the United States and Australia where artificial intelligence had been wrongly applied.

The report also voices wariness about tools that try to deduce people’s emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.

“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report says.

The report’s recommendations echo the thinking of many political leaders in Western democracies, who hope to tap into AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access …

Source: Time – Technology
