
Tech Versus Human Rights

As the troubling statistics on facial recognition accuracy show, the risk of error is too great to let these tools further threaten human rights worldwide.

August 15, 2019 11:24 EDT

Imagine that your government could identify you and track your movements digitally, based solely on your physical appearance and perceived ethnicity or race. This is not the stuff of dystopian science fiction; it is happening now, thanks to the widespread use of artificial intelligence (AI) tools.

One of the most egregious examples of the abuse of AI tools like facial recognition is their use in China’s repression of the Uighurs, an ethnic minority group that lives mostly in the country’s far-western Xinjiang region. From police checkpoints to detention camps where at least one million people are incarcerated, horrific stories have emerged about China’s effort to “reeducate” the mostly Muslim minority. Chinese authorities have even designed an app specifically to monitor the Uighurs’ activities.

But this phenomenon is not confined to China. Facial recognition software presents one of the largest emerging AI challenges for civil society, and new surveillance technologies are quietly being implemented and ramped up around the globe to repress minority voices and tamp down dissent. Authoritarian countries like the UAE and Singapore have jumped on the facial recognition bandwagon. Despite serious concerns over privacy and human rights, the international response to these new technologies has been tepid.

In the United States, the reaction to this technology has been mixed. A New York school district will soon become the first in the country to implement facial recognition software in its schools. Meanwhile, San Francisco recently became the first city to ban facial recognition software over the potential for misuse by law enforcement and violations of civil liberties, and the Massachusetts town of Somerville has just followed suit. In short, some local and national governments are moving ahead with facial recognition while others are cracking down on it.

Uneven Response

So why is this uneven response problematic? The short answer is that the same software that is used to help track and detain Uighurs in China can be employed elsewhere without proper technological vetting. While facial recognition software may be touted as a more efficient way to track and catch criminals or to help people get through airports more easily, it is not a reliable or well-regulated tool. Human rights organizations have raised major concerns about government use of such technologies, including accuracy issues with facial recognition software and the software’s propensity to reinforce bias and stereotypes.

Last year, a researcher at the Massachusetts Institute of Technology found that while commercially available facial recognition software can identify a white face with near-perfect accuracy, it performs far worse on people of color, who are already disproportionately affected by over-policing.


As governments embrace facial recognition software, some tech companies have taken notice of the related human rights issues. Microsoft recently refused to partner with a law enforcement agency in California over concerns that its products could be misused in policing minorities. An Israeli startup has even developed a tool to help consumers shield their photos from invasive facial recognition technology.

Still, in most cases, companies cannot be trusted to regulate themselves. Amazon, which developed the facial recognition software Rekognition, offered to partner with US Immigration and Customs Enforcement (ICE), raising concerns that its technology could be used to target immigrants. There is insufficient oversight of these companies and, more importantly, of the governments that continue to partner with them. As a result, these companies are complicit in the repression of groups vulnerable to this technology.

Going Forward

So what can policymakers and others do to combat the challenges presented by facial recognition technology? First, lawmakers around the globe need to craft legislation that limits their respective governments’ use of facial recognition software and restricts companies’ ability to export these tools abroad, as has been done with other invasive tech tools.

Second, individual cities and countries across the world, beyond liberal bastions like San Francisco, should prohibit police from using facial recognition tools. Seattle and several cities across California have adopted similar policies but have not gone as far as San Francisco.

Third, international bodies like the United Nations should take a more active role in advising governments on the intersection of tech tools and human rights. As Philip Alston, the UN special rapporteur on extreme poverty and human rights, recently noted, “Human rights is almost always acknowledged when we start talking about the principles that should govern AI. But it’s acknowledged as a veneer to provide some legitimacy and not as a framework.” The UN is well-placed to provide an international framework for tech governance, and should do so.

Finally, human rights organizations have been raising concerns about facial recognition software and other AI tools for years, but instead of focusing exclusively on legislative fixes, they need to invest more in public information campaigns. Consumers may be unaware that by using the fingerprint or face-unlock features on their smartphones, they are providing biometric data to companies that have cozy relationships with law enforcement. In some cases, law enforcement agencies have compelled people to use their faces to unlock their phones. A US judge recently ruled that such compelled unlocking is unlawful, but the battle is far from over in other countries.

As AI tools become more advanced, governments and international bodies must work on country-specific and global frameworks for reining in emerging technology. Otherwise, tools like the Uighur tracking app and facial recognition software will only become more widespread. As the troubling statistics on facial recognition accuracy show, the risk of error is too great to let these tools further threaten human rights worldwide.

*[Young Professionals in Foreign Policy is a partner institution of Fair Observer.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
