EU Should Regulate Artificial Intelligence to Protect Rights


European Union flags are waving in front of the headquarters of the European Commission in Brussels. August 5, 2020.
© 2020 Laurie Dieffembacq (Sipa via AP Images)

The last decade has seen an alarming proliferation of artificial intelligence (“AI”) to monitor protests, predict crime, and profile minorities, in ways that gravely threaten our human rights. The European Commission has pledged to develop groundbreaking regulation of these technologies that will “safeguard fundamental EU values and rights.” In a letter published this week, Human Rights Watch joined more than sixty civil society and rights groups to hold the commission to its word, urging decisive action to prevent abusive applications of AI.

The letter highlights how the growing use of facial recognition can trigger widespread privacy abuses. This technology relies on machine learning, a form of artificial intelligence, to infer people’s identities from still images or video that capture their faces. When deployed in train stations, stadiums, and other public spaces, these systems are capable of tracking the identities and movements of entire crowds. This unprecedented form of mass surveillance could have a significant chilling effect on our rights to freedom of assembly and association.

Biases embedded in facial recognition algorithms also raise concern that they fuel discriminatory policing practices. Research shows that these algorithms are less likely to correctly identify the faces of people of color and women than those of white people and men, exposing the former to higher rates of misidentification and false accusations.

The letter also calls for measures to ensure that the automation of social security programs and other essential public services protects privacy and social security rights. In their bid to modernize aging welfare systems, a growing number of governments in and outside of Europe are building or procuring algorithms to help them verify people’s eligibility for benefits, perform means testing, and detect fraud.

Ill-conceived algorithms have deprived people of their benefits and led to wrongful accusations of fraud. Last year, a court in the Netherlands ordered the government to suspend an automated risk assessment tool it was using to predict how likely people were to commit tax or benefits fraud, citing its lack of transparency and privacy concerns.

The European Commission has said that “the way we approach AI will define the world we live in,” and it plans to publish its proposal for regulation in the first quarter of 2021. A clear rejection of disproportionate surveillance and similarly excessive methods of social control will help protect rights and avert a dystopian future.