Social housing seen in Rotterdam, Netherlands, January 1, 2016.
© 2016 Robert B. Fishman/picture-alliance/dpa/AP Images
Yesterday, a Dutch court ordered the government to halt its use of SyRI, an automated program that analyzes a wide range of personal and sensitive data to predict how likely people are to commit tax or benefits fraud. The ruling affirms that people who need social security support should be treated as rights-holders whose privacy matters, not as suspects to be constantly surveilled.
The Hague District Court made clear that transparency is needed to guard against technology-enabled abuses of privacy and related rights. During the hearing, the government refused to disclose meaningful information about how SyRI uses personal data to draw inferences about possible fraud.
The Court was not persuaded by the authorities' claim that they needed to conceal how the system's risk calculation model works. Without this information, it was nearly impossible for individuals under suspicion to challenge the government's decisions to investigate them for fraud. The lack of transparency was particularly troubling given that SyRI had been deployed exclusively in so-called “problem” neighborhoods – a potential proxy for discrimination and bias based on individuals’ socio-economic background and immigration status.
Seeking to allay these concerns, the government stressed that SyRI does not automatically trigger legal consequences or even a full-fledged investigation. But the Court remained concerned that the risk-scoring process itself created significant potential for abuse. Individuals had no way of knowing or challenging their risk scores, which are stored in government databases for up to two years.
This ruling also vindicates civil society efforts to ensure human rights due diligence is central to the design of automated decision-making systems. The government claimed that a single data protection impact assessment it conducted during SyRI’s initial rollout was sufficient. The Court found that this one-size-fits-all approach may have overlooked the need to conduct similar assessments for each municipal-level project that adapted SyRI to its specific fraud detection needs. This is consistent with recommendations from multiple artificial intelligence experts who have called for regular and ongoing risk assessments throughout an automated system’s lifecycle.
By stopping SyRI, the Court has set an important precedent for protecting the rights of the poor in the age of automation. Governments that have relied on data analytics to police access to social security – such as those in the US, the UK, and Australia – should heed the Court’s warning about the human rights risks of treating social security beneficiaries as perpetual suspects.