This week European Union negotiators will enter what is expected to be the final round of talks on a new regulation on preventing the dissemination of online content classified as “terrorist.” The working draft poses serious risks to free expression and the rule of law.
European Union flags are waving in front of the headquarters of the European Commission in Brussels. August 5, 2020.
© 2020 Laurie Dieffembacq (Sipa via AP Images)
When the EU first proposed the regulation in 2018, Human Rights Watch and other nongovernmental organizations argued that it was neither necessary nor justified. Since then, the draft regulation has improved in some respects, but is still flawed.
One key issue is who determines what content gets removed. The draft empowers “competent authorities” to issue removal orders, but without safeguards on who those authorities are, removal decisions could be subject to political pressure. Negotiators should define competent authorities as independent courts or independent administrative authorities only.
The draft regulation’s definition of “terrorist content” is overbroad, which could restrict legitimate expression. The draft also introduces EU-wide removal orders, meaning a single EU member state could restrict content throughout the EU based on its own interpretation of “terrorist content.”
Another problem is that the regulation would require internet service providers to remove content very quickly – in as little as one hour – or face steep penalties. This would likely push providers to take down content pre-emptively to ensure compliance, and could force smaller providers to shut down.
The negotiations are expected to consider the use of upload filters to prevent terrorist content from being posted online. These automated tools are notorious for over-censoring and could restrict access to material of value to journalists, academics, and human rights monitors. Negotiators should reject them.
There is also a risk that negotiators will seek to reintroduce the idea of referrals, which would allow competent authorities to refer content – such as anti-government protests or depictions of violence that constitute human rights abuses – to companies for removal based on the companies’ terms of service. Online platforms should not act as quasi-judicial bodies, and pressing them to review content in this manner could easily lead to over-censorship and inadequate process.
Two United Nations human rights experts recently expressed concern about the draft regulation – including its possible impact on free expression and its potential to undermine judicial authority.
As EU negotiators enter the last round of closed-door meetings, they should reject heavy-handed and potentially counterproductive approaches and ensure that the regulation is in line with EU and international human rights standards. As civil society groups have warned, this regulation could affect freedom of expression globally, by inspiring copycat laws and incentivizing companies to remove content deemed “terrorist” – even when it is not.