With the increase in online content circulation, new challenges have arisen: the dissemination of defamatory content, non-consensual intimate images, hate speech, fake news and copyright violations, among others. Given the enormous volume of work involved in moderating content, internet platforms are developing artificial intelligence systems to automate content removal decisions. This article examines the reported performance of current content moderation technologies from a legal perspective, addressing the following question: what risks do these technologies pose to freedom of expression, access to information and diversity in the digital environment? The legal analysis focuses on international human rights law standards. Despite recent improvements, content moderation technologies still fail to understand context, thereby posing risks to users’ freedom of expression, access to information and equality. The article therefore concludes that these technologies should not be the sole basis for decisions that directly affect user expression.