A Misogynistic Glitch? A Feminist Critique of Algorithmic Content Moderation
Abstract
In recent years, all leading social media platforms have integrated artificial intelligence (AI) into their content moderation workflows. An increasingly prevalent narrative suggests that algorithms capable of detecting and removing prohibited or harmful content are crucial for ensuring that marginalised groups have equal opportunities to participate in civic discourse. Drawing on feminist theory, this article challenges this narrative. It begins by unpacking the evolution of perceptions of algorithmic content moderation, highlighting the recent shift from simplistic efficiency considerations towards a more compelling portrayal of AI as a means of fostering more inclusive dialogue online. It then examines how the use of AI to detect and remove violative material further marginalises – rather than empowers – women seeking to engage on social media. Beyond failing to adequately address online gender-based violence and misogyny, algorithms employed in content moderation often erroneously restrict women’s lawful counter-speech, thereby preventing them from contributing to public debate. The article concludes with brief reflections on how the inevitable expansion of technological solutions in content moderation could be aligned with feminist ideals.

