Rewriting the Narrative of AI Bias: A Data Feminist Critique of Algorithmic Inequalities in Healthcare
Abstract
AI-driven healthcare systems perpetuate gendered and racialised health inequalities, misdiagnosing marginalised populations as a result of historical exclusions in medical research and dataset construction. These disparities are further reinforced by androcentric medical epistemologies in which white male bodies are treated as the universal norm. The 'othering' of marginalised communities also manifests in algorithmic exclusions and biases, where AI systems flag non-dominant populations as statistical anomalies rather than central subjects, entrenching structural inequities in healthcare access and treatment. This article critically examines the framing of AI bias within legal narratives, particularly through the EU AI Act, arguing that bias is not merely a technical flaw but a structural consequence of exclusionary knowledge production. The study integrates data feminism as a counter-narrative to dominant AI governance frameworks, drawing on Richard Sherwin's legal narrative theory, Kimberlé Crenshaw's intersectionality theory, Carol Smart's socio-legal critiques, and Ruha Benjamin's abolitionist AI perspectives. The analysis highlights how specific provisions of the EU AI Act, namely risk-based classification (Article 6), bias audits (Article 10), and transparency requirements (Article 13), reinforce androcentric, racialised, and neoliberal exclusions, failing to mandate intersectional accountability or structural interventions. By challenging the formalist framing of bias in AI regulation, the article advocates an equity-driven approach to AI governance grounded in data feminism, one that embeds data sovereignty, participatory oversight, and redistributive justice.

