Translating Intersectionality to Fair Machine Learning

The field of fair machine learning has sought to make social concepts concrete enough to be reflected in the socio-technical artifacts we build and deploy in the world. In a new commentary led by Dr. Elle Lett, we argue that the social theory of intersectionality offers fair ML more than the reporting of model metrics over intersectional subgroups.

Translating intersectionality to fair machine learning in health sciences
Lett, E. and La Cava, W. G. (2023)
Nature Machine Intelligence

In this piece, we review the six core concepts of intersectionality theory articulated by Collins and Bilge in their textbook, Intersectionality [1]. These concepts bear on many parts of the ML pipeline beyond model training, including data collection and generation, interpretability, transportability between sites, and post-deployment impact studies.

Intersectionality is not a problem to be solved. As a critical theory, it instead prompts us to confront difficult questions. For example: how do we address fairness implications for minoritized groups when our measurement certainty is limited by small sample sizes? It also counsels discretion in deploying ML technologies: some use cases may not be appropriate for ML at all if the data cannot sufficiently represent marginalized groups or the tools cannot be fairly deployed.


  1. Collins, P. H. & Bilge, S. Intersectionality (John Wiley & Sons, 2020).