Translating Intersectionality to Fair Machine Learning
The field of fair machine learning has sought to make many social concepts as concrete as possible, so that they can be reflected in the socio-technical artifacts we build in the world. In a new commentary led by Dr. Elle Lett, we argue that the social theory of intersectionality has more to offer fair ML than the reporting of model metrics over intersectional subgroups.
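For context, the baseline practice the commentary argues is necessary but not sufficient looks roughly like the sketch below: computing a performance metric within each subgroup defined by the cross of protected attributes, rather than by each attribute marginally. This is a minimal illustration with hypothetical `race`, `gender`, `y_true`, and `y_pred` column names, not code from the commentary.

```python
# Minimal sketch of intersectional subgroup metric reporting (illustrative only).
# Assumes binary labels/predictions and hypothetical column names.
import pandas as pd

def tpr(group: pd.DataFrame) -> float:
    """True positive rate within one subgroup (NaN if the subgroup has no positives)."""
    positives = group[group["y_true"] == 1]
    return positives["y_pred"].mean() if len(positives) else float("nan")

def intersectional_tpr(df: pd.DataFrame) -> pd.DataFrame:
    # Group on the combination of attributes, not on each attribute marginally,
    # and report subgroup sizes alongside the metric.
    grouped = df.groupby(["race", "gender"])
    return pd.DataFrame({"n": grouped.size(), "tpr": grouped.apply(tpr)})
```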
In this piece, we review the six core concepts of intersectionality theory articulated by Collins and Bilge in their textbook, *Intersectionality*¹. These core concepts pertain to many parts of the ML pipeline beyond model training, including data collection/generation, interpretability, transportability between sites, and post-deployment impact studies.
Intersectionality is not a problem to be solved. Instead, as a critical theory, it prompts us to confront difficult questions. For example: how do we address the fairness implications for minoritized groups when our measurement certainty is limited by small subgroup sample sizes? Furthermore, it suggests discretion in deploying ML technologies: some use cases may not be appropriate for ML if the data cannot sufficiently represent marginalized groups or the tools cannot be deployed fairly.
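To make the sample-size point concrete (this back-of-the-envelope illustration is ours, not the commentary's): under the usual normal approximation, a 95% confidence interval for an estimated subgroup rate \(p\) has half-width \(1.96\sqrt{p(1-p)/n}\), so the uncertainty around an intersectional subgroup's metric grows quickly as that subgroup's sample size shrinks.

```python
# Illustrative numbers only: 95% CI half-width for an estimated rate of 0.8
# as the intersectional subgroup shrinks (normal approximation, z = 1.96).
import math

p, z = 0.8, 1.96
for n in (10_000, 1_000, 100, 30):
    half_width = z * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>6}: 0.80 ± {half_width:.3f}")
```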
References
1. Collins, P. H. & Bilge, S. *Intersectionality* (John Wiley & Sons, 2020).