Fair Machine Learning for Health Care
When deployed in healthcare settings, it is important that models are fair, i.e., that they do not cause harm or unjustly benefit specific subgroups of a population. Improving the fairness of computational models is a complex and nuanced challenge that requires decision makers to reason carefully about multiple, sometimes conflicting criteria. Specific definitions of fairness can vary considerably (e.g., prioritizing equivalent error rates across patient groups vs. similar treatment of similar individuals) and must be contextually appropriate to each application. Inherent conflicts may arise when striving to maximize multiple types of fairness simultaneously (e.g., calibration by group vs. equalized odds [1]). There are often fundamental trade-offs between the overall error rate of a model and its fairness, and it is important to characterize and present these trade-offs clearly and intuitively to stakeholders in the health system. For example, one might care more about fairly prioritizing patients in triage settings [2], but care more about error rates when predicting individual treatment plans and outcomes.

Furthermore, it is computationally challenging to audit and improve model fairness over a large set of intersecting patient attributes, including gender, race, ethnicity, age, and socioeconomic status, among others [3]; yet preventing worst-case performance for minoritized groups is often a central ethical imperative. Thus, it is critical for investigators to consider not only fairness by what measure, but also fairness for whom, and with what trade-offs to other measures of model performance and fairness.
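As a minimal illustration of what an intersectional subgroup audit can look like, the sketch below computes false negative rates for every combination of a few demographic attributes and reports the gap between the worst-performing subgroup and the overall population. The column names and data are hypothetical, and false negative rate is only one of many measures one might audit.

```python
import pandas as pd

def subgroup_fnr_report(df, group_cols, y_true_col="y_true", y_pred_col="y_pred"):
    """Report false negative rates (FNR) for every intersectional subgroup
    defined by the columns in `group_cols` (e.g., race x gender).

    Returns the per-subgroup report, the overall FNR, and the gap between
    the worst (highest) subgroup FNR and the overall FNR.
    """
    def fnr(g):
        positives = g[g[y_true_col] == 1]
        if len(positives) == 0:
            return float("nan")  # undefined when the subgroup has no positive cases
        return (positives[y_pred_col] == 0).mean()

    overall = fnr(df)
    rows = []
    for keys, g in df.groupby(group_cols):
        rows.append({"subgroup": keys, "n": len(g), "fnr": fnr(g)})
    report = pd.DataFrame(rows).sort_values("fnr", ascending=False)
    worst_gap = report["fnr"].max() - overall
    return report, overall, worst_gap


if __name__ == "__main__":
    # Hypothetical audit data; attribute names and values are illustrative only.
    df = pd.DataFrame({
        "y_true": [1, 0, 1, 1, 0, 1, 1, 0],
        "y_pred": [1, 0, 0, 1, 0, 1, 0, 1],
        "race":   ["A", "A", "B", "B", "A", "B", "A", "B"],
        "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    })
    report, overall, gap = subgroup_fnr_report(df, ["race", "gender"])
    print(report)
    print(f"overall FNR: {overall:.2f}, worst-group gap: {gap:.2f}")
```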
Providing a set of models [4] by jointly optimizing for fairness and accuracy is one way to help a decision maker understand how an algorithm will affect the people it interacts with once it is deployed. As we describe in a perspective on intersectionality in machine learning [5], achieving fairness also requires a broader ethical analysis that extends beyond the model development process (data collection, preprocessing, training, deployment) to the wider context of an algorithm's use as a socio-technical artifact, for example by eliciting community participation in defining project goals and by establishing criteria for monitoring the downstream outcomes of the model's use throughout its lifecycle.
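As a simplified illustration of presenting a set of models rather than a single one, the sketch below filters candidate models down to the non-dominated (Pareto-optimal) set with respect to overall error and a fairness gap, so that stakeholders can choose among the remaining trade-offs. The candidate scores are hypothetical, and this is only the basic Pareto-filtering idea, not the multiobjective meta-model method of the cited paper.

```python
def pareto_front(models):
    """Filter (error, fairness_gap, name) tuples down to the non-dominated set.

    A model is dominated if another model is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for err, gap, name in models:
        dominated = any(
            (e2 <= err and g2 <= gap) and (e2 < err or g2 < gap)
            for e2, g2, _ in models
        )
        if not dominated:
            front.append((err, gap, name))
    return sorted(front)


if __name__ == "__main__":
    # Hypothetical candidates scored on held-out data:
    # (overall error rate, worst-group FNR gap, identifier)
    candidates = [
        (0.10, 0.20, "baseline"),
        (0.12, 0.08, "reweighted"),
        (0.11, 0.15, "thresholded"),
        (0.15, 0.07, "constrained"),
        (0.16, 0.12, "dominated-example"),
    ]
    for err, gap, name in pareto_front(candidates):
        print(f"{name}: error={err:.2f}, fairness gap={gap:.2f}")
```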
References
1. Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent Trade-Offs in the Fair Determination of Risk Scores. Innovations in Theoretical Computer Science (ITCS).
2. La Cava, W. G., Lett, E., & Wan, G. (2023). Fair admission risk prediction with proportional multicalibration. Proceedings of the Conference on Health, Inference, and Learning.
3. Kearns, M., Neel, S., Roth, A., & Wu, Z. S. (2018). Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. Proceedings of the 35th International Conference on Machine Learning, 2564–2572. PMLR.
4. La Cava, W. G. (2023). Optimizing fairness tradeoffs in machine learning with multiobjective meta-models. Proceedings of the 2023 Genetic and Evolutionary Computation Conference (GECCO).
5. Lett, E., & La Cava, W. G. (2023). Translating intersectionality to fair machine learning in health sciences. Nature Machine Intelligence.