When deployed in healthcare settings, it is important that models are fair, i.e., that they do not cause harm or unjustly benefit specific subgroups of a pop...
Research
Our research focuses on developing machine learning methods and using them to explain the principles underlying complex biomedical processes. We use these methods to learn predictive models from electronic health records (EHRs) that are both interpretable to clinicians and fair to the population on which they are deployed. Our long-term goal is to positively impact human health by developing methods that are flexible enough to automate entire computational workflows underlying scientific discovery and medicine.
Overviews
We study both black-box and glass-box ML methods to improve the intelligibility and/or explainability of models that are trained for clinical prediction task...
While artificial intelligence (AI) has become widespread, many commercial AI systems are not yet accessible to individual researchers or the general public ...
Recent Publications
Optimizing fairness tradeoffs in machine learning with multiobjective meta-models
Genetic and Evolutionary Computation Conference (GECCO)
Fair admission risk prediction with proportional multicalibration
Conference on Health, Inference, and Learning
Proceedings of Machine Learning Research
A flexible symbolic regression method for constructing interpretable clinical prediction models
npj Digital Medicine
Posts
About our recent HUMIES award-winning algorithm for clinical prediction models
A new perspective on how this social theory relates to fair machine learning.
We consistently observe lexicase selection running times that are much lower than its worst-case bound of \(O(NC)\). Why?
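For readers unfamiliar with the algorithm behind that post, the following is a minimal sketch of standard lexicase selection (not the lab's implementation): test cases are visited in random order, and at each case the candidate pool is filtered down to the individuals with the best error on that case. The worst-case cost of one selection is \(O(NC)\) for population size \(N\) and \(C\) cases, but in practice the pool often collapses to a single candidate after only a few cases, which is one intuition for the lower observed running times.

```python
import random

def lexicase_select(population, errors):
    """Select one individual by lexicase selection.

    population : list of individuals
    errors     : errors[i][j] = error of individual i on test case j
                 (lower is better)
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)  # cases are considered in a random order each selection
    for c in cases:
        # keep only the candidates with the best (lowest) error on this case
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break  # early exit: this is why observed cost is often far below O(NC)
    # ties after all cases are broken uniformly at random
    return population[random.choice(candidates)]
```

The inner loop touches at most all \(N\) candidates for each of the \(C\) cases, giving the \(O(NC)\) bound; the early `break` is where the observed savings come from.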