When deployed in healthcare settings, it’s important that models are fair; that is, that they do not cause harm or unjustly benefit specific subgroups of a pop...
Research
Our research focuses on developing machine learning methods and using them to explain the principles underlying complex biomedical processes. We use these methods to learn predictive models from electronic health records (EHRs) that are both interpretable to clinicians and fair to the population on which they are deployed. Our long-term goal is to positively impact human health by developing methods that are flexible enough to automate entire computational workflows underlying scientific discovery and medicine.
We study both black-box and glass-box ML methods to improve the intelligibility and/or explainability of models that are trained for clinical prediction task...
While artificial intelligence (AI) has become widespread, many commercial AI systems are not yet accessible to individual researchers or the general public ...
About our recent HUMIES award-winning algorithm for clinical prediction models
A new perspective on how this social theory relates to fair machine learning.
We consistently observe lexicase selection running times that are much lower than its worst-case bound of \(O(NC)\). Why?
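For readers unfamiliar with the algorithm behind that bound: lexicase selection picks one parent by shuffling the test cases and repeatedly keeping only the candidates with the best error on the next case, so the worst case touches all \(N\) individuals on all \(C\) cases. A minimal sketch (the function name, the list-of-lists `error_matrix`, and the toy data are our own illustration, not code from the post) shows why the candidate pool usually collapses long before every case is examined:

```python
import random

def lexicase_select(population, error_matrix):
    """Select one individual by lexicase selection.

    population: list of individual indices
    error_matrix[i][c]: error of individual i on test case c
    Worst case is O(NC): every case may scan every surviving
    candidate. In practice the candidate pool often shrinks to
    one after a few cases, which is the behavior the post asks about.
    """
    candidates = list(population)
    cases = list(range(len(error_matrix[0])))
    random.shuffle(cases)  # cases are considered in random order
    for c in cases:
        if len(candidates) == 1:
            break  # early exit: most selections end here quickly
        best = min(error_matrix[i][c] for i in candidates)
        candidates = [i for i in candidates if error_matrix[i][c] == best]
    return random.choice(candidates)
```

On a toy error matrix where individual 0 is elite on every case, the filter converges to it regardless of the shuffled case order; the interesting empirical question is how fast that convergence happens on realistic error matrices.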