About our recent HUMIES award-winning algorithm for clinical prediction models
Interpretable Prediction Models
Some AI models do not need to be explained; evidence of their reliability is enough. But in many medical applications of AI, the explainability of models is crucial. This view is shared by the FDA, whose regulatory guidelines state that ML recommendations must enable a health care provider "… to independently review the basis for such recommendations".
Although AI systems may be complex, the clinical models they produce need not be. We investigate state-of-the-art methods (symbolic regression, neurosymbolic AI, and large language models (LLMs)) as tools for generating simple clinical models that clinicians can use to better understand and treat their patients.
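To give a flavor of why symbolic regression yields readable models: it searches a space of closed-form expressions and returns a formula a clinician can inspect directly, rather than a black-box predictor. The following is a minimal toy sketch of that idea; the synthetic data, the tiny hand-written candidate pool, and the exhaustive search are all invented for illustration (real symbolic regression systems evolve expressions, e.g. with genetic programming) and this is not the award-winning algorithm itself.

```python
# Toy sketch of symbolic regression: score a small pool of simple,
# human-readable expressions on data and keep the most accurate one.
# Everything here (data-generating rule, candidates) is illustrative only.
import math

# Synthetic "patient" data: (age, biomarker) -> risk score,
# generated from a hidden rule the search should rediscover.
data = [(a, b, 0.1 * a + 2.0 * b)
        for a in range(20, 80, 5)
        for b in (0.5, 1.0, 1.5)]

# Candidate interpretable models, each a (description, function) pair.
candidates = [
    ("0.1*age + 2*biomarker", lambda a, b: 0.1 * a + 2.0 * b),
    ("0.05*age * biomarker",  lambda a, b: 0.05 * a * b),
    ("log(age) + biomarker",  lambda a, b: math.log(a) + b),
]

def mse(f):
    """Mean squared error of a candidate model over the dataset."""
    return sum((f(a, b) - y) ** 2 for a, b, y in data) / len(data)

# The "model" handed to a clinician is a formula, not a weight matrix.
best = min(candidates, key=lambda c: mse(c[1]))
print(best[0])  # -> 0.1*age + 2*biomarker
```

Because the output is an explicit expression, a reviewer can check each term against clinical knowledge, which is exactly the kind of independent review the FDA guidance calls for.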
