Some AI models do not need to be explained; evidence of their reliability is enough.
In many medical applications of AI, however, the explainability of models is crucial.
This view is shared by the FDA, whose regulatory guidelines state that ML recommendations must enable a health care provider “… to independently review the basis for such recommendations”.
Although AI systems may be complex, the clinical models they produce need not be.
We investigate state-of-the-art methods, namely symbolic regression, neurosymbolic AI, and large language models (LLMs), as tools to generate simple clinical models that clinicians can use to better understand and treat their patients.
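To give a concrete sense of what a “simple clinical model” produced by such methods can look like, the following minimal sketch uses symbolic regression via the gplearn library. The data, feature names (age, systolic blood pressure), and coefficients are purely hypothetical illustrations, not drawn from any dataset analyzed in this work; the point is only that the fitted model is a short closed-form expression a clinician can read directly.

```python
# Illustrative sketch only: the synthetic "clinical" data below is hypothetical.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(30, 80, n)      # hypothetical predictor: patient age (years)
sbp = rng.uniform(100, 180, n)    # hypothetical predictor: systolic BP (mmHg)
X = np.column_stack([age, sbp])

# Hypothetical ground-truth risk score the regressor should rediscover
y = 0.02 * age + 0.01 * sbp + rng.normal(0, 0.05, n)

est = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.01,   # penalize long formulas to keep the model simple
    random_state=0,
)
est.fit(X, y)

# The fitted model is a compact symbolic formula over the input features,
# e.g. something like add(mul(0.02, X0), mul(0.01, X1)).
print(est._program)
```

Here the parsimony penalty trades a small amount of accuracy for a formula short enough to be reviewed by hand, which is precisely the kind of independent review the FDA guideline quoted above calls for.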