Intelligible Predictive Models

We study both black-box and glass-box ML methods to improve the intelligibility and/or explainability of models that are trained for clinical prediction tasks using electronic health record (EHR) data. EHR data offer a promising opportunity for advancing our understanding of how clinical decisions and patient conditions interact over time to influence patient health. However, these data are difficult to use for predictive modeling due to the variety of data types they contain (continuous, categorical, text, etc.), their longitudinal nature, high rates of non-random missingness in certain measurements, and other concerns. Furthermore, patient outcomes often have heterogeneous causes and require information to be synthesized across several clinical lab measures and patient visits. Researchers often resort to complex, black-box predictive models to overcome these challenges, thereby introducing additional concerns of accountability, transparency, and intelligibility.
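
To make the missingness point concrete, here is a minimal, hypothetical sketch (all column names and values are invented, not drawn from any real dataset) of one common mitigation: recording *whether* a lab was measured as an explicit feature, since in EHR data the presence of a measurement is often informative on its own.

```python
import numpy as np
import pandas as pd

# Hypothetical EHR extract: one row per visit, mixed data types,
# with labs that are missing not-at-random (ordered only on suspicion).
visits = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "visit_date": pd.to_datetime(
        ["2021-01-04", "2021-06-02", "2021-02-10", "2021-03-15"]),
    "sbp": [142.0, 138.0, 121.0, np.nan],        # continuous
    "troponin": [np.nan, np.nan, 0.04, np.nan],  # measured selectively
    "smoker": ["yes", "yes", "no", None],        # categorical
})

# Keep an explicit missingness indicator instead of silently imputing:
# the fact that a lab was ordered can itself carry signal.
for lab in ["sbp", "troponin"]:
    visits[f"{lab}_measured"] = visits[lab].notna().astype(int)

# Collapse the longitudinal record to one fixed-length row per patient.
features = visits.groupby("patient_id").agg(
    sbp_mean=("sbp", "mean"),
    troponin_ever_measured=("troponin_measured", "max"),
    n_visits=("visit_date", "count"),
)
print(features)
```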

Can’t we just explain black-box models?

Although black-box models are typically accurate, they offer little insight into how they arrive at their predictions, and they may disagree with very similar models about which factors drive their predictive ability.
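
To see the disagreement concretely, the sketch below (synthetic data; not from our studies) fits two comparably accurate models and compares their permutation-importance rankings. With redundant features, the rankings frequently differ even when accuracy does not.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Correlated features let different models spread importance differently.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           n_redundant=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    imp = permutation_importance(model, X_te, y_te, n_repeats=10,
                                 random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:3]
    print(f"{type(model).__name__}: accuracy={model.score(X_te, y_te):.3f}, "
          f"top features={top}")
```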

Feature importance bi-clustering across diseases and predictors.

Symbolic Regression for Interpretable Machine Learning

A promising alternative is to use glass-box ML methods, such as symbolic regression, that can capture complex relationships in data and yet produce an intelligible final model. Symbolic regression methods jointly optimize the structure of a model as well as its parameters, usually with the goal of finding a simple and accurate symbolic model.
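
As a minimal sketch of this joint search (using the open-source gplearn library as a stand-in; the target function and all settings below are illustrative assumptions, not a method from our papers):

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Target with a known, simple symbolic form: y = 2.5*cos(x0) + x1*x1.
rng = np.random.RandomState(0)
X = rng.uniform(-2, 2, (500, 2))
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2

# The evolutionary search varies model structure (which operators act on
# which inputs) and parameters (constants) simultaneously, with a
# parsimony penalty nudging it toward small expressions.
est = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=("add", "sub", "mul", "cos"),
    parsimony_coefficient=0.001,
    random_state=0,
)
est.fit(X, y)
print(est._program)  # e.g. add(mul(2.500, cos(X0)), mul(X1, X1))
```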

However, intelligibility is difficult to define, and is both context- and user-dependent. In general, the intelligibility of a model depends heavily on its representation, i.e., how it defines its feature space.

An example representation from the [Feat docs](https://cavalab.org/feat/).

What makes a representation good? At a minimum, a good representation produces a model that generalizes better than a model trained only on the raw data attributes. In addition, a good representation teases apart the factors of variation in the data into independent components. Finally, an ideal representation is succinct, so as to promote intelligibility: it should have only as many features as there are independent factors in the underlying process, and each of those features should be digestible by the user. Many of our research projects center on these three motivations when designing novel algorithms for interpretable machine learning.
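
The first criterion can be checked directly: compare cross-validated generalization with and without a candidate representation. In this hedged sketch, pairwise interaction features stand in for a learned representation on a synthetic benchmark.

```python
from sklearn.datasets import make_friedman1
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_friedman1(n_samples=500, noise=0.5, random_state=0)

candidates = {
    "raw attributes": Ridge(),
    "interaction features": make_pipeline(
        PolynomialFeatures(degree=2, interaction_only=True), Ridge()),
}
for name, est in candidates.items():
    scores = cross_val_score(est, X, y, cv=5, scoring="r2")
    print(f"{name}: R^2 = {scores.mean():.2f} +/- {scores.std():.2f}")
```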

Can a simple symbolic model be accurate?

Researchers often see the complexity of a model as trading off with its error: more complex models should give better predictions than simple ones. However, the nature of this trade-off is rarely characterized in a robust way.

In fact, we have found that for many tasks, symbolic regression approaches can perform as well as or better than state-of-the-art black-box approaches while still producing simpler expressions.

Symbolic regression algorithms (marked with an asterisk) benchmarked against black-box ML on hundreds of regression problems. See more at https://cavalab.org/srbench.
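
One simple way to characterize the trade-off robustly, in the spirit of these benchmarks, is to sweep a complexity knob and keep only the Pareto-optimal (complexity, test error) pairs. In this sketch, decision trees of increasing depth stand in for any model family with a tunable size.

```python
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=1000, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Collect (model size, test error) across a range of complexity settings.
points = []
for depth in range(1, 12):
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0)
    tree.fit(X_tr, y_tr)
    points.append((tree.tree_.node_count, 1 - tree.score(X_te, y_te)))

# Keep points not dominated by a smaller-or-equal model with lower error.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] < p[1] for q in points)]
print(sorted(pareto))  # error typically plateaus as complexity grows
```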

Do they work in clinical care?

Our preliminary work on symbolic regression approaches to patient phenotyping has shown success in producing accurate and interpretable models of treatment-resistant hypertension. More work is needed to scale these algorithms and to study them in routine clinical care.

A symbolic regression model of treatment-resistant hypertension.

Related Publications

Pediatric ECG-Based Deep Learning to Predict Left Ventricular Dysfunction and Remodeling
Mayourian, J., La Cava, W. G., Vaid, A., Nadkarni, G. N., Ghelani, S. J., Mannix, R., Geva, T., Dionne, A., Alexander, M. E., Duong, S. Q., & Triedman, J. K. (2024)
Circulation
A flexible symbolic regression method for constructing interpretable clinical prediction models
La Cava, W., Lee, P. C., Ajmal, I., Ding, X., Cohen, J. B., Solanki, P., Moore, J. H., and Herman, D. S. (2023)
npj Digital Medicine
Interpretable Symbolic Regression for Data Science: Analysis of the 2022 Competition
de Franca, F. O., Virgolin, M., Kommenda, M., Majumder, M. S., Cranmer, M., Espada, G., ... & La Cava, W. G. (2023)
Preprint
Contemporary Symbolic Regression Methods and their Relative Performance
La Cava, W., Orzechowski, P., Burlacu, B., França, F. O. de, Virgolin, M., Jin, Y., Kommenda, M., and Moore, J. H. (2021)
NeurIPS Track on Datasets and Benchmarks
Interpretation of machine learning predictions for patient outcomes in electronic health records
La Cava, W., Bauer, C. R., Moore, J. H., and Pendergrass, S. A. (2019)
AMIA Annual Symposium
Learning concise representations for regression by evolving networks of trees
La Cava, W., and Moore, J. H. (2019)
International Conference on Learning Representations (ICLR)
Inference of compact nonlinear dynamic models by epigenetic local search
La Cava, W., Danai, K., and Spector, L. (2016)
Engineering Applications of Artificial Intelligence
