Interpretable machine learning with scoring models
On the subject of interpretable machine learning, I always recommend Cynthia Rudin’s work first.
Start with her excellent talk “Scoring Systems: At the Extreme of Interpretable Machine Learning” (2022) and some of the papers and packages it references.
Her 2021 paper on grand challenges in interpretable machine learning is still worth reading: “Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges”.
A few related GitHub repositories:
- fastSparse (R package)
- Generalized Optimal Sparse Decision Trees (Python package)
- FasterRisk (Python package for sparse linear models with integer coefficients)
- Pycorels (Python package, no longer maintained)
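To make concrete what these packages produce: a scoring system is a sparse linear model whose coefficients are small integers, so a prediction is just adding up a handful of points and (for risk scores) mapping the total through a logistic link. Here is a minimal sketch with entirely made-up features and point values — not the output of any of the packages above:

```python
import math

# Hypothetical scorecard: small integer points per binary feature.
# Feature names and point values are illustrative, not from a real model.
SCORECARD = {
    "age_over_60": 2,
    "prior_event": 3,
    "smoker": 1,
}
INTERCEPT = -4  # baseline points, also made up

def risk_score(features: dict) -> int:
    """Sum the integer points for the features that are present."""
    return INTERCEPT + sum(
        points for name, points in SCORECARD.items() if features.get(name)
    )

def risk_probability(score: int) -> float:
    """Map the integer score to a probability with a logistic link."""
    return 1.0 / (1.0 + math.exp(-score))

patient = {"age_over_60": True, "prior_event": True, "smoker": False}
score = risk_score(patient)        # -4 + 2 + 3 = 1
prob = risk_probability(score)
```

The appeal is that the whole model fits on an index card: a clinician can audit every point value, and computing a prediction requires no software at all. Packages like FasterRisk search for scorecards of this form that are near-optimal despite the integer constraint.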
Zachary Lipton’s “The Mythos of Model Interpretability” was also hugely influential on my thinking about interpretability.
Interpretability (added 2025-07-11)
Other papers I like on the subject of interpretability:
- “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning” – This paper is one of the main reasons I’m skeptical of SHAP and similar post-hoc “opening the black box” interpretability methods.