FOUR Books for Interpretable Machine Learning
From theory to practice: books that make machine learning explainable.
Interpreting your machine learning models has never been easier with these 4 books, 2 of which you can read for free on the web.
Let’s dive in 👇
1️⃣ Interpretable Machine Learning, by Christoph Molnar.
I don’t think I’m mistaken in calling this book the Bible of interpretability. Christoph was certainly a visionary when he spotted the gap and wrote this book for the ML community. Since then, it’s been cited pretty much everywhere, including in scikit-learn’s documentation.
Pros: it covers almost every machine learning interpretability method out there, clearly explaining each method along with its advantages and limitations.
Cons: it lacks code demos, so if you want to implement what you learned, you’ll still have to do some extra digging to find out how.
Plus: you can read the book for free, although I’d recommend supporting the author by purchasing a copy.
2️⃣ Explanatory Model Analysis, by Przemyslaw Biecek and Tomasz Burzykowski
Pros: clear explanations of plot-based interpretability methods, like ceteris-paribus profiles, partial dependence, and accumulated local effects. It also shows code implementations in both R and Python using the open-source library Dalex.
Cons: It focuses mostly on methods supported by the Dalex library.
Plus: this book is freely available to read.
3️⃣ Interpretable Machine Learning with Python, by Serg Masis (Packt Publishing).
Serg makes a very good case for why interpreting machine learning models used in the real world is extremely important. That’s probably what I liked the most.
In addition, it gives a good explanation of how to interpret linear models and how to keep models interpretable by introducing constraints. Explanations are accompanied by Python code, so you can apply what you learned straight away to interpret your own models.
Cons: it does not cover most of the available open-source Python interpretability libraries, and it focuses on a subset of interpretability methods.
4️⃣ … Did I not say FOUR books? Hmm 🤔
That’s because I’m entertaining the idea of writing a book on interpreting machine learning models with Python, where I discuss pretty much every method out there along with Python code, so you don’t need to dig any further and can apply what you read straight away to explain your models.
Do you think this is necessary?
Ok, I’ll throw in 2 more resources since you’ve read this far down the post:
5️⃣ Interpreting Machine Learning Models with SHAP, also by Christoph Molnar
A lot of people use SHAP to interpret their models, but very few know exactly how these values are calculated. Dangerous, if you ask me.
If you don’t want to be one of them, then check out this book.
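To give a flavour of what “how these values are calculated” actually involves, here is a brute-force computation of exact Shapley values for a tiny made-up linear model (the weights, instance, and background values are all arbitrary; real SHAP implementations use far faster approximations, but the definition they approximate is this one):

```python
from itertools import combinations
from math import factorial

import numpy as np

# Toy linear model of 3 features (weights chosen arbitrarily for illustration)
w = np.array([2.0, -1.0, 0.5])

def model(x):
    return float(w @ x)

x = np.array([1.0, 3.0, -2.0])          # instance to explain
background = np.array([0.0, 1.0, 0.0])  # "average" feature values
n = len(x)

def value(subset):
    # v(S): model output with features outside S replaced by background values
    z = background.copy()
    for j in subset:
        z[j] = x[j]
    return model(z)

def shapley(i):
    # Average the marginal contribution of feature i over all coalitions S
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += weight * (value(S + (i,)) - value(S))
    return phi

phis = [shapley(i) for i in range(n)]
print(phis)  # for a linear model, phi_i = w_i * (x_i - background_i)
```

Note how the Shapley values sum to the difference between the prediction for the instance and the prediction for the background, which is exactly the additivity property SHAP plots rely on.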
6️⃣ Machine Learning Interpretability, online course, by… Well, me. 😁
If you want to sit back and relax while someone explains the various interpretability methods, their pros and limitations, and how to implement them in Python, then check out my course.
It covers almost every method for interpreting models globally and locally, with full coverage of the different ways in which you can estimate SHAP values using the SHAP library.
In addition, I discuss the most widely used Python open-source libraries for interpretability.
That’s all from me folks!
On another note, I have some exciting news to share that I think will benefit you greatly. 🎉 We’ve just released Feature-engine 1.9.0!
Feature-engine 1.9.0 is here! Find out what’s new.
This release supercharges 3 of our feature selection transformers: ProbeFeatureSelection(), RecursiveFeatureAddition(), and RecursiveFeatureElimination().
🔹 ProbeFeatureSelection()
Now even more flexible! Create probes with new distributions and combine them using mean, max, or mean + standard deviation. That means more control than ever over which features to keep or kick out. 😉
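If you haven’t come across probe features before, the idea can be sketched in a few lines of plain scikit-learn. This illustrates the concept only, not Feature-engine’s internals: random “probe” columns that carry no signal are appended to the data, and real features whose importance doesn’t beat the probes get dropped (here using the mean of the probe importances as the threshold, one of the combining rules mentioned above):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy dataset with 5 features, 3 of them informative
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)

# Append 3 random "probe" features that carry no signal whatsoever
n_probes = 3
probes = rng.normal(size=(X.shape[0], n_probes))
X_aug = np.hstack([X, probes])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, y)
importances = model.feature_importances_

real_imp = importances[:X.shape[1]]   # importances of the original features
probe_imp = importances[X.shape[1]:]  # importances of the random probes

# Keep only features more important than the average probe
threshold = probe_imp.mean()
selected = [i for i, imp in enumerate(real_imp) if imp > threshold]
print("selected features:", selected)
```

The logic is intuitive: if a feature can’t out-rank pure noise, it probably isn’t pulling its weight.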
🔹 RecursiveFeatureAddition() & RecursiveFeatureElimination()
No more restrictions! 🥳 These transformers now work with all sklearn estimators. And if your model doesn’t return coefficients or feature importances, permutation importance steps in to determine importance automatically.
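To see why that fallback matters, here’s a sketch of the underlying mechanism using scikit-learn’s own permutation_importance (Feature-engine’s internal implementation may differ; this just shows the idea of ranking features for an estimator that exposes neither coef_ nor feature_importances_):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=4, n_informative=2,
                       random_state=0)

# KNN exposes neither coef_ nor feature_importances_
model = KNeighborsRegressor().fit(X, y)

# Permutation importance works for any fitted estimator:
# shuffle one feature at a time and measure how much the score drops
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("features ranked by importance:", ranking)
```

Because it only needs predictions and a score, this approach works for any model, which is exactly what lifts the old restriction on these transformers.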
🚀 Say hello to smoother, smarter workflows with Feature-engine 1.9.0.
Ready to enhance your skills?
Our specializations, courses and books are here to assist you:
Advanced Machine Learning (specialization)
Forecasting with Machine Learning (course)



