Designing and Implementing Explainable ML Solutions
Michael Munn, David Pitman

#AI #ML #Keras #TensorFlow #PyTorch #HuggingFace
Most intermediate-level machine learning books focus on how to optimize models by increasing accuracy or decreasing prediction error. But this approach often overlooks the importance of understanding why and how your ML model makes the predictions that it does.
Explainability methods provide an essential toolkit for better understanding model behavior, and this practical guide brings together best-in-class techniques for model explainability. Experienced machine learning engineers and data scientists will learn hands-on how these techniques work so that they can apply these tools more easily in their daily workflows.
Table of Contents
Chapter 1. Introduction
Chapter 2. An Overview of Explainability
Chapter 3. Explainability for Tabular Data
Chapter 4. Explainability for Image Data
Chapter 5. Explainability for Text Data
Chapter 6. Advanced and Emerging Topics
Chapter 7. Interacting with Explainable AI
Chapter 8. Putting It All Together
The use of AI as a tool to solve real-world challenges has grown rapidly, making these systems ubiquitous in our lives. More and more, machine learning is being used to support high-stakes decisions in applications ranging from healthcare to autonomous driving. With this growth, the need to explain these opaque AI systems has become ever more urgent, and in many cases the lack of explainability is a barrier to adoption in domains where interpretability is essential.
This book is a collection of some of the most effective and commonly used techniques for explaining why an ML model makes the predictions it does. We discuss the many aspects of Explainable AI (XAI), including its challenges, metrics for success, and case studies that illustrate best practices. Ultimately, the goal of this book is to bridge the gap between the vast body of work that has been done in XAI and the practitioners who want to implement XAI in their ML workflows, serving as a quick reference along the way.
Who Should Read This Book?
Modern ML and AI are being used to solve very complex real-world problems, and model explainability matters to everyone who interacts with or develops those models, from the engineers and product owners who build these systems to the business stakeholders and the individuals who use them. This book is for anyone wishing to incorporate the best practices of Explainable AI into their ML solutions, and anyone with an interest in model explainability and interpretability will benefit from the discussions here.
That said, our primary focus is on practitioners: engineers and data scientists who are tasked with building ML models and who need methods for incorporating explainability into their current workflows. This book introduces a catalog of ideas concerning model explainability and will enable you to quickly get up to speed on this increasingly important and rapidly evolving field of AI. We discuss best practices for implementing these techniques and help you make informed decisions about which technique to use and when. We also step back to look at the big picture and discuss how XAI can be used throughout the entire ML workflow to help you build more robust ML solutions.
This book is not meant to be a foundational reference on machine learning, so we won't spend time discussing specific model architectures or the details of model building. We assume that you are already somewhat familiar with the basics of ML and data processing. We review these concepts as they arise, but refer you to the many other resources available to fill in any remaining gaps.
What Is and What Is Not in This Book?
Explainability is one of the core tenets of Responsible AI, a broad and emerging field that encompasses topics such as ML fairness, AI ethics, governance, and privacy and security. We won't go into these other areas in this book, although they may come up in context.
Explainability has become increasingly important in recent years, and there is a deep and very active area of academic research focused on model explainability and on advancing cutting-edge techniques. While we will at times reference some of this research, surveying it is not the goal of this book, and we will not dive deep into active research topics. All of the techniques discussed here are grounded in mathematical theory, be it game theory or mathematical optimization, and while it is helpful at times to understand these theoretical underpinnings, they are not the focus of this book either. Furthermore, although these methods may have sound theoretical groundings, their application and benefits are far from fully understood. Our goal is to help you, the practitioner, quickly get up to speed in the field, learn the common techniques of XAI, and gain some insight into the tricky gray area of how best to apply these tools in your ML systems.
This is a book for ML engineers working in industry, with a focus on practical implementation for real-world applications. It is not intended for research scientists in industry labs or academia, though early-career researchers may find it a valuable reference. In general, we do not explore the theory behind the different techniques, but we do detail the mathematics and reference the relevant papers so you can investigate further if you wish.
...
As you read this book, you may notice that we also cover many techniques that explain how the dataset and its structure influence the behavior of the model. This can seem counterintuitive: why are we concerned with datasets when models are what we want to better understand? There are two reasons for this approach. The first is a matter of what is under the control of the ML practitioner and can easily be changed: it is usually far easier to modify a dataset than to rebuild or change a model architecture. The second is that many of the techniques that focus on the model itself generate explanations that, while intriguing, are ultimately not actionable. For example, some explainability techniques seek to explain the behavior of CNN image classification models by creating artificial images that show how the model perceives an image at different layers. While this type of technique creates fascinating explanations that lead to vigorous discussion about the way CNNs may work, we have yet to see it consistently applied in industry to achieve the goals of explainability listed above.
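To make that example concrete, here is a minimal sketch of this kind of layer-level feature visualization (activation maximization) in Keras. The pretrained model, the layer name, the filter index, and the step sizes are illustrative assumptions for the sketch, not choices drawn from this book.

import tensorflow as tf

# Illustrative assumption: a pretrained VGG16 and one of its intermediate conv layers.
model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
layer = model.get_layer("block3_conv1")
feature_extractor = tf.keras.Model(inputs=model.inputs, outputs=layer.output)

filter_index = 0  # which filter in the chosen layer to visualize
# Start from a mostly gray image with a small amount of noise.
image = tf.Variable(tf.random.uniform((1, 128, 128, 3)) * 0.25 + 0.5)

# Gradient ascent: nudge the input so it maximizes the mean activation of the
# chosen filter, producing an "artificial image" that filter responds to strongly.
for _ in range(30):
    with tf.GradientTape() as tape:
        activation = feature_extractor(image)
        loss = tf.reduce_mean(activation[..., filter_index])
    grads = tape.gradient(loss, image)
    image.assign_add(tf.math.l2_normalize(grads) * 10.0)

visualization = image.numpy().squeeze()  # inspect or plot this array

The resulting image can be striking to look at, but, as noted above, it rarely tells you what to change about your data or pipeline, which is why we emphasize dataset-focused techniques throughout the book.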
Michael Munn is a research software engineer at Google. His work focuses on better understanding the mathematical foundations of machine learning and how those insights can be used to improve machine learning models at Google. Previously, he worked in the Google Cloud Advanced Solutions Lab helping customers design, implement, and deploy machine learning models at scale. Michael has a PhD in mathematics from the City University of New York. Before joining Google, he worked as a research professor.
David Pitman is a staff engineer working in Google Cloud on the AI Platform, where he leads the Explainable AI team. He's also a co-organizer of PuPPy, the largest Python group in the Pacific Northwest. David has a Master of Engineering degree and a BS in computer science from MIT, where he previously served as a research scientist.