Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps
Denis Rothman

#Python
#XAI
#AI
#TensorFlow
#machine_learning
Open up the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools needed to deploy Explainable AI (XAI) in your apps and reporting interfaces.
Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and their findings are often subtle, surprising, and technically complex to describe.
Hands-On Explainable AI (XAI) with Python will see you work through hands-on machine learning projects in Python that are strategically arranged to strengthen your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications.
You will build XAI solutions in Python, TensorFlow 2, Google Cloud’s XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.
You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and use Python to integrate predictions and machine learning model visualizations into explainable user interfaces.
By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.
This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.
Some of the potential readers of this book include:
In this book, you'll learn about tools and techniques using Python to visualize, explain, and integrate trustworthy AI results to deliver business value, while avoiding common issues with AI bias and ethics.
You'll also work through hands-on machine learning projects in Python and TensorFlow 2.x, and learn how to use WIT, SHAP, and other key explainable AI (XAI) tools, along with those designed by IBM, Google, and other advanced AI research labs.
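As a taste of the kind of tooling covered, here is a minimal sketch of explaining a scikit-learn model with SHAP. It is not code from the book; the dataset, model choice, and plot call are illustrative assumptions.

```python
# A minimal sketch (not the book's own code): explaining a scikit-learn
# model with SHAP. Dataset, model choice, and names are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a simple black box model on a public dataset
data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Compute SHAP values and summarize which variables drive the predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # global view of key influencing variables
```

A summary plot like this gives a global view of which variables push predictions up or down, the kind of "key influencing variables" review described above.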
Two of my favorite concepts that I hope readers will also fall in love with are:
Finally, I want readers to understand that it is an illusion to think that anyone can understand the output of an AI program containing millions of parameters just by looking at the code and intermediate outputs.
The book shows you how to implement two essential tools to detect problems and bias: Facets and Google's What-If Tool (WIT). With these, you'll learn how to find, display, and explain bias to the developers and users of an AI project.
In addition to this, you'll use the knowledge and tools you've acquired to build an XAI solution from scratch using Python, TensorFlow, Facets, and WIT.
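For illustration only, here is a minimal sketch of pointing the What-If Tool at a trained model from a notebook, assuming the witwidget package and a Jupyter or Colab environment; my_model and test_examples below are hypothetical placeholders, not objects from the book.

```python
# A minimal sketch, assuming the witwidget package in a Jupyter/Colab notebook.
# my_model and test_examples are hypothetical placeholders for your own trained
# model and a list of examples (e.g., tf.train.Example protos).
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def predict_fn(examples):
    # Hypothetical adapter: return prediction scores for a list of examples.
    return my_model.predict_proba(examples)

config_builder = (WitConfigBuilder(test_examples)
                  .set_custom_predict_fn(predict_fn))
WitWidget(config_builder, height=720)  # interactive slicing, counterfactuals, bias checks
```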
We often isolate ourselves from reality when experimenting with machine learning (ML) algorithms. We take the ready-to-use online datasets, use the algorithms suggested by a given cloud AI platform, and display the results as we saw in a tutorial we found on the web.
However, by only focusing on what we think is the technical aspect, we miss a lot of critical moral, ethical, legal, and advanced technical issues. In this book, we will enter the real world of AI with its long list of XAI issues, using Python as the key language to explain concepts.
"Interpretability and explainability are key considerations beyond predictive accuracy for building trust in machine learning systems for high-stakes applications. There are many different ways of explaining that are relevant for different use cases and personas who consume the explanations. Denis Rothman has done a good job in providing step-by-step tutorial examples in Python to provide an entrée into this important topic, focusing on one of the ways of explaining: post hoc local explanations."
Kush R. Varshney, Distinguished Research Staff Member and Manager, Foundations of Trustworthy AI, IBM Research
"Hands-On Explainable AI (XAI) with Python is a timely book on a complex subject, and it fulfills its promise. The book covers the whole spectrum i.e. XAI for types of users, XAI for phases of a project, legal issues, data issues etc. It also covers techniques like LIME, SHAP from Microsoft, and WIT from Google, and also explores implementation scenarios like healthcare, self-driving cars etc. There is a lot to learn from this book both in the breadth and depth and it's a recommended read."
Ajit Jaokar, Principal Data Scientist/AI Designer, Feynlabs.ai, and Director, FutureText
"The timing of Denis Rothman's book Hands-on Explainable AI (XAI) with Python is perfect. Not only does the book provide a solid overview of the XAI concepts and challenges necessitated by XAI, but it is a perfect catalyst for those data scientists who want to get their hands dirty exploring different XAI techniques."
Bill Schmarzo, Dean of Big Data, Author of The Economics of Data, Analytics, and Digital Transformation
"Hands-on Explainable AI (XAI) with Python covers XAI white box models for the explainability and interpretability of algorithms with transparency for the accuracy of predictable outcomes and results from XAI applications keeping ethics in mind. Denis Rothman shows how to install LIME, SHAP, and WIT tools and the ethical standards to maintain balanced datasets with best practices and principles. The book is a recommended read for data scientists."
Dr. Ganapathi Pulipaka, Chief Data Scientist, Chief AI HPC Scientist, DeepSingularity
Denis Rothman graduated from Sorbonne University and Paris-Diderot University, writing one of the very first word2vector embedding solutions. He began his career authoring one of the first AI cognitive natural language processing (NLP) chatbots applied as a language teacher for Moët et Chandon and other companies. He has also authored an AI resource optimizer for IBM and apparel producers. He then authored an advanced planning and scheduling (APS) solution that is used worldwide. Denis is an expert in explainable AI (XAI), having added interpretable mandatory, acceptance-based explanation data and explanation interfaces to the solutions implemented for major corporate aerospace, apparel, and supply chain projects.