Explainable AI with Python - 2nd Edition by Antonio Di Cecco & Leonida Gianfagna (Paperback)
About this item
Highlights
- This comprehensive book on Explainable Artificial Intelligence has been updated and expanded to reflect the latest advancements in the field of XAI, enriching the existing literature with new research, case studies, and practical techniques.
- About the Author: Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in cybersecurity and machine learning as the R&D Director at Cyber Guru.
- 298 Pages
- Computers + Internet, Intelligence (AI) & Semantics
Description
Book Synopsis
This comprehensive book on Explainable Artificial Intelligence has been updated and expanded to reflect the latest advancements in the field of XAI, enriching the existing literature with new research, case studies, and practical techniques. The Second Edition expands on its predecessor by addressing advancements in AI, including large language models and multimodal systems that integrate text, visual, auditory, and sensor data. It emphasizes making complex systems interpretable without sacrificing performance and provides an enhanced focus on additive models for improved interpretability. Balancing technical rigor with accessibility, the book combines theory and practical application to equip readers with the skills needed to apply explainable AI (XAI) methods effectively in real-world contexts.
Features:
- Expanded "Intrinsic Explainable Models" chapter - Delves deeper into generalized additive models and other intrinsic techniques, with new examples and use cases for a better understanding of intrinsic XAI models.
- Enhanced "Model-Agnostic Methods for XAI" - Details how explanations differ between the training set and the test set, including a new model to illustrate these differences more clearly and effectively.
- New section in "Making Science with Machine Learning and XAI" - Presents a visual approach to learning the basic functions in XAI, making the concepts more accessible through an interactive and engaging interface.
- Revised "Adversarial Machine Learning and Explainability" chapter - Includes a code review to enhance understanding and effectiveness, ensuring that code examples are up to date and optimized for current best practices.
- New "Generative Models and Large Language Models (LLMs)" chapter - Explores the role of generative models and LLMs in XAI and how they can be used to create richer, more interactive explanations; also covers the explainability of transformer models and privacy through generative models.
- New "Artificial General Intelligence and XAI" mini-chapter - Explores the implications of Artificial General Intelligence (AGI) for XAI, discussing how advancements toward AGI systems influence XAI strategies and methodologies.
- Enhanced "Explaining Deep Learning Models" chapter - Features new methodologies for explaining deep learning models, enriching the chapter with cutting-edge techniques and insights for deeper understanding.
From the Back Cover
This comprehensive book has been updated and expanded to reflect the latest advancements in the field of XAI, enriching the existing literature with new research, case studies, and practical techniques.
The expanded Second Edition addresses advancements in AI, including LLMs and multimodal systems that integrate text, visual, auditory, and sensor data. It emphasizes making complex systems interpretable without sacrificing performance and provides an enhanced focus on additive models for improved interpretability. Balancing technical rigor with accessibility, the book combines theory and practical application to equip readers with the skills needed to apply explainable AI (XAI) methods effectively in real-world contexts.
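To make the emphasis on additive models concrete, here is a minimal sketch, not taken from the book, of the simplest intrinsically explainable model: a linear regression whose fitted coefficients are themselves the explanation. The generalized additive models the book covers extend this idea by replacing each linear term with a learned smooth function of one feature. The sketch assumes only scikit-learn and its bundled diabetes dataset.
```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

# Load a small regression dataset with named features.
X, y = load_diabetes(return_X_y=True, as_frame=True)

# Linear models are intrinsically explainable: the fitted coefficients
# *are* the explanation of each feature's effect on the prediction.
model = LinearRegression().fit(X, y)

for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>6}: {coef:+8.1f}")
```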
Features:
- Expanded "Intrinsic Explainable Models" Chapter - Now includes a deeper exploration of generalized additive models and other intrinsic techniques, with new examples and use cases for better understanding.
- Enhanced "Model-Agnostic Methods for XAI" - Focuses on how explanations vary between the training and test sets, introducing a new model to illustrate these differences more clearly and effectively (see the sketch after this list).
- New Section in "Making Science with Machine Learning and XAI" - Presents a visual approach to learning fundamental XAI functions, making concepts more accessible through an interactive and engaging interface.
- Revised "Adversarial Machine Learning and Explainability" Chapter - Features a comprehensive code review to improve clarity and effectiveness, ensuring examples align with current best practices.
- New Chapter on "Generative Models and Large Language Models (LLMs)" - Explores the role of generative and large language models in XAI, covering the explainability of transformer models and privacy considerations.
- New "Artificial General Intelligence and XAI" Chapter - Examines implications of Artificial General Intelligence (AGI) on XAI, and how advancements toward AGI systems shape explainability strategies and methodologies.
- Updated "Explaining Deep Learning Models" Chapter - Introduces new methodologies for explaining deep learning models, incorporating cutting-edge techniques and insights for a deeper understanding.
About the Author
Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in cybersecurity and machine learning as the R&D Director at Cyber Guru. Before joining Cyber Guru, he spent 15 years at IBM, holding leadership roles in software development for IT Service Management (ITSM). He is the author of several publications in theoretical physics and computer science and has been recognized as an IBM Master Inventor, with over 15 patent filings.
Antonio Di Cecco (PhD, MBA) is a theoretical physicist with a strong mathematical background, dedicated to delivering AI/ML education at all proficiency levels, from beginners to experts. Passionate about all areas of machine learning, he leverages his mathematical expertise to make complex concepts accessible through both in-person and remote classes. As the founder of a School of AI community inspired by the AI for Good movement, he actively promotes AI education and its positive impact. He also holds a Master's degree in Economics with a focus on innovation. His professional background includes research positions at Sony CSL / Sapienza University, and he currently works at Università D'Annunzio Chieti-Pescara.