Explainable artificial intelligence (XAI)


Explainable AI, which sometimes goes by the acronym XAI, refers to the set of technologies and practices required to make the outputs of an AI model understandable and interpretable by human experts.

Is a
Industry

Industry attributes

Parent Industry
Artificial Intelligence (AI)

Other attributes

Also Known As
Explainable Machine Learning (XML)
Interpretable AI
XAI
Overview

Explainable artificial intelligence, also known as interpretable AI or XAI, is a set of processes and methods that help human users comprehend and trust the output of an AI model. As AI algorithms become more advanced, they increasingly produce results that are difficult for humans to interpret. When users cannot retrace how an algorithm arrived at a result, the model becomes a "black box" that is impossible to interpret. Black box models are learned directly from data, and even the engineers or data scientists who built them cannot fully explain the internal computations that lead to a given result.

Diagram explaining explainable AI from DARPA.


As machine learning and AI techniques find wider use, unexplainable "black box" models make it difficult to reproduce and audit results. Explainable AI is used to describe an AI model, its expected impact, and its potential biases, helping to characterize the model's accuracy, fairness, and transparency. It is crucial to responsible AI development, building trust and confidence in AI models by allowing human users to understand how a model reached its final output.

There are multiple approaches to implementing explainable AI. The U.S. National Institute of Standards and Technology (NIST) defines four principles driving explainable AI:

  • Explanation—Systems deliver accompanying evidence or reason(s) for all outputs.
  • Meaningful—Systems provide explanations that are understandable to individual users.
  • Explanation accuracy—The explanation correctly reflects the system’s process for generating the output.
  • Knowledge limits—The system operates only under the conditions for which it was designed or when its output has achieved sufficient confidence levels.

NIST states that explanations can range from simple to complex, depending on the consumer in question, and illustrates explanation types across five sample explainability categories: user benefit, societal acceptance, regulatory and compliance, system development, and owner benefit.
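The "explanation" and "knowledge limits" principles above can be sketched in a few lines: a prediction wrapper that always returns a stated reason alongside its output, and that declines to answer when its confidence falls outside the conditions it was designed for. The function name, class labels, and threshold below are illustrative assumptions, not part of the NIST guidance.

```python
# Minimal sketch of two NIST principles: every output carries an explanation,
# and the system refuses to answer outside its knowledge limits.

def predict_with_knowledge_limits(probabilities, labels, threshold=0.8):
    """Return (label, explanation), or (None, reason) if confidence is too low."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    confidence = probabilities[best]
    if confidence < threshold:
        # Knowledge limits: flag that the system is operating outside
        # the conditions it was designed for, instead of guessing.
        return None, f"declined: confidence {confidence:.2f} below {threshold}"
    # Explanation: the output is accompanied by supporting evidence.
    return labels[best], f"predicted with confidence {confidence:.2f}"

label, why = predict_with_knowledge_limits([0.1, 0.85, 0.05], ["cat", "dog", "bird"])
print(label, "-", why)   # dog - predicted with confidence 0.85

label, why = predict_with_knowledge_limits([0.4, 0.35, 0.25], ["cat", "dog", "bird"])
print(label, "-", why)   # None - declined: confidence 0.40 below 0.8
```

A real system would calibrate the threshold per deployment; the point is only that the refusal itself is part of the explainable output.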

Explainable AI helps developers ensure that the system is working as expected; this might be necessary to meet regulatory standards, or it might be important in allowing those affected by a decision to challenge or change that outcome.

Research programs

Academic institutions, the private sector, and governments all pursue research into explainable AI. DARPA, the research and development agency of the U.S. Department of Defense, launched its Explainable Artificial Intelligence (XAI) program in August 2016.

Tools

A number of companies, including Google, offer tools that help developers build explainable AI processes into their models.
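One model-agnostic technique such tools commonly implement is permutation feature importance: scramble one input feature at a time and measure how much the model's accuracy drops, which reveals which features a black-box model actually relies on. The toy model and data below are illustrative and not taken from any specific vendor's product; for determinism the sketch reverses each column rather than shuffling randomly, as real tools do.

```python
# Model-agnostic sketch of permutation feature importance: a larger accuracy
# drop after scrambling a feature means the model depends on it more.

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y):
    base = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        # Reverse column j as a fixed permutation (real tools shuffle
        # randomly and average over repeats) to break its link to y.
        column = [row[j] for row in X][::-1]
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Toy "black box": predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y))  # [1.0, 0.0]: only feature 0 matters
```

Because the technique only needs to query the model's predictions, it works on any black box, which is why variants of it appear across commercial and open-source explainability toolkits.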



Further Resources

  • A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI. Erico Tjoa, Cuntai Guan. July 2019. https://arxiv.org/pdf/1907.07374.pdf
  • Artificial Intelligence Confronts a 'Reproducibility' Crisis. Gregory Barber. Web, September 16, 2019. https://www.wired.com/story/artificial-intelligence-confronts-reproducibility-crisis/
  • Explainable AI for Science and Medicine. Web. https://www.youtube.com/watch?v=B-c8tIgchu0
  • Explainable AI in industry: practical challenges and lessons learned: implications tutorial. Krishna Gade, Sahin Cem Geyik, Krishnaram Kenthapadi, Varun Mithal, and Ankur Taly. Conference paper, January 2020. https://dl.acm.org/doi/abs/10.1145/3351095.3375664
  • Explainable AI: The Basics. The Royal Society. PDF / Policy Brief, November 2019. https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf
