
What is Explainable AI and How to Build Trust in AI Models

What is Explainable AI

The term “Explainable AI” has become a common phrase in the workplace as AI-powered technologies continue to grow. Explainable AI (XAI) is a collection of techniques, tools, and frameworks that help designers and users of AI systems understand how and why those systems arrive at their predictions.

According to a June 2020 IDC report, business decision-makers consider explainability a “critical requirement” for AI. DARPA, the European Commission’s High-Level Expert Group on AI, and the National Institute of Standards and Technology have all made explainability a guiding principle of AI development. Startups such as Truera are emerging to offer “explainability as a service,” and tech giants like IBM and Google have open-sourced XAI tools and methods.

Although XAI is more desirable than black-box AI, where a system’s inner workings aren’t exposed, the mathematics behind the algorithms can make it hard to achieve. Companies also sometimes have difficulty defining “explainability” for a given application. A FICO survey found that 65% of employees are unable to understand how AI model decisions and predictions are made, which further complicates the problem.

What is Explainable AI (XAI), and how can it be used?

In general, there are three types of XAI explanations: Local, Global, and Social.

Global explanations provide information about the system as a whole, not just the processes that result in a prediction or decision. These explanations often contain summaries of how the system uses individual features to make predictions, as well as “meta information” such as the data used to train it.

Local explanations give a detailed account of how a model arrived at a particular prediction. These could include information on how the model generates an output, or which flaws in the input data will affect it.

Social Influence explanations refer to how “socially relevant” people, i.e. users, behave in response to system predictions. This explanation could be used to show statistics on model adoption or rank the system among users who have similar characteristics.

The coauthors of the Intuit/Holon Institute of Technology research paper note that global explanations can be less expensive and easier to implement in real-world applications, which makes them attractive in practice. Local explanations, although more detailed, are often more expensive because they must be computed case by case.
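To make the distinction concrete, here is a minimal sketch assuming scikit-learn and an illustrative dataset: permutation importance stands in for a global explanation of which features the model relies on overall, while perturbing one row’s features one at a time stands in for a (much cruder) local explanation of a single prediction.

```python
# Minimal sketch: global vs. local explanations on an illustrative dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global explanation: which features does the model rely on across the whole test set?
global_imp = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
for name, score in sorted(zip(X.columns, global_imp.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"[global] {name}: {score:.3f}")

# Local explanation (crude): for one row, how does the prediction change when a
# single feature is replaced by its training-set mean?
row = X_test.iloc[[0]]
base = model.predict_proba(row)[0, 1]
for name in X.columns[:5]:
    perturbed = row.copy()
    perturbed[name] = X_train[name].mean()
    delta = base - model.predict_proba(perturbed)[0, 1]
    print(f"[local] {name}: contribution ~ {delta:+.3f}")
```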


Presentation is important in XAI

There are many ways to present an explanation, whatever its type, and the presentation matters. Wording, phrasing, and visualizations (e.g., charts and tables) can all affect how people perceive a system. Research has shown that an explanation’s influence depends as much on how users interpret it as on how the system was designed: users’ intent and heuristics matter as much as the explanation’s stated goal.

As the Brookings Institution writes: “Consider, for instance, the differing needs of developers and users when explaining an AI system.” A developer might use Google’s What-If Tool to review complicated dashboards that show a model’s performance in various scenarios, analyze feature importance, and compare different concepts of fairness. Users, on the other hand, may prefer something more specific; it might be as simple as telling a user which factors (e.g., late payments) led to a reduction of points in a credit scoring system. Different scenarios and users will require different outputs.
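As a rough illustration of tailoring one explanation to two audiences, the sketch below takes a set of per-feature contributions and renders a detailed developer view alongside a single plain-language reason for the end user. The feature names, values, and message templates are hypothetical.

```python
# Sketch: one local explanation, two presentations (developer vs. end user).
# Feature names, attribution values, and messages are hypothetical.
from typing import Dict

# Per-feature contributions to a (hypothetical) credit score decision,
# as a local explanation method might produce them.
attributions: Dict[str, float] = {
    "late_payments_12mo": -42.0,
    "credit_utilization": -18.5,
    "account_age_years": +11.0,
}

# Developer view: raw, exhaustive, suitable for a dashboard.
for feature, weight in sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"[dev] {feature}: {weight:+.1f}")

# User view: only the dominant negative factor, phrased in plain language.
USER_MESSAGES = {
    "late_payments_12mo": "Recent late payments lowered your score.",
    "credit_utilization": "High credit utilization lowered your score.",
}
worst = min(attributions, key=attributions.get)
print("[user]", USER_MESSAGES.get(worst, "Several factors lowered your score."))
```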

A study accepted to the 2020 Proceedings of the ACM on Human-Computer Interaction found that certain explanations can create a false sense of security, or over-trust, in AI. The researchers found that analysts and data scientists perceive accuracy metrics differently: analysts incorrectly treat certain metrics as measures of performance even when they don’t know how those metrics were calculated.

There are many options for explanation types and presentations. The Intuit and Holon Institute of Technology coauthors lay out points to consider when making XAI design choices, including:

  • Transparency: The level of detail provided
  • Scrutability: The extent to which users can give feedback to correct the AI system when it is wrong
  • Trust: The users’ level of confidence in the system
  • Persuasiveness: The degree to which the system convinces users to buy it or follow its recommendations
  • Satisfaction: The degree to which the system is enjoyable to use
  • User understanding: The extent to which a user understands what the AI service offers

Data labels, model cards, and fact sheets

Model cards give information about the content and behavior of a system. They were first described in a paper coauthored by AI ethicist Timnit Gebru. Cards allow developers to quickly understand aspects such as training data, identified biases, and benchmark results.

Although model cards vary from one organization to another, they often include technical details and data charts showing the breakdown of class imbalance or data skew in sensitive fields such as gender. There are many card-generating tools; Google’s latest reports on a model’s provenance, usage, and “ethics-informed” evaluations.
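As a loose illustration of what such a card captures, here is a minimal sketch in plain Python. The field names and values are made up for the example and do not follow any particular toolkit’s schema.

```python
# Sketch: the kinds of fields a model card might record.
# Field names and values are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    known_biases: List[str] = field(default_factory=list)
    benchmark_results: Dict[str, float] = field(default_factory=dict)

card = ModelCard(
    model_name="loan-default-classifier-v2",
    intended_use="Internal risk triage; not for fully automated decisions.",
    training_data="2018-2022 loan applications, anonymized.",
    known_biases=["Under-represents applicants under 25."],
    benchmark_results={"AUC (holdout)": 0.87, "AUC (under-25 slice)": 0.79},
)
print(card)
```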

Factsheets and data labels

Data labels, proposed by the Assembly Fellowship and modeled on the nutrition labels found on food, aim to highlight the key ingredients of a dataset, such as its populations, metadata, and anomalous characteristics of its distributions. Data labels provide information specific to a particular dataset, including alerts or flags relevant to it.
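A rough sketch of the idea, using a made-up dataset and hypothetical column names, might compute class balance and the breakdown of a sensitive field, and raise a flag when that field looks skewed:

```python
# Sketch: a nutrition-label-style dataset summary.
# The dataset and the "gender"/"label" column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "F", "M"],
    "label":  [1,    0,   0,   1,   0,   1,   0,   0],
})

data_label = {
    "rows": len(df),
    "class_balance": df["label"].value_counts(normalize=True).to_dict(),
    "gender_breakdown": df["gender"].value_counts(normalize=True).to_dict(),
    # Flag a skewed sensitive field so consumers of the data see it up front.
    "flags": (["gender distribution is imbalanced"]
              if df["gender"].value_counts(normalize=True).max() > 0.6 else []),
}
print(data_label)
```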

IBM also created factsheets to provide information about a system’s key characteristics. Factsheets cover everything from the system’s operation and training data to the underlying algorithms, test sets and results, performance benchmarks, and fairness and robustness checks. They also answer questions about intended uses, maintenance, and retraining. For natural language systems such as OpenAI’s in particular, factsheets contain data statements that explain how an algorithm might generalize, how it might be deployed, and what biases it could exhibit.

Technical approaches and toolkits

There are many tools, libraries, and methods available for XAI. Layer-wise relevance propagation, for example, helps determine which features are most important to a model’s predictions. Other techniques produce saliency maps, which score each feature of the input data according to its contribution to the final output; for an image model, a saliency map ranks pixels by how much they contributed to the model’s output.
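For instance, a basic gradient saliency map can be sketched in a few lines of PyTorch. The untrained model and random input below are placeholders for a real trained network and a preprocessed image.

```python
# Sketch: a gradient-based saliency map with PyTorch.
# The model and input are placeholders for a trained vision model and a real image.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # untrained placeholder
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)
top_class = scores.argmax(dim=1).item()
# Backpropagate the top class score to the input pixels.
scores[0, top_class].backward()

# Saliency: magnitude of the gradient per pixel, taking the max over color channels.
saliency = image.grad.abs().max(dim=1).values          # shape: (1, 224, 224)
print(saliency.shape)
```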

Glassbox systems, simplified versions of AI systems whose inner workings are visible, make it easier to see how different data affects a model. They don’t work across every domain, but simple glassbox models are well suited to structured data such as statistical tables, and they can also be used to debug more complicated, black-box systems.
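A shallow decision tree is one common glassbox example: its complete set of decision rules can be printed and inspected directly, as in this small scikit-learn sketch on a toy structured dataset.

```python
# Sketch: a glassbox model on structured data — a shallow decision tree
# whose full set of decision rules can be printed and read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

feature_names = load_iris().feature_names
print(export_text(tree, feature_names=feature_names))
```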

Facebook introduced Captum three years ago. It uses visualizations to explain feature importance and lets developers dive into models to see how their components contribute to predictions.
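A minimal sketch of how a Captum attribution might be computed is below, using Integrated Gradients on a placeholder network; a real use would substitute a trained model and real inputs.

```python
# Sketch: feature attribution with Captum's IntegratedGradients.
# The tiny network and random input are placeholders for a trained model and real data.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
inputs = torch.rand(1, 4)

ig = IntegratedGradients(model)
# Attribute the score of class 1 back to the four input features.
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions)   # per-feature contribution to the class-1 score
print(delta)          # convergence check for the integral approximation
```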

OpenAI released the activation atlases technique in March 2019 to visualize how machine learning algorithms make decisions. OpenAI explained how activation atlases could be used to audit why a computer vision model categorizes objects a particular way, for example, why it incorrectly associates the label “steam locomotive” with scuba divers’ air tanks.

IBM’s AI Explainability 360 toolkit was launched in August 2019. It uses a variety of methods to explain outcomes; one example is an algorithm that tries to highlight important information that is missing from the data.

Red Hat has also recently released TrustyAI, a package for auditing AI decision-making systems. TrustyAI can examine models to explain how predictions and outcomes were reached, and it uses a feature importance chart that ranks inputs by how critical they were to the decision-making process.


Transparency and XAI weaknesses

The Royal Society has published a briefing on what XAI should accomplish. Among other things, XAI should give users confidence that the system works well and should conform to society’s expectations about how people are granted agency in decision-making. In practice, XAI often falls short of these goals, which widens the power disparity between those who build systems and those who are affected by them.

A 2020 survey conducted by the Partnership on AI and other researchers found that most XAI deployments are used internally to support engineering efforts rather than to strengthen trust or transparency with users. Participants in the study said it was difficult to provide explanations to users because of privacy risks and technical challenges, and that it was also hard to implement explainability without clarity about the objectives.

A 2020 study of IBM user interface and design professionals working on XAI described current XAI technologies as “failing to meet expectations” and in conflict with organizational goals such as protecting proprietary data.

Brookings stated that while a number of explanation methods are in use today, they map to only a subset of the objectives described above. The best represented seem to be two engineering objectives: ensuring efficacy and improving performance. Other objectives, such as supporting user understanding and offering insight into wider societal impacts, are not being addressed.

Companies could implement XAI more thoroughly as legislation such as the European Union’s AI Act, which focuses on ethics, takes shape. The public’s perception of AI transparency could change as well. A 2021 report from CognitiveScale found that 34% of C-level decision-makers consider trust and explainability the most important AI capabilities, and according to Juniper, 87% of executives surveyed believe that AI can have negative consequences and that they bear responsibility for minimizing them.

There is a business incentive to invest in XAI technologies that goes beyond ethics: Capgemini’s research found that customers reward companies that use ethical AI with more loyalty, more business, and even a willingness to advocate for them.

Written by
Aiden Nathan

Aiden Nathan is vice growth manager of The Tech Trend. He is passionate about applying cutting-edge technology to operate the built environment more sustainably.
