What is Explainable AI (XAI)?

The field of artificial intelligence (AI) is expanding rapidly, and as it does, it is becoming harder to understand. Thanks to ground-breaking advances, especially in machine learning, research into AI continues to generate enormous interest. Developing AI to the point where programs can learn independently and solve complex problems remains one of the most important fields of research. But as these systems grow more complex, it becomes all the more important to keep our understanding of their decisions and results up to date.

This is where Explainable Artificial Intelligence (XAI) comes in. Users want to, and should be able to, understand how the AI behind a program works and how it arrives at its results. Otherwise, there is no real foundation on which to base trust in its calculations. The transparency created by Explainable AI is of utmost importance when it comes to the acceptance of artificial intelligence. So, what exactly does this approach entail?

What is Explainable Artificial Intelligence (XAI)?

The term Explainable Artificial Intelligence is a neologism that has been used in research and discussions around machine learning since 2004. To date, there is still no single, generally accepted definition of Explainable AI. However, DARPA (the Defense Advanced Research Projects Agency) defines the objectives of Explainable Artificial Intelligence in its XAI program using the following requirements:

It should be possible to build explainable models without sacrificing high learning performance. Future users should also be able to understand the emerging generation of AI systems, place an appropriate amount of trust in them, and work and interact with them effectively.

Definition: Explainable AI (XAI)

Explainable AI (XAI) is the principle of designing artificial intelligence to make its functions, workings, and results as comprehensible as possible for the user.

What is the objective of XAI?

For some time now, AI has concerned people across many different industries outside of research and science. In fact, it is already an integral part of everyday life. It is therefore all the more important that the way AI works is made clear to others besides its designers and direct users. Decision-makers in particular need to develop as thorough an understanding as possible of how AI works in order to build a foundation of trust in the technology.

Some well-known companies are already setting a good example in this regard. In 2017, Nvidia published an article on its developer blog called “Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car”. In this post, four developers explained how their artificial intelligence learns to drive on its own. The company presented its research results in a transparent manner and provided several easy-to-understand examples of how the AI learns.

In the same year, Accenture published a guide called “Responsible AI: Why we need Explainable AI” in which the technology service provider addressed aspects such as ethics and trust in relation to machines (especially with regard to self-driving cars).

What methods are involved in Explainable AI?

There are a variety of methods or approaches for ensuring transparency and comprehensibility when it comes to artificial intelligence. We have summarised the most important ones for you below:

Layer-wise Relevance Propagation (LRP) was first described in 2015. It is a technique for determining which input features contribute most to the output of a neural network.
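
To make this more concrete, here is a minimal sketch of LRP for a tiny ReLU network using the basic epsilon rule, where the output relevance is redistributed backwards in proportion to each neuron’s contribution. The network, its random weights and the example input are placeholders chosen purely for illustration; in practice the rule is applied to a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: 4 inputs -> 3 hidden units -> 1 output (random placeholder weights).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def forward(x):
    """Forward pass that keeps the activations of every layer."""
    a1 = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    out = a1 @ W2 + b2                # linear output layer
    return [x, a1, out]

def lrp_layer(a_prev, W, relevance, eps=1e-6):
    """Redistribute relevance from one layer to the previous one (epsilon rule)."""
    z = a_prev @ W                               # each neuron's pre-activation
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabiliser to avoid division by zero
    s = relevance / z                            # relevance per unit of contribution
    return a_prev * (W @ s)                      # relevance assigned to the previous layer

x = np.array([1.0, 0.5, -0.3, 2.0])
activations = forward(x)

# Start with the network output as the total relevance, then propagate backwards.
R = activations[2]
R = lrp_layer(activations[1], W2, R)   # output layer -> hidden layer
R = lrp_layer(activations[0], W1, R)   # hidden layer -> input features

print("Relevance per input feature:", R)
```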

With the counterfactual method, data inputs (e.g. text, images or diagrams) are deliberately changed after a result has been obtained. One then observes how much this changes the output.
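
As a rough illustration of this idea, the sketch below trains a simple classifier on synthetic data, then nudges one feature of a single example and records how the predicted probability shifts. The dataset, model and step sizes are arbitrary choices made for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on synthetic data (a placeholder for a real model).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
baseline = model.predict_proba([x])[0, 1]
print(f"Original prediction: {baseline:.3f}")

# Nudge one feature step by step and observe how the prediction changes.
for delta in (0.5, 1.0, 2.0, 4.0):
    x_cf = x.copy()
    x_cf[0] += delta
    p = model.predict_proba([x_cf])[0, 1]
    print(f"Feature 0 increased by {delta}: prediction {p:.3f} (change {p - baseline:+.3f})")
```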

Local Interpretable Model-Agnostic Explanations (LIME) is an explanatory model with a holistic, model-agnostic approach. It is designed to explain any machine learning classifier and its resulting predictions, which enables users from other fields to understand the data and procedures.
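
The following sketch reproduces the core LIME recipe from scratch rather than relying on the official lime package: it samples perturbations around one instance, weights them by their proximity to that instance, and fits a weighted linear surrogate whose coefficients serve as the local explanation. The black-box model, data, kernel width and sample count are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box model trained on synthetic data (placeholder for a real classifier).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                    # the single instance to explain
rng = np.random.default_rng(0)

# 1. Sample perturbed points in the neighbourhood of x.
Z = x + rng.normal(scale=0.5, size=(1000, X.shape[1]))

# 2. Query the black-box model on the perturbed points.
p = black_box.predict_proba(Z)[:, 1]

# 3. Weight each sample by its proximity to x (an RBF kernel).
distances = np.linalg.norm(Z - x, axis=1)
weights = np.exp(-(distances ** 2) / 0.5)

# 4. Fit an interpretable, weighted linear surrogate around x.
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)

# The surrogate's coefficients approximate each feature's local influence.
for i, coef in enumerate(surrogate.coef_):
    print(f"Feature {i}: local weight {coef:+.3f}")
```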

Rationalisation is a procedure that is used specifically for AI-based robots. It involves designing a machine so that it can explain its actions autonomously.
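
As a toy illustration of what such self-explanation might look like, the sketch below pairs each action of a hypothetical battery-powered robot with a templated, human-readable rationale. The scenario, thresholds and wording are invented for demonstration; research systems often learn to generate such explanations rather than relying on fixed templates.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RobotState:
    battery: float           # remaining charge, 0.0 - 1.0
    distance_to_goal: float  # metres left to travel

def choose_action(state: RobotState) -> Tuple[str, str]:
    """Pick an action and return it together with a human-readable rationale."""
    if state.battery < 0.2:
        return ("return_to_charger",
                f"I am returning to the charger because my battery is at "
                f"{state.battery:.0%}, which is below the 20% safety threshold.")
    if state.distance_to_goal > 0:
        return ("move_towards_goal",
                f"I am continuing because my battery ({state.battery:.0%}) is "
                f"sufficient and the goal is {state.distance_to_goal:.1f} m away.")
    return ("stop", "I am stopping because I have reached the goal.")

action, rationale = choose_action(RobotState(battery=0.15, distance_to_goal=3.0))
print(action)     # return_to_charger
print(rationale)  # the machine's own explanation of its decision
```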

When and where is Explainable Artificial Intelligence used?

Since AI is already being used in many industries and service sectors, ensuring transparency and accountability has become particularly important.

The following are some fields of application in which AI plays a particularly prominent role:

  • Antenna design
  • High-frequency trading (algorithmic trading)
  • Medical diagnostics
  • Autonomous driving (self-driving cars)
  • Neural network imaging
  • Training in military strategies

Anyone who has ever used a parking assistant knows how tense and sceptical one can feel behind the wheel at first. But this is often quickly followed by amazement over the assistant’s capabilities. As users, we want to know how it’s possible for a car to park on its own. And this makes it all the more important for the functionality of AI to be explained in simple terms to the end user.

Google Explainable AI

Google is among the companies that have recognised the growing need for accountability. The corporation is already doing a lot of research in the area of AI and also uses it as part of its popular search engine. Because of this, the company is keen to make its programs more transparent. The components of Google Explainable AI have been enabling people to create interpretable and inclusive machine learning models since 2019. This suite is available free of charge for up to 12 months.

The Google package includes the “What-If Tool”, which enables users to visualise the behaviour of a model. Users can adjust inputs and see how the tool’s predictions change. Using the variety of demos and the graphical user interface, users can take a closer look at different machine learning models without having to write much code themselves. Google offers multiple ready-to-use tools for this purpose. In addition to being able to estimate age or classify flowers, for example, there is also a function for evaluating various portraits. This function classifies images based on whether or not the person in the picture is smiling. The module also provides a variety of parameters for facial characteristics; for example, the pictures can be categorised based on whether or not the person has a beard or a fringe.

Other companies are also taking advantage of the features offered by Google Explainable AI. The television network Sky, for example, uses the What-If Tool for its in-house AI platform so that it can provide understandable explanations and analyse its data more effectively.
