
Explainable Artificial Intelligence (XAI)

An article in last month’s Wired magazine titled “The A.I. Enigma: Let’s Shine a Light into the Black Box” (not available online) describes how the inscrutable nature of many artificial intelligence algorithms has frustrated people who want to know why a system made a particular recommendation. The article mentions a recent European Union regulation that gives citizens the right to learn more about machine learning decisions that affected them, and it also describes how the New York City company Clarifai and researchers at the University of Amsterdam have been exploring ways to provide such explanations.

The September issue of the science magazine Nautilus includes an article titled “Is Artificial Intelligence Permanently Inscrutable?” that also mentions the new EU regulation and gives good additional background on the issue. This includes a chart, presented by David Gunning at a Defense Advanced Research Projects Agency (DARPA) conference, plotting currently popular machine learning techniques by their explainability versus their prediction accuracy. The Nautilus article then goes on to describe the work of several computer scientists in this area without explaining why DARPA held the conference (or, as DARPA called it, an “Industry Day”): to tell vendors and related researchers about its new Explainable Artificial Intelligence (XAI) project. (The full set of slides, a Frequently Asked Questions list, and videos of Gunning’s presentation and his Q&A session are also available online.)

Another slide set by Gunning, from a Workshop on Deep Learning for Artificial Intelligence (DLAI), has many of the same slides and also describes the Local Interpretable Model-agnostic Explanations (LIME) algorithm proposed by researchers at the University of Washington in their paper “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Most other attempts to explain a system’s machine learning decisions are specific to the models used in that system, but as you can see from LIME’s full name, it is model-agnostic. Typical explainers describe the input that led to a decision, such as when Netflix recommends Men in Black “because you watched” Guardians of the Galaxy. LIME, which is available as a Python implementation on GitHub, goes further by “perturbing” sample input (as its authors put it in another article, “by changing components that make sense to humans (e.g., words or parts of an image)”) to make it easier to identify which specific components of the input lead to a particular decision.
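To make the perturbation idea concrete, here is a minimal, self-contained sketch of LIME’s core loop for a text classifier. It is not the GitHub implementation itself: the black_box_predict function and all parameter values below are hypothetical stand-ins, and the real package handles far more cases.

```python
# Minimal sketch of the LIME idea for text, assuming any black-box classifier
# that returns the probability of the class being explained. Not the official
# implementation; names and defaults here are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

def explain_text(text, black_box_predict, num_samples=500, top_k=5, seed=0):
    rng = np.random.default_rng(seed)
    words = text.split()

    # Interpretable representation: a binary mask saying which words are kept.
    masks = rng.integers(0, 2, size=(num_samples, len(words)))
    masks[0] = 1  # keep the unperturbed sentence as one sample

    # Ask the black-box model about every perturbed sentence.
    probs = np.array([
        black_box_predict(" ".join(w for w, keep in zip(words, m) if keep))
        for m in masks
    ])

    # Weight perturbations by how close they stay to the original sentence.
    weights = np.exp(-((1.0 - masks.mean(axis=1)) ** 2) / 0.25)

    # Fit a locally weighted linear surrogate; its coefficients say how much
    # each word pushed the prediction up or down.
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    ranked = sorted(zip(words, surrogate.coef_), key=lambda p: -abs(p[1]))
    return ranked[:top_k]
```

Calling explain_text on a sentence and a model’s probability function returns the handful of words that most influenced that probability, which is the same style of answer LIME gives for image superpixels.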

In the example shown here from their paper, we want to know why Google’s Inception neural network predicted that the first picture shows a Labrador with an electric guitar. We see that the guitar’s fretboard contributed to the incorrect classification of “electric guitar,” which tells us that when the model evaluated electric guitar pictures in its training data, it decided that the presence of a fretboard was the distinguishing feature of an electric guitar. In addition to explaining why a certain prediction was made, this suggests one way to tune the model to prevent the mistake in the future: train the system with additional pictures of both electric and acoustic guitars so that it can learn that both types have fretboards and that a guitar’s color or shape might be better criteria for distinguishing between the two types. (The third image below shows that the model did also classify the instrument as an acoustic guitar, but, as the “Why Should I Trust You?” paper explains, that classification got a lower score than the “electric guitar” one.)

[Figure: LIME explanations from the “Why Should I Trust You?” paper, highlighting the image regions behind the Inception model’s “electric guitar,” “acoustic guitar,” and “Labrador” predictions.]
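For readers who want to produce this kind of picture themselves, the sketch below shows roughly how the GitHub lime package’s image explainer can be pointed at a model. The predict_batch function is a hypothetical stand-in for whatever classifier is being explained (for example, a wrapped Inception model), and exact argument names may differ across package versions.

```python
# Rough sketch of highlighting the image regions behind a prediction with the
# lime package's image explainer. predict_batch is a hypothetical stand-in for
# the classifier being explained; argument names follow the package's
# documented API but may vary by version.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def explain_image(image, predict_batch, class_index=None):
    # image: float RGB array of shape (H, W, 3), scaled to [0, 1]
    explainer = lime_image.LimeImageExplainer()

    # LIME segments the image into superpixels, hides random subsets of them,
    # and fits a local surrogate model to see which regions drive the score.
    explanation = explainer.explain_instance(
        image, predict_batch, top_labels=3, hide_color=0, num_samples=1000)

    label = class_index if class_index is not None else explanation.top_labels[0]

    # Keep only the superpixels that argued *for* the chosen label, e.g. the
    # fretboard region for "electric guitar" in the figure above.
    region, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=True)
    return mark_boundaries(region, mask)
```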

LIME is one example of a tool that researchers are combining with other techniques to do an even better job of explaining how different models reach their decisions. Here at GA-CCRi, we are exploring additional tools as well. In the words of GA-CCRi Data Scientist Dr. Kolia Sadeghi, “Using deep neural networks with tools like LIME or with attention mechanisms gives users visibility into which parts of their inputs led the algorithm to its conclusions.”
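The attention half of that remark can be illustrated with a toy example: a single dot-product attention step assigns one weight to each input token, and reading those weights back gives a crude answer to “which parts of the input did the model look at?” The vectors below are random stand-ins for what a trained network would produce, not anything from a GA-CCRi system.

```python
# Toy illustration of attention weights as an explanation. The key and query
# vectors are randomly generated stand-ins; the point is only the mechanism
# of reading the weights back out.
import numpy as np

def attention_weights(query, keys):
    # Scaled dot-product attention: softmax over query-key similarities.
    scores = keys @ query / np.sqrt(query.shape[0])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

tokens = ["the", "guitar", "has", "a", "fretboard"]
rng = np.random.default_rng(0)
keys = rng.normal(size=(len(tokens), 8))     # one key vector per input token
query = keys[1] + 0.1 * rng.normal(size=8)   # a query that resembles "guitar"

for token, w in sorted(zip(tokens, attention_weights(query, keys)),
                       key=lambda pair: -pair[1]):
    print(f"{token:10s} {w:.2f}")  # highest-weight tokens are the "explanation"
```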

While publications such as Wired and Nautilus appear to be unaware of the newly funded DARPA XAI project (they might keep in mind that the Internet itself began with DARPA funding), the project has been covered by defense industry news sites such as Nextgov, Defense Systems, and Military & Aerospace Electronics, as well as by more general-interest sites such as Inverse and Hacker News.

With this kind of funding and organization behind it, XAI appears poised to make many useful contributions to the growing number of artificial intelligence systems out there, and at GA-CCRi we’re looking forward to taking part in that research.
