Cognitive Times


Demystifying the AI Black Box

AI is at the forefront of technological progress. But as its decision-making reaches new levels of complexity, people are starting to wonder whether we’ll be able to understand what’s going on inside the “black box” of machine learning.

Consider the following scenario: A grocery store employee is responsible for ordering inventory and must place an order soon.

The employee sees that a snowstorm is forecasted for the area. Based on experience, the employee knows that people act irrationally during storms.

Over the past few years, every time a storm has been forecasted, the store has run out of perishable goods. Given this evidence, the employee feels justified in ordering additional inventory of milk and eggs.

If questioned by a manager, there is ample evidence to defend the employee’s ordering decision — which is generally required in order to stay employed. There is no guarantee that the actions were the absolute best for the given situation, but a clear, discernible methodology was followed in order to achieve what the employee believed would be the highest utility.

This is an example of heuristic decision-making.
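
To make the contrast that follows concrete, here is a rough sketch of the employee’s heuristic written out as an explicit if-then rule. The stockout threshold and the 20 percent bump are invented numbers, used purely for illustration; the point is that every branch can be read and defended by a person.

```python
# A rough sketch of the employee's heuristic as an explicit if-then rule.
# The stockout threshold and the 20% bump are invented numbers, not from the article.
def place_order(baseline_order, storm_forecast, past_storm_stockouts):
    """Order extra perishables when a storm is forecast and history supports it."""
    if storm_forecast and past_storm_stockouts >= 3:
        return {item: qty * 1.2 for item, qty in baseline_order.items()}
    return dict(baseline_order)

print(place_order({"milk": 100, "eggs": 80}, storm_forecast=True, past_storm_stockouts=4))
# {'milk': 120.0, 'eggs': 96.0}
```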

This method of defensible decision-making is prevalent because it leverages rational, systematic analysis as well as intuitive experience. The reasoning can then easily be interpreted by another person, making it transferable.

Unfortunately for human intuition, the world is changing.

Now, consider an alternate example. A grocery store employee is responsible for ordering inventory and must place an order soon. The employee uses an algorithm that factors in a forecasted snowstorm.

The algorithm also notes that the yen is extremely strong versus the British pound. On top of that, chicken fertility rates in Georgia have dropped below their three-year average and the hashtag #snowpocalypse is trending on Twitter.

Given this evidence, the algorithm instructs the employee to order additional inventory of milk and eggs.

To most people, this decision-making process seems completely insane. The employee couldn’t possibly use it as justification to order more supplies. But puzzlingly, it produces a more accurate forecast for ordering supplies.

This is the newest problem in the algorithmic world: explanation and defensibility of a model. We know that deep learning algorithms work, but why they work breaks human intuition. This is problematic.

Because of this, massive amounts of research have been poured into “explainable AI” to help demystify what is inside of the deep learning black box.

While this research is ongoing, there are a few highly promising technologies available today that we believe will help pioneer explainable AI that is intuitive to humans.

Model-Agnostic Methods: Where the End Justifies the Means

Instead of trying to decrypt what is happening within a neural network, it’s possible to make black box assumptions about what the network is doing given the inputs and outputs.

These solutions, deemed model-agnostic, aim to create some approximation of what is happening within any model and to explain the decisions it makes.

The technology leader in this space is an open-source tool called LIME (Local Interpretable Model-Agnostic Explanations) from the University of Washington, which takes a model-agnostic approach to determining the “why” behind any decision.

LIME operates on the assumption that while it is difficult to understand how a model behaves in all cases, it is possible to understand how it behaves in particular situations. LIME was designed to create interpretable linear explanations, which it does by perturbing the input data slightly and identifying how the output changes.

LIME uses this to determine an approximate linear function capable of replicating similar local outputs to the model it is interpreting. This linear model developed by LIME can then be used to determine feature importance for an outcome.
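
Here is a minimal sketch of that workflow using the open-source lime package. The ordering model, feature names, and numbers below are all made up for the grocery scenario, not drawn from any real dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical daily features for the grocery scenario (all made up):
# snow forecast, yen-vs-pound rate, chicken fertility index, #snowpocalypse tweet volume
feature_names = ["snow_forecast", "yen_vs_gbp", "chicken_fertility", "snowpocalypse_tweets"]
X_train = np.random.RandomState(0).rand(500, 4)
y_train = 100 + 80 * X_train[:, 0] + 20 * X_train[:, 3]  # cartons of milk to order

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME perturbs the chosen row, watches how the model's output shifts,
# and fits a small local linear surrogate to those perturbations
explainer = LimeTabularExplainer(X_train, feature_names=feature_names, mode="regression")
exp = explainer.explain_instance(X_train[0], model.predict, num_features=4)

print(exp.as_list())  # e.g. [("snow_forecast > 0.75", 21.3), ...] (illustrative output)
```

Each returned pair is a readable condition on a feature and its weight in the local linear surrogate: a ranked list of influences rather than a full explanation.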

While LIME clearly adds value to any machine learning model, it still only provides approximate insights into the driving forces behind why a model behaves the way it does.

For example, in the above scenario, it might indicate that the largest influence on the decision to order more milk and eggs was the weather forecast, and the second most important was the Twitter hashtag #snowpocalypse. It can easily rank all of the inputs to justify their use to a person, but it does not actually explain the reasoning behind the decision.

Because of this shortcoming, a new technology is being developed, known as aLIME (Anchor Local Interpretable Model-Agnostic Explanations). aLIME is the if-this-then-that extension to LIME.

For example, consider a dataset where the goal is to predict whether a person’s salary is more or less than $50,000, given a series of features.

LIME is capable of highlighting the major features associated with the model’s prediction.

aLIME goes a step further by compiling a heuristic rule that can be used to intuitively explain why the prediction was made.
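
The anchors idea can be sketched without the full aLIME tooling. The following is a rough, hypothetical illustration of what an anchor rule checks: hold a few features fixed at the instance’s values, resample everything else, and measure how often the model’s prediction stays the same. The data, model, and anchor_precision helper here are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the salary prediction task; the data and the rule behind it are invented.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                    # e.g. age, education, hours/week, capital gain
y = ((X[:, 1] > 0) & (X[:, 2] > 0)).astype(int)   # ">$50k" when education and hours are both high
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def anchor_precision(model, instance, anchored, data, n_samples=500):
    """Estimate the precision of an anchor-style rule: hold the anchored features at the
    instance's values, resample everything else from the data, and measure how often
    the model's prediction stays the same."""
    rng = np.random.default_rng(1)
    base_pred = model.predict(instance.reshape(1, -1))[0]
    samples = data[rng.integers(0, len(data), n_samples)].copy()
    samples[:, anchored] = instance[anchored]     # "IF these features look like this..."
    return float((model.predict(samples) == base_pred).mean())

x0 = X[0]
# "IF education AND hours/week stay as they are THEN the prediction holds" -- a strong anchor
print(anchor_precision(model, x0, anchored=[1, 2], data=X))
# Anchoring only capital gain leaves the prediction to chance far more often -- a weak anchor
print(anchor_precision(model, x0, anchored=[3], data=X))
```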

Tools like aLIME are a step in the right direction for explainable AI, but there are two major shortcomings:

  1. These tools do not explain what is actually happening under the hood of a model
  2. They provide no information about how confident a model might be in its predictions

Luckily, there are a few other methodologies that help to address these problems. One promising area worth mentioning is neural network composition analysis.

Techniques such as Garson’s algorithm, Lek’s algorithm, or randomization show a lot of promise, but more research is necessary to justify their application to deep learning.
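
As a rough illustration of what composition analysis looks like, the sketch below implements Garson’s algorithm for the simplest case, a single-hidden-layer network with one output, using only the network’s weight matrices. The weights here are random placeholders standing in for a trained model.

```python
import numpy as np

def garson_importance(w_in_hidden, w_hidden_out):
    """Garson's algorithm for a single-hidden-layer, single-output network.

    w_in_hidden:  (n_inputs, n_hidden) input-to-hidden weight matrix
    w_hidden_out: (n_hidden,) hidden-to-output weights
    Returns the relative importance of each input feature (sums to 1)."""
    # Contribution of each input routed through each hidden unit
    contrib = np.abs(w_in_hidden) * np.abs(w_hidden_out)
    # Normalize within each hidden unit, then accumulate per input
    share = contrib / contrib.sum(axis=0, keepdims=True)
    importance = share.sum(axis=1)
    return importance / importance.sum()

# Placeholder weights standing in for a trained network
rng = np.random.default_rng(0)
print(garson_importance(rng.normal(size=(4, 8)), rng.normal(size=8)))
```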

Bayesian Inference and Deep Learning

A final promising approach to unraveling the secrets of the black box comes in the form of Bayesian inference.

Whereas model-agnostic tools focus on small adjustments of data for analysis and compositional analysis tools analyze network weights and variable behaviors, Bayesian deep learning focuses on making neural networks follow certain statistical principles common in the modeling world — notably Bayesian inference.

When combined with network composition or model-agnostic approaches to understanding a neural network, Bayesian deep learning opens up the possibility of a neural network capable of explaining its actions as well as how confident it was in the decision it made.

Unfortunately, the application of Bayesian inference to deep learning is far from straightforward. Bayesian inference was designed primarily for linear models. Because of the highly nonlinear nature of neural networks, applying Bayesian inference to them is often intractable or computationally inefficient.

Over the past two years, many of these deficiencies have been overcome, and key innovations have spurred drastic advancements in Bayesian inference for deep learning. The rise of probabilistic programming languages like Edward, developed at Columbia University, has enabled the use of Bayesian inference within deep learning frameworks like TensorFlow.

These inference-based neural networks have already proven to be more computationally efficient than comparable benchmarked machine learning systems. Instead of attaching a scalar value to every parameter in a network (i.e., a connection weight), Bayesian deep learning fits a distribution to each parameter using some predefined number of sample points.
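
As a minimal sketch of what that looks like in practice, the snippet below follows the style of Edward’s introductory Bayesian neural network example, assuming Edward 1.x on TensorFlow 1.x. The data and layer sizes are synthetic and purely illustrative.

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

N, D, H = 200, 4, 8  # toy sizes: examples, input features, hidden units
x_train = np.random.randn(N, D).astype(np.float32)
y_train = np.random.randn(N).astype(np.float32)

# Priors: a distribution over every weight and bias instead of a single scalar
W_0 = Normal(loc=tf.zeros([D, H]), scale=tf.ones([D, H]))
b_0 = Normal(loc=tf.zeros(H), scale=tf.ones(H))
W_1 = Normal(loc=tf.zeros([H, 1]), scale=tf.ones([H, 1]))
b_1 = Normal(loc=tf.zeros(1), scale=tf.ones(1))

x = tf.placeholder(tf.float32, [N, D])
hidden = tf.nn.tanh(tf.matmul(x, W_0) + b_0)
y = Normal(loc=tf.reshape(tf.matmul(hidden, W_1) + b_1, [-1]),
           scale=0.1 * tf.ones(N))

# Variational posteriors: a learnable mean and spread for every parameter
def posterior(shape):
    return Normal(loc=tf.Variable(tf.zeros(shape)),
                  scale=tf.nn.softplus(tf.Variable(tf.zeros(shape))))

qW_0, qb_0 = posterior([D, H]), posterior([H])
qW_1, qb_1 = posterior([H, 1]), posterior([1])

# Fit the posteriors with variational inference (KL divergence minimization)
inference = ed.KLqp({W_0: qW_0, b_0: qb_0, W_1: qW_1, b_1: qb_1},
                    data={x: x_train, y: y_train})
inference.run(n_iter=1000)
```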

Pictured side by side, the two kinds of network look quite different: where a traditional scalar network attaches a single number to each connection, a Bayesian network attaches an entire distribution.

From that comparison it is apparent that a Bayesian neural network requires significantly more parameters than a traditional scalar one. However, that does not necessarily mean more computational resources are required to optimize the network.

The approach taken by probabilistic programming tools allows for rapid computation despite the increased number of data elements. The resulting probabilistic model captures the likelihood of an expected output given a distribution over the weights and parameters and the known inputs.
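
A toy, back-of-the-envelope illustration of that relationship: if a single weight (here, a made-up “extra cartons per unit of forecast snow”) has a learned distribution rather than a fixed value, then every prediction inherits a distribution, and with it a measure of confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned posterior for one weight: "extra cartons per unit of forecast snow"
w_samples = rng.normal(loc=12.0, scale=3.0, size=1000)

snow_forecast = 5.0                       # a known input
extra_order = w_samples * snow_forecast   # one forward pass per sampled weight

print("order about %.0f extra cartons, give or take %.0f"
      % (extra_order.mean(), extra_order.std()))
```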

Bayesian deep learning is still in its infancy, but it better explains the uncertainty within a model, and has been effective in mathematically proving why certain techniques in deep learning work.

The future ahead for this technology is bright, but continued innovation will be necessary in order to gain parity with other heavily marketed technologies within the deep learning community.

While deep learning has raced ahead as the hero of the 21st century, it’s important to consider all of the supporting technology that must keep pace in order to preserve the defensible decision-making process we have relied on for centuries.

Luckily, promising technologies like aLIME, neural network composition analysis, and Bayesian deep learning are providing useful insight into the inner workings of this increasingly complex technology. As long as continued innovation is maintained in developing explainable AI and ethical practices are employed in the development of deep learning systems, the potential for world-changing innovation in the 21st century is abundant.

AI does not have to be perfect, but the decisions it makes must be defensible.