Cognitive Times

Your source for news and thought on artificial intelligence, the Internet of Things, big data, and other technology-related fields.


Address: 4030 W. Braker Lane
Building 5, Suite 500
Austin, TX 78759
Phone: 844.205.7173

How Far Will the Deep Learning Evolution Take AI?

Most world-changing discoveries follow an evolution of improvements after their “Big Bang” moments.

The Wright brothers’ breakthrough — hailed by Scientific American as the “birth of flight” — set off an era of aerospace research that delivered technical and commercial innovations within a few decades of the first flight.

Many pundits are labeling Professor Geoffrey Hinton’s recent success with deep learning and deep neural networks as the rebirth and Big Bang moment for artificial intelligence (AI).

The peak of AI has been described as the ability to accomplish scientific discovery, create art, and beat FIFA world champions. However, would a series of incremental innovations from here onward lead AI to this level of sophistication?

Winning “Jeopardy!”

First, it’s necessary to look at the real-world problems that current AI can solve.

Today’s AI research agenda is focused on deep learning. Deep learning systems learn from large data sets to solve a specific task. Deep neural networks can learn complex functions to solve intricate problems, but only within certain parameters.

This task specificity has earned the approach the nickname “narrow AI.” Don’t be fooled by the limiting nomenclature: Narrow AI can easily outperform a human expert when the rules are defined and static, no matter how complex they are.

For example, the AI program AlphaGo has become the world’s best player of Go, an incredibly sophisticated game, because Go is played under fixed rules. The ability of neural networks to approximate any function with arbitrary precision provides a powerful foundation.

Professors Henry Lin at Harvard and Max Tegmark at MIT believe the universe is governed by a tiny subset of all possible functions. Therefore, to represent universal activities, deep neural networks don’t have to approximate every possible mathematical function, just a small subset of them.
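The universal-approximation idea is easy to demonstrate at toy scale. The following minimal sketch (illustrative code, not any lab’s implementation) trains a one-hidden-layer network by gradient descent to fit sin(x), and its error shrinks as training proceeds:

```python
import numpy as np

# Toy sketch: a one-hidden-layer network fit to sin(x), illustrating
# that neural nets can approximate smooth functions given enough units.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

H = 32                                   # hidden units
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    return h, h @ W2 + b2                # (activations, prediction)

h, pred = forward(X)
initial_loss = np.mean((pred - y) ** 2)

lr = 0.01
for _ in range(5000):
    h, pred = forward(X)
    err = pred - y
    grad_W2 = h.T @ err / len(X)         # backprop: output layer
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

final_loss = np.mean((forward(X)[1] - y) ** 2)
```

Widening the hidden layer or training longer drives the error lower still — the practical face of the approximation theorem Lin and Tegmark invoke.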

Is the mathematical, data-driven approach alone enough to represent the complex world we live in? Dr. Scott Niekum, director of the Personal Autonomous Robotics Lab (PeARL) at the University of Texas at Austin, has doubts.

“Purely data-driven approaches such as deep learning have significant limitations. These approaches cannot easily incorporate high-level human insight into complex tasks, nor are they applicable when data is limited, as is the case for many real-world problems,” he said. “More importantly, deep learning is only one tool in the AI toolbox — it is very good at solving a particular set of problems, but does not directly address many important issues, such as long-term planning.”

Flustered by Nursery Rhymes

Deep learning is still delivering impressive results on narrow problems, however, and is also helping adjacent fields overcome major limitations.

For example, natural language processing has gotten a boost since parsers started leveraging machine learning (Google Translate’s use of deep neural networks resulted in its biggest improvements in a decade).

Deep learning removes the need for human experts to perform time-consuming manual input by automating perception, rules definition, feature discovery, and knowledge acquisition.

However, more work is required to recreate human-like semantic understanding of complex actions.

Professor Bruce Porter, former Chair of Computer Science at the University of Texas at Austin, elaborated, “Machine learning researchers have a remarkable track record of successfully applying their techniques for finding statistical correlations to tasks, such as natural language processing, that seem to require so much more than simple statistical correlations. But, I’m skeptical that these techniques will extend to natural language understanding.”

Aspirational, broader AI systems would have human-like reasoning skills combined with machine scale to process data.

To illustrate, consider the nursery rhyme: “Jack and Jill went up the hill to fetch a pail of water.”

A modern system can answer “Where did Jack and Jill go?” because the information is explicitly stated in the corpus. AI is now learning to answer questions like “Are Jack and Jill still at the top of the hill?” — a demonstration of inductive reasoning.

Mastering these pathways is how modern knowledge-retrieval systems are growing smarter. However, an AI system would be unable to describe how Jack and Jill retrieved the water, because that requires background knowledge not specified in the rhyme. The next frontiers for AI researchers are matching humans’ abductive reasoning and building symbolic knowledge structures.
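The gap between explicit and implicit information can be made concrete with a toy sketch (hypothetical code, not a real question-answering system): the extractive function below answers the “where” question straight from the text, but has nothing to say about questions that need background knowledge.

```python
# Toy sketch contrasting extractive answers, which can be read straight
# out of the text, with questions needing knowledge the text omits.
CORPUS = "Jack and Jill went up the hill to fetch a pail of water."

def extractive_answer(question):
    """Answer only if the needed phrase is explicit in the corpus."""
    words = CORPUS.lower().rstrip(".").split()
    if question.lower().startswith("where"):
        i = words.index("went")                 # destination follows the verb
        return " ".join(words[i + 1 : i + 4])   # "up the hill"
    return None  # anything else needs knowledge beyond the text

print(extractive_answer("Where did Jack and Jill go?"))    # up the hill
print(extractive_answer("How did they carry the water?"))  # None
```

The second question defeats extraction entirely: answering it requires knowing what pails are for, which is exactly the background knowledge the rhyme never states.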


What Will AI Grow Up to Be?

Researchers dream about creating AI that can reason deeply, learn from the environment, and make decisions with limited data. That said, expanding narrow AI to reach its full potential may prove more worthwhile than systems with broader intelligence.

Rob High, CTO for IBM Watson, pointed out, “General intelligence will be moved forward by economic forces. Do I need an AI system that is able to imagine, dream, be self-aware, and have emotional states like guilt? I don’t know that I even need a machine that is self-motivated. What I need is a machine that will take my motivations and act on them.”

AI is most useful not in replicating the human mind, but in filling the gaps where humans fall short.

There are a plethora of problems for narrow AI to solve, a number that will only increase as its abilities are broadened. However, the relationship between deep learning and data is a double-edged sword: Large data sets ensure accurate answers, but many real-world use cases can never supply a data set that expansive.

For example, for AI to be used in clinical trials, where sample size is small by definition, something will have to change.

So what will this evolution of AI look like?

Bulking Up Narrow AI with a Data Diet

Researchers believe that as deep learning is integrated with other AI techniques, it will serve as a key foundation for another set of abilities, namely working with smaller data sets, automating learning processes, and enabling human interaction.

Margaret Boden, a world-renowned cognitive researcher, has seconded the need for AI to expand beyond industry silos and embrace a multidisciplinary approach to move the field forward.

The first problem to tackle is the requirement of big data sets. Techniques that fare well with limited data and perform deeper reasoning (like active learning systems, Bayesian networks, graphical models, and context framing techniques) are returning to the AI research agenda to extend the capabilities of the technology.

When combined with deep learning, these techniques can form a powerful system that uses data to find solutions and recommend actions from that information.
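To illustrate one such technique, the sketch below (toy code, with a stand-in logistic classifier and synthetic 1-D data) implements pool-based active learning with uncertainty sampling: rather than demanding a large labeled set, the learner requests labels only where it is least certain.

```python
import numpy as np

# Toy sketch of active learning via uncertainty sampling — one of the
# limited-data techniques above. Classifier and data are stand-ins.
rng = np.random.default_rng(1)
pool_X = np.sort(rng.uniform(-3, 3, 100))    # unlabeled pool (1-D)
true_label = lambda x: (x > 0).astype(int)   # hidden ground truth

def predict_proba(w, b, x):
    return 1 / (1 + np.exp(-(w * x + b)))    # logistic model

def fit(X, y, steps=500, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = predict_proba(w, b, X)
        w -= lr * np.mean((p - y) * X)       # gradient of log-loss
        b -= lr * np.mean(p - y)
    return w, b

# Seed with the two extreme points, then always query the point the
# current model is least certain about (probability nearest 0.5).
labeled = [0, len(pool_X) - 1]
for _ in range(8):
    w, b = fit(pool_X[labeled], true_label(pool_X[labeled]))
    unlabeled = [i for i in range(len(pool_X)) if i not in labeled]
    uncertainty = np.abs(predict_proba(w, b, pool_X[unlabeled]) - 0.5)
    labeled.append(unlabeled[int(np.argmin(uncertainty))])

w, b = fit(pool_X[labeled], true_label(pool_X[labeled]))
accuracy = np.mean((predict_proba(w, b, pool_X) > 0.5) == true_label(pool_X))
```

With only ten labels, the queries cluster around the decision boundary — exactly the points worth paying a human expert to annotate.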

One success story of solving problems without a large data set is zero-shot translation in Google’s Multilingual Neural Machine Translation system.

The system leverages previously learned translation models to translate between language pairs it has never seen. Experimentation with deep neural network architectures to augment deep learning is another likely trend.

Researchers at the University of California, Berkeley have successfully demonstrated this by adding planning-based reasoning to deep neural networks.

The Future of AI Learning


There is also significant room for improvement in the spectrum of automated learning. Supervised learning with a rich data set is already providing solutions to complex problems like computer security.

For example, Deep-Armor, a cybersecurity solution from SparkCognition, has been trained on millions of benign and malicious files and is able to identify malicious characteristics, even in attacks it hasn’t seen before.

However, AI research will drive progress toward unsupervised learning. The route to fully autonomous systems will have multiple stages, each moving closer to human-like learning — for example, learning from demonstration, where a robot learns a task by “watching” humans or videos perform it.

Dr. Scott Niekum’s lab at the University of Texas at Austin was able to teach a robot how to build IKEA furniture using only learning from demonstration.

The journey from supervised to unsupervised learning will necessitate an increased level of transparency (“explainable AI”) to ease fears and skepticism surrounding “black box” decision-making.

Although explainable AI is still an emerging area of research, progress is already being made.

Researchers at the University of California and the Max Planck Institute for Informatics have effectively demonstrated machine learning-based image recognition that can explain its reasoning in natural language. This work is important for inspiring confidence in machine learning and its adoption across a broad range of fields.

Human/AI Interaction

The role of humans in building intelligent systems will expand with new learning and “instructing” techniques.

Dr. Manuela Veloso, head of the Machine Learning Department at Carnegie Mellon University, describes the future of the relationship between humans and AI as “symbiotic autonomy.”

“Human-AI interaction will be much more sophisticated,” she explained. “I envision research on methods of correction to be incorporated in the AI machinery.”

This means that instead of being told explicitly to “Stop playing music,” a system like Amazon’s Alexa will learn context. It could hear “I’m leaving now” and infer that it should turn the music off.
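A hypothetical sketch of that kind of context inference (illustrative only — not how Alexa actually works) might map indirect utterances plus device state to actions:

```python
# Toy sketch of "symbiotic" context inference: the assistant interprets
# indirect utterances using current device state as context.
CONTEXT_RULES = {
    # (utterance cue, required state) -> inferred action
    ("i'm leaving", "music_playing"): "stop_music",
    ("it's dark in here", "lights_off"): "lights_on",
}

def infer_action(utterance, state):
    text = utterance.lower()
    for (cue, required_state), action in CONTEXT_RULES.items():
        if cue in text and state.get(required_state):
            return action
    return None  # no confident inference; await an explicit command

state = {"music_playing": True, "lights_off": False}
print(infer_action("I'm leaving now, see you later!", state))  # stop_music
```

In Veloso’s vision, the rule table would not be hand-written as it is here; the mapping from context to intent would itself be learned, and corrected by the user over time.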

Deep learning promises a mathematical representation of the universe, and its potential is only beginning to unfold. It has already uncovered valuable capabilities that are leading to commercial and real-world applications.

The economic growth from the rebirth of AI is spurring researchers to expand its abilities. But rather than chasing lofty goals, they’re pursuing incremental improvements.

Only time will tell whether deep learning’s recent success is the Big Bang or not. But given that it’s fueling the field’s growth and setting research agendas, it certainly looks that way.