The Third Wave of AI

AI has come a long way.

Over the decades, combinations of various programming techniques have enabled slow, spotty progress in AI, punctuated by occasional breakthroughs such as expert systems, decision and planning systems, and the mastering of Chess and Jeopardy! These approaches, in particular those focused on symbolic representations, are generally referred to as GOFAI (Good Old-Fashioned AI). Importantly, a key characteristic they share is that applications are hand-crafted and custom engineered: programmers figure out how to solve a particular problem, then turn their insights into code. This essentially represents the ‘First Wave’.

Starting in the early 2010s, huge amounts of training data together with massive computational power (at some of the big players) prompted a reevaluation of certain 30-year-old neural network algorithms. To the surprise of many researchers, this combination, aided by new innovations, rapidly catapulted these ‘Deep Learning’ systems well past the performance of traditional approaches in several domains, particularly speech and image recognition, as well as most categorization tasks.

Deep Learning (DL) is a statistical machine learning (ML) approach, and as such is very different from GOFAI. In DL/ML the idea is to provide the system with training data and let it ‘program’ itself; no human programming required!

In practice, a ton of human intelligence is required to make DL systems function in the real world; indeed, top experts in this field are paid several times what top programmers earn. Firstly, training data has to be carefully selected, tagged, and formatted; secondly, one has to select the type and configuration of DL network to use; and thirdly, myriad system parameters need to be tuned to get the whole thing to work effectively. All of these steps require tremendous skill, experience, and experimentation.
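
To make those three steps concrete, here is a minimal sketch using PyTorch on synthetic data. The dataset, network shape, and hyperparameter values are illustrative assumptions chosen to keep the example tiny, not a recipe from any particular production system.

```python
# A minimal sketch of the three steps described above, using PyTorch and
# synthetic data purely for illustration.
import torch
import torch.nn as nn

# 1. Prepare labeled training data (here simply fabricated). In practice this
#    step involves careful selection, tagging, and formatting.
X = torch.randn(1000, 20)                # 1000 examples, 20 features
y = (X[:, 0] + X[:, 1] > 0).long()       # a simple, learnable labeling rule

# 2. Select the type and configuration of network.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

# 3. Tune system parameters (learning rate, number of epochs, etc.).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```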

In spite of these difficulties, DL has been a roaring success in several areas: For example, the progress we see in self-driving cars and voice assistants such as Alexa would not have been possible without it. It is no exaggeration to say that Deep Learning has been a revolution in AI, with literally tens of billions of dollars being invested to further develop and exploit this technology. It is worthy of the title ‘The Second Wave’.

AI has a long way to go.

In spite of this recent progress, AI has a long way to go to approach human-level learning, thinking, and problem solving ability — a goal known as AGI (Artificial General Intelligence). Today’s AIs are quite narrow and rigid.

The vast majority of researchers agree that current technology is nowhere near human (or even animal) intelligence in terms of general cognitive ability. In particular, today’s AI is very poor at learning interactively (on the fly), adapting to changing circumstances, abstracting and reusing knowledge and skills (transfer learning), reasoning, and understanding language. As it turns out, Deep Learning is actually less capable than some First Wave approaches when it comes to certain language tasks, reasoning, planning, and explaining its actions. There is widespread consensus that current DL/ML approaches will not get us to AGI.

And these are just a small sample of its shortcomings.

The Third Wave

As the limitations mentioned above have become increasingly apparent, several AI luminaries have expressed the need for a new paradigm, with some (appropriately, I think) referring to it as The Third Wave.

While there is some divergence as to what exactly this new approach should entail, there is good consensus that it must be able to learn autonomously and in real time, to generalize, to reason abstractly, and to use natural language. There is also strong sentiment that we need more complex, more comprehensive architectures.

Cognitive Architectures

Much as deep learning neural nets had been around for several decades, not quite ready for prime time, before exploding into usefulness and prominence, cognitive architectures have been lurking in the AI background for a long time. I believe that, given the right design, cognitive architectures can provide the path to AGI.

A key feature of this approach is that it inherently seeks to account for all relevant requirements of human cognition, including such aspects as knowledge representation, generalized short- and long-term memory, perception, focus, goal management, etc. Other approaches start with only one or two aspects of intelligence and then try to deal with the missing requirements on an ad hoc basis.

A cognitive architecture approach can in principle address many, if not all, of the requirements of high-level cognition. Here’s a comparison chart based on what our current technology is capable of addressing:

Most cognitive architectures developed in the past are highly modular, utilizing, for example, distinct modules for short-term memory (STM), long-term memory (LTM), parsing, inference, planning, etc. Moreover, many of these functional modules were usually designed by separate teams without much (if any) overall coordination. This is a serious limitation: the modules tend not to share a uniform data representation or design, making it nearly impossible for cognitive functions to synergistically support each other in real time.

A much better, indeed essential, approach is to have a highly integrated system in which all functions interact seamlessly. For example, properly parsing, understanding, and absorbing the information in a sentence (say, a statement) requires access to both STM and LTM, as well as to goals, context, metacognition, and reasoning. In fact, thorough language comprehension, including the ability to hold long, meaningful conversations, is one area where the advantages of cognitive architectures over deep learning are most pronounced (see article). However, the need for highly synergistic functional interaction is equally true for almost all other cognitive tasks.
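
As a purely illustrative sketch, and not the design of any specific system, the following Python skeleton shows what a uniform data representation shared by all functions might look like at its simplest. Every class and method name here is invented, and the ‘parser’ is a toy, but it shows how a single step of language input can touch LTM, STM, and goal management together rather than passing data between isolated modules.

```python
# Hypothetical sketch of an integrated cognitive core: every cognitive
# function reads and writes the same knowledge representation (Item),
# so parsing can consult memory, context, and goals in a single step.
from dataclasses import dataclass, field

@dataclass
class Item:
    """Uniform knowledge unit shared by all cognitive functions."""
    content: str
    links: list = field(default_factory=list)   # relations to other Items
    activation: float = 0.0                     # salience / recency

class CognitiveCore:
    def __init__(self):
        self.ltm = {}     # long-term memory: content -> Item
        self.stm = []     # short-term (working) memory: recent Items
        self.goals = []   # active goal stack

    def remember(self, content):
        # LTM lookup that also learns: unseen content becomes a new Item.
        item = self.ltm.setdefault(content, Item(content))
        item.activation += 1.0
        return item

    def attend(self, item):
        # Keep a bounded working memory of the most recent Items.
        self.stm.append(item)
        self.stm = self.stm[-7:]

    def parse(self, sentence):
        # Toy 'parser': grounds each word against LTM, updates STM (context),
        # and pushes goals, all over the same shared representation.
        for word in sentence.lower().split():
            item = self.remember(word)
            self.attend(item)
            if word in ("why", "how", "?"):
                self.goals.append(f"answer: {sentence}")

core = CognitiveCore()
core.parse("the robot moved the red block")
core.parse("why did the robot move it")
print([i.content for i in core.stm])   # current conversational context
print(core.goals)                      # goal raised by the question
```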

There’s a whole universe of tasks that a truly intelligent system could perform. It could help us deal with the many difficult problems facing us: from disease and an aging population, persistent poverty and starvation, natural and man-made disasters, and clean energy and environmental issues, to governance and political challenges. The First and Second Waves gave us a glimpse of what AI can potentially do for us; we’re looking to the Third Wave to more fully realize the potential of AI.
