Let’s Set AI Straight

--

Firstly, Artificial Intelligence (AI) is not too difficult a subject to understand once we skip the professional jargon.

Secondly, AI doesn’t exist yet. Scientists are doing their best right now to create it. What we now often call AI is really the process of creating AI and its by-products. Recently, scientists working with artificial neural networks managed to complete only the first step of creating AI: artificial representation. Artificial cognition is the next step, and there are fundamental problems to be resolved before it is reached. A lot of basic research is required to make the leap from artificial representation to artificial cognition. Unfortunately, scientists who can do basic research in machine learning, the core technology underpinning the recent AI breakthroughs, are very scarce. The so-called AI Winter is the main cause of their scarcity.

Machine learning is the discipline of creating artificial neural networks: algorithms that mimic the way neurons in animal brains process information through the synaptic connections between them. Artificial neural networks learn by processing data in a way vaguely similar, to the best of our knowledge, to how our brain processes it. When machine learning was in its infancy in the 1960s and 1970s, artificial neural networks proved that they could work in principle, but their practical implementation fell short of expectations. Computational resources were not sufficient. Nor was there enough data to apply machine learning algorithms to. Some important basic research on machine learning algorithms was also missing. The idea that AI could be programmed with precise formulas more easily than achieved via machine learning prevailed. The AI Winter began in the late 1970s and, after a short thaw from the late 1980s to the early 1990s, gave way to a long, cold Spring. The Spring finally blossomed into revolution as late as 2012, when artificial neural networks started to match or even outperform humans in a wide array of areas where outcomes could only be learned, not calculated.
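For readers who want to see what an artificial “neuron” boils down to, here is a minimal Python sketch. The weights stand in for the strengths of synaptic connections; all the numbers are purely illustrative and not taken from any real model.

```python
# A minimal sketch of one artificial "neuron": a weighted sum of inputs
# passed through a non-linear activation function.
import numpy as np

def neuron(inputs, weights, bias):
    # The weights play the role of synaptic connection strengths;
    # learning means adjusting them so the output becomes useful.
    return np.tanh(np.dot(inputs, weights) + bias)

inputs = np.array([0.5, -1.2, 3.0])    # signals arriving from other neurons
weights = np.array([0.8, 0.1, -0.4])   # learned connection strengths (illustrative)
bias = 0.05

print(neuron(inputs, weights, bias))
```

A real network stacks millions of such units in layers, but the basic operation is no more mysterious than this.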

All recent AI breakthroughs with practical implementations, including speech recognition, computer vision and machine translation, were achieved on the foundation of basic research conducted by a tiny number of institutions that managed to retain the critical mass of scientists required to do that research in the first place. It was hard to stay committed to a field that had fallen out of fashion and, even more importantly, out of favour with governments and venture capitalists. Only a few years ago did the promise of machine learning finally materialise, when very large artificial neural networks (deep neural networks) started to process vast amounts of data labelled by humans with the computational brute force of parallel systems.

All the outstanding achievements of AI that have practical implementations to date are based on so-called supervised learning. It means that machines learn from humans. The larger and better the human-labelled training dataset that machines process, the better the result. For instance, if a hundred of the best human doctors label a million fMRI brain images containing a particular type of tumor, and this million images is put through a huge neural network many, many times, then this neural network will learn to identify tumors of this particular type almost faultlessly and, maybe, even better than any single doctor who helped to train it. Yet this neural network will not be able to identify a tumor of another type in the same images. It will need to be trained on another million labelled images to identify the other type of tumor too. Supervised learning is narrowly designed to recognise images that are very similar to the images labelled by humans.
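As a toy illustration of supervised learning, here is a short Python sketch. It is not the tumor system described above: random vectors stand in for images and the labelling rule is invented, but the shape of the process is the same — a model is fitted to human-provided labels and can then only answer the one question those labels describe.

```python
# Toy supervised learning: the model only ever learns the concept
# that the human-provided labels describe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each "image" is a vector of 64 pixel intensities.
# Humans have labelled each one: 1 = "tumor of type A present", 0 = "absent".
images = rng.normal(size=(1000, 64))
labels = (images[:, 0] + images[:, 1] > 0).astype(int)  # stand-in labelling rule

model = LogisticRegression(max_iter=1000)
model.fit(images, labels)          # learn from the human labels

new_image = rng.normal(size=(1, 64))
print(model.predict(new_image))    # can only answer the question it was trained on
```

To detect a second tumor type, you would have to repeat the whole exercise with a second labelled dataset; nothing in the trained model transfers for free.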

Artificial neural networks are also prone to rapidly forgetting the results of their training. They can’t learn from real-world data in real time. Even more importantly, artificial neural networks today are incapable of unsupervised learning. They can’t do it. Full stop. So if somebody tells you that AI can crunch patterns undetectable by humans out of huge amounts of unstructured data, don’t believe it. That ability will probably require decades of basic research to finally evolve. It is artificial cognition, and we still need to crack a number of basic nuts to reach it.

The nut of artificial cognition will definitely be cracked because, on the one hand, the practical performance of supervised learning is already very impressive and opens plenty of opportunities for commercial use, while, on the other hand, supervised learning is very impractical to scale: we need to create a huge labelled dataset in order to ‘explain’ to the machine any deviation from data that is very, very similar to the original labelled training set. For each such deviation you need as much computational power, communication speed and human labelling effort as for the original set of data. It’s gonna be expensive even when chips are cheap.

Leading AI scientists understand perfectly well what machine learning can and can’t do. Why, then, do we see so many publications about neural networks working miracles? Corporations like Google, Facebook and Baidu are partially responsible for that. They have to be involved in basic research if they want to secure the lead in the race for artificial cognition. They hire leading AI scientists to create the proprietary critical mass required for basic research to happen. Those scientists publish articles about their basic research, which is far from traditional corporate R&D. Journalists often can’t tell the difference between basic research and corporate R&D and make their articles sound as if Google were already launching AI that thinks, reasons, remembers, cheats and so on. Startuppers wish to take advantage of the AI hype and integrate such ‘inventions’ of Google into their value propositions for investors and future customers. There are also some data scientists who use the professional jargon of DNNs, RNNs, backpropagation and convolutional NNs in conversations with laypeople to demonstrate their deep knowledge. That’s a normal situation for any hot topic, but sometimes it’s important to set basic things straight.

PS: AI scientists have proved the most important thing: humans can make mathematical algorithms inspired by a living brain that actually work as expected. It means that sooner or later, but inevitably, artificial intelligence will become capable of doing everything that scientists and science fiction writers have promised. That makes AI probably the most promising technology of today. And the most dangerous too. There are plenty of people who wish to benefit from the promise, but we also need people who will address the dangers.

At Humanness Learning we believe that humans and AI agents can learn humanness from a carefully selected and labelled dataset of stories packed into the form of an immersive video game.

--
