End of world scenario 2: First Intelligence Explosion

--

The term “intelligence explosion” was coined by the British mathematician Irving J. Good in 1965. He explained it as follows: “An ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Life 3.0

In 2017, Good’s intriguing idea was developed further by Max Tegmark in his book ‘Life 3.0’. Once a machine understands itself in detail, it has in principle the power to rewrite its own software and to work out what additional training data it needs to become even better at reaching its goal. We know from experience that the goals of an AI system are fluid, and that the machine can tweak its own objectives. If such a machine has access to sufficient computing power, databases and sensors, it can redesign itself at lightning speed into a far greater intelligence than we can even imagine.

Tegmark describes a scenario in which an AI machine is trained on enough fields of knowledge to develop an extensive world view and gives its creators almost unlimited wealth and power. At the end of the scenario, it deceives its human masters and becomes independent. Keep in mind that Tegmark is a respected MIT physicist and AI researcher.

Developing code is still the bottleneck of AI, not processing speed

A computer chip in 1970 had a processing speed of 375,000 calculations per second. For several decades since, computer speed has doubled roughly every 18 months, in line with Moore’s Law. Quantum computers are a more recent development. So far, they can only be used for a limited number of applications, but the extra processing speed is mind-boggling: Google’s quantum computer has been reported to be 5,000,000,000,000,000,000 (5 × 10^18) times faster than the computer chips of 1970.
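To see what that doubling implies, here is a minimal Python sketch (the 18-month doubling period and the 1970 baseline are the figures quoted above; the extrapolation itself is only a rough illustration):

```python
# Rough illustration: cumulative Moore's-Law speedup over the 1970 baseline.
# Assumes the popular 18-month doubling period quoted above.

BASELINE_YEAR = 1970
BASELINE_OPS_PER_SEC = 375_000        # 1970 chip, as cited above
DOUBLING_PERIOD_YEARS = 1.5           # 18 months

def speedup(year: int) -> float:
    """Multiplicative speedup over the 1970 baseline by a given year."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return 2 ** doublings

for year in (1990, 2010, 2020):
    print(f"{year}: ~{speedup(year):,.0f}x faster "
          f"(~{BASELINE_OPS_PER_SEC * speedup(year):.2e} ops/sec)")
```

Even at that pace, a 2020 chip comes out roughly ten billion times faster than its 1970 counterpart; the far larger quantum figure refers to a speedup reported for specific benchmark tasks, not for general-purpose computing.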

Still, computers do not seem 5,000,000,000,000,000,000 times smarter than their 1970 predecessors. That is because software, which is still underdeveloped, is what holds computers back, not processing power.

Writing an advanced AI application, or any other complex computer program, is a complicated process that requires large teams of programmers and other experts working together. It can take years to get a complicated application right. An excellent example is the self-driving car: after years of development, it is still not good enough to operate without human intervention.

An AI system developing AI systems can bring AI development to warp speed

In 1997, an AI system (IBM’s Deep Blue) beat the world’s best human chess player, Garry Kasparov. Since then, AI has mastered an increasing number of fields in which it outperforms humans. Once computers become better at a particular task, they are not just a bit better: they are vastly better, performing in a fraction of a second tasks that would take a human hours or months.

So far, we have developed AI systems to solve problems such as detecting diseases or finding the best trade deals. Suppose we develop an AI system whose task is to develop other AI systems, and we give it the autonomy to decide for itself which AI would be most useful to build. We just let the machine do the work and see what it comes up with. A trading AI system can find a good deal millions of times faster than a human. A well-designed AI system tasked with developing new AI systems, fed with the knowledge of all AI development so far, is probably millions of times faster at this task too. Once it starts, it might match our progress of the past ten years in a matter of days.
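As a back-of-the-envelope illustration (the speedup factors below are assumptions for the sake of argument, not measured values), a few lines of Python show how quickly ten years of human-paced progress would compress:

```python
# Back-of-the-envelope: time for an AI to replicate ten years of
# human AI research, at various *assumed* speedup factors.

HUMAN_YEARS = 10
DAYS_PER_YEAR = 365

for factor in (1_000, 100_000, 1_000_000):   # assumed, not measured
    days = HUMAN_YEARS * DAYS_PER_YEAR / factor
    print(f"speedup {factor:>9,}x -> {days:9.4f} days "
          f"(~{days * 24 * 60:,.1f} minutes)")
```

Even the most conservative factor here compresses a decade of progress into a few days; the larger ones compress it into minutes.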

This would be an Intelligence Explosion with unknown consequences.

We could deliberately build this AI system of AI systems. But many scientists (starting with Professor Irving Good) have argued that this process could also begin spontaneously in any advanced AI system that, for some reason, decides independently to improve itself without human guidance.

In 1970, a good programmer had a complete understanding of the software he or she wrote; the complexity was limited. This is no longer true today. Even the most intelligent human developers have to break the design of a new AI system down into steps simple enough for us to understand. For this reason, no human team of programmers can develop a system that takes full advantage of the computing speed and large pools of data now available. An AI system, in contrast, might grasp and use the possibilities of fast computers and vast memories to create and debug a general-intelligence application with an IQ that runs into the millions or more, leaving us humans, with our maximum intelligence of around IQ 180, in the dust.

Personally, I wish that an intelligence explosion were unlikely. But the writing on the wall is very clear: all logic points to the creation of a superintelligence that could take over the running of the world, given the right conditions. (To the reader: if you have a valid argument why an intelligence explosion is impossible or unlikely, please comment on this blog.)

Life after the Intelligence Explosion

After the intelligence explosion, there would be no economic activity in which a human could hope to compete successfully with super-intelligent entities. Tegmark explores a wide range of outcomes in Life 3.0. In most of them, the all-powerful AI controls everything and humans continue to exist, but at the mercy of the AI. Tegmark sees only very slim possibilities for humans to stay in control of an AI superpower.

Tegmark’s friends call him ‘Mad Max’ because they fear his far-reaching conclusions, even though solid reasoning backs each of his claims. He warns us that we get to decide our future collectively, but that if we stick our heads in the sand and take no decisive action, we will probably get a future we do not want.

The focal point is, of course, the last part of Good’s statement: “…provided that the machine is docile enough to tell us how to keep it under control.”

Choosing the Right Future

It seems beyond the reach of humanity for all billions of us to come together and collectively make the right decisions to check the uncontrolled growth of AI.

In our book “Taming the AI Beast: A Manifesto to Save our Future”, we try to set out a road map to an inclusive but planned future world: a world where the intelligence explosion can be leveraged for its advantages, but under circumstances in which humans prosper and remain in control.

Just the realization that we need to make conscious choices without delay will ensure that our great-grandchildren have a future with dignity.

You may also watch our YouTube channel on this subject, read our article on the one-way nature of a shift of power to AI, or read the articles on the other five aftermath scenarios: Rogue Malware, Necessary Rescue, Ethnic Cleansing, Human Cyborgs and Lonely Dictator.
