The Impending AI Pandemic


What COVID-19 teaches us about the existential threat of artificial intelligence.


COVID-19, carbon-induced climate change, and nuclear winter carry existential risk, not due to their intelligence, but because they destroy large systems upon which we depend.

Our first existential digital threat is widely expected to be an artificial general intelligence, a grand AI capable of achieving any goal. Surprisingly, the real AI threat is not big and scary, like a monster, but small, systemic, and decentralized, like the coronavirus.

Replication

This threat is right under our noses. Internet worms, a class of self-replicating malware often lumped in with computer viruses, carry a creepy potential. These programs spread autonomously from machine to machine. And they are already unstoppable.
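To make the dynamics concrete, here is a minimal sketch of worm-style spread, with entirely made-up numbers: a toy network in which each infected machine probes a handful of random machines per step, and any unprotected contact becomes a new host.

```python
import random

# Toy model of worm-style spread. All numbers here are made up
# for illustration; real propagation rates vary wildly.
random.seed(42)

MACHINES = 100_000      # size of the toy network
CONTACTS_PER_STEP = 4   # targets each infected host probes per step
PATCHED_FRACTION = 0.7  # fraction of hosts that shrug off the payload

infected = {0}  # patient zero

for step in range(1, 21):
    newly_infected = set()
    for host in infected:
        for _ in range(CONTACTS_PER_STEP):
            target = random.randrange(MACHINES)
            if target not in infected and random.random() > PATCHED_FRACTION:
                newly_infected.add(target)
    infected |= newly_infected
    print(f"step {step:2d}: {len(infected):7,} infected")
```

Run it and the infected count roughly doubles every step until the network saturates. No coordination, no intelligence, just cheap copying.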

Consider MyDoom, a worm unleashed in 2004. Samples of MyDoom were still found in roughly 1% of all emails sent in 2018. With global email traffic running to hundreds of billions of messages a day, that works out to almost a trillion copies of this worm scattered every year, waiting to be picked up by an unprotected host.

MyDoom also exhibits considerable variation. As the worm spreads, it changes its own code to avoid detection. This not only mimics the variation seen in biological evolution but also provides raw material for evolutionary algorithms: trillions of samples make a juicy dataset in which many mutations can be tested and models can be trained.
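For readers unfamiliar with the mechanism, here is a minimal sketch of that mutate-and-select loop, applied to a deliberately harmless toy problem: evolving random strings toward a fixed target. The target, rates, and population size are arbitrary choices for illustration.

```python
import random

# The core loop of an evolutionary algorithm: score candidates,
# keep the fittest, refill the population with mutated copies.
random.seed(0)

TARGET = "replicate"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
POP_SIZE = 200
MUTATION_RATE = 0.05

def fitness(candidate: str) -> int:
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Rewrite each character with a small probability.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

population = [
    "".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)
]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"generation {generation}: reached {population[0]!r}")
        break
    # Keep the fittest half; refill with mutated copies of survivors.
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
else:
    print(f"best after 500 generations: {max(population, key=fitness)!r}")
```

Swap the toy strings for real worm samples and detection outcomes, and this same loop becomes the selection pressure described above.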

Worm-style replication is so cheap and quick, it’s already a grand success. The classic sci-fi alternative, some kind of Terminator factory staffed by Terminators that build yet more Terminators, seems silly once this simple, proven technique comes into view. Of course, that vision has been refined. As early as the late 1940s, computing pioneer John von Neumann imagined the problem more abstractly: a pile of parts, and machines that use those parts to create clones of themselves. Today we see researchers pursuing 3D printing and self-assembly to realize contemporary versions of such visions.

However, we are naive if we expect the first existential AI threat to use a complex and costly physical process to reproduce. A genius AI would not take the most expensive path. Internet worms provide a cheap, well-proven method of reproduction that happens to supply ample resources for mischief. Once a worm is inside a machine, it can do almost anything that machine can do.

Destruction

In terms of social and financial destruction, internet worms have nothing on the coronavirus. A particular strain of ransomware called GandCrab stands out, having caused a mere $2 billion in damages. That cost is nothing compared with the trillion-dollar macroeconomic consequences of COVID-19.

Nevertheless, GandCrab’s haul stands high above any financial damage caused by leading-edge AI. Excellence in pattern recognition and decision-making does not translate into convincing embezzlement. A smart AI would likely resort to one of the proven, dumb extortion techniques, like locking you out of your laptop and demanding a bitcoin ransom.

So far, in terms of physical destruction, internet worms haven’t held a candle to old-fashioned warfare. The Stuxnet worm gained international fame for a rather paltry act of physical destruction: the annihilation of roughly a thousand uranium-enrichment centrifuges. But this may change in the future.

As computer-controlled vehicles and weapon systems develop, the physical risks of hostile worm infestations grow more dire. We don’t need to imagine some generalized AI driving around a killer drone. The navigation and targeting systems (old-school, narrow AI) could be triggered by a dumb worm.

Traditionally, viruses have not had much capacity to perform work. Due to network and machine limitations, they could not handle advanced computation. But things have changed, and payloads are growing. The Flame worm was noteworthy almost a decade ago for its large, component-based software architecture that could execute a variety of attacks. These days, the more common setup is a distributed botnet. Worms run massive cryptomining operations, one of the most power-hungry (and profitable) forms of work a computer can do all by itself.

We don’t have to wait for a genius, general AI to realize that infinite sustenance for replication exists on a botnet. A very dumb AI (like the even dumber coronavirus) could instead merely stumble into such an ample host.


Havoc

All of this potential for destruction, of course, doesn’t mean that Armageddon will indeed be unleashed. There is copious debate in the AI community about what forces (if any) might cause a complex artificial general intelligence to do evil. We could spend weeks weighing the merits of various speculations.

However, with internet worms, no such projections are needed. They leave a history of evil in their wake. Every day, human actors race to write code that overpowers our computational ecosystem. With so much bad human intention that could accidentally slip out of control, the independent evolution of machine evil is not required. We already live in a world where our militaries join independent hackers in the hunt for disproportionate power.

The very forces that drive black hats to write malware could lead them to unleash an unstoppable, distributed evil. Advanced intelligence is not necessary. While some toil away at a worm smart enough to invent new attack vectors, a smarter hacker will simply write a bot that scans for freshly reported, unpatched vulnerabilities and deploys exploits against them without even understanding them.

The exploits themselves are getting stronger and stronger as the incremental results of AI research are applied. Learning systems have been used to automate spear phishing. Chatbots have been built to infiltrate social networking sites.

That humans are working toward this highly adaptable form of malware is not speculation but reality. The powder keg is surrounded by sparks.

The Future Existential Threat

We’re fighting a microscopic, distributed population of pseudo-organisms. All of the arguments about when a machine will become “intelligent” are moot when something far more stupid can bring life as we know it crashing to a halt. As the coronavirus has shown, not much intelligence is needed to tear society apart.

Don’t confuse “malicious” with “intelligent”. Existential threats to humanity are by definition harmful but, like the coronavirus, need not be intelligent.

The lofty threat of an out-of-control artificial general intelligence may indeed exist. But let’s be real: before that threat ever materializes, we will face threats from dumber artificial brains, like the ones we already face.

Our primate brains are built to worry about monsters that we can see, fight, or run away from. Scary monsters trigger our primal fears, developed over millennia of living exposed in nature.

Our natural intuition does not serve us well in the late Information Age. Counterintuitively, our existential threats are not smart robots but rather dumb and distributed snippets of code.

