Philosophical Musings: Genetic Algorithms + Deep Networks

--

There’s been a lot of talk recently about the shortcomings of deep networks. I suppose that after having built up so much hype, it’s only natural that people would want to take a step back and figure out where they fall short.

A lot of what I’m hearing revolves around the shortcomings of stochastic gradient descent, the need for huge amounts of data, the difficulties of transfer learning, and the claim that deep networks are simply hyper-optimized memorization functions. The general consensus is that deep networks, at least in their current form, can’t be the building blocks for artificial general intelligence. Some people think that we’re going to need to start from scratch, that deep networks just fundamentally can’t reach self-awareness.

Though I tend to agree that, as they’re set up right now, deep networks probably won’t reach AGI, I started playing devil’s advocate in my head and thought about what it would take for them to actually get there.

The premise of my internal debate is that the human brain is essentially a three-dimensional object composed of an unbelievably large number of building blocks (neurons) that interconnect to form specialized regions, each of which work together to form an incredible organ.

These building blocks and specialized regions evolved over millions of years, with brain structures that conferred survival benefits being more likely to persist and replicate. These selective pressures led to the creation of what some have called the most complex object in the universe.

But if the brain is, at its core, an interconnection of regions, which themselves are an interconnection of neurons, hypothetically, it should be possible to simulate the creation of those regions and neurons in a machine in a way that resembles the human brain.

The difficulty of doing this is pretty much unfathomable at this point. However, one could imagine that it may be possible, one day, to start with a system that has millions of neurons, and let that system interact with some simulated external environment that would influence how the neurons in the system interconnect. Then, we could simulate that system naturally dying off, but not before it generates a new sort of system, its offspring, that inherits its neural structure while also gaining some additional neurons, random mutations, and other characteristics. That new offspring system could start to interact with a more complicated simulated environment, which would shape its own development, and so on and so forth, for trillions or quadrillions of iterations. Eventually we’d have a neural system that has evolved to form the neural connections needed to survive in this simulated environment.
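To make the loop concrete, here’s a minimal neuroevolution sketch in Python. Everything in it is an invented simplification for illustration: the “environment” is just a fixed fitness function (fitting XOR), each organism is a tiny one-hidden-layer network encoded as a flat weight list, and reproduction copies a parent with Gaussian mutations, occasionally growing an extra hidden neuron to echo the “additional neurons” idea above.

```python
import random

def make_genome(hidden=2):
    # Weight layout: input->hidden (2*hidden), hidden biases (hidden),
    # hidden->output (hidden), output bias (1).
    n = 2 * hidden + hidden + hidden + 1
    return {"hidden": hidden, "w": [random.gauss(0, 1) for _ in range(n)]}

def forward(g, x1, x2):
    h, w = g["hidden"], g["w"]
    acts = []
    for i in range(h):
        s = w[2 * i] * x1 + w[2 * i + 1] * x2 + w[2 * h + i]
        acts.append(max(0.0, s))  # ReLU
    return sum(w[3 * h + i] * acts[i] for i in range(h)) + w[4 * h]

# A stand-in "environment": the organism survives by computing XOR.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(g):
    # Higher is better; 0.0 is a perfect fit.
    return -sum((forward(g, x1, x2) - y) ** 2 for (x1, x2), y in XOR)

def mutate(g, rate=0.3, scale=0.5, grow_prob=0.05):
    child = {"hidden": g["hidden"], "w": list(g["w"])}
    for i in range(len(child["w"])):
        if random.random() < rate:
            child["w"][i] += random.gauss(0, scale)
    if random.random() < grow_prob:
        # The offspring "gains additional neurons": add one hidden unit
        # with a zero output weight so behavior starts unchanged.
        h, w = child["hidden"], child["w"]
        new_w = (w[:2 * h] + [random.gauss(0, 1), random.gauss(0, 1)]
                 + w[2 * h:3 * h] + [0.0]
                 + w[3 * h:4 * h] + [0.0]
                 + [w[4 * h]])
        child = {"hidden": h + 1, "w": new_w}
    return child

def evolve(generations=200, pop_size=50, elite=10):
    pop = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]          # the fittest survive and replicate
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # fitness of the best evolved genome; 0.0 would be perfect
```

Note what’s missing, which is exactly the hard part described above: here the environment is a static function, whereas the thought experiment requires the environment itself to grow richer across generations.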

This might just be quackery. It’s clear that building a simulated environment immersive enough to provide the right stimuli to an artificial neural system would be prohibitively difficult at this stage (think of amoebas interacting with their immediate environment, how they evolved into more complex organisms that had to interact with the ocean, how those evolved into land-faring creatures, and how more and more stimulus reached their senses with each step), and that building in the ability for a neural system to evolve appropriately, without the whole system simply shutting down, would be incredibly tough.

But this sort of evolutionary process, if we could achieve it, could sidestep the need for humans to inject their own knowledge about the brain into an AGI. It could be akin to taking AlphaGo Zero and having it learn from randomness, rather than learn from watching human moves.

Of course, it could be nice to inject some of what we know about the brain into the neural system itself. For example, we know that certain structures in the brain are specialized for certain tasks, so we could build an initial version of the neural system to simulate those specializations, but doing so may actually hamper the evolution of the artificial brain, rather than help it evolve more quickly.

Thinking about the interaction between deep networks and evolutionary algorithms is fascinating, but we’re not quite at the point where we can do anything with this yet. Not to mention this could just be the ramblings of an AI startup guy that’s deep in the weeds of building a new business. In any case, the future will be fascinating and I hope to see as much of it as I can.
