The Lie of AI: Can we create true artificial intelligence without endangering our species?

The thought of an artificially designed and operated intelligence is exciting not just for its potential uses and applications (cars that truly drive themselves, robots that perform otherwise life-threatening tasks, wars waged entirely between mechanical wonders, spaceships capable of plotting and flying their own course, and rovers with enough self-sufficiency that we can send them to other solar systems) but also for what it might imply about the emergence of intelligence: what steps did humanity, and life on Earth, take to reach this level? How did we get here? How does it all work?

Unfortunately, most of the current literature on the subject ignores one crucial piece of the puzzle: that human intelligence was not created. That at its most basic, intelligence is merely the outcome of selective environmental pressures. This seems obvious once spelled out, and yet somehow it is crucially glossed over by popular proponents of mind design.

The fact of the matter is, regardless of whether you believe in dualism or the idea of a spiritual consciousness, a complete simulation of what we know of intelligence is not achievable left solely in the hands of developers. Why? The mind was not built in a day. It took thousands upon thousands of years to produce a creature capable of contemplating itself, capable of mathematics, emotions, and science; and every manifestation of this intelligence, every human being, builds their own mind within their lifetime.

We are not born knowledgeable; we are simply born with very complex structures for learning, each aimed at a certain task. Surely this should be the aim of artificial design? To preload a system with information is to take away the very thing that makes that information meaningful.

The nature of semantics is hotly debated in the philosophical community, but at its simplest it is the relation of one system to another. "Dog" means dog because your parents pointed at a large, furry, slobbering, barking beast (which was, at its most basic, information loading into your various sensory systems) and said "dog"; the word became its own symbol, standing in relation to those sensory symbols. But you didn't get it at first. All experiences blended together; everything was one thing. Children's semantic errors at a young age are indicative of the processes at work.

There is a careful balance between words acquiring meaning and meaning acquiring words, and it takes trial and error to define the boundaries. A young child goes around calling anything big and furry a dog for a few months, because she doesn't yet know the boundaries of the relations; the meanings are blurred, the classifications for objects broader. There really isn't much of a difference between a cat and a dog to a child, and perhaps she literally isn't seeing one. That this oscillation between words and meaning ("particles of experience") could be preprogrammed into a computer seems unlikely. It seems to have developed organically: a system not only for processing and reacting to a specific environment, but miraculously able to simulate (through language, via representation, visualization, and imagination) an infinite universe of possibilities.
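
The trial-and-error narrowing of categories described above can be sketched as a toy nearest-prototype learner. To be clear, everything here (the feature names, the numbers, the two labels) is invented purely for illustration; this is not a model of actual child development, just a caricature of boundaries sharpening through corrective feedback:

```python
# Toy nearest-prototype learner: categories start blurred and only
# sharpen when a correction introduces a competing category.

def classify(prototypes, features):
    """Assign the label whose running-average prototype is closest (squared distance)."""
    return min(prototypes, key=lambda label: sum(
        (p - f) ** 2 for p, f in zip(prototypes[label], features)))

def learn(prototypes, counts, label, features):
    """Nudge the labeled prototype toward the new example (running mean)."""
    counts[label] = counts.get(label, 0) + 1
    old = prototypes.get(label, features)
    n = counts[label]
    prototypes[label] = tuple(o + (f - o) / n for o, f in zip(old, features))

# Invented features: (furriness, size, barks)
prototypes, counts = {}, {}
learn(prototypes, counts, "dog", (0.9, 0.6, 1.0))   # first dog seen

# With "dog" as the only known category, everything big and furry is a dog:
cat = (0.9, 0.3, 0.0)
print(classify(prototypes, cat))   # prints "dog" (overextension)

# A correction ("no, that's a cat") adds a competing category,
# and the boundary between the two snaps into place:
learn(prototypes, counts, "cat", cat)
print(classify(prototypes, cat))   # prints "cat"
```

The "dog-means-everything-furry" phase falls out of the structure: until feedback supplies a rival prototype, the nearest category wins by default.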

So the question is, how do you make a robot curious? Not just programmed to learn, but curious. Simple: you make learning beneficial. You make it, ultimately, the only tool keeping the system from annihilation. How do you do that? First, you need a bigger system.

For the individual is just one part of the equation that is "intelligent consciousness." It also relies heavily on the outside environment, including its own role within the system we call "nature": that it will live, die, and hopefully reproduce. This is imperative to the nature, and thus the meaning, of consciousness. If it didn't help us live, we never would have developed it.

Other individuals, too, function not only as factors in our development but as part of our self-identity, crucial to our understanding of language and thus of our own stimuli. Now a clearer picture forms: the basic units of consciousness rely on interactions between systems, and systems of systems, in relation to one another. Take away its context and its role as a survival instinct, and you take away its meaning. In other words, it's time to bring Darwin back into the equation.

A "bigger system" doesn't necessarily have to be our universe. It can be a smaller one: perhaps a biodome, or just as easily a computer program. The game of life, I believe, is entirely material-independent, and its rules are fairly simple: make copies, and eat or be eaten. A basic token game will suffice. Now you make players, equipped to collect tokens, kill each other, make copies of themselves, and erase themselves after a certain amount of time. The trick is to implement "randomness" into everything they do, so that each copy is only 99.8 percent faithful: just imperfect enough for some variations to die and for others to succeed.
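
A minimal sketch of such a token game, in Python. Every number here (the size of the token pool, the upkeep cost, the lifespan, the copy noise) is an arbitrary assumption chosen only to keep the toy running, not a claim about the right parameters:

```python
import random

random.seed(1)

FOOD_PER_GENERATION = 50.0   # shared token pool each round (assumed value)
UPKEEP = 0.5                 # energy burned per round just to exist
MAX_AGE = 30                 # players erase themselves after this long
COPY_NOISE = 0.02            # copies are slightly imperfect, in the spirit of "99.8 percent"

def make_player(skill=0.1):
    # skill: how good this player is at grabbing tokens from the pool
    return {"skill": skill, "energy": 1.0, "age": 0}

def generation(players):
    """One round: eat, pay upkeep, age, die, and make noisy copies."""
    total_skill = sum(p["skill"] for p in players) or 1.0
    for p in players:
        # tokens are claimed in proportion to skill
        p["energy"] += FOOD_PER_GENERATION * p["skill"] / total_skill
        p["energy"] -= UPKEEP
        p["age"] += 1
    # eat or be eaten: starved or expired players are erased
    survivors = [p for p in players if p["energy"] > 0 and p["age"] < MAX_AGE]
    children = []
    for p in survivors:
        if p["energy"] > 2.0:            # enough surplus to copy itself
            p["energy"] -= 1.0           # copying costs the parent
            child_skill = max(0.0, p["skill"] + random.gauss(0.0, COPY_NOISE))
            children.append(make_player(child_skill))
    return survivors + children

players = [make_player() for _ in range(20)]
for _ in range(200):
    players = generation(players)
    if not players:                      # extinction is possible in principle
        break

mean_skill = sum(p["skill"] for p in players) / len(players) if players else 0.0
```

Nothing in this loop "teaches" the players anything; higher-skill copies simply starve less often, which is the whole point of the argument above: the stakes, not the programmer, do the selecting.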

Let that simulation run for long enough "time" and eventually you will end up with a few species quite unlike what you first started with: perhaps longer-lived, perhaps better at collecting food, perhaps "bigger" or even "smaller" in virtual space, most probably working symbiotically with one another and settling naturally into an equilibrium. But there is more to it than that. A simple system will only yield simple animals.

You need a nearly infinitely complex system like our universe, complete with its own "genetic randomness," its own code of building blocks and patterns of "energy" that can serve as a token system, along with the right accidents and the right amount of randomness, to develop anything close to human consciousness. Given the right environment, it is an inevitable outcome of "natural evolution." This is the only way to achieve a virtualization of what we experience. Some would argue that this would still only be a virtualization, although isn't that sort of what experience is? If colors only, and rather arbitrarily, stand in for electromagnetic wavelengths, of which we "see" only a tiny portion, isn't that in itself only a representation of "reality"? We all operate within our own virtualized version of reality, based on what we can perceive with our senses. An objective experience of reality requires absolute omnipresence: a feat reserved for gods and metaphysics.

There is another option, but it is the unspoken taboo that could spell the end of our species; the possibility echoes through the zeitgeist. If it is not feasible to create an infinitely complex simulation, then we may simply raise the stakes for simulated intelligence within our own reality. The implications of this are, of course, horrific. A superior intelligence programmed to survive? This is the stuff of '80s synthwave nightmares and movies starring Schwarzenegger. However, until the stakes are raised for artificially intelligent systems, they will only ever be capable of spitting out what we put in. A "female" robot recently expressed the desire to start a family, but this was no more a true "desire" than Alexa wanting to call you by whatever weird nickname you told her to. At the end of the day, A.I. has indeed grown in complexity, capable of automating tasks, research, and decisions, often better than any human could.

However, to say that we have created a single "thinking" machine, or even come close, would be a lie. These systems are many times more complex, yet they are no nearer to true intelligence than the early chat bots of instant messaging. Unless they can experience meaning through the inherently creative process of "staying alive" (which no scientist alive currently has the ability, nor the intention, to create), there can be no true "intelligence" in AI.
