Has computationalism failed?

About the strengths and weaknesses of different models of the mind

--

An artistic depiction of computations in the brain.

I recently attended a reading by Siri Hustvedt, where she discussed her latest book “The Delusions of Certainty”. Hustvedt is clearly very smart and has many interesting ideas, for instance regarding the role of the placebo effect in our health care system. However, she made one claim that I cannot agree with and that I want to analyze in more detail in this post: she said that “computationalism […] has failed” as a model for the human brain.

DISCLAIMER: I am doing my PhD in machine learning, so I am most likely biased in this matter. However, I will try to argue on a more fundamental level, in order to make the discussion less opinionated and more substantive.

Before we dive into the assessment of different models and their shortcomings, let us first think about what a model is and what makes it successful or not. As many students of the mathematical disciplines know, the late British statistician George Box famously said:

Since all models are wrong, the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary, following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist, so overelaboration and overparameterization is often the mark of mediocrity.

In a later paper he specifies:

[…] For such a model there is no need to ask the question “Is the model true?”. If “truth” is to be the “whole truth”, the answer must be “No”. The only question of interest is “Is the model illuminating and useful?”.

From this concise but momentous statement we can conclude that a model should be considered successful if and only if it is parsimonious in its assumptions and still useful in explaining observable phenomena. Conversely, we can probably say that a model has “failed” if it does not offer any useful explanations or insights. Let us now explore what computationalism is all about and why one might consider it a success or a failure.

Computationalism, also known as the Computational Theory of Mind, is the idea that the human brain works like a computational system: it takes inputs (sensory information), combines them with its internal state and performs computations on them to yield outputs (thoughts, decisions, actions, etc.). Importantly, this does not mean that the brain is equivalent to a modern-day computer. Modern computers are designed to be universal Turing machines (except for their limited memory), i.e. you can run a Turing-complete programming language on them (e.g. Python) and use it to emulate any Turing machine, which in turn lets you compute any computable function. Human brains, by contrast, are evolutionarily optimized to perform just one function, namely controlling a human body as successfully as possible. We might therefore assume that they can probably approximately implement any function that is helpful in that endeavor (such as predicting the likelihood of getting eaten by a tiger), but not necessarily any arbitrary function (like factorizing large numbers).
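To make this abstraction concrete, here is a minimal sketch of such a system: a function that maps sensory input plus internal state to an updated state and an output. All names and the toy “threat detection” logic are purely illustrative, not a claim about actual neural mechanisms.

```python
def step(internal_state, sensory_input):
    """One computational step: combine the input with the internal
    state, update the state, and produce an output (an action)."""
    new_state = internal_state + [sensory_input]   # remember what was seen
    # A toy instance of a "function the brain implements": threat detection.
    output = "run" if "tiger" in sensory_input else "relax"
    return new_state, output

state = []
state, action = step(state, "rustling leaves, tiger stripes")
print(action)  # -> run
```

The point is not the particular rule, but the shape of the abstraction: output depends on both the current input and everything the state retains from the past.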

One of the first accounts of the very basic idea of computationalism was laid out by Thomas Hobbes in his work De Corpore in 1655:

[…] by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract.

Of course, back then there were no computing machines and science was pretty clueless about neurons and the physiological foundations of brain function, so Hobbes was making a philosophical rather than a scientific argument. By 1943, neuroscience as well as computer science had evolved quite a bit, which allowed McCulloch and Pitts to write their famous paper “A logical calculus of the ideas immanent in nervous activity”. This can be considered the first scientific attempt to formalize computationalist ideas.
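The McCulloch-Pitts model reduces a neuron to a threshold unit: binary inputs are summed and compared against a threshold. The sketch below (which omits the inhibitory inputs of the original model for brevity) shows how such units realize logical functions:

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: fires (returns 1) if and only if the
    number of active excitatory inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 yields logical AND, a threshold of 1 yields OR.
print(mp_neuron([1, 1], threshold=2))  # AND(1, 1) -> 1
print(mp_neuron([1, 0], threshold=2))  # AND(1, 0) -> 0
print(mp_neuron([1, 0], threshold=1))  # OR(1, 0)  -> 1
```

Networks of such units can compute any Boolean function, which is what made the paper so influential as a bridge between neurons and computation.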

Computationalism was then further developed by Hilary Putnam and others, while at the same time (in 1986) connectionism was popularized by Rumelhart, Hinton and McClelland in their paper “A General Framework for Parallel Distributed Processing”. The basic idea of connectionism is that information in the brain is stored and processed in a distributed fashion across many different neurons and that the computations involved are massively parallel. At some point, however, computationalists and connectionists started fighting, mostly because computationalists like to think of brain states as symbolic representations and connectionists don’t.

I personally think that the two paradigms are quite compatible, if taken with a grain of salt. If we view the brain as a computational system, it is true that these computations are driven by the connections between neurons. The distinction between hardware and software, however, is not as clear-cut as in electronic computers. As stated earlier, the brain’s “hardware”, namely its components and their connections, is heavily shaped by evolutionary optimization and constrains the space of possible computations. However, the strength of connections can be changed at runtime (through learning), and some neurons can even be completely disabled or newly introduced. These alterations can depend on the brain’s use, i.e. its computations or “software”, which in turn is subject to a new set of constraints after every physiological change. Hardware and software in the brain are therefore deeply interdependent and much more closely intertwined than in electronic computers.
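As a toy illustration of this interdependence, consider a single linear unit whose connection strengths (the “hardware”) are rewritten by its own activity (the “software”) via a Hebbian update. The numbers and the learning rate here are illustrative only:

```python
import numpy as np

def fire(weights, pre):
    # Post-synaptic activity of a simple linear unit.
    return float(weights @ pre)

def hebbian_update(weights, pre, lr=0.1):
    # Hebb's rule: connections between co-active neurons are strengthened,
    # so running the computation alters the very substrate that runs it.
    post = fire(weights, pre)
    return weights + lr * post * pre

w = np.array([0.2, -0.1, 0.4])   # initial connection strengths
x = np.array([1.0, 0.0, 1.0])    # a recurring input pattern
for _ in range(5):
    w = hebbian_update(w, x)
# Weights on the active inputs have grown; every future computation is
# now constrained by this altered connectivity.
```

After each update, the "hardware" that the next computation runs on is different, which is exactly the feedback loop described above.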

The process of optimizing the underlying computing architecture through evolution and then tuning the actual computations through learning seems to work well for human brains and has even spawned some successful machine learning algorithms (see DeepMind’s work on population-based neural network training). Moreover, other work at DeepMind has shown that combining computationalist with connectionist ideas can also lead to very interesting models (see the Neural Turing Machine and the Differentiable Neural Computer). These observations suggest that the two theories should not be seen as competitors, but rather as two sides of the same coin.
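To give a feeling for this evolve-then-learn loop, here is a toy sketch loosely inspired by population-based training — my own simplification, not DeepMind’s actual algorithm. Each population member does gradient descent on f(x) = x² (the “learning”), and every few steps the members copy the best member’s learning rate and perturb it (the “evolution”):

```python
import random

def train(pop_size=4, steps=30, seed=0):
    rng = random.Random(seed)
    # Each member: a parameter x to be learned, a learning rate to be evolved.
    pop = [{"x": 5.0, "lr": rng.uniform(0.01, 0.5)} for _ in range(pop_size)]
    for step in range(steps):
        for m in pop:                              # inner loop: learning
            m["x"] -= m["lr"] * 2 * m["x"]         # gradient step on f(x) = x^2
        if step % 10 == 9:                         # outer loop: evolution
            best = min(pop, key=lambda m: m["x"] ** 2)
            for m in pop:
                if m is not best:
                    m["x"] = best["x"]                            # exploit
                    m["lr"] = best["lr"] * rng.uniform(0.8, 1.2)  # explore
    return min(m["x"] ** 2 for m in pop)

print(train())  # final loss of the best member, close to 0
```

The two nested timescales — slow selection over configurations, fast learning within each — mirror the evolution-plus-learning scheme described above, just on a trivially small problem.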

So why does Siri Hustvedt dislike this model? One of her concerns was that it cannot explain many cognitive functions, like consciousness and emotions. While this may be true, it is also worth noting that consciousness is a notoriously hard term to define or measure. Whenever people come up with a working definition, however, they often find interesting clues in neuroscience. Emotions, on the other hand, are mostly mediated by hormones, which can act as neuromodulators, i.e. they can change the way that neurons work. This is another complication of the brain’s hardware, namely that it is immersed in a constantly changing chemical environment that can alter its behavior. One of my professors once compared emotions to holding a lighter beneath a computer’s mainboard. They clearly affect cognition, but not on the level that our computational model concerns. This argument therefore reminds me of the creationists’ claim that the theory of evolution cannot explain the origin of life. That is true, of course, but the theory of gravity also cannot explain why birds fly. It is simply outside the model’s scope.

Returning to our definition of successful models from above, we now have to ask ourselves whether we can consider computationalism successful. It clearly does not explain the “whole truth” (because no model does), but it provides an abstract explanation of a complex system while making some simplifying assumptions. So the question is: is the model useful? On the one hand, we can safely say that, in the manner of bionics, the model has recently inspired many high-performing machine learning algorithms, and that new insights from the neurosciences still lead to interesting algorithmic discoveries. On the other hand, research on artificial neural networks has in turn allowed scientists to make more accurate predictions about how actual brains work, in macaque monkeys and even in humans. Moreover, computational models of the human brain could help in the treatment of neurological diseases like epilepsy, Alzheimer’s, Parkinson’s and even psychiatric conditions.

Given these observations, I would readily agree that computationalism is not a perfect model of the mind, but I would definitely argue that it is useful and illuminating. For many complex cognitive processes it may well be the best model we have right now. And even though I can understand the disappointment in the model’s shortcomings, I would like to challenge any critic to come up with a better model that provides more utility with the same parsimony of assumptions. Because as Sam Rayburn so aptly said:

A jackass can kick a barn down, but it takes a carpenter to build one.

If you liked this story, you can follow me on Medium and Twitter.
