How moral are self-driving cars?

Examining the moral agency of AI


How do self-driving cars decide who to kill in a road accident, and how many?

Self-driving vehicles are now on our roads and increasing in number. Ranging from Minis to semi-trailer trucks, autonomous vehicles usher in a whole new age of road transportation in which human drivers can literally let go of the wheel, sit back and relax, and let highly sophisticated, artificially intelligent computer systems take control.

In taking control of the wheel, however, self-driving cars are also taking on a major responsibility: they must drive with the intention of protecting and saving our lives on the road. Further still, they will have to analyse and determine how many people could be killed in a fatal accident. How autonomous cars do this is more than just a technical question; it is a philosophical one: are self-driving cars moral agents?

Understanding moral agency

In the moment before a serious and possibly fatal road accident, how does a self-driving car decide what to do?

For humans, the act of driving demands that we constantly observe and judge road conditions with the aim of neither causing nor being caught in an accident. The cognition involved is complex, but for human drivers it rests on two key instincts: first, not causing an accident and killing other innocent people on the road, including pedestrians; and second, self-preservation, both for the driver and for every passenger in the vehicle. The latter is best achieved by avoiding an accident at all costs without subsequently causing another one.


For self-driving cars, however, things are quite different, because AI systems simply do not possess many of the cognitive abilities that humans do. As human beings we are innately moral creatures, with basic instincts that compel us to protect not just ourselves but also those around us from pain, injury and harm, not to mention death. Self-driving cars, on the other hand, have no concern for protecting our health and well-being or prolonging our existence. They possess neither a sentimental concern for life nor the instinctive compulsions that characterise human cognition and moral psychology. Because AI systems lack moral cognition and human-like psychology, self-driving cars cannot make decisions or take actions equivalent to the moral judgements that we as humans make in life-or-death situations. Whereas we are innately moral beings, self-driving cars are not: they are non-moral agents.

The moral predicament

The predicament is that every time we step into a self-driving car we are putting our lives in the control of non-moral agents that must deal with morally demanding situations. Worse still, we are placing not just our own lives but the lives of our loved ones, not to mention everybody else on the road, into the hands of a computer system that possesses no moral psychology and cannot match the moral cognition, decision-making skill and evasive action that humans would instinctively bring to a life-or-death situation. In short, autonomous cars are not programmed with any kind of moral code and therefore cannot behave with any moral concern.
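
To make that contrast concrete, here is a deliberately simplified, hypothetical sketch of the kind of cost-minimising logic an autonomous vehicle's planner might use. The manoeuvres, weights and risk numbers are invented purely for illustration and do not describe any real vendor's system; the point is that the "decision" is simply whichever option yields the lowest score, with no moral reasoning anywhere in the loop.

```python
# Hypothetical, highly simplified sketch: a planner scores candidate
# manoeuvres with a hand-tuned cost function. Nothing here "reasons"
# morally; it only minimises a number that engineers chose to compute.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    collision_probability: float   # estimated from perception/prediction
    expected_injuries: float       # expected number of people harmed
    passenger_risk: float          # risk to the vehicle's own occupants

# Illustrative weights; real systems differ and are not public.
W_COLLISION = 10.0
W_INJURIES = 100.0
W_PASSENGER = 50.0

def cost(m: Manoeuvre) -> float:
    """Weighted sum of estimated risks: a numerical score, not a moral judgement."""
    return (W_COLLISION * m.collision_probability
            + W_INJURIES * m.expected_injuries
            + W_PASSENGER * m.passenger_risk)

# Invented candidate options and risk estimates.
candidates = [
    Manoeuvre("brake hard", 0.30, 0.2, 0.4),
    Manoeuvre("swerve left", 0.10, 0.5, 0.1),
    Manoeuvre("maintain course", 0.90, 1.5, 0.9),
]

# The "decision" is whichever option scores lowest.
best = min(candidates, key=cost)
print(f"Chosen manoeuvre: {best.name} (cost={cost(best):.1f})")
```

Whether such a weighted score captures anything we would recognise as moral concern is exactly the question this article is asking.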


Better than human morality?

Some technologists argue that, with further programming and training, AI systems can and will learn to predict the likelihood of road accidents and even achieve cognitive and behavioural capabilities for avoiding them that are similar to, if not better than, those of humans. Some will even argue that AI will soon become better than humans at making split-second decisions in life-and-death situations. Certain situations are, however, worse than others, and they become all the more complex when multiple people are involved. Humans are by no means perfect decision makers, let alone perfect moral beings, but the one thing we do possess that AI systems do not is the ability to make complex moral decisions that minimise the potential loss of life in dangerous situations. AI systems have yet to prove they possess such life-saving decision-making skills and behavioural capabilities.

So why then are we allowing these non-moral systems onto our roads?

Blind faith? Naivety? Stupidity? Maybe sheer optimism? Whatever the reason, one thing is for certain: more and more self-driving cars will be hitting our city streets and travelling our highways. We must now come to realise the moral limitations and complications of putting AI systems into morally demanding situations, especially those in which human life is lost behind the wheel of a self-driving car. How do we safeguard and protect ourselves in a world where we are being transported by non-moral AI systems that do not care whatsoever whether one person or 100 people die in an accident?

This is a moral predicament that we must figure out for ourselves, and figure out fast, because one thing is for sure: artificial intelligence cannot and will not figure it out for us.
