The Morality of Artificial Intelligence

Why motorcyclists don’t brake for squirrels, and other ethical curiosities

--

There is a growing debate in the technology community about artificial intelligence (AI) and whether we have the proper precautions and models in place for it to replace human intelligence in certain situations. If an autonomous car being driven by AI suddenly careens out of control due to some unforeseen force, what will the car do when faced with a moral decision, such as choosing whether to hit a group of adults or a toddler in a crosswalk? And what kinds of ethics are we building into our AI systems to deal with these kinds of situations? At sparks & honey we’re exploring these big, thorny questions on a daily basis and thinking about the potential implications for brands and companies.

But beyond whether we should, one has to wonder how we could: will it ever be truly possible to program machine ethics into existence? And what do brands need to be aware of before they program a chatbot for customer service or release an Alexa-powered activation for bartending at home?

As humans we each have a lifetime of experience that we draw upon at a moment’s notice. That moral sense isn’t necessarily driven by religion (although for some it draws heavily on the norms of their particular spirituality) but more often by the mores of a society and a lifetime of acculturation. Before they’re even able to speak, children begin to learn what’s right and what’s wrong, and by the time we are driving cars we understand the basics of ethical behavior and the expectations our parents, communities and society have for us. It would never occur to me to drive on the sidewalk in the course of everyday life, but you can bet that I’d jump the curb and hit a lamppost if it meant avoiding a collision with a pedestrian.

However, some ethical decisions can be situational. I would never, for example, attempt to run over a small animal with a vehicle or bike. But as a motorcycle enthusiast I know recently described it, you learn when you begin to ride what to do in certain situations to keep yourself and those around you safe. “Let’s say you see a squirrel dart into the road immediately in front of your cycle on a highway, and you’re going at a fairly high speed,” he explained. “Your natural inclination would be to slam on the brakes, but if you do that you’re more likely to be thrown from the bike or cause a much larger accident than if you simply sped up.” The theory is that at a higher speed you’re more likely to deflect the squirrel with the front wheel, with less injury to the squirrel, yourself, and everyone else on the road, than if you braked hard and went tumbling into the roadway.

“Squirrel” by flickr user likeaduck

How do we bake these kinds of snap-decision circumstances and “least worst” scenarios into artificial intelligence programs and the technology that surrounds our daily lives? And even if you could predict every last possible ethical quandary that might be faced by AI, what would you tell it about the value of a life and the snap decisions that must be made?
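It’s tempting to imagine that such a rule could simply be written down. Purely as a thought experiment, here is a minimal sketch in Python of one naive approach: score every available action by the expected harm of its possible outcomes, then pick whichever scores lowest. Every action name, probability, and harm score below is invented for illustration; no real autonomous system reduces to a lookup table like this, and assigning those harm numbers is precisely the unsolved ethical problem.

```python
# Toy "least worst" decision rule: choose the action with the lowest
# expected harm. All actions, probabilities, and harm scores are
# made up for illustration only.

# Each action maps to a list of (probability, harm_score) outcomes.
# Harm scores are in arbitrary units; deciding what they should be
# is exactly the ethical question this article is asking.
ACTIONS = {
    "brake_hard": [(0.7, 80), (0.3, 10)],  # e.g. rider thrown vs. clean stop
    "speed_up":   [(0.9, 5),  (0.1, 40)],  # e.g. squirrel deflected vs. clipped
    "swerve":     [(0.5, 60), (0.5, 20)],
}

def expected_harm(outcomes):
    """Probability-weighted harm across an action's possible outcomes."""
    return sum(p * harm for p, harm in outcomes)

def least_worst(actions):
    """Return the action that minimizes expected harm."""
    return min(actions, key=lambda a: expected_harm(actions[a]))

if __name__ == "__main__":
    for name, outcomes in ACTIONS.items():
        print(f"{name}: expected harm = {expected_harm(outcomes):.1f}")
    print("least worst action:", least_worst(ACTIONS))
```

Run against these made-up numbers, the sketch “chooses” to speed up, echoing the motorcyclist’s instinct. But notice that the answer was smuggled in through the harm scores: deciding how much a squirrel’s life weighs against a rider’s is the part no amount of clean code can settle.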

Programming the Internet of Things to have a conscience is one thing; does your fridge really need to have feelings about the fact that you’re still out of milk? It gets much more complicated when we apply the questions to autonomous vehicles, which are currently being road-tested by Tesla, Google and others. If a preventable accident does happen, will the vehicle, the manufacturer, or the programmer be held accountable? And what does that mean for the future of artificial intelligence?

It’s something we’re going to have to face as a society sooner rather than later, so the more we ask smart questions about machine and AI ethics now, the better off we’ll all be in the future.

At least until the AI systems we so naively build take over the world…

Update, 8/15/2016:
The MIT Media Lab’s Scalable Cooperation group released Moral Machine, a game that not only poses these questions but aims to crowdsource what a machine should do in the machine-ethics situations created by driverless cars.
