Machine Learning Is The Most Dangerous Trend In Tech (Though Not For The Reasons You Think)

AXEL
Becoming Human: Artificial Intelligence Magazine
10 min read · Aug 15, 2017


A dusty wind blows across an apocalyptic wasteland…

The end result of trusting technology we don’t fully understand. Killer robots stalk the ruined landscape. And Arnold Schwarzenegger appears, in undoubtedly the easiest role of his career.

What our minds jump to when we hear the words “machine learning” and “danger” together isn’t prophetic (or even realistic). But popular culture has crammed our brains full of cautionary tales of AI gone bad.

Not without basis. Some of the most influential minds in science and technology have issued the same warning: Elon Musk, Stephen Hawking, Bill Gates.

To quote Musk:

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah, he’s sure he can control the demon. Didn’t work out.”

He’s not crazy to sound the alarm.

In the long term, as computers get smarter and smarter…

There are a TON of very scary scenarios that MIGHT lead to something like the Matrix. Or the Cylons. Or HAL 9000.

The good news: most AI researchers agree that’s a long way off.

In a study surveying four different groups of AI experts, the median respondent gave a one-in-two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine-in-ten chance by 2075.

It’s something to worry about — just not yet. Today’s machine learning systems are generally extremely “narrow.” They do one thing very well, and everything else awfully.

And yet, it’s STILL the most dangerous trend in tech.

For an entirely different reason.

The future with ubiquitous machine learning might not be Skynet… but it might look an awful lot like 1984.

Let me explain.

Machine learning works by studying large amounts of data, essentially picking out recognizable patterns and making decisions based on those patterns. The more data you feed an algorithm, the more it can “train” itself.
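To make that concrete, here’s a minimal sketch in Python (scikit-learn on synthetic data, so purely illustrative, not anything a real company runs). The same simple model, given progressively more examples, tends to score better on data it has never seen:

```python
# A minimal sketch (not any company's actual system): train a simple
# classifier on growing slices of a synthetic dataset and watch the
# held-out accuracy trend upward as it sees more examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "data collected about users"
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1_000, 10_000):
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>6} examples -> accuracy {model.score(X_test, y_test):.3f}")
```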

The more stuff you feed it, the smarter it grows, the better it works.

Actually, I should clarify…

The more it knows about YOU, the better it works.

The implications for data privacy should disturb you.

Think of how machine learning plays a role in your life today.

If you have an Android phone or an iPhone, you’ve probably used Google Assistant or Siri at some point or another.

Why?

Because it made your life easier in some way. Gave you directions to someplace you’ve been before. Reminded you of important calendar dates right when you needed it. Showed you scarily accurate suggestions for things you’d like.

All of that required a heck of a lot of data to be collected on you.

Now, the dangers of machine learning don’t necessarily lie in how companies collect data.

Google, Facebook, and Amazon are already MASTERS of learning about you, and probably know more than enough.

Let’s be frank: data privacy, for most people, has already gone the way of the dodo.

I know they know far too much about me, for instance.

They know what kind of shoes I wear. What my politics are. Where I like to eat, and who my friends are. They probably know more about WHAT I like than I do.

And this leads to creepy moments like starting a new social media account and instantly getting bombarded with suggestions to add every relevant person I know.

But it goes far beyond creepy with machine learning.

AIs are good at sorting through your data and coming to conclusions based on what they know about you. Which is how they can suggest places you might like to eat, concerts you might want to go to, friends you might want to reconnect with.
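To give a flavor of how that sorting works, here’s a toy sketch of one common approach, content-based filtering. Every feature name and number below is invented for illustration:

```python
# Toy sketch of content-based recommendation: score candidates by how
# closely their feature vectors match what's known about the user.
# All names and numbers here are invented for illustration.
import numpy as np

# Features: [likes_spicy, likes_cheap, likes_late_night]
user_profile = np.array([0.9, 0.2, 0.8])   # inferred from past behavior

restaurants = {
    "taco truck": np.array([0.95, 0.9, 0.9]),
    "steakhouse": np.array([0.2, 0.0, 0.3]),
    "ramen bar":  np.array([0.7, 0.5, 0.95]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank every candidate by similarity to the user's inferred tastes
for name, feats in sorted(restaurants.items(),
                          key=lambda kv: -cosine(user_profile, kv[1])):
    print(f"{name}: {cosine(user_profile, feats):.2f}")
```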

And yet, today’s algorithms are absolutely amateurish compared to what we’ll have ten, twenty years from now.

Here’s why that’s TERRIFYING.

There’s a concept in the study of animal behavior called “fixed action patterns.”

Essentially, it states that certain behaviors are programmed into an animal’s brain: presented with a specific external stimulus in certain circumstances, the animal will ALWAYS perform a specific action.

For instance, say a female goose sees an egg outside her nest.

That’s a trigger.

Once this trigger is flipped, she will ALWAYS begin a specific rolling motion with her neck to bring it back inside the nest. The motion continues even IF you remove the egg mid-retrieval. She’ll keep bobbing her head at the no-longer-there egg.

It’s a completely instinctual action that has nothing to do with conscious thought.

As humans, we like to believe that our intellect and willpower are responsible for every choice we make. Except it’s very clear that humans also have certain instinctual behaviors of our own that are almost impossible to resist.

Take me for instance.

I have a weakness for chocolate chip cookies.

I’m hungry. I just finished a hard workout and feel like I deserve a reward. Then, I’m walking by the kitchen and notice a plate of warm chocolaty goodness.

You don’t need to be a behavioral psychologist to figure out what I’m going to do.

That’s a trigger for me.

Companies like Google already know more about you than anyone else. But with the sophisticated machine learning we’ll see in ten to twenty years, they’ll be able to build a complete psychological profile of you and people like you.

They’ll know all your ‘triggers’ and how they work.

Don’t believe it?

Consider good salespeople.

Most people who work in business will know there is an astronomical difference between an under-performing salesman and a top-tier performer. Selling the same product to the same group, one salesperson can make millions of dollars more than the other.

Why?

It’s not that one person is so much more charming and likable than the other.

Rather, it’s because the best salespeople can quickly identify the people most likely to buy. When they’re most likely to be receptive to a pitch. And the exact steps needed to maximize the chances of a purchase.

In other words, they know the ‘triggers’ they need to hit.

They gain this (mostly) through a ton of experience. Here’s the thing…

Machine learning will turn every major company into the best salesperson on Earth.

With the ridiculous amount of data at their disposal, AIs will be able to quickly identify trends and patterns that would be nearly impossible for humans to spot.

They’ll be able to know, for instance, that a person like me has a dangerous tendency towards baked sweets after a workout. Or maybe that a man between the ages of 18 and 25, living in New York, who has previously shown interest in superheroes, with a job, is practically guaranteed to watch the new “Avengers.”
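Here’s a toy version of that kind of pattern-spotting, on completely made-up data, just to show that rules like “superhero fan in New York will watch the new Avengers” fall out of examples automatically instead of being hand-written:

```python
# Toy sketch: fit a tiny decision tree on made-up "audience" records and
# score the profile from the paragraph above. The data is invented; the
# point is only that patterns like this are learned from examples.
from sklearn.tree import DecisionTreeClassifier

# Features: [age, lives_in_ny, likes_superheroes, employed]
X = [
    [22, 1, 1, 1], [19, 1, 1, 1], [24, 1, 1, 1],  # watched the movie
    [45, 0, 0, 1], [60, 0, 0, 0], [33, 0, 1, 0],  # didn't
]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

profile = [[21, 1, 1, 1]]  # 18-25, New York, superhero fan, employed
print(model.predict_proba(profile))  # -> probability of "will watch"
```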

That seems pretty innocuous so far. So what if machine learning means you get more relevant ads, right?

Except, there’s a difference between MORE relevant and TOO relevant.

In Asia, there’s a mechanic called “gacha” that’s ubiquitous in mobile games.

Generally, a player will spend money for a chance to get a rare character they like, or a special item that will make the game easier for them.

Think of it as kind of an online slot machine.

Companies spend millions and millions of dollars researching the exact ‘triggers’ that get people to spend money on these kinds of systems. Then they utilize them to full effect by designing systems to hit those triggers.

The result is that many kids and young adults spend THOUSANDS of dollars each on mobile games to get that character or item they really want. It’s so bad that China now requires companies to publish the rates at which these characters can be acquired. There are also addiction clinics.
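The math behind that “online slot machine” is brutal. A quick back-of-the-envelope check (the drop rate and price below are invented, but in the range these games actually publish):

```python
# Back-of-the-envelope gacha math (illustrative numbers): with a 1% drop
# rate, the chance of getting the character in n pulls is 1 - 0.99**n.
rate = 0.01          # published per-pull rate for the character
cost_per_pull = 3.0  # dollars; invented figure

for n in (10, 100, 300, 500):
    p = 1 - (1 - rate) ** n
    print(f"{n:>3} pulls (${n * cost_per_pull:>6.0f}): {p:.0%} chance")

# Expected pulls to the first success is 1/rate = 100, yet nearly 37% of
# players are still empty-handed after spending that "expected" amount.
```

That long tail of unlucky players is exactly where the spending gets out of hand.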

They have, in fact, made spending money addictive, because of the way they ask for it.

Now imagine how much MORE devastating this would be if they knew the exact character you wanted before you played the game, knew what price would be most likely to get you started “spinning the wheel,” and knew when you got your latest paycheck.

The point is, when you know so much about a person that you know exactly which buttons to hit…

At some point it stops being advertising and starts to resemble compulsion.

Sophisticated machine learning plus massive amounts of your data means companies will identify your ‘triggers’ very, very quickly.

They would have tremendous power to not only tempt you to perform certain actions (like buying things), but would also be able to predict your overall behavior. Where you like to go, who you’re spending time with, all of that.

Machine learning also increases the capabilities of technologies like face detection, speech recognition, and emotion classification.

Data privacy might just become a thing of the past.

The problem is not that companies with these abilities will use your data for nefarious ends. I’m sure Google and Facebook and Amazon are all very careful about what they do with your data.

The problem is they have this capability at all.

This is powerful stuff, and it can be badly abused. And even if they’re being careful today, what about 10 years from now? 20?

Worse, consumers will willingly give these companies more and more of their data.

Consumers will DEMAND they take it, actually.

Why?

Because machine learning helps make certain activities WAY more convenient. After all, I WANT better suggestions on where I can go for lunch. Or stories that are more relevant to me popping up on my phone. Or knowing which dates I need to remember.

People will willingly disregard privacy if they feel they get enough out of it.

Take how visitors to Disney World are asked to wear a MagicBand that tracks their every move. And enjoy it!

And I should make it clear that the trend towards machine learning can be extremely positive.

Your everyday life is full of inefficiencies, and this trend will eliminate a lot of them. It will reduce the number of steps between where you are and where you want to be.

There’s nothing wrong with giving your information to companies, if you know what they’re planning to do with it.

The thing is, as a result of everything mentioned above, every year…

Every piece of data you provide becomes WAY more valuable to companies.

What time you get up in the morning, for instance.

Right now, it probably wouldn’t bring anyone much value to know that.

In the future, they’ll be able to fit that in with a wider picture of your behavior and your habits. It might be the missing piece that lets them completely predict how you behave.

More and more of the devices we bring into our lives can collect information on us as well.

The Apple Watch. Amazon Echo. Every smartphone out there.

Which means not only will companies have a greater incentive to collect our data — they will have many more ways of doing so. And they’ll be able to do MORE with the same piece of information. This will naturally accelerate as these systems become smarter.

The point is…

We need to start a conversation about the dangers of machine learning to data privacy.

Like every other trend in tech, we have many influential people and companies worshiping at the altar of machine learning.

Precious few talk about the very real risks to privacy that come with it. Some of those risks are starting to appear, even today.

Share your personal habits with these companies if you want to. But do it with your eyes open.

Understand the danger of these companies collecting more information. Of what they’ll be able to learn from it. And how they’ll be able to use it to induce behaviors with a level of certainty that simply wasn’t possible before.

It’s all part of a disturbing trend where companies demand more data from you to make better products and services.

Is the future really a compromise between privacy and convenience? Where does that end?

If we’re not careful, this will lead to…

A future where companies have total knowledge of everything we do and know the ‘triggers’ to make us do what they want.

One that “Big Brother” would envy.

If that’s not terrifying to you, then you have a much greater faith in the inherent goodness of corporations than I do.

To prevent that, to maintain some level of privacy…

We need to get people talking.

About how these companies collect information. About what they can learn from it. About how they use it.

And how that will change as algorithms become more and more sophisticated.

Instead of just hopping on the latest Silicon Valley trend and drinking the Kool-Aid, we need to ask questions and get answers.

We need to DEMAND them.

Otherwise, it might be a very, very scary world in a few years.

Liked what you just read?

Do you share our vision of making life easier for people WITHOUT compromising their privacy?

➞ Click the 👏 below to CLAP for this piece.

➞ SHARE our story with people you think will benefit from it.

➞ Get the latest updates — FOLLOW our blog, Reddit, Facebook, or Twitter.

We’re working hard to bring you great content. If you have something you want us to write about, let us know in the comments below!

Written by: Geoffrey Yu

