Intelligent Vigilance in the Age of AI

Photo by Tara Winstead from Pexels

This talk was delivered as part of an Intel TEDxOcotillo event; please find the recording of it here.

The number of times I have relied on smart algorithms to think for me in just the past few minutes is striking. From using facial recognition to log in, to my Teams microphone filtering out background noise and Microsoft Word correcting my grammar, AI is there to think and act for me so I can present the best version of myself. This isn't just for administrative tasks — smart algorithms are ubiquitous, whether in self-driving cars, healthcare, robotics, policy, or education.

Perhaps more important than asking "Where?" is asking "How?" these algorithms are being used. For an increasingly large number of smart algorithms, the answer to "How?" is becoming eerily straightforward: us. These algorithms are designed to help us make decisions by harnessing data from our behaviors and trends. They study us, muttering to themselves, "Hmm, Ria from Account ID 98220 just purchased a colorful yoga mat; based on her purchasing history, let's recommend these sparkly shoes."

Everything is a data point in the eyes of AI and its creators, from how much time we spend on a page to the types of people we associate with. In this talk, I'm going to dive into three ways AI is influencing us: the way we see the world, the way the world sees us, and perhaps most importantly, the way we see ourselves. I will then talk about what we should be doing to understand and improve ourselves in the age of algorithms. Let's start with the first way AI influences us. AI is a content creator, and the quality of the outputs smart algorithms generate just keeps getting better.

Today, there is a lot of concern about algorithms being used this way to manipulate and deceive us. One such technique is known as DeepFakes, where AI simulates video and audio to make it seem like someone, such as a presidential candidate or a celebrity, said or did something they never did in real life. Across social media posts and review sites, it can be challenging to figure out whether a bot or a human is behind a comment, like, or review.

The good news is that it's still challenging to get AI models to create consistently high-quality content, and for now it's relatively easy for humans to recognize AI-generated content. Language models write about real-life unicorns, and simulated faces in DeepFakes show strange behaviors like overly smooth skin or unnatural blinking. Engineers are also building AI to recognize AI-generated content. In the future, though, it won't be feasible to constantly re-evaluate every piece of information. We may be able to fact-check fake news, but how do we verify the existence of an AI-generated image of a human being if the creators of AI models aren't willing to tell us?
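
To make the blinking cue concrete, here is a toy heuristic, not any real detector: it counts blinks from per-frame eye-openness scores, which are assumed to come from some face-landmark model, and flags clips whose blink rate falls outside a loosely "typical" human range. The thresholds and the range are rough assumptions for illustration only.

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions in per-frame eye-openness scores."""
    blinks = 0
    previously_closed = False
    for openness in eye_openness:
        closed = openness < closed_threshold
        if closed and not previously_closed:
            blinks += 1
        previously_closed = closed
    return blinks


def looks_suspicious(eye_openness, fps=30, normal_range=(8, 30)):
    """Flag a clip whose blink rate (per minute) falls outside a rough human range."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    blinks_per_minute = count_blinks(eye_openness) / minutes
    return not (normal_range[0] <= blinks_per_minute <= normal_range[1])


# Example: a 60-second clip in which the eyes never close gets flagged.
print(looks_suspicious([1.0] * (30 * 60)))  # True
```

A real detector would combine many cues at once; the point here is only that today's artifacts are often simple enough that even a crude check can catch some of them.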

Secondly, if you've ever used a social media site that has advertising, or Netflix, Amazon, or any service that tells you "this is what you want" before you make any actual effort to find that item or product, you've seen a recommender system in action. There's an interesting term from behavioral economics, the "nudge", which describes indirect suggestions like these that motivate us to make certain decisions. But just as push comes to shove, "nudges" start to turn into "hyper-nudges" when big data and AI come into the picture.

Now, there are real advantages to AI helping us make decisions, leading to empowered human actors and value co-creation. Think of AI suggesting a quick and easy thank-you reply to a text in the middle of a busy day. But recommender systems are created by naturally biased humans. There are different types of biases, from gender and racial stereotypes to behavioral biases such as confirmation bias, our tendency to seek information that confirms our current beliefs and to ignore contrary perspectives. Personalization algorithms are designed to learn from us, and they can keep reflecting our biases back to us as filter bubbles. New research suggests recommender systems could help users break out of their filter bubbles and be more explorative in their product choices, but these systems may still encourage similar preferences across the user base.
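
To see how personalization can reinforce a filter bubble, here is a toy sketch, not any production recommender, assuming a made-up purchase history: it recommends whichever items co-occur most often with what a user already bought, so past behavior keeps narrowing what gets shown next.

```python
from collections import Counter
from itertools import combinations

# Hypothetical shopping baskets, invented purely for illustration.
purchase_history = [
    {"yoga mat", "sparkly shoes"},
    {"yoga mat", "water bottle"},
    {"yoga mat", "sparkly shoes", "headband"},
    {"running shorts", "water bottle"},
]

# Count how often each pair of items appears in the same basket.
co_occurrence = Counter()
for basket in purchase_history:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1


def recommend(past_items, k=2):
    """Score unseen items by how often they co-occur with the user's past purchases."""
    scores = Counter()
    for owned in past_items:
        for (a, b), count in co_occurrence.items():
            if a == owned and b not in past_items:
                scores[b] += count
    return [item for item, _ in scores.most_common(k)]


# Ria just bought a yoga mat, so she is nudged toward what other yoga-mat buyers
# bought, and never toward anything outside that bubble.
print(recommend({"yoga mat"}))  # e.g. ['sparkly shoes', 'water bottle']
```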

On the topic of bias, it is argued that over-reliance on AI may lead to losing critical thinking capabilities — the final point, how AI influences the way we see ourselves. AI has become an extension of our persona; it exemplifies and expands on our own thought processes, often without needing our effort. Again, a simple example is buying a product. We could buy sparkly shoes just because an AI recommender system tells us it did all the analysis and decided, based on the earlier yoga mat purchase, that we want them. It's certainly easier to let AI think for us like this and tell us what we want before we even search for it. Here, AI is stepping into the weighing of pros and cons that decision-making requires.

I've talked about the ways AI can influence us, but what are some solutions to combat blind faith in algorithms? Circling back, we know it's challenging to constantly re-evaluate pieces of information from our environment, so maybe we should start there. For me, this is where it becomes a dilemma, because as an engineer, I like to measure things and have some sort of concrete definition. While researchers have proposed metrics for understanding user welfare and beliefs, it's all still very tentative. Identifying how AI can influence us, and how we can analyze that influence with some sort of risk-management framework, is quite challenging.

We can draw on two overarching strategies from psychology for analytical judgment. These strategies have been tried and tested in high-stakes decision-making scenarios and are applicable here. Let's dive into the first, the analysis of competing hypotheses. In his book "Psychology of Intelligence Analysis", Richards Heuer describes some interesting ideas for training ourselves in resiliency and critical thinking in dynamically changing environments.

First, identify the hypotheses: for example, whether a video is a DeepFake or not. Then ask what the evidence and arguments for and against each hypothesis are. Say the video shows an influential person making political comments. A few issues that can be caught are the person's eye color suddenly changing, or the video circulating on major news and social media sites with a lot of buzz around what was said, yet never being released through official sources. We also know that DeepFakes often require a combination of talented, extensive human effort and AI, so they may not be that easy to produce.

Next, create and refine a matrix to analyze how helpful each piece of evidence is. Unsubstantiated conspiracy theories from news sites aren't helpful because they comment on irrelevant details. The visual glitches, on the other hand, are worth paying attention to since they support the hypothesis that the video is a DeepFake; they have diagnostic value. Also attempt to disprove the hypotheses and the evidence. Suppose there really wasn't a glitch in the video and that piece of evidence was wrong: how does that change our conclusion? It's also important to question why that person would say such a thing in the first place, since there might be factors other than AI shaping their outreach. Finally, we keep watching for future observations indicating that events are taking a different course than expected.
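
To make the matrix concrete, here is a minimal sketch of the competing-hypotheses analysis described above, with made-up evidence and illustrative +1/0/-1 consistency scores. Following Heuer's method, we prefer the hypothesis with the least evidence against it, and we re-score when a piece of evidence turns out to be wrong.

```python
hypotheses = ["video is a DeepFake", "video is genuine"]

# +1: evidence is consistent with the hypothesis, -1: inconsistent, 0: no diagnostic value.
evidence = {
    "eye color suddenly changes":          {"video is a DeepFake": +1, "video is genuine": -1},
    "not released via official sources":   {"video is a DeepFake": +1, "video is genuine": -1},
    "unsubstantiated conspiracy theories": {"video is a DeepFake":  0, "video is genuine":  0},
    "DeepFakes are hard to produce well":  {"video is a DeepFake": -1, "video is genuine": +1},
}


def inconsistencies(matrix, hypotheses):
    """Count how many pieces of evidence argue against each hypothesis."""
    return {h: sum(1 for scores in matrix.values() if scores[h] < 0) for h in hypotheses}


print(inconsistencies(evidence, hypotheses))
# {'video is a DeepFake': 1, 'video is genuine': 2}

# Disproving evidence: if the "glitch" turns out not to exist, drop it and re-check.
evidence.pop("eye color suddenly changes")
print(inconsistencies(evidence, hypotheses))
# {'video is a DeepFake': 1, 'video is genuine': 1}, so the conclusion is now far less clear.
```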

Another interesting domain to draw from is psychological warfare. Now, I don't mean we should look at this as warfare between humans and AI, but between humans and the thought processes others are trying to impose through AI. The objective is to understand the impact of persuasive communications on changing or reinforcing attitudes. Some questions to ask are whether the message being presented seems to be obvious propaganda, whether the messaging influences us to act in a way that favors the message's sponsor, and whether aspects of the message make us irritable. Smart algorithms can propose decisions, but AI doesn't go ahead and make the decision for us. It's a question of reasoning and being alert to how we are being influenced in our decision-making. A vital note is that our feelings and emotions play an integral role in how we perceive and evaluate the recommendations AI provides.

I'll now conclude with a few insights from a philosophical perspective on the question: what do we mean by "alertness"? The focus is on "thought inertia": noticing that the future decision to buy sparkly shoes is being recommended right now by AI on the basis of that yoga mat purchase. Being alert is how we start reasoning and combat thought inertia. Critical thinking isn't just about judging information; it's also about judging the way information affects our behavior and detecting our behavioral patterns across past, present, and future situations.

AI is vital, and we are responsible both for pushing the boundaries of computer science and for thinking about the implications of our innovation. This talk only scratches the surface of the relationship between humans and algorithms, and I hope it sparks a meaningful discussion about what it takes to stay forward and brilliant in the age of algorithms. Thank you!

AI SW Architect and Evangelist at Intel, master’s in data science. Opinions are my own.