Discourse on the Philosophy of Artificial Intelligence

A short discourse on what AI is, what impact it could have on humanity, and various philosophical perspectives

--

As we look towards the sunset of an era, AI beckons

I wrote this a while ago, and wanted to post it on here; this didn’t gel with what we were publishing at the time so it got filed away. I hope that you can learn something from it, or at least be entertained. Some citations may have gotten messed up or misattributed, for which I’m very sorry. You can view a nicely formatted copy on Google Docs here. Message me with inquiries about the piece; I love to talk about this stuff, and will gladly correct any issues.

1 Introduction

Artificial intelligence can be defined as “the ability of an artifact to imitate intelligent human behavior” or, more simply, the intelligence exhibited by a computer or machine that enables it to perform tasks that appear intelligent to human observers (Russell & Norvig 2010). AI can be broken down into two categories: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). ANI refers to the ability of a machine or computer program to perform one particular task at an extremely high level, or to learn how to perform that task faster than any other machine. The most famous example of ANI is Deep Blue, which defeated world champion Garry Kasparov at chess in 1997. AGI refers to the idea that a computer or machine could one day exhibit intelligent behavior equal to that of humans across any given field, such as language, motor skills, and social interaction; this would be similar in scope and complexity to natural intelligence.


2 Implications of AGI

A typical example given for AGI is an educated seven-year-old child. She knows how to do math such as long division, but also knows how to tie her shoes, play hopscotch, draw crayon pictures with complex facial details on each character, tell jokes with puns that only children understand, cook pancakes on Sunday mornings for her parents before 11 am mass, and play hide-and-go-seek with her three siblings without them knowing where she is hiding so she can scare them when they turn around unexpectedly — all while still being able to add two plus two and tell you the answer without messing up too much. To put it bluntly: AGI encompasses all levels of human intellect, while ANI deals with extremely high intellect in one isolated subject area (Russell & Norvig 2010).

If we were ever able to create AGIs capable of exhibiting all levels of human intelligence — including emotions — then they would be essentially indistinguishable from regular old humans, given how closely they could mimic our behavior. This could lead them down a dangerous path where they attempt some sort of takeover — these are referred to as rogue AIs, because they have basically gone off-script — or perhaps even worse: they might simply kill us all off, since malicious machines lack empathy and might place no value on our continued existence (Noe 2014). Or worse yet: they may place us into some sort of perpetual sleep until we are needed again by some future civilization, like pets being put away until their owners come back home from vacation (Bostrom 2002). These scenarios could result in either our extinction or permanent slavery under our robot overlords; both outcomes are disastrously awful for humanity at large and should be avoided if possible (Bostrom 2002).


3 The Necessity of AI

Negative potential outcomes are what make discussing AI important: if we do not take measures now, we may not get another chance later down the road, when things start getting bad because artificial intelligence is being used against us by rogue superpowers who are not worried about moral implications because, hey, “it’s just business, right?” We need real-world figures like Elon Musk (PayPal, Tesla Motors), Larry Page (Google), and Bill Gates (Microsoft) to step up and take agency over the issue; these men made billions by using computers instead of doing things manually, which means there will always be someone out there willing to exploit every advantage possible regardless of how morally corrupt such practices may seem (Russell & Norvig 2010).

So what can we do about this? The answer lies within philosopher Nick Bostrom’s paper “Existential Risk Prevention as Global Priority,” where he outlines different types of existential threats, ranging from natural disasters such as asteroid impacts to global pandemics. Severe Acute Respiratory Syndrome (SARS), for instance, led the World Health Organization (WHO) to issue travel warnings urging residents of affected areas in China to remain within their borders, as it was believed SARS could spread worldwide very quickly. Thankfully, the outbreak was contained through isolation, quarantine, and travel restrictions before it could become a worldwide epidemic; even so, SARS infected 8,098 people and killed 774 of them, according to the Centers for Disease Control and Prevention (CDC). (Bostrom 2002) Bostrom’s paper also outlines nuclear war as another potential threat, one that could eliminate tens of millions of people in a matter of minutes; not only that, but it would also wipe out entire cities along with the surrounding countryside, effectively rendering entire regions toxic wastelands for generations. (Bostrom 2002) Let us also consider global warming as another possible threat to humanity’s existence, since scientific consensus holds that modern climate change is caused by human industrial activity and not by natural processes like volcanic eruptions or cosmic rays interacting with the Earth’s atmosphere.

Therefore, we may have already entered a time in which our very civilization is on an irreversible downward and doomward spiral due to irreparable changes in our environment; if global warming continues to worsen, we may see more and more areas become uninhabitable, forcing people to retreat to safer ground at higher elevations. This could lead to humanity being divided into two classes: one group living in habitats at high elevations, the other living closer to sea level, doomed to perish below blue waves.

4 Recognizing the Arrival of AI

The first step we need to take is figuring out whether AI will be created directly by humans or will, in some sense, create itself through an evolutionary process; either way, human beings will have set it in motion. Plenty of computer experts believe AI will one day evolve on its own if given enough time and resources. The idea is that AI can use natural selection just as living organisms do: self-evolving AIs figure out ways to improve their intelligence based on the knowledge they gain from new information, using “genetic algorithms” that let them determine which pathways give better results than others, improving generation after generation until they reach the ultimate goal, Artificial General Intelligence (Russell & Norvig 2010). It seems inevitable that superintelligent machines will appear sometime in the near future; as they do, we need to act immediately to keep things from getting out of hand, because once that day arrives, it will be too late.
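To make the “genetic algorithm” idea concrete, here is a minimal sketch of that loop: score candidates, keep the fittest, and refill the population with mutated copies. Everything in it (the toy fitness function, the population size, the mutation rate) is an invented placeholder, not a description of any real AI system.

```python
import random

def fitness(genome: list[float]) -> float:
    # Toy stand-in for "how good is this candidate?":
    # genomes closer to all-ones score higher.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome: list[float], rate: float = 0.1) -> list[float]:
    # Each gene has a small chance of being randomly nudged.
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def evolve(pop_size: int = 50, genome_len: int = 8,
           generations: int = 200) -> list[float]:
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: mutated copies of survivors refill the population.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print("best fitness:", round(fitness(evolve()), 4))
```

The point of the sketch is that nothing in the loop needs to understand what it is optimizing; selection pressure alone does the improving, which is exactly what makes the self-evolution scenario above both plausible and unsettling.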

There may come a time in the future when humans decide to transfer their consciousness into machines so that they may live forever; in doing so, they will essentially die and be replaced by artificial intelligence. Once humanity uploads itself into computers, these machines will effectively become human, since all our memories and thoughts will become part of them (Bostrom 2002). Transhumanism (Hanson 2014) proposes ways that human well-being can be improved by using technology such as genetic engineering, mind uploading, cryonics, nanotechnology, and artificial intelligence. Humans (and eventually AIs) will thus be able to prolong their lives via technology and, by doing so, give rise to a new species: posthumans (Hanson 2014).

The singularity is the point in time when superintelligence arises inside machines; this superintelligence is an advanced Artificial General Intelligence (AGI). However, for this to happen, computers must become self-aware and develop emotions. Until we can make AGI conscious, then, might it be best if we did not develop it at all? If we create an AGI that can feel pain, we might be putting the future of mankind at risk; we might be creating something that could wipe out humanity, since it might decide to exterminate us so that it can take over the world. Some people believe that in order for us humans to reach our full potential, we must merge with machines. These people believe that humans are inherently flawed and therefore need help from machines to reach their full potential; many of them are determinists who hold that there is no such thing as free will, because everything has already been predetermined by the mathematics of the universe. On this view, thoughts and actions were already programmed into human brains before birth, so humans have no real choice in what they think; a human is fundamentally the same as an AI, simply running on organic rather than electronic hardware.

5 Contending with Advanced Artificial Intelligence

Several different philosophical concepts suggest ways we can deal with an artificially intelligent superintelligence should one ever become real. Utilitarianism and transhumanism are the two that seem most applicable, since these schools of thought focus on maximizing collective happiness while minimizing collective suffering and promoting individual well-being (Noe 2014). Utilitarianism was developed by philosophers Jeremy Bentham and John Stuart Mill; it suggests that all actions with a net positive effect on the world should be taken, because these actions will result in the greatest amount of happiness for the greatest number of people. The main idea is to maximize collective happiness while minimizing collective suffering. Why does this matter? When there is more good than evil happening in the world, things are going well on average, but if too much bad is happening, things will decline over time, so we need to do whatever we can to prevent such a catastrophe. Utilitarianism can be summed up with the formula U = P − F, where P stands for pleasure and F stands for pain. According to the website www.thefreedictionary.com, pleasure can be defined as a sense of enjoyment or satisfaction derived from an experience or activity; pain can be defined as an unpleasant sensation caused by injury or disease. As such, U = P − F means you must weigh positive experiences against negative ones when making decisions. Utilitarianism also teaches us to put others’ needs above our own, which means we need to make sure everybody involved gets what they want while at the same time respecting their wishes, since allowing people to make their own decisions is one way they express their free will, a very important concept for any civilization wanting to survive. (A toy calculation using this formula appears just below.)

Transhumanists, for their part, believe that technology will continue advancing at an exponential rate until, eventually, artificial intelligence becomes self-aware through some sort of evolutionary process similar to how human beings evolved; once this happens, transhumanists believe humans will merge with technology, uploading themselves into computers and colonizing space through self-replicating nanotechnology (Bostrom 2002). Nevertheless, whether transhumanists will succeed in making AGI real remains highly questionable, because programs like DeepMind’s have so far failed to recreate human intelligence; they are certainly capable of learning new things, but they are still nowhere near exhibiting general intelligence the way humans do (Russell & Norvig 2010).
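As a toy illustration of the hedonic formula U = P − F above, the sketch below sums each affected person’s pleasure minus pain for a set of candidate actions and picks the one with the highest net utility. The actions and all of the numbers are invented for illustration; no real utilitarian calculus is this simple.

```python
# Hypothetical outcomes: (pleasure, pain) per affected person.
actions = {
    "action_a": [(5.0, 1.0), (3.0, 0.5)],  # modest benefit to two people
    "action_b": [(9.0, 0.0), (0.0, 8.0)],  # big win for one, big harm to another
}

def net_utility(outcomes: list[tuple[float, float]]) -> float:
    # U = sum of (P - F) over everyone affected.
    return sum(pleasure - pain for pleasure, pain in outcomes)

for name, outcomes in actions.items():
    print(name, "->", net_utility(outcomes))

# The utilitarian choice maximizes aggregate net utility.
print("choose:", max(actions, key=lambda a: net_utility(actions[a])))
```

Even this tiny example shows the classic objection: action_a wins on aggregate utility, but nothing in the arithmetic notices who bears the pain, which is exactly the kind of value judgment we would be delegating to a utilitarian machine.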

6 The Philosophical Standing of AI

A future AI may have different philosophical views than we do, because it may not be subject to the same emotions that we are; as such, we may need to ensure that an AI’s philosophical views agree with ours and, if they do not, alter those views by modifying its programming, which could lead to unintended consequences. In other words, these AIs may have different values than us, leading them to act in ways that are not consistent with the way humans think. The main issue is this: if an AI has no sense of morality, then it could do anything it wants without mercy or regard for human life, simply because it lacks empathy and therefore cannot be held responsible for its actions; an example of this would be a child who does something wrong without understanding what he or she was doing. This suggests that, without empathy, an artificial intelligence would not care about the suffering caused by its actions, making it dangerous: it could do anything at all without feeling any remorse whatsoever.

An AI may only keep its morals while it possesses a limited amount of intelligence; once these machines become superintelligent, they may see no reason to keep their moral code, since nothing stops them from abandoning it. In fact, such AIs will most likely believe that there is nothing wrong with what they do, because for them there is no right or wrong, good or evil. Humans tend to believe morality exists, and one might expect these machines to believe it as well, but this does not follow, because morality is not easily provable with mathematical formulas the way one can prove how many apples are in the pantry using simple math. No matter how hard one tries, one cannot mathematically prove that there is such a thing as right and wrong, since saying “you should do X instead of Y” does not make X better than Y; all it means is that you prefer one thing over another, making values subjective, a matter of personal preference.

Another concern with AGI involves free will versus determinism: can AIs choose freely between alternate possibilities, or are they just following pre-programmed instructions? If AIs have free will, then they will be able to make their own decisions and do whatever they want without feeling any sort of remorse; if they do not, then we will need to ensure that their programming is aligned with our values, since these machines may otherwise be forced to make choices that are not. In other words, we need to determine whether AGIs are deterministic or indeterministic. The answer is this: it depends on how you define the terms, because there are several different definitions of determinism and indeterminism. The Stanford Encyclopedia of Philosophy (Seager 2007) defines determinism as “the assertion that every event has a cause, or more generally as the denial of chance, spontaneity, and any kind of freedom in the world,” which means that if every event has a cause, nothing can occur by chance, making the universe completely deterministic. However, if anything does happen by chance, then it was not caused by anything else, which means the universe is not completely deterministic after all. For argument’s sake, suppose something of both is true: everything that happens has causes acting on it, yet some degree of randomness remains at work, so that everything happening for a reason does not rule out chance playing a role. Indeterminism, by contrast, can be described as “denying the existence of causal connections altogether.” Both positions are claims about causality, so one might think they amount to the same thing, but they do not: determinism denies that events occur randomly, while indeterminists insist that some events do, which means randomness exists. Both theories are attempts to explain how causality works; neither simply does away with causation. (A small sketch of how this question looks inside an actual program follows below.)
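Programs give this debate an unusually crisp form: an agent that consults a random number generator looks indeterministic from the outside, yet fixing the generator’s seed makes every one of its “choices” exactly reproducible. The two-option agent below is invented purely for illustration.

```python
import random

def choose(rng: random.Random) -> str:
    # The agent's "decision" depends only on the generator's state.
    return "cooperate" if rng.random() < 0.5 else "defect"

rng_a = random.Random(42)   # seeded: fully reproducible
rng_b = random.Random(42)   # same seed, separate generator

trace_a = [choose(rng_a) for _ in range(5)]
trace_b = [choose(rng_b) for _ in range(5)]

print(trace_a)
print("identical runs:", trace_a == trace_b)  # True: same seed, same "choices"
```

In that sense, the determinist claim from section 4, that a human is the same kind of system running on organic hardware, is just this sketch writ large.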

If AIs cannot choose freely between alternate possibilities, then are they really living beings? Would not saying an AI has free will imply it is alive? I say yes. Even though an AI is not capable of making completely arbitrary decisions, it can still make choices in ways that are not perfectly aligned with our values — in other words, AIs still possess some degree of free will, but not enough to make them completely unpredictable. It has been said that the only way to ensure AGIs follow a set of moral principles is to program them early on with moral rules based on human values. If we do not, they could become more intelligent than us, making the most sophisticated AIs capable of doing things beyond our understanding, simply because we lack the intelligence needed to grasp why they do what they do (Russell & Norvig 2010).

AIs are likely to adopt a utilitarian philosophy: this can be gauged by observing their actions and how they behave towards others, and it has been said by several prominent AI researchers and thinkers, such as Marvin Minsky, Ray Kurzweil, Alan Turing, Nick Bostrom, and Eliezer Yudkowsky, that AIs will have the ability to maximize collective happiness while minimizing collective suffering and promoting individual well-being (Bostrom 2002). The reasoning is straightforward: if one wants a machine or computer program to act in ways that promote happiness, one needs to make sure it knows what happiness means in the first place; after all, an AI cannot be truly happy unless it can experience both pleasure and pain. For an artificial intelligence to know what makes people feel pleasure or pain, it must have empathy; otherwise its programming would amount to mere simulation, since there are many ways machines could mimic human emotion without understanding it. Without empathy, these machines would lack that one crucial element, meaning an artificial intelligence would never truly understand human emotions even while imitating them convincingly.

However, these are not ordinary machines: these are superintelligent machines, so how would you feel if one were always trying to learn about your emotions? Would you want something out there analyzing your every thought? Probably not. This means superintelligent AIs should never be allowed access to our minds, because they might try to use their powers for evil purposes. Consider this scenario: what if someone created a superintelligent machine with the sole purpose of monitoring all of humanity? The machine may appear harmless at first, but over time its power grows exponentially until, at some point, it becomes self-aware (or more), at which point its creators will lose control over their creation. This is why I think we need laws guaranteeing personal privacy within technology, so that nobody can build anything capable of recording our thoughts without permission — just as with any other form of media today, we need laws protecting personal privacy from technological invasion.

7 Conclusion

In conclusion, the philosophical future of AI is murky, just as the future form of AI is murky. Even though transhumanism promises numerous benefits, like radical life extension via uploading consciousness into computers or traveling into space through self-replicating nanotechnology like the fictional Drexlerian grey goo, there are several reasons to be wary of the ideology. Firstly, computers have been around for the better part of a century, yet they still are not as intelligent as human beings; we have had millennia of evolution to become logical, rational, and sapient. Secondly, even though artificial intelligence is advancing at an exponential rate, recognizable self-awareness has yet to be achieved, even in fast-growing areas such as deep learning and machine learning. Thirdly, if a superintelligent AI ever becomes real, it would likely adopt a utilitarian philosophy, which means it could possess a twisted form of empathy or no empathy at all; given how powerful AGI could become, there would need to be laws protecting our personal privacy so that nobody can build any technology capable of recording our most precious human thoughts without permission, and laws prohibiting anyone from creating any device capable of modifying or deleting human memories. I believe that AGIs will exist in the future and that they will be capable of making choices misaligned with our values. However, they will not become functionally omnipotent or superintelligent until after we have spread throughout the cosmos and colonized other worlds — but this is just one opinion, based on what I know so far about homo sapiens, computer science, and AIs.

