I recently posted about four common fallacies or myths regarding artificial intelligence (AI). I wanted to dive a little deeper into some of these myths and discuss why AI will NOT take over the world.
First of all, it is easy to fear what we don’t really understand, especially when some people push the narrative of computers becoming ‘aware’ and then dominating the human race.
I mentioned two definitions of AI in my previous post, but I’d like to circle back to Investopedia’s definition, the one I said I liked: “the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.”
I like this definition for the following reasons:
- “Simulation” indicates it is not the same as human intelligence, only similar to it. Human intelligence is broader than speedy calculations and identifying patterns. More on this later.
- “programmed” rightly notes that the machine or algorithm didn’t dream up the ability on its own; real human intelligence created it. The word also notes that computers* are following what humans told them to do (executing the code).
- “mimic” reminds us for the second time that humans and computers are not the same. Mimicking the actions of humans and thinking like humans are light years apart.
*In this post, when I use the term computer, I’m talking about the algorithm or computer process (AI), not the hardware itself.
I don’t like the word “think” because that is too ‘humany’ a description for a computer process, and it is not accurate. Again, the process is doing what it was programmed to do; it is not thinking. Other than that, I like the definition because it at least attempts to draw the line between computers and humans (no definition of AI is actually correct, but many are worse than this one; more on that at the end of this post).
Meanwhile, Bing.com defines “think” as to:
- have a particular opinion, belief, or idea about someone or something.
- direct one’s mind toward someone or something; use one’s mind actively to form connected ideas.
Now, I’ll admit that this definition was not meant to define how a computer “thinks”, but that’s my point: AI does not lead to a computer forming opinions, beliefs, or ideas on its own, and a computer certainly doesn’t have a mind to direct towards anything or form ideas (like humans do).
Computers, unlike most children, do as they are told. Yes, computers learn and get better, but that’s only because they were programmed with this “gain of function” so to speak (pretty soon, woke computers will need to get vaccinated too).
Per the fallacy article mentioned in the previous post, there’s a big difference between programming a routine to do one thing really well (often better and faster than humans can), which is narrow intelligence, and programming a routine to do many things well, and simultaneously, the way humans do.
Science fiction writers and others tell us that computers are learning and getting better, and that’s true, but they are learning and getting better only at the area/task/process they were programmed for, not at things they were NOT programmed for. The AI routines are getting deeper, not broader (more general).
Human intelligence is very broad (well, except for some YouTubers).
As many have attested, human intelligence involves more than just computation; it involves emotions (sorry, Spock), planning, creativity, and many more facets, like the ability to learn on a broad scale, not just a narrow one.
Finally, according to Eric Siegel, most of what is typically called AI is simply machine learning, which is divided into two categories: supervised and unsupervised.
Supervised machine learning uses data where the outcome is labeled (did the person default on his mortgage or pay it off?). For example, you have data that includes a person’s income, zip code, mortgage amount, payment amount, amount of other debt, and credit rating, and whether he defaulted on his mortgage.
The machine learning algorithm analyzes all the data, including the labeled outcome (Yes, the person defaulted on his mortgage, or No, he did not), and learns how to evaluate similar data without that label to predict whether other mortgage applicants are likely to default.
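To make that concrete, here’s a minimal sketch of supervised learning in Python using scikit-learn. All of the applicant data, column names, and numbers are made up for illustration; a real model would use far more data and careful feature preparation.

```python
# A minimal sketch of supervised learning, assuming scikit-learn and pandas.
# The applicants, column names, and numbers here are all hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Historical applicants: features plus the labeled outcome (1 = defaulted, 0 = paid off).
data = pd.DataFrame({
    "income":       [45000, 82000, 39000, 120000, 56000, 61000],
    "mortgage_amt": [200000, 310000, 180000, 450000, 240000, 260000],
    "other_debt":   [15000, 5000, 22000, 10000, 30000, 8000],
    "credit_score": [620, 740, 580, 790, 600, 700],
    "defaulted":    [1, 0, 1, 0, 1, 0],  # the label the algorithm learns from
})

X = data.drop(columns="defaulted")  # the inputs
y = data["defaulted"]               # the labeled outcome (the Y/N)

# "Learning" here is just fitting rules that map the inputs to the label.
model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Now score a new, unlabeled applicant -- the label is what we predict.
applicant = pd.DataFrame([{"income": 50000, "mortgage_amt": 220000,
                           "other_debt": 12000, "credit_score": 640}])
print(model.predict(applicant))  # e.g., [1] = likely to default
```

Notice that the computer isn’t “thinking” about mortgages at all; it’s fitting rules to the labels we gave it, exactly as programmed.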
Unsupervised learning analyzes data without any labels to identify patterns and group data based on similarities using a variety of methods.
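Here’s a matching sketch of unsupervised learning: the same kind of applicant data, but with no default label at all. I’m using k-means here as one of those “variety of methods” (my choice, not Siegel’s), and the feature values are again hypothetical.

```python
# A minimal sketch of unsupervised learning, assuming scikit-learn and NumPy.
# No labels at all -- the algorithm only groups applicants by similarity.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical applicants; columns: income, mortgage amount, credit score.
# (In practice you'd scale these features so one column doesn't dominate.)
X = np.array([
    [45000, 200000, 620],
    [82000, 310000, 740],
    [39000, 180000, 580],
    [120000, 450000, 790],
    [56000, 240000, 600],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(X)  # assigns each applicant to a cluster

print(groups)  # e.g., [0 1 0 1 0] -- groupings found from similarity alone
```

Note that the algorithm tells you which applicants are similar, but it has no idea what the groups *mean*; a human still has to interpret them.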
Siegel also notes that most machine learning is supervised, which means you have to give the algorithm labeled examples of the outcome you want to predict (e.g., did the person default?). In other words, if the bulk of AI requires labels, and we really can’t identify and label all the facets of human intelligence and thinking, how do we provide them to a computer so it can learn them and become human? Well said, Siegel!
We cannot. Therefore, no takeover.
In fact, Siegel says AI is a big, fat lie. Watch the video here by the same name. The video is 31 minutes long but engaging, and it will challenge your perceptions.
At first, it seems like he’s off his rocker, but give him some time to explain. He taught an AI class at Columbia University, so he’s not a kook (except in his Dr. Data dance videos, in which he dances and raps about machine learning; they are factual, funny, and informative).
I dare anyone to watch 5 minutes of any of his videos without learning something and laughing out loud. If that’s you, let’s hear about it in the comments. If you learned something, I’d like to hear about that too.
In spite of what Siegel says about AI being a lie, I still use the term AI because most people are more familiar with it than with the term machine learning. I don’t think most people would have been as interested if I’d titled this post Machine Learning will NOT take over the World.