I recently posted about 4 common fallacies, or myths, regarding artificial intelligence (AI). I wanted to dive a little deeper into some of these myths and discuss why AI will NOT take over the world.
First of all, it is easy to fear what we don’t really understand, especially when some people push the narrative of computers becoming ‘aware’ and, as a result, dominating the human race.
An article posted on MachineLearningTimes.com discusses 4 common fallacies or myths regarding AI. These misconceptions lead to many misunderstandings and much fear regarding the technology.
Wikipedia defines AI as “intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality.”
I like Investopedia’s definition better: “the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.”
In the article, Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans, lists the 4 most common fallacies, which I would summarize as follows:
- Narrow intelligence (being really good at one task) leads to general intelligence (being good at many things, the way humans are). In other words, computers will become super-smart and take over the world.
- Easy tasks are hard to automate, and hard tasks are easy to automate. In reality, tasks that are effortless for humans (like recognizing faces or walking) are often the hardest for machines, while tasks humans find difficult (like playing chess) can be easier to automate.
- AI works like the human mind. This comes from using ‘human-y’ terms like learn, understand, read, and think, which leads some to believe AI can achieve humanness.
- Intelligence is all in the AI brain. In other words, “the right algorithms and data…can create AI that lives in servers and matches human intelligence.”