These days, it’s easy to believe arguments that artificial intelligence has become as smart as the human mind, if not smarter. Google released a speaking AI that dupes its conversational partners into believing it’s human.
DeepMind, a Google subsidiary, created an AI that defeated the world champion at Go, one of the most complicated board games. More recently, AI has proven it can be as accurate as trained doctors at diagnosing eye diseases.
And there are any number of stories that warn about a near future where robots will drive all humans into unemployment.
Everywhere you look, AI is conquering new tasks and skills that were previously thought to be the exclusive domain of human intelligence. But does that mean AI is better than the human mind?
The short answer is that the comparison itself is flawed: artificial intelligence and the human mind are totally different things, even if their functions overlap at times.
Artificial intelligence is good at processing data, bad at thinking in the abstract
Even the most sophisticated AI technology is, at its core, no different from other computer software: bits of data running through circuits at super-fast rates.
AI and its popular branches, machine learning and deep learning, can solve any problem as long as you can turn it into the right data sets.
Take image recognition. If you give a deep neural network, the structure underlying deep learning algorithms, enough labeled images, it can compare their data in very complicated ways and find correlations and patterns that define each type of object.
It then uses that information to label objects in images it hasn’t seen before.
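That process can be sketched with a deliberately tiny example. This is not a deep neural network; it is a nearest-pattern classifier over made-up four-pixel "images", meant only to show how labeled examples are distilled into a per-class pattern that can then label unseen data (all names and numbers below are invented for illustration):

```python
# Toy sketch of supervised classification: average each class's labeled
# examples into a "pattern", then label a new example by whichever
# pattern it sits closest to. The data is made up for the example.

def centroid(vectors):
    """Average one class's feature vectors into a single pattern."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Tiny 4-pixel "images": one class is bright on the left, one on the right
training = {
    "bright_left":  [[0.9, 0.8, 0.1, 0.2], [1.0, 0.7, 0.0, 0.1]],
    "bright_right": [[0.1, 0.2, 0.9, 0.8], [0.0, 0.1, 1.0, 0.9]],
}
patterns = {label: centroid(vs) for label, vs in training.items()}

def classify(image):
    """Label an unseen image with the class of the nearest pattern."""
    return min(patterns, key=lambda label: distance(image, patterns[label]))

print(classify([0.8, 0.9, 0.2, 0.0]))  # bright_left
```

A real deep network learns far richer patterns through many layers of weights, but the underlying contract is the same: labeled data in, a reusable mapping from inputs to labels out.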
The same process happens in voice recognition. Given enough digital samples of a person’s voice, a neural network can find the common patterns in the person’s voice and determine if future recordings belong to that person.
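The matching step can be sketched in miniature too. Real speaker-recognition systems extract spectral features from audio; the "voiceprint" numbers below are invented purely to show how several samples are averaged into a pattern and a new recording is compared against it:

```python
# Toy sketch of speaker verification: enroll a voice by averaging
# (made-up) feature vectors, then accept or reject a new recording
# based on its cosine similarity to the enrolled voiceprint.

import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Enrollment: average several samples of one person's voice features
samples = [[0.2, 0.9, 0.4], [0.3, 0.8, 0.5], [0.25, 0.85, 0.45]]
voiceprint = [sum(s[i] for s in samples) / len(samples) for i in range(3)]

def is_same_speaker(recording, threshold=0.95):
    """Verify a new recording against the enrolled voiceprint."""
    return cosine_similarity(recording, voiceprint) >= threshold

print(is_same_speaker([0.28, 0.82, 0.48]))  # close to the enrolled voice
print(is_same_speaker([0.9, 0.1, 0.2]))     # very different voice
```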
Everywhere you look, whether it’s a computer vision algorithm doing face recognition or diagnosing cancer, an AI-powered cybersecurity tool ferreting out malicious network traffic, or a complicated AI project playing computer games, the same rules apply.
The techniques change and progress: Deep neural networks enable AI algorithms to analyze data through multiple layers; generative adversarial networks (GANs) enable AI to create new data similar to the data set they were trained on; reinforcement learning enables AI to develop its own behavior based on the rules of an environment. But the basic principle remains the same: If you can break a task down into data, AI will be able to learn it.
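Reinforcement learning, the last of these techniques, can be illustrated with a minimal tabular Q-learning sketch. This is a toy stand-in, not what production systems use: an agent in an invented five-cell corridor learns, from reward alone, that moving right reaches the goal:

```python
# Toy Q-learning sketch: the environment is a 5-cell corridor where the
# agent starts at cell 0 and earns a reward only by reaching cell 4.
# The agent is never told the rules; it learns them from trial and error.

import random

random.seed(0)
n_states, actions = 5, [-1, +1]          # actions: move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(300):
    state = 0
    while state != 4:                                 # cell 4 is the goal
        if random.random() < epsilon:                 # sometimes explore
            action = random.choice(actions)
        else:                                         # otherwise exploit
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == 4 else 0.0
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = next_state

# The learned greedy policy should move right (+1) in every cell
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(4)}
print(policy)
```

The agent starts with no knowledge of the corridor; the only signal it ever receives is the reward for reaching the goal, which is exactly the "rules of an environment" principle described above.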
Take note, however, that designing AI models is a complicated task that few people can accomplish. Deep learning engineers and researchers are some of the most coveted and highly paid experts in the tech industry.
Where AI falls short is thinking in the abstract, applying common sense, or transferring knowledge from one area to another. For instance, Google’s Duplex might be very good at reserving restaurant tables and setting up appointments with your barber, two narrow and very specific tasks.
The AI is even able to mimic natural human behavior, using inflections and intonations as a human speaker would. But as soon as the conversation goes off script, Duplex is hard-pressed to answer coherently: it must either disengage or hand off to a human assistant to continue the conversation in a meaningful way.
There are many documented instances in which AI models fail in spectacular and illogical ways as soon as they’re presented with an example that falls outside their problem domain or differs from the data they’ve been trained on.
The broader the domain, the more data the AI needs to master it, and there will always be edge cases: scenarios the training data doesn’t cover and that will cause the AI to fail.
A case in point is self-driving cars, which are still struggling to become fully autonomous despite having driven tens of millions of kilometers, far more than any human needs to become an expert driver.
Humans are bad at processing data, good at making abstract decisions
Let’s start with the data part. Unlike computers, humans are terrible at storing and processing information. For instance, you must listen to a song several times before you can memorize it.
For a computer, memorizing a song is as simple as pressing “Save” in an application or copying the file onto its hard drive. Likewise, forgetting is hard for humans. Try as you might, you can’t erase bad memories. For a computer, it’s as easy as deleting a file.
When it comes to processing data, humans are obviously inferior to AI. Humans might be able to perform the same tasks as the computers in the examples above, but in the time it takes a human to identify and label one image, an AI algorithm can classify a million.
The sheer processing speed of computers enables them to outpace humans at any task that involves mathematical calculations and data processing.
However, humans can make abstract decisions based on instinct, common sense, and scarce information. A human child learns to handle objects at a very young age, while an AI algorithm needs the equivalent of hundreds of years’ worth of training to perform the same task.