4 Reasons Why We Are Probably Overestimating the Power of AI
The advancement of Artificial Intelligence has produced a mixture of hope, hype and fear. The fear rests on some fairly far-fetched assumptions, yet, interestingly, many of us believe those assumptions will come true in the near future. This is likely thanks to the media and to some pessimistic AI researchers who love to foresee a future where AI has gone out of control.
Interestingly, the hype about AI is nothing new. AI research went through a golden age between 1956 and 1974, when government agencies like DARPA spent millions of dollars funding AI-related research projects. But interest died down in the 70s, mostly due to a lack of substantial success.
Interest in the subject rose and fell several times throughout the 80s and 90s, decades that did bring some developments in both the theory of AI and its applications. But it wasn't until the 2000s that things really got moving.
With the advancement of computing power and machine learning techniques combined with big data, AI hype is now at its peak and the expectations are climbing up every day.
But to the sceptics, it feels more like déjà vu.
Failed AI predictions from the past
When Arthur C. Clarke and Stanley Kubrick envisioned HAL 9000 in 1968, most AI researchers shared their belief that such an intelligent machine would exist by the year 2001. But obviously, AI development took rather longer than many people expected.
During the first wave of AI hype in the late 1950s, Turing Award-winning scientists H. A. Simon and Allen Newell predicted that a digital computer would beat the world chess champion within a decade. IBM's supercomputer Deep Blue did beat Kasparov, but not until 1997, some 40 years after the prediction was made. Moreover, Deep Blue is definitely not what the pioneers of AI meant by an intelligent machine.
In 1970, Marvin Minsky, a co-founder of MIT's AI lab, said that we would have machines with the intelligence of an average human being within eight years. But even today, with all our computing power and the advances in deep learning, Artificial General Intelligence (AGI) remains far out of reach.
Mistaken predictions have always been common in the AI research community. This has led to misinformed perspectives, which, unfortunately, are shared even by some experts in the field. As a result, people often misunderstand (and mostly overestimate) the power of AI and its potential applications.
The media also play a key role in this. Attention-grabbing headlines and sci-fi movies promote an exaggerated view of the advancement of AI, one that is, more often than not, far from reality.
But why do we tend to overestimate the power of AI even though we have scientific data regarding what it can actually do? Let’s explore a few possible explanations.
Reasons why our AI predictions go wrong
1. Misjudging the performance of Narrow AI
Playing chess is an intellectual activity for us. We tend to associate an intelligent mind with a good chess player. But that is not true for an AI. Modern chess engines can easily beat the best human chess players. This often gives people a false idea that the engine is actually intelligent. It is not, in the true sense of the word. A chess engine does not have a mind.
For another example, consider an image recognition system. If an AI can recognise an image of people playing football, that does not really tell us much about its intelligence. If a person tells us that people are playing football in a park, the situation is completely different: the person actually understands what a park, a ball and a game are. An image recognition AI might identify objects more accurately than a human baby can, but that doesn't mean it is more intelligent than the baby. It is tempting to imagine a full-grown mind behind an AI that performs a particular task efficiently. But we should always remember: a calculator is not a math whiz.
The successful AI systems that we hear about, the ones like AlphaGo and others that make headlines, are what is called "Narrow AI". Though their performance on their respective tasks is impressive, the impression of intelligence they give is entirely false. A chess engine does not even know what a physical chess board looks like. It doesn't "play" the game the way a human does.
In our casual thinking about AI, we often overlook this difference between Narrow AI and general human intelligence.
2. Misrepresentation of AI development
You will often hear that modern AI systems are capable of "learning". When people hear a word like "learning", they usually associate it with their own learning experience. But machine "learning" is nothing like human learning. It requires custom coding and hours of human input, and the learning structure and process are predefined. A machine cannot learn anything in a general way.
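A minimal sketch can make this concrete. The toy `fit_line` function below (a hypothetical name, not from any library) "learns" a straight line from data, but notice how much is fixed in advance by the human: the model shape (two numbers, slope and intercept), the error measure, the update rule, even the number of steps. The machine only adjusts the two numbers it was told to adjust.

```python
# Toy sketch of machine "learning": gradient descent fitting y = w*x + b.
# Everything except the final values of w and b is predefined by a human.
def fit_line(xs, ys, lr=0.01, steps=1000):
    w, b = 0.0, 0.0                     # predefined model structure: two parameters
    n = len(xs)
    for _ in range(steps):              # predefined learning process
        # gradients of the mean squared error -- the predefined error measure
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from y = 2x + 1; the "learner" recovers roughly w=2, b=1.
w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Nothing here resembles a mind acquiring a concept; the system cannot decide to learn something other than a line, because the line was the human's choice.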
Also, a machine doesn't learn things on its own the way a human would. At least, not yet. But when press headlines report that two chatbots are "communicating" with each other, we imagine that the bots have learned to talk as two humans would. This is far from the truth and a complete misrepresentation.
We all know that speech recognition systems have developed a lot. Home devices like Alexa and mobile assistants like Siri and Google Assistant are used by more and more people every day. What people often ignore is that these voice recognition systems do not really understand what you say. When you ask Alexa to turn on the lights, it doesn't grasp the meaning of your words. It matches the received sound against predefined patterns and triggers one of its predefined actions.
Press headlines do not focus on or clarify these important details, because they want to tell us about the "amazing progress" that AI systems are making. The impression of an intelligent AI is more newsworthy than these mundane details.
3. Overestimating the short-term results and underestimating the required time for development
Without a clear idea of the many theoretical and practical difficulties that lie ahead, it is very easy to forget the limitations of AI and of its development process. This is a common problem we run into when thinking about future technologies in general.
There is a well-known statement, Amara's law, which says: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." This describes what is happening with AI. AI is approaching the peak of inflated expectations, and people are underestimating the time actually needed to reach that height in reality.
This pattern is well known. When the GPS program began in 1978, it was regarded as a very promising technology, but it completely failed to meet the expectations of the time. In fact, the whole program was nearly cancelled multiple times in the 80s, because the short-term output of the system was overestimated and never materialised. GPS eventually became a success in the 2000s. The point to notice here is that the initial overhype almost killed its development.
It's likely that the same kind of short-term overestimation is happening with AI. And as we mentioned before, this type of overestimation of AI's capabilities happened again and again in the 60s and 80s. Maybe AI will eventually be able to do the things we envision, but it needs time to develop, just as GPS did.
Even if AI eventually reaches its full potential, when will that be? Actually, it is very unlikely that the AI boom will happen as quickly as we expect. It seems that we are often mistaken about the "exponential growth" of technologies. In 2002, the iPod had a storage capacity of 10 GB, and the storage roughly doubled with each new model: 20 GB in 2003, 40 GB in 2004, 80 GB in 2006 and 160 GB in 2007. Extrapolating that doubling, an iPod in 2017 should have had around 160 thousand GB of storage. But no, at present an iPod holds 256 GB at best.
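The arithmetic behind that extrapolation is just repeated doubling, and it is worth seeing how quickly it runs away from reality. The helper below is purely illustrative (the function name is ours, not from any source):

```python
# Naive exponential extrapolation: assume capacity doubles every year.
def extrapolate_capacity(start_gb, start_year, target_year):
    return start_gb * 2 ** (target_year - start_year)

# Continuing the doubling trend from the 160 GB iPod of 2007 out to 2017:
projected = extrapolate_capacity(160, 2007, 2017)
print(projected)  # 163840 GB -- about 160 thousand GB, versus the real 256 GB
```

Ten more doublings turn 160 GB into 163,840 GB; the real product topped out at 256 GB, roughly 640 times less than the trend "predicted".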
The exponential growth of a technology never continues indefinitely; it is eventually broken by practical or technical limits. The iPod hit a practical limit: no one needs 160,000 GB of memory to store songs. Moore's law, which predicted that the number of transistors on a microchip would keep doubling at a steady pace, is an example of a technical limit. The law held true for almost half a century, but it has been faltering in recent years. That is because, to keep up the pace, chip components must become so small that quantum effects come into play, and Moore's exponential prediction can no longer hold.
As we are not completely aware of all the technological and practical difficulties that are going to arise in the future, the hope of AI breaking into everything is probably a bit far-fetched. The speed of AI development and deployment is not going to be exponential.
4. Imagining uneven growth
If you look at Hollywood sci-fi movies from the past, you will see intelligent robots everywhere, but often no cell phones! When thinking about future technologies, our imagination tends to develop unevenly.
The possibility of creating a general AI, or reaching an AI singularity, is quite far-fetched. But most importantly, even if it happens, it will not come out of the blue. Before we reach a conscious AI, we will have many less developed AI systems, and our institutions, society and culture will adapt to AI development along the way.
The likelihood that we will one day face the sudden rise of a superintelligent AI bent on harming humankind is practically nil. If AI does rise, we will change along with it and influence the technology's path of development. It is not plausible that AI would become self-conscious without our direct involvement over a long period of time.
The possibilities of AI are practically endless at this point. And there is no doubt that the technological advances are promising. But creating a false notion of hope or fear will not do any good for its progress in reality.
The current wave of AI hype is no different from those that came before it. But technologically, we are definitely better equipped than we were in the past. For the sake of real progress in this field, we should stop harbouring unrealistic expectations and focus more on integrating the benefits of AI into our everyday lives. AI and automation are supposed to bring us a utopia, not a dystopia. It is now our responsibility to make well-informed choices and turn that into reality.
Decode Lab is one of the rising IT companies in the country, providing all types of IT-related solutions and services. Though it was registered as a private IT company in 2018, its people have long experience in this type of business, and the company is well set up to run an IT business.