Artificial Intelligence: An “Alchemy”?

John McCarthy first coined the term “Artificial Intelligence” (AI) in 1956 at the Dartmouth Conference, along with four other founding colleagues – Marvin Minsky, Oliver Selfridge, Ray Solomonoff, and Trenchard More. The original concept of AI, according to John McCarthy, is that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

Put simply, AI is a term for “simulated intelligence” in machines: machines programmed to mimic the cognitive functions of the human brain. Upon further discussion, the following seven criteria were agreed upon for Artificial Intelligence (AI).

The original seven aspects of AI were:

  1. Simulating higher functions of the human brain
  2. Programming a computer to use general language
  3. Arranging hypothetical neurons in a manner so that they can form concepts
  4. A way to determine and measure problem complexity
  5. Self-improvement
  6. Abstraction: Ability to interpret ideas rather than events
  7. Possessing randomness and creativity

Now, more than 60 years later, I believe that no fully realized form of AI has been achieved by modern science; randomness and creativity in particular are only just beginning to be explored.

“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Let us first understand what intelligence is.

According to Jack Copeland some of the critical factors of intelligence are:

1. Generalization learning: 

Learning in a way that enables better performance in situations not previously encountered.

2. Reasoning: 

To reason is to draw conclusions that are appropriate to the situation at hand.

3. Problem Solving: 

Given data, finding the unknown ‘x’; perception, that is, scanning an environment and analysing the features of and relationships between objects. Self-driving cars are an example.

4. Language Understanding: 

Understanding a language by following its syntax and other rules, much as a human does.
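The first of these factors, generalization, can be sketched in a few lines of code: fit a simple model to examples the program has seen, then apply it to an input it has never encountered. The data points and numbers below are made up purely for illustration.

```python
# Generalization sketch: fit a line to a handful of seen points, then
# predict at a point the program never encountered during fitting.
seen_x = [1.0, 2.0, 3.0, 4.0]
seen_y = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with some noise

# Least-squares slope and intercept from the seen data.
n = len(seen_x)
mean_x = sum(seen_x) / n
mean_y = sum(seen_y) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(seen_x, seen_y)) \
        / sum((x - mean_x) ** 2 for x in seen_x)
intercept = mean_y - slope * mean_x

# Performing in a situation "not previously encountered".
unseen_x = 10.0
print(round(slope * unseen_x + intercept, 2))
```

The model never saw x = 10, yet the pattern it extracted from the seen data lets it make a sensible prediction there, which is the essence of generalization learning.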

To summarize, Artificial Intelligence (AI) encompasses:

  1. Machine Learning
  2. Computer Vision
  3. Natural Language Processing
  4. Robotics
  5. Pattern Recognition
  6. Knowledge Management

Let me dive a little deeper. There are also different types of Artificial Intelligence, classified by approach:

1. Strong Artificial Intelligence (AI): 

Strong AI is a form of machine intelligence that is equal to human intelligence. Its key characteristics are the ability to reason, make judgements, solve puzzles, learn, plan and communicate. Strong AI should also have objective thought, consciousness, sentience, self-awareness and sapience. Hence, strong AI is also called Artificial General Intelligence (AGI) or True Intelligence, and it helps provide insights into how the human brain functions. Strong AI could do anything as well as or better than a human. However, we are not there yet: it currently doesn’t exist. Some experts believe it may be developed by 2030 or 2045, others think it may happen only in the next century, and still others that strong AI might never be possible at all.

2. Weak Artificial Intelligence (AI): 

Weak AI, or Narrow AI, is machine intelligence that is limited to a specific or narrow area. Weak AI simulates human cognition and benefits humankind by automating time-consuming tasks and by analyzing data in ways that humans sometimes can’t. IBM’s Deep Blue chess-playing system is an example: it evaluated millions of candidate moves before making each actual move on the chess board.
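The kind of move evaluation a chess engine performs can be sketched with a toy minimax search. This is a deliberate simplification, not Deep Blue's actual algorithm, and the tiny "game" below (states are numbers, a move adds 1 or doubles) is invented for illustration.

```python
# Toy minimax: explore the game tree to a fixed depth, assuming the
# opponent (minimizing player) always responds with their best move.
def minimax(state, depth, maximizing, moves, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    if maximizing:
        return max(minimax(s, depth - 1, False, moves, evaluate) for s in options)
    return min(minimax(s, depth - 1, True, moves, evaluate) for s in options)

# A trivial "game": states are numbers; a move adds 1 or doubles the number.
moves = lambda n: [n + 1, n * 2] if n < 8 else []
evaluate = lambda n: n

best = minimax(1, 3, True, moves, evaluate)
print(best)  # best achievable score looking 3 moves ahead
```

Even this toy version visits every position in the tree; a real engine searching many moves deep examines millions of positions per decision, which is exactly the "narrow" brute-force competence weak AI excels at.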

It doesn’t stop there, though. There is a new kind of middle ground between strong and weak AI, in which the system is inspired by human reasoning but doesn’t have to stick to it. IBM’s Watson is an example. Just like a human, it reads a large amount of information, recognizes patterns and builds up evidence so that it can say, in effect, “I am X percent confident that this is the right answer to the question asked, based on the information I have.” Google’s deep learning is similar: it mimics the structure of the human brain by using a neural network, but doesn’t follow its function exactly. The system uses nodes that act as artificial neurons connecting information. Neural networks are a subset of machine learning.
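Such a node can be sketched in a few lines: it weighs its inputs, sums them, and squashes the result into a confidence-like value between 0 and 1. The weights below are arbitrary illustrative numbers, not values from any real system.

```python
import math

# A single artificial "node" (neuron): weighted sum of inputs plus a bias,
# passed through a sigmoid so the result lands between 0 and 1.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Two such nodes feeding a third form a minimal two-layer neural network.
hidden = [neuron([0.5, 0.8], [0.9, -0.4], 0.1),
          neuron([0.5, 0.8], [0.3, 0.7], -0.2)]
output = neuron(hidden, [1.2, -0.6], 0.0)
print(round(output, 3))  # a confidence-like score between 0 and 1
```

Training a real network consists of adjusting those weights from data; stacked in many layers, these nodes are what "deep" learning refers to.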

Machine learning, in turn, refers to algorithms that enable software to improve its performance over time as it obtains more data, and here begins a major challenge within AI technology: machine learning’s reproducibility crisis is getting worse. Google AI researcher Ali Rahimi received a standing ovation at a machine learning conference when he called the field “alchemy,” criticising its unsystematic reliance on rules of thumb, trial and error, and superstition. He showed that the trial-and-error method produces worse outcomes than empirical research into the best ways to tune algorithms for different purposes, and argued that publication bias is behind the crisis, with AI researchers drawing the most interest when they produce algorithms that perform better rather than algorithms that are better understood.
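The core idea of "improving with more data" can itself be shown concretely. In the hedged sketch below, an estimator recovers the slope of y = 3x from noisy samples; the target slope of 3 and the noise level are arbitrary choices for illustration, and the estimate tightens as samples accumulate.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Estimate the slope of y = 3x from noisy samples via least squares.
# With more samples, the estimate gets closer to the true value 3.
def fit_slope(n_samples):
    xs = [random.uniform(0, 10) for _ in range(n_samples)]
    ys = [3 * x + random.gauss(0, 1) for x in xs]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

for n in (10, 100, 10000):
    print(n, round(fit_slope(n), 3))
```

This is also where Rahimi's "alchemy" charge bites: in practice the tuning choices around such estimators (learning rates, architectures, random seeds) are often set by trial and error rather than by the kind of systematic study he calls for.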

Rahimi’s description of AI as alchemy rings true in light of the recent death of a woman struck by an autonomous car operated by Uber, believed to be the first pedestrian fatality associated with self-driving technology. The accident is a reminder that self-driving technology is still in the experimental stage and that governments are still trying to figure out how to regulate it. The organisations manufacturing the technology have argued that self-driving cars are safer than human drivers. However, sceptics have pointed out that the industry is entering a dangerous phase: the cars may not yet be fully autonomous, but the human operators are no longer fully engaged.

Last week Google launched an upgrade to its Google Assistant that has overtaken Alexa, Siri and Cortana. Google has made incredible progress so far. Its voice-controlled smart assistant began as an upgrade, or extension, of Google’s existing ‘Ok Google’ voice controls. Google Assistant can now hold human-like conversations with people over the phone, getting things done for you in the background and even calling your nearest and dearest to give updates on your well-being. Google has been working on this technology, called ‘Google Duplex’, for many years; it brings together years of investment in natural language understanding, deep learning, text-to-speech and more.

While all these technological breakthroughs are excellent, I foresee some significant challenges in the technology, or in the way its deep-learning algorithms are programmed. For example, suppose that in my absence a malicious person calls my phone claiming to be from a bank and requests my personal details: the Google Assistant will not be able to determine whether the caller is really from the bank. Similarly, when called by any other individual, be it friend or family, the Google Assistant will not know what information to share and what to withhold. This is where data security and privacy concerns pose a significant threat to the adoption of this technology.

To summarize, Artificial Intelligence is making significant breakthroughs on its way to becoming super-smart, backed by massive investments from the big IT giants; however, AI will have to live up to its undoubtedly huge potential before such solutions can be widely implemented. Governments and other regulatory authorities should come up with strong regulations to ensure the safety, security and privacy of individuals, cities and nations overall.

Check out my BlockDelta profile for additional articles.


