Types of Artificial Intelligence
Artificial Intelligence (AI) is a broad branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI enables computers and machines to mimic the perception, learning, problem-solving, and decision-making capabilities of the human mind; in other words, it is an attempt by humans to make machines capable of thinking and acting like humans.
Programmable functions of AI systems include planning, reasoning, learning, decision-making, and problem-solving. These systems are usually powered by algorithms and draw on machine learning, deep learning, and rules. Machine learning (ML) algorithms feed data to AI systems, and ML's statistical techniques improve the systems' learning, allowing them to get better at a task without being explicitly programmed for it.
TYPES OF AI
Based on its capabilities and functionality, AI can be classified as Type 1 or Type 2. The Type 1 categories are described below:
1. Artificial Narrow Intelligence (ANI)
Artificial Narrow Intelligence, also known as narrow AI, is the only type of artificial intelligence that has been successfully realized to date. It is designed to perform a single task, such as driving a car, searching the internet, acting as a voice assistant, or recognizing faces, and it is very efficient at the specific task it is programmed to do.
Narrow AI comes with a limitation: it does not replicate human intelligence, but only simulates human behaviour within a narrow range of parameters. This kind of intelligence has seen many breakthroughs in the last decade, powered by achievements in machine learning and deep learning, and it relies on techniques such as Natural Language Processing (NLP) to perform its tasks.
Narrow AI can be either reactive or limited-memory. Reactive AI is extremely basic: it has no memory or data storage capacity and simply emulates the human mind's ability to respond to stimuli without prior experience. Limited-memory AI is more advanced; it is equipped with data storage and learning capabilities that let machines use historical data to make decisions. Limited-memory AI is the most widely implemented form, with machines using huge volumes of data for deep learning to enable personalized AI experiences.
Examples of Narrow AI:
- Virtual assistants like Siri by Apple, Cortana by Microsoft, and Alexa by Amazon
- Self-driving cars
- IBM's Watson
- RankBrain by Google
- Facial or image recognition software
- Disease prediction and mapping tools
2. Artificial General Intelligence (AGI)
Artificial General Intelligence, also known as deep AI, refers to a machine with general intelligence that mimics human intelligence and behaviour and can learn and apply its intelligence to solve any problem. AGI would think, understand, and act much as the human brain does in a given situation.
Though scientists and researchers have not yet achieved deep AI, they are trying to make machines more conscious by programming them with a full set of cognitive abilities. Machines are given experiential learning to improve their efficiency on single tasks and to gain knowledge applicable to a wide range of problems. The lack of extensive knowledge about how the human brain works has made it difficult for researchers even to clone basic functions such as sight and movement. Deep AI builds on the theory-of-mind AI framework, that is, the ability to perceive needs, beliefs, emotions, and thought processes. Theory-of-mind-level AI is about training machines to understand the human brain rather than merely replicating or simulating it.
3. Artificial Super Intelligence (ASI)
Artificial Super Intelligence is a hypothetical concept in which machines become self-aware and surpass human intelligence and capability. The vision behind artificial superintelligence is AI that does not merely understand human emotions and experiences but develops emotions, needs, desires, and beliefs of its own.
ASI would clone the multifaceted intelligence of the human brain and would, in theory, surpass humans at everything they do: math, sports, medicine, emotional relationships, and more. ASI would have greater memory and faster processing and analysis of stimuli, with decision-making and problem-solving capabilities superior to those of human beings. The idea of such powerful machines may seem tempting, but it could also have severe, unknown consequences.
The Type 2 classifications of AI are described below:
1. Strong AI
Strong AI can be defined as a hypothetical machine that exhibits human cognitive abilities: a machine with a mind of its own that can think and accomplish complex tasks independently. There are no examples of strong AI, as it remains a hypothetical concept with no real form.
2. Weak AI
Weak AI can be defined as intelligent behaviour that can be modeled and used by computers to solve complex problems. It refers to systems programmed to accomplish particular problems while operating within a predefined range of functions. Weak AI-powered machines do not have a mind of their own; Alexa and Siri are examples.
HISTORY OF AI
The idea of AI first popped up in the ancient Greek myths of Antiquity. The roots of AI's history are long and deep, so let's take a quick look at some important events and milestones:
In 1943, Warren McCulloch and Walter Pitts published the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity," which proposed the first mathematical model for building a neural network.
In 1949, through his book "The Organization of Behavior: A Neuropsychological Theory," Donald Hebb proposed that neural pathways are created from experiences and that connections between neurons grow stronger the more frequently they are used. Hebbian learning remains an important model in AI.
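Hebb's idea, often summarized as "neurons that fire together wire together," can be sketched as a simple weight-update rule. The function name, learning rate, and activity values below are illustrative choices, not taken from Hebb's book:

```python
# A minimal sketch of Hebbian learning: the weight between two units grows
# in proportion to the product of their activities.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen connection w when pre- and post-synaptic units co-activate."""
    return w + lr * pre * post

w = 0.0
# Repeated co-activation (both units firing) strengthens the pathway...
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)

# ...while activity in only one unit leaves the connection unchanged.
w_unused = hebbian_update(0.0, pre=1.0, post=0.0)

print(round(w, 2))   # 0.5 after five co-activations
print(w_unused)      # 0.0
```

Frequent co-use strengthens the connection, unused connections stay weak, which is exactly the "usage frequency" effect the theory describes.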
In 1950, Alan Turing (who had earned fame for cracking the Nazis' ENIGMA code during World War II) published "Computing Machinery and Intelligence," addressing the question of whether machines can think and introducing the Turing Test, which would help determine whether a machine possesses intelligence comparable to a human's. The value of the Turing Test has been debated ever since.
In 1952, Arthur Samuel developed a self-learning program to play checkers.
In 1954, during the Georgetown-IBM translation experiment, a machine automatically translated 60 selected Russian sentences into English.
In 1956, at the first-ever AI conference, held at Dartmouth College, John McCarthy coined the term "Artificial Intelligence." Later that year, Allen Newell, J. C. Shaw, and Herbert Simon created Logic Theorist, the first running AI software program.
In 1958, John McCarthy published the paper "Programs with Common Sense" and developed the AI programming language Lisp. The paper proposed a hypothetical complete AI system called the Advice Taker, which could learn from experience as humans do.
In 1959, Allen Newell, Herbert Simon, and J. C. Shaw developed the General Problem Solver (GPS), a program designed to imitate human problem-solving. Herbert Gelernter developed the Geometry Theorem Prover program, and Arthur Samuel coined the term "machine learning" while at IBM. In the same year, John McCarthy and Marvin Minsky founded the MIT Artificial Intelligence Project.
In 1963, John McCarthy started an AI lab at Stanford.
In 1966, the Automatic Language Processing Advisory Committee (ALPAC) report noted the lack of progress in machine translation research, and all government-funded MT projects were canceled.
Back in 1958, Frank Rosenblatt had built the Mark 1 Perceptron, the first computer based on a neural network, which learned through trial and error. In 1969, Marvin Minsky and Seymour Papert published the book Perceptrons, which became both a landmark work on neural networks and an argument against future neural network research projects.
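Rosenblatt-style trial-and-error learning can be sketched as the classic perceptron update rule: nudge the weights whenever a prediction on a labeled example is wrong. The AND-gate data, learning rate, and epoch count below are illustrative choices, not details of the Mark 1 hardware:

```python
# A minimal perceptron that learns by trial and error: each wrong prediction
# shifts the weights toward the correct answer.

def train_perceptron(data, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred            # -1, 0, or +1
            w[0] += lr * err * x[0]        # adjust only when the guess is wrong
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn the linearly separable AND function.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in and_data])  # [0, 0, 0, 1]
```

Minsky and Papert's critique was that a single such unit can only separate classes with a straight line, so it cannot learn functions like XOR.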
In 1973, the British government released the "Lighthill Report," which detailed the disappointments of AI research and led to a drop in funding for artificial intelligence projects.
The period from roughly 1974 to 1980 became known as the "First AI Winter," as the combined effect of the ALPAC and Lighthill reports stalled funding and research.
In the 1980s, neural networks trained with backpropagation algorithms came into widespread use in AI applications.
In 1982, the Japanese Ministry of International Trade and Industry launched the Fifth Generation Computer Systems (FGCS) project to develop supercomputers and a platform for boosting AI development.
In 1983, prompted by Japan's FGCS, the U.S. government launched the Strategic Computing Initiative, which funded DARPA research in advanced computing and artificial intelligence.
In the mid-1980s, a few companies like Symbolics and Lisp Machines spent more than a billion dollars creating specialized computers to run the AI programming language Lisp.
1987 marked the start of the "Second AI Winter": as cheaper alternatives emerged with improving technology, the Lisp machine market collapsed. In 1992, Japan terminated the FGCS project because the goals set a decade earlier had not been met. In 1993, after spending $1 billion, DARPA ended the Strategic Computing Initiative as it fell short of expectations.
In 1991, during the Gulf War, U.S. forces deployed DART, an automated tool for logistics planning and scheduling.
In 1997, IBM's Deep Blue, a chess-playing computer, beat world chess champion Garry Kasparov in a rematch played under regular time controls, after losing their first match in 1996.
In 2005, a self-driving car named STANLEY won the DARPA Grand Challenge. Around the same time, the U.S. military invested in autonomous robots such as Boston Dynamics' "BigDog" and iRobot's "PackBot."
In 2008, Google introduced its breakthrough in speech recognition as a feature of its iPhone app.
In 2011, IBM's Watson, a question-answering computer, beat champions Ken Jennings and Brad Rutter on the quiz show Jeopardy!
In 2012, Andrew Ng, founder of the Google Brain Deep Learning project, fed a neural network 10 million YouTube videos as a training set using deep learning algorithms. The network learned to recognize a cat without any explicit programming.
In 2015, Baidu's Minwa, an artificial-intelligence supercomputer, used convolutional neural networks to identify and categorize images with a greater accuracy rate than the average human being, beating Google at image recognition.
In 2016, DeepMind's AlphaGo program, powered by a deep neural network, beat world champion Lee Sedol in a five-game match of Go. Google had acquired DeepMind back in 2014, reportedly for around $400 million.