What is Artificial Intelligence?
July 8, 2018
Artificial intelligence, or AI, has long been portrayed in entertainment like films and television shows as human-like robots that either serve humans or take over the world. Those portrayals, however, are over-generalizations that fail to capture the many facets and forms of artificial intelligence we've all interacted with in everyday life, whether we realize it or not.

Artificial intelligence is the science of computers and machines emulating human thinking, learning, and behavior. This definition is broad and simple, but it adequately sums up what AI is about and how it works. Artificial intelligence, however, consists of different subtypes and applications in society that are worth examining in some depth to truly appreciate the versatility and utility of the science. Regardless of how AI is developed or used, it ultimately serves to perform or facilitate a broad range of tasks and processes that improve people's quality of life. Here is a brief guide to AI that describes its subfields, its key characteristics, the forms it takes in everyday life, and the industries in which it is currently being applied.
Areas of AI

Artificial intelligence is a vast field made up of areas with distinct capabilities, giving AI an adaptability that allows it to be used across many applications. Its defining feature, however, is that it pairs data with algorithms (systematic methods of problem solving) and repetitive processing to enable a system, usually software running in a machine, to automatically learn from the patterns and observations it finds in that data. To perform successfully, artificial intelligence therefore requires data that is highly detailed, diverse in type, and large in volume. It also needs the repetitive processing to be fast and its algorithms to be intelligent enough to analyze and process this data efficiently and accurately. Below are the main subfields of artificial intelligence:
- Machine Learning – The term "machine learning" is often used interchangeably with "artificial intelligence," but machine learning is actually a subset of AI. It is a method of teaching a machine how to learn. Machine learning models automatically search for patterns in data and attempt to draw conclusions from them much as a human would. As these models are given data and examples, they learn what to do or how to respond. Over time, their algorithms get better at drawing correct conclusions from the data they're given and apply what they have learned to new sets of data they encounter.
- Deep Learning – Deep learning is a form of machine learning, but it stands out as a subtype that trains computers to find intricate patterns in large volumes of data and perform human-like tasks such as image identification, speech recognition, and prediction. Deep learning relies on neural networks: computer systems made of interconnected units called "neurons" that process data by responding to input from external sources and passing data between units. These networks must process data many times to extract understanding from raw input and make connections. Deep learning uses very large neural networks with many layers of processing units to train computers to learn on their own.
- Cognitive Computing – Cognitive computing is an AI subfield that seeks human-like interaction with machines. It employs machine-learning systems that can not only comprehend and process data input such as language, voice, or video, but also perform specialized functions, such as driving a car, by reasoning intelligently and delivering output that humans can use.
- Natural Language Processing (NLP) – Falling under cognitive computing, natural language processing (NLP) is a branch that enables computers to understand, manipulate, and generate human language, ultimately serving to bridge the gap between computer comprehension and human communication.
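The learning-from-examples idea behind machine learning can be sketched in a few lines of plain Python. The toy 1-nearest-neighbor classifier below (all names and data are invented for illustration) "learns" simply by storing labeled examples, then labels a new point by finding the most similar training example:

```python
# Toy machine-learning sketch: a 1-nearest-neighbor classifier.
# "Training" is just storing labeled examples; prediction finds
# the most similar stored example and reuses its label.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, new_point):
    # training_data: list of (features, label) pairs.
    nearest = min(training_data, key=lambda pair: distance(pair[0], new_point))
    return nearest[1]

# Invented labeled examples: (duration in minutes, tempo in bpm) -> genre.
examples = [
    ((2.5, 120), "pop"),
    ((3.0, 128), "pop"),
    ((7.5, 60), "classical"),
    ((9.0, 70), "classical"),
]

print(predict(examples, (2.8, 125)))  # a short, fast track -> "pop"
```

Given more labeled examples, the same procedure generalizes to data it has never seen, which is the essence of learning from examples.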
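The "neurons" that deep learning builds on can also be sketched directly. This minimal Python example (weights chosen arbitrarily for illustration) shows a single artificial neuron, and a tiny two-layer network made by feeding two neurons' outputs into a third:

```python
import math

# A single artificial "neuron": it weights its inputs, sums them with
# a bias, and squashes the result through an activation function.

def sigmoid(x):
    # Maps any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# A tiny two-layer network: two hidden neurons feed one output neuron.
# Real deep networks stack many such layers with millions of weights.
def tiny_network(inputs):
    h1 = neuron(inputs, [0.5, -0.6], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -1.1], 0.2)

print(tiny_network([1.0, 0.0]))  # a single number between 0 and 1
```

A deep learning system is, at heart, many layers of exactly this computation, with the weights adjusted automatically during training rather than written by hand.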
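A first concrete step in natural language processing is turning raw text into numbers a model can work with. The following minimal bag-of-words sketch in plain Python (the vocabulary and sentence are invented examples) does just that:

```python
# Minimal NLP preprocessing: tokenize text, then count how often
# each vocabulary word appears (a "bag of words").

def tokenize(text):
    # Lowercase, replace punctuation with spaces, split on whitespace.
    cleaned = "".join(
        ch if ch.isalnum() or ch.isspace() else " " for ch in text.lower()
    )
    return cleaned.split()

def bag_of_words(text, vocabulary):
    words = tokenize(text)
    return {word: words.count(word) for word in vocabulary}

vocab = ["movie", "great", "boring"]
print(bag_of_words("A great movie. Great acting!", vocab))
# -> {'movie': 1, 'great': 2, 'boring': 0}
```

Counts like these are what simple NLP models (for example, spam filters or sentiment classifiers) historically consumed; modern systems use richer representations, but the text-to-numbers step remains fundamental.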
Key Characteristics of AI

- Self-Learning – Unlike technologies such as robotics, which automate manual tasks like pouring a cup of coffee, artificial intelligence automates learning. Through automated learning, AI creates self-learning systems that iteratively probe complex data and develop understanding by recognizing patterns in it.
- Analyzes Data and Adapts – AI's self-learning capability to observe and identify trends in data gives it the capacity to discern structures, regularities, and irregularities in information, so it can analyze and adapt to changes in data. This is what enables AI systems to make categorizations or predictions in real-world situations, such as recommending music in apps like Pandora or making moves in automated games of chess. It also allows AI models to react to new data they receive, particularly through backpropagation, a technique that enables models to correct themselves with training and supplemental data when they deliver output that is not quite right.
- High Specialization – What gives AI systems their ability to perform activities reliably and deliver information accurately is that they are highly specialized, with each system doing a single, specifically defined task. For example, a system designed to recommend songs cannot also recommend a personal injury lawyer.
- Non-Autonomy – Though they are self-learning, AI systems are not autonomous. They initially need instructional input from humans to be set up properly, and humans must also confirm that the systems are being fed appropriate, accurate data so that they yield the right results and answers.
- Enhances Intelligence – Because AI's self-learning systems are not autonomous, they are commonly incorporated into other applications to expand those applications' features and capabilities. With tools like deep learning, whose large, multi-layered neural networks grow more accurate as they process more data, AI systems supplement products and other systems for smoother, more reliable performance. For instance, adding the virtual assistant Siri to Apple's iPhones brought tremendous convenience to users, who can send Siri natural-language voice queries and commands to get information like movie showtimes and restaurant recommendations, as well as perform tasks like calling contacts without selecting them manually.
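The backpropagation-style self-correction in these characteristics can be sketched with a single-weight model in plain Python (a deliberately minimal illustration, not a production training loop). The model's error on each example is propagated back to nudge its weight, so it gradually learns the relationship y = 2x:

```python
# Backpropagation in miniature: when the model's output is wrong,
# the error is used to adjust the weight toward a better value.
# A single linear "neuron" learns y = 2 * x from examples.

def train(examples, learning_rate=0.01, epochs=100):
    weight = 0.0  # start with no knowledge
    for _ in range(epochs):
        for x, target in examples:
            prediction = weight * x
            error = prediction - target          # how wrong the output was
            gradient = error * x                 # slope of squared error w.r.t. weight
            weight -= learning_rate * gradient   # correct the weight slightly
    return weight

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(train(data))  # converges close to 2.0
```

Real neural networks apply this same correction rule to millions of weights at once, layer by layer, which is why they need fast, repetitive processing over large amounts of data.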
Types of Artificial Intelligence

The various subsets and abilities of artificial intelligence have come together in many types of programs, machines, and devices that employ different forms of AI. Below are the four primary types of artificial intelligence, some seen in society today and others portrayed only in entertainment and yet to be realized.
- Type I: Purely Reactive – As the most basic type of AI, Type I is purely reactive: it perceives its surrounding environment directly and acts on what it sees. It cannot form memories or use past experiences to make current decisions. Purely reactive AI also specializes in only one area and has no internal concept of the wider world. An example of Type I AI is Google's AlphaGo, a computer program that plays Go, an ancient Chinese board game of abstract strategy.
- Type II: Limited Memory – Type II AI is a step up from Type I because unlike the first type, it can recall past information and use it with its preprogrammed representations of the world to make decisions. However, the data from past experiences is not saved as part of a Type II system’s overall trove of information. An example of Type II AI is a self-driving car.
- Type III: Theory of Mind – Type III AI has yet to be actually seen in everyday life, although forms of it have been portrayed in movies. With Theory of Mind AI, systems have the ability to understand human thoughts and emotions that include motives, expectations, and more, as well as the capacity to interact socially. So, Type III AI machines can essentially comprehend the forces that drive human behavior. An example of Type III AI that’s been displayed in entertainment is the character Sonny, a robot in the 2004 film I, Robot.
- Type IV: Self-Aware – Type IV AI systems build on Type III with the ability to possess consciousness, that is, to develop representations of themselves. They are aware of their own internal states and can anticipate the sentiments of others. Type IV systems can also form ideas and inferences. Self-aware AI does not currently exist, but if it did, it would resemble the character Ava in the 2015 film Ex Machina.
AI Applied in Industries

The invaluable benefits of artificial intelligence have been recognized by professionals in various industries. Here are a few fields in which AI is making a difference:
- Healthcare – Deep learning techniques, along with object recognition and image classification, are being used to scan tumors in MRIs to determine whether they are benign or malignant.
- Business – Business tools that incorporate AI offer predictive analytics features that can help with strategy planning and setting market goals.
- Web – Social media website Facebook uses deep learning to identify individuals by facial recognition in photographs. It also utilizes deep learning to determine which types of advertisements should be displayed to users based on their interests and other factors.
- Science – Scientists are using AI to help save endangered species from extinction with tools like footprint analysis to track them.
Keep Your Online Assets and Systems Safe

As more businesses and organizations come to understand AI and how it can benefit them, more of them will harness its power by incorporating it into their processes and into the development of their products and services to improve performance and profitability. The immense knowledge and competitive insights that AI brings will, in turn, make assets like a company's intellectual property, methodologies, and business strategies even more valuable than before, warranting strong cybersecurity protection from cyber thieves. Learn how to keep your business assets and systems safe on the Internet by taking cybersecurity courses from Cybrary.