What's the Difference Between AI, Machine Learning, Computer Vision, and Deep Learning?
Artificial intelligence, machine learning, computer vision, and deep learning. They all seem to be used interchangeably. But are they really the same?
In short, no. Before we dive in, let's preface this article with a review of AI's origins.
Background on AI
Artificial intelligence is a discipline of computer science. The purpose of AI is to program machines to perform tasks in a similar manner to humans. This can be anything from recognizing everyday objects to complex problem solving.
The concept of AI began in 1955 when John McCarthy coined the term. McCarthy was an American computer scientist who is considered one of the founding fathers of AI.
[Photo: John McCarthy at Stanford's artificial intelligence lab, March 7, 1974.]
From the mid-20th century onward, the development of AI is a complex timeline of scientific discovery and technological breakthroughs.
In a broad sense, you can divide AI into two different buckets:
- Narrow AI tackles one specific task either as well as or better than humans.
- General AI is a more sophisticated system that has a wide gamut of capabilities, similar to that of the human brain.
According to Hacker Noon, "narrow AI is where we have been, general AI is where we are going... the ideal of general AI is that the system would possess the cognitive abilities and general experiential understanding of its environments that we humans possess, coupled with the ability to process this data at much greater speeds than mortals."
Where do machine learning, computer vision, and deep learning come into play?
As it turns out, there are actually six elements that lay the foundation for AI:
- Machine learning (ML)
- Computer vision (CV)
- Cognitive computing (CC)
- Natural language processing (NLP)
- Deep learning (DL)
- Neural networks (NN)
These components are heavily intertwined and reliant on each other. Each element contributes to creating an artificially intelligent machine.
Machine learning (ML)
Machine learning uses statistical techniques that give computers the ability to "learn" without being explicitly programmed. It's important to note that learning is defined as progressively improving performance on a specific task.
This type of technology homes in on algorithms that can analyze data to make predictions. In real world applications, think of your Discover Weekly playlist from Spotify, Netflix's content recommendations, or any online retail site that offers custom recommendations based on your prior selections.
Applications for machine learning are wide-reaching and only getting more advanced. Predicting health risks, diagnosing patients, determining the fastest route to work, online customer support, and our personal assistants are all powered by the predictions from machine learning algorithms.
The beauty of machine learning algorithms is their ability to draw conclusions directly from a given data set rather than from hand-written rules, though those conclusions are only as good as the data behind them.
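To make "learning from data rather than explicit rules" concrete, here is a minimal nearest-neighbor recommender sketch in Python. The users, movies, and ratings are made up for the example; real recommenders are far more sophisticated, but the core idea is the same: find similar users and borrow their preferences.

```python
from math import dist

# Hypothetical user ratings for five movies (0-5 scale; 0 = not yet rated).
ratings = {
    "alice": [5, 4, 0, 1, 2],
    "bob":   [4, 5, 1, 0, 1],
    "carol": [1, 0, 5, 4, 5],
}

def nearest_neighbor(user):
    """Find the other user whose ratings are closest (Euclidean distance)."""
    others = [u for u in ratings if u != user]
    return min(others, key=lambda u: dist(ratings[u], ratings[user]))

def recommend(user):
    """Suggest the unrated movie that the nearest neighbor rated highest."""
    neighbor = nearest_neighbor(user)
    unseen = [i for i, r in enumerate(ratings[user]) if r == 0]
    return max(unseen, key=lambda i: ratings[neighbor][i])

print(recommend("alice"))  # alice's closest neighbor is bob, so movie 2
```

Nothing here was "explicitly programmed" to know alice's taste; the suggestion falls out of the distances between rating vectors.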
Computer vision (CV)
Computer vision interprets the content of an image using deep learning and pattern identification. CV can process and interpret anything from charts and graphs to pictures and videos.
CV is critical for robots and autonomous vehicles, as it is the component that allows computers to "see" the world similarly to humans. CV uses a variety of hardware sensors to collect this type of data:
- Cameras
- Lidar sensors
- Speed sensors
See what it takes to equip a robot with computer vision.
Once the data is captured, there is a simple process to making sense of it:
- Represent colors based on their HEX values.
- Segment the image based on similar color groups.
- Find certain features (also known as corners).
- Find different textures to accurately categorize each object.
- Guess. That's right, the computer makes its best guess based on the images it already has in its database.
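The color-grouping step above can be sketched in a few lines of Python. The reference colors and pixel values here are illustrative; real segmentation works on millions of pixels with clustering algorithms, but the idea of "assign each pixel to its nearest color group" is the same:

```python
from math import dist

# Illustrative reference colors to group pixels into.
reference_colors = {
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
}

def segment(pixels):
    """Label each RGB pixel with the name of its nearest reference color."""
    return [min(reference_colors,
                key=lambda name: dist(reference_colors[name], p))
            for p in pixels]

image = [(250, 10, 10), (10, 240, 30), (5, 5, 200)]
print(segment(image))  # ['red', 'green', 'blue']
```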
Check out The Beginner's Guide to Computer Vision for more detailed information and resources on this process.
Cognitive computing (CC)
Cognitive computing's purpose is to create contextual inferences based on given data. This translates to machines solving problems similarly to their human counterparts.
Although there is no widely accepted definition for cognitive computing, most of these systems are designed to be adaptive, interactive, iterative, and contextual. This allows the algorithms to learn on the fly, and get smarter along the way.
You probably interact with CC technology on a daily basis without realizing it:
- Speech recognition (Siri, Alexa, Google Assistant)
- Sentiment analysis of survey data, online comments, and social media responses
- Face detection
- Fraud detection
- Wealth management
- Risk assessments
Natural language processing (NLP)
NLP gives computers the ability to recognize, interpret, and speak our languages. In a sense, this is the last leg of the race; we started at building the framework for this technology, and NLP allows us to interact with it. Computers can already complete tasks faster and better than humans, and NLP allows for seamless interactions between humans and machines.
NLP algorithms teach computers to understand human language contextually, thereby producing logical responses. In the future, we might be using NLP to communicate with our virtual assistants the way Tony Stark talks to his AI assistant Jarvis.
Currently, computers tend to take things literally, and they struggle with reading between the lines. NLP hopes to bridge the gap, and we can keep dreaming of a future where Siri and Alexa can appreciate our witty comments.
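A toy word-counting sentiment scorer shows what "taking things literally" looks like in practice. The word lists below are made up for the example; real sentiment models are far richer, but even this sketch captures the failure mode:

```python
import string

# Illustrative keyword lists; a real model would learn these from data.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "sad"}

def sentiment(text):
    """Score text by counting positive vs. negative words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    words = cleaned.split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this, it is great"))        # positive
print(sentiment("Oh great, another awful Monday"))  # neutral: sarcasm is lost
```

A human reads the second sentence as clearly negative; the literal word count cancels out to neutral. Bridging that gap is exactly what NLP research is after.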
There is an excellent Medium post on NLP. Read it here.
Deep learning (DL)
Deep learning is a subset of machine learning algorithms that's based on learning and understanding data representations. These algorithms extract features based on specific parameters and generate analyses accordingly.
The complexity of deep learning can be overwhelming, so just remember that this is a primary mechanism for machine learning. To learn more, read this.
Neural networks (NN)
Neural networks use the computer equivalent of neurons (called perceptrons) to create artificial networks inside of a computer. These networks learn by processing data and making associations based on the data.
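A single perceptron learning the logical AND function shows this "learn by adjusting" idea in miniature. This is a sketch of the classic perceptron update rule, not how production networks are built:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias by nudging them toward correct outputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # shift each weight toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Each wrong answer nudges the weights; after a handful of passes over the data, the associations settle and the perceptron reproduces AND.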
The more data, the better. The law of large numbers states that the more data available, the closer the average will be to the expected value. This translates into higher accuracy for networks that have access to plenty of data points. In the same way that great chefs know how to cook hundreds of dishes, great neural networks have plenty of data to pull from.
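The law-of-large-numbers point is easy to see with a quick simulation: the average of simulated coin flips drifts toward the expected value of 0.5 as the sample grows.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Average of fair coin flips (expected value 0.5) for growing sample sizes.
for n in (10, 1000, 100_000):
    avg = sum(random.random() < 0.5 for _ in range(n)) / n
    print(n, avg)
```

The small sample can land noticeably off 0.5; the large one hugs it, which is why networks with more data points tend toward higher accuracy.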
All in all, these components come together to create technology that we have previously experienced only in sci-fi films. Applications and uses may vary, but they are always linked by these six concepts.
This article is a light introduction to these concepts, as the long-form explanations are dense enough to fill hundreds of pages. I encourage you to explore videos, articles, and community groups to learn more about these technologies.