AI, machine learning and deep learning: what is the difference?


Hardly a product appears today without some marketer claiming that it was made with machine learning or has AI on board. The terms are in danger of becoming hollow. It is therefore important to look beyond the hype.

As soon as a catchy-sounding technology term gets enough attention to catch the eye of the world’s marketing specialists, it quickly degrades into a meaningless buzzword. We already saw it happen with blockchain and with hollow terms such as big data. These days, AI, machine learning and deep learning are getting the same hype treatment.

A hollow concept

For example, we are confronted with laughable but meaningless products such as AI suitcases, AI televisions and AI fitness trackers. Such use of the term rightly makes a professional wonder about the real usefulness of AI, if there is any. The truth is that the concept has become so broad and hollow that it says very little. Just about everything where a computer seems to make a decision is depicted as AI today. In that respect, you could even say that Windows 10 has an AI-driven update process: the operating system tries to install updates at a time that suits you. Smart!

Just because your PC or phone knows when an update is inconvenient does not mean you can suddenly attribute intelligence to the thing.

In reality, of course, there is little more at work than well-thought-out algorithms that superficially create the illusion of smartness. AI is usually a rudimentary form of predictive analytics. If your smartphone optimizes its battery by letting apps fall asleep when you don’t usually use them, that is handy, but hardly enough to call your phone truly intelligent.

On to the next term

Even the marketers seem to realize that they have worn out ‘AI’. On to the next hip word, then: machine learning. Or maybe they opt for deep learning if it should sound extra technical. Typically, the terms serve as a new label for the same old thing. A healthy dose of cynicism is therefore appropriate, but that does not mean that nothing has changed in recent years.

The rise of the internet, e-commerce and connected devices caused a genuine data explosion. A number of innovations in the way we teach computers things, coupled with the exponential growth of hardware capacity, led to the sudden emergence of new and innovative ways of handling that data.

Computing power as a catalyst

In concrete terms, we are currently witnessing an enormous evolution in the capacity of computer systems to recognize patterns in large amounts of data. Researchers developed algorithms that are capable of learning things through pattern recognition. The clearest example is image recognition. Put very simply, algorithms today are so good at recognizing photos of a cat because they were shown millions of photos of cats. Searching for patterns, they eventually discovered the parameters that make a cat a cat.

[Image: recognition of food, toys and plants] Image recognition algorithms today are trained to recognize specific features. Here you see how an animal-recognition algorithm sees an image that has been put through a feedback loop.

Artificial intelligence

To understand what is so interesting about that, it helps to line up the relevant terms. Artificial intelligence, or AI, is ultimately a very broad concept: it indicates that a computer shows signs of intelligence. AI can be specific or general. A specific AI is good at one or a handful of tasks: recognizing cats in photos, driving a car … A general AI does not yet exist, although digital assistants such as Amazon Alexa and the Google Assistant do their best to get close. A general AI is, in principle, capable of completing a multitude of completely different tasks, just like a human being.

Today we mainly live in the era of specific AIs. The capabilities of the algorithms are becoming increasingly complex. We are gradually approaching speech assistants that can hold a conversation, and there are already plenty of ‘intelligent’ cars that can drive independently on the motorway. When do you start talking about AI? That is a difficult question. A self-driving car is probably smart, but ultimately not really more intelligent than a pocket calculator, which is much better at arithmetic than you are.

A self-driving car is not really more intelligent than a pocket calculator.

To teach algorithms more complex tasks, researchers have shifted to a more general approach. Programming an AI’s behavior by hand used to seem like a good idea, but today it is clear that this does not work. It is much more efficient to give an algorithm the tools it needs to learn something itself.

Machine learning

We call this approach machine learning. In other words, machine learning is the best method that exists today to arrive at a specific AI. With machine learning, a computer is instructed to search through huge amounts of data for patterns and predictions. Programmers no longer have to hard-code specific tasks into an algorithm, since the computer learns the best approach from the available data. The task of the programmers is to provide the computer with a framework and an action plan that make machine learning possible.

[Image: machine learning] Programming an algorithm so that it knows what a person looks like is extremely difficult. It is much easier to help the algorithm discover for itself what makes a person a person.
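To make the idea concrete, here is a minimal sketch using the scikit-learn library. The toy features and the data are invented for illustration; the point is that no classification rule is ever written by hand, because the model infers the rules from labelled examples.

```python
# A minimal sketch of the machine learning idea using scikit-learn.
# The features and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each example: [weight in kg, has_whiskers (0/1)]; label: 1 = cat, 0 = dog
X = [[4.0, 1], [5.2, 1], [30.0, 0], [25.5, 0], [3.8, 1], [28.0, 0]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                    # the algorithm finds the patterns itself

print(model.predict([[4.5, 1]]))   # -> [1]: probably a cat
```

Nobody told the model that cats are light and have whiskers; it discovered that boundary itself from the data, which is exactly the shift described above.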

Deep learning

Finally, there is deep learning, the most common machine learning strategy today. Deep learning uses ‘artificial neural networks’: networks of digital neurons inspired by the human brain. The networks consist of different layers, each of which is capable of recognizing specific things. For example, a network that has to recognize a traffic sign will focus on shapes, colors and sizes. A first layer can look for an inverted triangle, a second for bright red, and a third for white. Each layer indicates whether it has found its item and how certain it is. A neural network for image recognition can thus look at a photo and recognize a priority sign, although that never happens with 100% certainty.
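As a rough illustration of such a layered network, here is a minimal sketch in PyTorch. The layer sizes and the traffic-sign framing are assumptions for the sake of the example; a real image recognition network would be convolutional and far larger.

```python
# A minimal sketch of a layered neural network (PyTorch assumed installed).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32 * 32 * 3, 128),  # first layer: raw pixel values go in
    nn.ReLU(),
    nn.Linear(128, 64),           # deeper layers combine simpler features
    nn.ReLU(),
    nn.Linear(64, 2),             # output: scores for "sign" / "no sign"
    nn.Softmax(dim=1),            # turn scores into certainties summing to 1
)

image = torch.rand(1, 32 * 32 * 3)  # a fake, flattened 32x32 RGB image
print(model(image))                 # e.g. tensor([[0.62, 0.38]]) - never 100%
```

Note that the output is a pair of certainties rather than a hard yes or no, which matches the observation that recognition never happens with 100% certainty.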

During training, the neural network is put into a kind of feedback loop. It gets to see images, for example of priority signs, and it must indicate each time whether or not the image contains a sign. If it is wrong, this is pointed out to it (usually through human intervention). The network then adjusts itself slightly in an attempt to be correct the next time. This can be done, for example, by letting certain neurons in the network weigh more heavily on the final decision.
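The sketch below shows that feedback idea in its simplest possible form: a single artificial neuron trained with the classic perceptron rule, a much simpler relative of the backpropagation used in real deep networks. The data is invented; the mechanism of ‘adjust slightly after every mistake’ is the point.

```python
# A toy feedback loop in plain NumPy: after every wrong answer the
# weights shift slightly, so the next attempt is right a bit more often.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # invented feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # invented labels: 1 = "sign"

w = np.zeros(2)                          # the weights to be adjusted
for _ in range(10):                      # a few passes over the data
    for features, label in zip(X, y):
        guess = int(features @ w > 0)    # the network's answer
        error = label - guess            # feedback: 0 when correct
        w += 0.1 * error * features      # adjust the weights slightly

print(w)  # the weights have drifted toward values that separate the classes
```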

Completely autonomous learning

The training of such a network often involves a human component, but not always. Some time ago, Google demonstrated how an algorithm taught itself to control a robot arm and pick things up with it. Trial and error is the key: initially, the algorithm moves the arm a little, without much result. Every time an attempt fails, the software tries something different and evaluates whether it worked better.
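A toy version of that trial-and-error loop might look like the sketch below: random search that only keeps a change when it scores better. The reward function is a hypothetical stand-in for ‘did the arm get closer to picking something up’; Google’s actual system was far more sophisticated.

```python
# A sketch of trial and error: keep a random change only if it scores better.
import random

def reward(angles):
    # Hypothetical score: higher as the (imaginary) arm nears a target pose.
    target = [0.3, -0.7, 0.5]
    return -sum((a - t) ** 2 for a, t in zip(angles, target))

angles = [0.0, 0.0, 0.0]          # start: the arm barely moves
best = reward(angles)
for _ in range(10_000):
    trial = [a + random.gauss(0, 0.05) for a in angles]  # try something new
    score = reward(trial)
    if score > best:              # keep the change only if it worked better
        angles, best = trial, score

print(angles)  # ends up close to the target pose, found purely by trying
```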

Advanced artificial neural networks have an enormous number of layers, which explains the ‘deep’ in deep learning. A well-trained and well-tuned neural network is a very powerful tool. In some cases, image recognition algorithms today are even better at their job than humans. The same now applies to (English-language) speech recognition.

Deep learning via neural networks has been around as a concept for a long time. As soon as computer scientists started thinking about AI, such networks were seen as an option. In practice, researchers could do little with them, since training the algorithms requires massively parallel computing power. Only since the GPU established itself as hardware for deep learning has the technology really come into its own. That explains the recent explosion in AI-related applications.

Predictive analytics

Specific AI often looks very much like predictive analytics. Certainly in practice, many applications do not go much further than some implementation of it. Predictive analytics does not have to be based on deep learning and can usually be much simpler. Think again of your smartphone, which supposedly uses AI to optimize your battery. In the most technical sense of the word, that is not a lie, but although the function is useful, it is hard to speak of an AI smartphone. Many examples of what you could safely call marketing AI are ultimately a repackaged form of an efficient analytics platform.
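How simple can predictive analytics be? The sketch below shows a hypothetical battery optimizer that does nothing more than count when you open apps. The usage log and the threshold are invented, but the principle illustrates how such a feature can work without any deep learning.

```python
# A hypothetical battery optimizer: pure counting, no neural network.
from collections import Counter

usage_log = [("mail", 9), ("mail", 9), ("news", 22), ("mail", 10),
             ("news", 22), ("game", 20)]          # (app, hour it was opened)

counts = Counter(usage_log)

def may_sleep(app, hour, threshold=2):
    """Let the app sleep unless it is often used around this hour."""
    return counts[(app, hour)] < threshold

print(may_sleep("mail", 9))   # False: you check mail at 9, keep it awake
print(may_sleep("game", 9))   # True: safe to let it sleep at 9 o'clock
```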

The most successful examples of truly advanced AI applications can be found in areas such as speech recognition and image recognition. It is not surprising that researchers are strongly committed to these: on the one hand, they allow computers to converse with us in a much more natural way, and on the other hand, they give computers the power to really help us with certain matters.

In practice

One concrete example is the image recognition of Einstein, the smart assistant of the sales and marketing platform Salesforce. Einstein can help a marketer monitor social media for images of a product. If it finds such an image, it can estimate whether the post is positive or negative and inform marketers so that they can respond.

AI in chatbots is also quite advanced today. Language is interpreted in such an advanced way that consumers no longer have to give specific commands to an automated help desk to get help. It is enough to ask a question, and the chatbot will usually be smart enough to understand the essence of the matter. Google goes a step further with Duplex, an advanced version of the Assistant that is meant to make restaurant reservations by telephone with a human on the other end of the line.

AI is also very successful in health care. IBM in particular is strongly committed to this with Watson. Watson is able to help doctors with diagnoses, simply because it is better at interpreting medical data such as scans and, moreover, can draw on an enormous stock of historical data.

Irrelevant

The best part of it all: everything above is irrelevant from a business point of view. Although it is interesting to know the difference between the most important terms in the AI world and what they stand for, that knowledge alone does not help you make the right decisions. The best advice: if you see AI, machine learning or deep learning somewhere in a product description, just ignore it. Look at what the product aims to do and ask yourself whether that is useful for you. Whether it is predictive analytics or an ultramodern AI: if the result helps you, it is interesting; otherwise it is not.
