Artificial Intelligence is a term coined in the last century to describe human-like capabilities exhibited by human creations.
As researchers tried to bring these capabilities to fruition, they realized that one of the ways humans acquire skills is through learning. Hence, Machine Learning came into being as a discipline.
Now, as we know, humans have many different ways of learning. One of them is the structured way. In this method, if we wanted to determine correlation and/or causation, we would study the domain, carefully select the features that are logically relevant, conduct experiments, and then evaluate the impact. This sort of approach corresponds to a Feature Engineering-based approach to Machine Learning.
With such an approach, teams usually spend a large share of their time (about 90%) on feature selection and engineering, and relatively little on model building. The models are based on statistical methods, with algorithms chosen to get the best result on the metrics relevant to the use case.
The main statistical or stochastic methods used in this practice are Regression, K-Means, K-Nearest Neighbors, Decision Trees, and ensemble methods like Random Forest and XGBoost. This type of Machine Learning could be called "Traditional" Machine Learning.
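To make the "Traditional" workflow concrete, here is a minimal sketch of a K-Nearest Neighbors classifier in pure Python. The feature values and labels below are made up for illustration; in practice the features would be the product of the careful selection and engineering described above.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort all training points by Euclidean distance to the query
    dists = sorted((math.dist(x, query), label) for x, label in zip(train_X, train_y))
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Hand-engineered features: (petal_length, petal_width) -- illustrative values
train_X = [(1.4, 0.2), (1.3, 0.2), (4.7, 1.4), (4.5, 1.5), (5.9, 2.1), (6.0, 2.2)]
train_y = ["setosa", "setosa", "versicolor", "versicolor", "virginica", "virginica"]

print(knn_predict(train_X, train_y, (1.5, 0.3)))  # -> setosa
```

Note that the model itself is trivial; all the intelligence lives in which features were chosen, which is exactly the 90/10 split of effort described above.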
Deep Learning is also a way of acquiring AI capabilities by learning, so it is a subset of Machine Learning. Specifically, it is a type of Representational Machine Learning.
In Representational Machine Learning, the algorithm learns the features from data automatically.
The method of learning in Representational (and hence Deep) Learning is very different from the Feature Engineering-based one. Here, the learning method is inspired by the biology of the brain, where multiple neurons work together to accomplish very complex tasks. Though the origin of deep learning was biological, the advances since have been creative and have drawn on many inspirations.
The prefix "Deep" signifies the complexity and amount of computation done in this architecture. Feature selection and engineering are not significant in this approach, because the algorithm (or the network) is capable of figuring out patterns and deducing information thanks to its computational capacity and sophistication.
As mentioned earlier, multiple architectures are possible in deep learning, starting from the Perceptron and moving to the Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Generative Adversarial Network (GAN), etc.
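As a sketch of the simplest architecture in that list, the Perceptron, here is a single artificial neuron learning the logical AND function. The learning rate and epoch count are illustrative choices; the point is that no decision rule is hand-engineered, and the neuron finds one from the examples alone.

```python
def train_perceptron(samples, epochs=10, lr=1):
    """Train a single neuron with a step activation (the classic Perceptron rule)."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # nudge weights toward the target on each mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND: the neuron learns the rule from labeled examples, not from a formula
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Stacking many such neurons in layers, with smoother activations and gradient-based training, is what turns this toy into the deep architectures listed above.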
In fact, even Deep Learning has specialized subsets, such as Reinforcement Learning, and techniques like the Generative Adversarial Network (GAN) share ideas with the Reinforcement Learning domain.
So, in summary, Machine Learning methods divide into Traditional and Representational, and Deep Learning is a type of Representational Learning. Jon Krohn's book Deep Learning Illustrated has more details on this topic in Chapter 2; the link is at the bottom of the post.
In an Ideal world, if we named ‘Deep Learning’ as ‘Representational Deep Machine Learning’, it may remove at least some confusion and even lead to better understanding faster. I wrote a post on LinkedIn a few days back about the titles and they propagate confusion and fuel debate.