Using Transfer Learning as A Powerful Baseline for Deep Learning



"After supervised learning — Transfer Learning will be the next driver of ML commercial success."

- Andrew Ng, one of the world’s foremost data scientists

Machine learning has evolved far beyond the early days of simply feeding datasets into algorithms. In the world of data science, and machine learning more specifically, deep learning has carved out its niche among scientists and engineers at companies across the world, proving beyond doubt that it can match or even surpass human-level proficiency in analyzing and extracting features from fixed datasets and even from streaming data.

The question that’s probably on every data scientist’s mind today is: “How do we make this human-intensive training process simpler, faster and more accurate without returning to the drawing board, especially when there isn’t enough data to train a model?”

Enter The World of Transfer Learning

To begin with, transfer learning is just what it sounds like: a baseline that piggybacks on already successful models, so that training for a new problem starts from richer, more refined feature representations instead of from scratch. By definition, transfer learning is a machine learning strategy in which a model trained for one task is repurposed for another task.

Transfer learning takes a different approach from a traditional machine learning workflow, in which a dataset is fed into a single model and the output is a trained ML model built in isolation. In transfer learning, a model trained on a source task (or source domain) is reused and then fine-tuned to perform a target task (or target domain). The assumption, in this case, is that the source task and the target task are adequately similar in nature. Transfer learning is especially useful in cases where data for the target task or domain is sparse.
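The source-to-target workflow described above can be sketched in a few lines. This is a minimal NumPy illustration, not code from any particular library: the labelling rule, dataset sizes and learning rates are all illustrative assumptions. A simple logistic classifier is trained on a plentiful source dataset, and its weights are then reused as the starting point for a sparse, slightly shifted target dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(X, y, w):
    # Average logistic (cross-entropy) loss of weights w on (X, y).
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train(X, y, w, lr=0.5, epochs=200):
    # Plain gradient descent on the logistic loss, starting from w.
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Source domain: plentiful labelled data for some underlying rule.
X_src = rng.normal(size=(500, 8))
y_src = (X_src[:, 0] > 0).astype(float)
w_src = train(X_src, y_src, np.zeros(8))

# Target domain: the same underlying rule, slightly shifted inputs,
# and only a handful of samples (the sparse-data setting).
X_tgt = rng.normal(loc=0.3, size=(20, 8))
y_tgt = (X_tgt[:, 0] > 0).astype(float)

# Reuse the source weights as the starting point and fine-tune briefly.
w_tgt = train(X_tgt, y_tgt, w_src.copy(), epochs=20)
```

Because the two tasks are similar, the transferred weights already perform well on the target data before any target training happens; the brief fine-tuning pass only has to adapt them, which is why so little target data suffices.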

Typical Transfer Learning Model

A deep convolutional neural network automates both feature extraction and classification, which is why it is considered superior to most other forms of machine learning algorithms. However, training one from scratch is very resource intensive. Transfer learning solves this problem by reusing a model trained on a similar classification problem as the base for the new problem. The two main methodologies for transfer learning are feature extraction and fine-tuning.

Feature Extraction

In a deep convolutional neural network, using transfer learning for feature extraction means treating the convolutional layers of a pre-trained network as a fixed feature extractor: the frozen layers compute meaningful features from each new data sample, and only a new classifier trained on top of those features is learned.
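A minimal NumPy sketch of this idea follows; it is an illustration under stated assumptions, not the API of any framework. A randomly chosen frozen weight matrix stands in for a pre-trained convolutional base, and only a new logistic classifier head is trained on the features it produces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained convolutional base: these weights are
# frozen and never receive gradient updates (sizes are illustrative).
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def extract_features(X):
    # Frozen forward pass: no gradients flow into W_frozen.
    return np.maximum(X @ W_frozen, 0.0)  # ReLU activation

def logistic_loss(feats, y, w):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train_head(feats, y, lr=0.5, epochs=300):
    # Train ONLY the new classifier head; the extractor stays fixed.
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-feats @ w))
        w -= lr * feats.T @ (p - y) / len(y)
    return w

# Small synthetic target-task dataset (transfer learning shines
# precisely when target data is this sparse).
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)

feats = extract_features(X)  # computed once, then reused every epoch
w_head = train_head(feats, y)
```

Note the practical payoff: because the base never changes, the features can be computed once and cached, so each training epoch only touches the small classifier head.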

Fine Tuning

Fine-tuning, on the other hand, replaces the last few layers of the pre-trained convolutional neural network with new classifier layers for the target classes, and then continues training, typically updating the later layers of the network with a small learning rate, so that the learned features themselves adapt to predict the new input classes successfully.
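The contrast with pure feature extraction can be sketched in NumPy as well; again this is an illustrative toy, with hypothetical "pre-trained" weights and made-up learning rates, not any framework's implementation. Here the pre-trained base layer is also updated during training, but with a much smaller learning rate than the freshly initialised head, so its learned features shift only gently toward the new task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative "pre-trained" base weights and a brand-new head.
W_base = rng.normal(size=(64, 16)) / np.sqrt(64)
w_head = np.zeros(16)

def forward(X, W, w):
    h = np.maximum(X @ W, 0.0)         # ReLU hidden features
    p = 1.0 / (1.0 + np.exp(-h @ w))   # sigmoid output
    return h, p

def fine_tune(X, y, W, w, lr_head=0.5, lr_base=0.01, epochs=300):
    for _ in range(epochs):
        h, p = forward(X, W, w)
        d_out = (p - y) / len(y)             # dLoss/dLogit
        grad_w = h.T @ d_out                 # gradient for the head
        d_h = np.outer(d_out, w) * (h > 0)   # backprop through ReLU
        grad_W = X.T @ d_h                   # gradient for the base
        w -= lr_head * grad_w                # head learns quickly
        W -= lr_base * grad_W                # base is only nudged
    return W, w

# Small synthetic target-task dataset.
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)

W_tuned, w_tuned = fine_tune(X, y, W_base.copy(), w_head.copy())
```

The two learning rates encode the design choice: the new head has everything to learn, while the pre-trained base should mostly be preserved and only adjusted where the target task demands it.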

Transfer Learning as a Baseline for Deep Learning CNNs

The biggest advantage of transfer learning is its modularity and the fact that it builds atop already-trained models, which may have been trained on massive datasets. Most machine learning models, supervised or unsupervised, are trained in isolation and on single datasets, and solving real-world problems with them requires heavy resources and datasets numbering in the millions, which may not be readily available.

Deep learning is already seen as one of the most promising technologies to have emerged from machine learning and data science, with applications ranging from healthcare, where it classifies images of internal organs, to computer vision tasks such as spam-image filtering and even autonomous driving. Transfer learning can solve the singular problem deep learning otherwise has: repeatedly training CNNs from scratch, with massive resource requirements both in terms of neural network layers and in terms of training data. Most common libraries, including PyTorch, Keras and DL4J, support reusing pre-trained networks across tasks, making the entire deep learning process easier with transfer learning.

Data Science is evolving at a scorching pace, and keeping up with the discipline requires rigor, dedication and constant upgrading of one’s skills. Get certified with the latest skills in advanced Data Science and keep your career on the high-flying trajectory you’ve always wanted it to have.

Brought to you by DASCA