AI Learning Strategies: Transfer Learning, Meta Learning, and Multi-Task Learning

Aishwarya
2 min read · Aug 27, 2024


How Models Learn New Tasks

Transfer Learning, Multi-Task Learning, and Meta-Learning are all techniques for training a model to learn new tasks, but they have different underlying approaches[1].

Transfer Learning

In Transfer Learning, we fine-tune a large pre-trained model on a narrower, more specific task. Unlike traditional training, where we would build a new model from scratch for every new task, transfer learning takes knowledge previously learned in the same domain and applies it to the new task.

For example, in the movie The Martian, Mark Watney, an astronaut stranded on Mars, uses his existing knowledge as a botanist and engineer to survive. He adapts his skills from Earth to tackle Martian challenges.
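To make the idea concrete, here is a minimal PyTorch/torchvision sketch of transfer learning by fine-tuning: we load a ResNet-18 pre-trained on ImageNet, freeze its backbone so the previously learned features are reused, and replace the classification head for the new task. The 5-class setup and the synthetic batch are placeholder assumptions, not tied to any particular dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on a large source dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone so its learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new, more specific task
# (a hypothetical 5-class problem).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a tiny synthetic batch, just to show the update;
# in practice the images and labels would come from the new task's DataLoader.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 5, (4,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Freezing the backbone and training only the new head is the cheapest variant; unfreezing some or all layers with a small learning rate is another common option when more task data is available.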

Multi-Task Learning

As opposed to Single-Task Learning, where we train a model on a single task over a single dataset, in Multi-Task Learning (MTL) we train a model on multiple tasks at once by leveraging a shared data and learning space. The training data is aggregated and shared across tasks to enhance learning.

Not only is MTL cost- and compute-efficient, it also maps well to many real-world problems, which tend to be multi-modal or multi-task. For instance, we often want to predict multiple diseases simultaneously from a patient's symptoms, or the multiple active compounds in a drug. In such settings, model accuracy can actually improve with the number of tasks the model learns concurrently.

MTL can be homogeneous, where the tasks are similar, e.g. classifying text by both sentiment and topic, or heterogeneous, where it mixes different types of tasks, such as a recommender system that predicts both the likelihood (regression) of a user watching a movie and the star rating (classification) they might give it.
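Sketching that heterogeneous recommender example with hard parameter sharing (the layer sizes, feature dimension, and equal loss weighting are illustrative assumptions): a shared encoder feeds a regression head for the watch likelihood and a classification head for the star rating, and the model trains on the sum of the two task losses.

```python
import torch
import torch.nn as nn

class MultiTaskRecommender(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task."""
    def __init__(self, num_features: int, num_star_classes: int = 5):
        super().__init__()
        # Shared representation learned jointly from both tasks.
        self.encoder = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.watch_head = nn.Linear(32, 1)                  # regression: watch likelihood
        self.rating_head = nn.Linear(32, num_star_classes)  # classification: star rating

    def forward(self, x):
        z = self.encoder(x)
        return self.watch_head(z).squeeze(-1), self.rating_head(z)

model = MultiTaskRecommender(num_features=20)

# Synthetic batch of 8 user/movie feature vectors with targets for both tasks.
x = torch.randn(8, 20)
watch_target = torch.rand(8)               # likelihood of watching, in [0, 1]
rating_target = torch.randint(0, 5, (8,))  # star rating class 0..4

watch_pred, rating_logits = model(x)
# Joint objective: (equally weighted) sum of the per-task losses.
loss = nn.MSELoss()(watch_pred, watch_target) \
     + nn.CrossEntropyLoss()(rating_logits, rating_target)
loss.backward()
```

Because both losses back-propagate through the same encoder, the shared layers are pushed toward features that help every task, which is where the benefits listed below come from.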

Why does MTL work?

  • Regularization
  • Representation Bias
  • Feature Sharing, also called “eavesdropping”

Meta-Learning

Meta-learning (Hospedales et al., 2021), also called “learning to learn”, is an approach where we use knowledge previously learned across a variety of tasks to rapidly adapt to a new task. The motivation behind meta-learning is to learn new tasks from limited data, through techniques like few-shot or k-shot learning.
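As a rough illustration of “learning to learn”, below is a minimal Reptile-style sketch (one simple meta-learning algorithm; MAML and metric-based few-shot methods are common alternatives) on toy sine-regression tasks. The tiny MLP, hyperparameters, and task distribution are assumptions made purely for illustration: the meta-parameters are repeatedly adapted to a sampled task with a few gradient steps, then nudged toward the adapted weights, so that a brand-new task can later be fit from only a handful of examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is a sine wave y = a*sin(x + p) with random amplitude/phase."""
    a, p = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: a * np.sin(x + p)

def init_weights():
    """Tiny 1 -> 32 -> 1 MLP with tanh activation."""
    return {"W1": rng.normal(0, 0.5, (1, 32)), "b1": np.zeros(32),
            "W2": rng.normal(0, 0.5, (32, 1)), "b2": np.zeros(1)}

def predict(w, x):
    return np.tanh(x @ w["W1"] + w["b1"]) @ w["W2"] + w["b2"]

def grads(w, x, y):
    """Manual backprop of the mean-squared error through the tiny MLP."""
    h = np.tanh(x @ w["W1"] + w["b1"])
    d_pred = 2 * (h @ w["W2"] + w["b2"] - y) / len(x)
    d_h = d_pred @ w["W2"].T * (1 - h ** 2)
    return {"W2": h.T @ d_pred, "b2": d_pred.sum(0),
            "W1": x.T @ d_h, "b1": d_h.sum(0)}

meta_w = init_weights()
inner_lr, meta_lr, inner_steps, k_shot = 0.02, 0.1, 5, 10

for it in range(2000):
    task = sample_task()
    x = rng.uniform(-np.pi, np.pi, (k_shot, 1))  # k support examples for this task
    y = task(x)
    # Inner loop: adapt a copy of the meta-parameters to the sampled task.
    w = {k: v.copy() for k, v in meta_w.items()}
    for _ in range(inner_steps):
        g = grads(w, x, y)
        w = {k: w[k] - inner_lr * g[k] for k in w}
    # Outer (Reptile) update: move the meta-parameters toward the adapted ones.
    meta_w = {k: meta_w[k] + meta_lr * (w[k] - meta_w[k]) for k in meta_w}

# A brand-new task is adapted from just k examples; compare test error of the
# raw meta-parameters vs. the few-shot-adapted parameters.
new_task = sample_task()
xs = rng.uniform(-np.pi, np.pi, (k_shot, 1))
ys = new_task(xs)
w = {k: v.copy() for k, v in meta_w.items()}
for _ in range(inner_steps):
    g = grads(w, xs, ys)
    w = {k: w[k] - inner_lr * g[k] for k in w}

xq = np.linspace(-np.pi, np.pi, 50).reshape(-1, 1)
yq = new_task(xq)
print("MSE before adaptation:", float(np.mean((predict(meta_w, xq) - yq) ** 2)))
print("MSE after adaptation: ", float(np.mean((predict(w, xq) - yq) ** 2)))
```

The key point is that the outer loop never optimizes for any single task; it optimizes for an initialization from which a few gradient steps on k examples are enough, which is exactly the few-shot setting described above.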
