ICVSS Computer Vision in the Age of Large Language Models

Lifelong learning for visual representations

Diane Larlus

Naver Labs Europe, FRA

Abstract

Whether it comes from observing how a child's brain develops, or from considering the most common applications of machine learning in a practical context, our intuition is that learning should be incremental: the parameters of a prediction model should not be estimated in a single offline process, but should instead allow for incremental updates when new data becomes available or when the model's behavior needs to be adjusted. Several challenges then arise, such as mitigating the catastrophic forgetting that typically occurs when models are updated to handle new tasks, or containing the computational cost of frequent updates. This continual learning process, often referred to as lifelong learning, has been envisioned since the early days of computer science and has recently gained more traction. It is now being revisited in light of the large pre-trained visual, language, and multimodal models that have become available. In this lecture, we will review the main families of approaches to lifelong learning and discuss a few examples of each.
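
To make the idea of incremental updates concrete, below is a minimal sketch of one common lifelong-learning strategy, experience replay, which mitigates catastrophic forgetting by mixing a few stored samples from past tasks into each update. It assumes a PyTorch classifier; the model, the hypothetical task_loader (which yields batches of inputs and labels for the new task), and the buffer size are all illustrative choices, not the specific methods covered in the lecture.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(32, 10)              # stand-in for a visual representation model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
replay_buffer = []                     # small memory of (input, label) pairs from past tasks
BUFFER_SIZE = 200

def update_on_new_task(task_loader):
    """Incrementally update the model on a new task while replaying old samples."""
    for x_cur, y_cur in task_loader:
        x, y = x_cur, y_cur
        # Mix the current-task batch with a few samples replayed from earlier tasks
        if replay_buffer:
            xs, ys = zip(*random.sample(replay_buffer, min(16, len(replay_buffer))))
            x = torch.cat([x_cur, torch.stack(xs)])
            y = torch.cat([y_cur, torch.stack(ys)])
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Store a few current-task samples for future replay (randomly overwrite old slots)
        for xi, yi in zip(x_cur, y_cur):
            if len(replay_buffer) < BUFFER_SIZE:
                replay_buffer.append((xi.detach(), yi))
            elif random.random() < 0.05:
                replay_buffer[random.randrange(BUFFER_SIZE)] = (xi.detach(), yi)

The same loop structure accommodates the other families of approaches discussed in the lecture: regularization-based methods would add a penalty term to the loss instead of replaying data, and parameter-isolation methods would restrict which parameters the optimizer is allowed to update for each task.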