AIML Research Seminar: Embracing Changes in Deep Learning: Continual Learning with Augmented and Modularised Memory (with Pre-trained Models)

Conventional deep learning (DL) approaches focus on final performance on fixed datasets and scenarios, and fail to handle novel requirements that arise dynamically in the real world. Continual learning (CL) aims to train deep neural networks (DNNs) to efficiently accumulate knowledge from dynamically arriving data and task streams, as humans do. The main challenge is enabling DNNs to learn from data and task streams with non-stationary distributions without catastrophic forgetting. To help DNNs retain past knowledge while accommodating future tasks, balancing stability and plasticity, we explore CL techniques from the viewpoint of augmenting and modularising the memorisation of DNNs. Given pre-trained models, do we still need continual learning, and how should we use it? This talk investigates how to continually perform adaptive learning with pre-trained models and how to continually enhance the pre-trained models themselves.
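To make the memory-based view of CL mentioned above concrete, the sketch below shows experience replay, one common rehearsal-style technique: a small buffer of past examples is replayed alongside new data so the network retains old knowledge (stability) while fitting new tasks (plasticity). This is a minimal, generic illustration, not the method presented in the talk; the names `ReplayBuffer`, `train_task`, and `MEMORY_SIZE` are hypothetical.

```python
# Minimal sketch of rehearsal-based continual learning (experience
# replay) in PyTorch. Illustrative only; not the speaker's method.
import random
import torch
import torch.nn.functional as F

MEMORY_SIZE = 200  # hypothetical buffer capacity


class ReplayBuffer:
    """Reservoir-sampled memory of past (input, label) pairs."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps a uniform sample of the stream.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_task(model, optimizer, loader, buffer):
    """One pass over a task's data, replaying stored past examples."""
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        if buffer.data:
            # Replay a batch of past examples alongside the new data to
            # mitigate catastrophic forgetting of earlier tasks.
            mx, my = buffer.sample(x.size(0))
            loss = loss + F.cross_entropy(model(mx), my)
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x, y):
            buffer.add(xi.detach(), yi.detach())
```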

Dong Gong is a Senior Lecturer and ARC DECRA Fellow (2023-2026) at the School of Computer Science and Engineering (CSE), University of New South Wales (UNSW). He is also an Adjunct Lecturer at AIML. After receiving his PhD in December 2018, Dong worked as a Research Fellow at AIML until January 2022. His research interests are in computer vision and machine learning, with a focus on learning tasks with dynamic requirements and non-ideal supervision in real-world scenarios, such as continual learning.

Dr Dong Gong
