Joao Sacramento: Task-dependent neural shift and gain modulation for continual learning

ETH Zürich


Abstract

Learning neural network weights by backpropagating prediction errors has led to great success on difficult machine learning problems. Whether the brain uses this algorithm has been controversial since its inception and remains unsettled to date. In my talk, I will first review some of the issues that have made error backpropagation seem implausible as a neurobiological algorithm, as well as recent attempts to solve them. I will then focus on the problem of catastrophic forgetting, which arises when backpropagation-based learners must learn multiple tasks in sequence. I will present a complementary learning systems approach, in which one neural network learns to modulate another in a task-dependent manner, producing finely tuned changes in the target neurons’ response gains. Paired with a synaptic consolidation model, this strategy almost entirely eliminates forgetting on a series of sequential learning benchmarks, while developing mixed-selectivity neural responses throughout the network. Finally, I will discuss our algorithm’s compatibility with the architecture of cortex, where top-down input to layer 1 apical dendrites and VIP interneurons could exert the required task-dependent control over bottom-up signals.
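To give a flavor of the mechanism described above, the following is a minimal sketch of task-dependent shift and gain modulation, not the speaker's actual implementation: it assumes a simple per-task lookup table of gains and shifts (a hypothetical stand-in for the modulating network) applied to the pre-activations of a fixed target network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration
n_in, n_hidden, n_tasks = 8, 16, 3

# "Target" network weights (learned by backprop in the talk's setting)
W = rng.normal(scale=0.5, size=(n_hidden, n_in))

# Stand-in for the modulating network: a per-task table of
# multiplicative gains and additive shifts, one pair per hidden neuron.
task_gain = np.ones((n_tasks, n_hidden))    # multiplicative modulation
task_shift = np.zeros((n_tasks, n_hidden))  # additive modulation

def forward(x, task_id):
    """Bottom-up pass with task-dependent shift and gain applied
    to each hidden neuron's pre-activation, followed by a ReLU."""
    pre = W @ x
    modulated = task_gain[task_id] * pre + task_shift[task_id]
    return np.maximum(modulated, 0.0)

x = rng.normal(size=n_in)
h0 = forward(x, task_id=0)

# Illustrative second task: suppress half the neurons and amplify the
# rest, reusing the same weights W without overwriting them.
task_gain[1, : n_hidden // 2] = 0.1
task_gain[1, n_hidden // 2:] = 2.0
h1 = forward(x, task_id=1)
```

The point of the sketch is that the same input and the same weights `W` yield different hidden responses `h0` and `h1` depending only on the task context, which is one way to route sequential tasks through shared synapses without overwriting them.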


Organized by

Johannes Rieke/Matthew Larkum

Location: BCCN Berlin, lecture hall, Philippstr. 13 Haus 6, 10115 Berlin
