Sofia Pereira da Silva: Using Brain–Machine Interfaces to Identify Cortical Learning Algorithms

BCCN Berlin / Technische Universität Berlin

Abstract

By causally mapping neural activity to behavior (Golub et al., 2016), brain–machine interfaces (BMIs) offer a means to study the dynamics of sensorimotor learning. Here, we combine computational modeling and data analysis to study the neural learning algorithm (Portes et al., 2022) that monkeys use to adapt to a changed output mapping in a center-out reaching task. We exploit the fact that the mapping from neural space (ca. 100 dimensions) to the 2D cursor position poses an underconstrained credit assignment problem (Feulner and Clopath, 2021): changes along a large number of output-null dimensions do not influence the behavioral output. We hypothesized that different but equally well-performing learning algorithms can be distinguished by the changes they generate along these output-null dimensions. We study this idea in networks with three different learning rules [Gradient Descent (Rumelhart et al., 1986), model-based Feedback Alignment (Lillicrap et al., 2016), and Reinforcement Learning (Williams, 1992)] and three distinct learning strategies [Direct, Re-aiming (Menendez, 2021), Remodeling (Golub et al., 2015)] in feedforward and recurrent architectures. We find that different combinations of rules and architectures lead to changes in different low-dimensional subspaces of neural activity. Comparing these changes in neural activity and their subspaces with available data from BMI experiments (Golub et al., 2018; Hennig et al., 2018, 2021a) points towards monkeys employing a combination of distinct strategies to learn BMI tasks. Future work should continue exploring models with recurrent architectures to further increase biological realism.
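The output-null idea mentioned in the abstract can be made concrete with a small sketch (not from the thesis itself): for a hypothetical linear BMI decoder mapping ~100-dimensional neural activity to a 2D cursor, any change to the activity that lies in the decoder's null space leaves the cursor output unchanged. The decoder matrix and dimensions below are illustrative assumptions.

```python
# Minimal illustration (assumed setup, not the thesis code): output-null
# dimensions of a hypothetical linear BMI readout D mapping 100 neurons
# to a 2D cursor position.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_outputs = 100, 2

# Hypothetical decoder: neural activity (100D) -> cursor position (2D)
D = rng.standard_normal((n_outputs, n_neurons))

# Orthonormal basis of the null space of D via SVD: the rows of Vt
# beyond rank(D) span the 98 output-null dimensions.
_, _, Vt = np.linalg.svd(D)
null_basis = Vt[n_outputs:]  # shape (98, 100)

r = rng.standard_normal(n_neurons)  # some neural activity pattern
# A change in activity confined entirely to the null space:
delta = null_basis.T @ rng.standard_normal(n_neurons - n_outputs)

# The cursor output is identical, so behavior alone cannot constrain
# how learning changes activity along these dimensions.
print(np.allclose(D @ r, D @ (r + delta)))  # True
```

This underconstraint is exactly why equally performing learning algorithms can differ: they may produce distinct activity changes within the null space while yielding the same cursor behavior.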

 

Additional information:

Master thesis defense

 

Organized by:

Prof. Dr. Henning Sprekeler & Prof. Dr. Klaus Obermayer

 

Location: Room MAR 5.013, MAR Building, TU Berlin, Marchstraße 23, 10587 Berlin
