Yael Niv, Princeton University

How we learn and represent the structure of tasks

In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories about the effects of dopamine on learning in the basal ganglia. However, the first ingredient in any reinforcement learning algorithm is a representation of the task as a sequence of states. Where do these state representations come from? In this talk, I will first suggest that humans use attention processes (alongside reinforcement learning processes) to identify task structure and learn state representations from trial and error. I will then suggest that the orbitofrontal cortex is critical for representing the learned states, in particular in tasks in which states depend not only on externally observable information but also on internal information, for instance from working memory.

Organized by

John-Dylan Haynes