Research in animal learning and behavioral neuroscience has distinguished between two forms of action control: a habit-based form, which relies on stored action values, and a goal-directed form, which forecasts and compares action outcomes based on a model of the environment. While habit-based control has been the subject of extensive computational research, the computational principles underlying goal-directed control in animals have so far received less attention. Although there are a number of algorithmic possibilities, one particularly interesting approach frames goal-directed decision making as probabilistic inference over a generative model of action. I will introduce this computational perspective and explore some ways in which it may shed light on the neural mechanisms underlying goal-directed behavior.
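To make the contrast concrete, here is a minimal sketch in a toy one-step task, not taken from the talk: all probabilities, rewards, and variable names are illustrative assumptions. Habit-based control is represented by cached action values learned from sampled rewards, while goal-directed control is represented by conditioning a known generative model on a desired outcome and inferring the action that most likely produces it.

```python
# Toy illustration (hypothetical numbers): cached action values vs.
# goal-directed control framed as inference over a generative model.
import numpy as np

rng = np.random.default_rng(0)

# Assumed generative model of the environment: P(outcome | action)
# for 2 actions and 3 possible outcomes, plus a reward for each outcome.
p_outcome_given_action = np.array([
    [0.7, 0.2, 0.1],   # action 0
    [0.1, 0.3, 0.6],   # action 1
])
reward_of_outcome = np.array([0.0, 1.0, 5.0])

# --- Habit-based control: act on stored (cached) action values ---
# Values estimated from experience by incremental sample averaging.
q_cached = np.zeros(2)
counts = np.zeros(2)
for _ in range(200):
    a = rng.integers(2)
    o = rng.choice(3, p=p_outcome_given_action[a])
    counts[a] += 1
    q_cached[a] += (reward_of_outcome[o] - q_cached[a]) / counts[a]
habitual_action = int(np.argmax(q_cached))

# --- Goal-directed control as probabilistic inference ---
# Condition the generative model on the desired outcome ("goal") and
# infer the action: P(action | outcome = goal) ∝ P(goal | action) P(action).
goal = 2                                  # the high-reward outcome
prior_over_actions = np.array([0.5, 0.5])
posterior = p_outcome_given_action[:, goal] * prior_over_actions
posterior /= posterior.sum()
goal_directed_action = int(np.argmax(posterior))

print("cached Q-values:", q_cached, "-> habitual action", habitual_action)
print("P(action | goal):", posterior, "-> goal-directed action", goal_directed_action)
```

In this sketch both controllers happen to pick the same action; the difference is that the habitual choice depends only on stored values shaped by past rewards, whereas the goal-directed choice is recomputed by inference whenever the goal or the model changes.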
Video recording (45min)
Podcast interview (51min)