How Neuroscience can help to solve AI

Kai Ueltzhoeffer

October 09, 2017

Abstract

The ties between AI and neuroscience date back to the perceptron algorithm of the 1950s. Over the years, more and more knowledge about the human brain has accumulated, and it is being leveraged for AI solutions at an increasing rate. "Active Inference" is an emerging theory in neuroscience (Friston et al., 2010) that models human action and perception within a probabilistic framework called approximate Bayesian inference. Using state-of-the-art deep learning techniques, the resulting optimization objective can be used directly for training deep agents, offering a probabilistic alternative to reinforcement learning.
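The optimization objective referred to here is, in the standard variational-inference literature, the variational free energy. As a hedged sketch in conventional notation (not quoted from the abstract itself), for observations o, hidden states s, a generative model p, and an approximate posterior q:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
```

Since the KL divergence is non-negative, minimizing F both tightens the approximation q(s) to the true posterior and (implicitly) maximizes the evidence ln p(o); in Active Inference, action is cast as a further means of reducing this same quantity.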

Kai Ueltzhoeffer holds a PhD in Cognitive and Computational Neuroscience and is currently studying medicine at the University of Heidelberg to deepen his anatomical and biochemical understanding of the human brain. In his free time he develops deep active inference agents and writes blog posts about them (kaiu.me). We are very excited to feature Kai in this two-part lecture coming up in October.

Part I: "The Bayesian Brain", 9 October 2017

The first talk introduces the theory of Active Inference and the underlying principles of Bayesian modelling and variational inference. We will see how perceptual phenomena such as multisensory integration and optical illusions, neurophysiological observations such as predictive coding or the mismatch negativity, and actions, from motor responses to human choice behaviour, can be well explained by this model.
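As a flavour of the Bayesian account of multisensory integration mentioned above, here is a minimal sketch (my own illustration, not the speaker's code) of Bayes-optimal cue combination: two noisy Gaussian estimates of the same quantity, say a visual and a haptic size estimate, are fused by weighting each with its precision (inverse variance).

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Precision-weighted fusion of two Gaussian estimates of one quantity."""
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (prec_a + prec_b)              # fused variance is smaller than either input's
    mu = var * (prec_a * mu_a + prec_b * mu_b)  # more precise cue gets more weight
    return mu, var

# The precise cue (variance 1.0) dominates the fused estimate:
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
```

This precision-weighted averaging is the classic model behind human cue-combination experiments: the fused percept lies closer to the more reliable cue, and its variance is lower than either cue alone.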

Slides: Part I

Part II: "A Deep Active Inference Agent", 16 October 2017

While the first talk developed this theory of the human brain as an approximately Bayesian prediction machine from a neuroscientific perspective, the second shows how the resulting optimization objective can be used directly to train a deep agent to solve the mountain car problem, and discusses possible advantages over "classic" reinforcement learning approaches. Along the way, the machine learning tools used, such as variational auto-encoders and evolution strategies for the optimization of non-differentiable objectives, will be explained.
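To illustrate the last point, here is a hedged sketch (my own toy example, with made-up hyperparameters, not the agent from the talk) of a basic evolution strategy: it needs no gradients from the objective itself, instead estimating a search direction from randomly perturbed copies of the parameters.

```python
import numpy as np

def es_minimize(f, theta, sigma=0.2, lr=0.02, pop=50, iters=300, seed=0):
    """Minimize f by an evolution strategy: sample perturbations, rank them,
    and move the parameters toward the better-scoring ones."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        eps = rng.standard_normal((pop, theta.size))
        rewards = np.array([-f(theta + sigma * e) for e in eps])   # maximize -f
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta = theta + lr / (pop * sigma) * eps.T @ rewards       # stochastic ascent step
    return theta

# A non-differentiable objective (kinks at the optimum): |x - 3| + |y + 1|
f = lambda th: abs(th[0] - 3.0) + abs(th[1] + 1.0)
theta = es_minimize(f, np.zeros(2))
```

The same mechanism scales to neural-network parameter vectors, which is why evolution strategies are a convenient black-box optimizer when parts of an agent's objective are not differentiable.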

Slides: Part II