Applications of the free energy principle to machine learning and neuroscience
Author: Millidge, Beren
Date: 30/11/2021
Abstract
In this thesis, we explore and apply methods inspired by the free energy principle to
two important fields: machine learning and neuroscience. The free energy principle
is a general mathematical theory of the necessary information-theoretic behaviours
of systems which maintain a separation from their environment. A core postulate of
the theory is that complex systems can be seen as performing variational Bayesian
inference and minimizing an information-theoretic quantity called the variational free
energy. The free energy principle originated in, and has been extremely influential
within, theoretical neuroscience, where it has spawned a number of neurophysiologically
realistic process theories and maintains close links with Bayesian brain viewpoints.
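
To fix notation, the variational free energy referred to here takes the standard form used throughout the variational inference literature; the symbols below (observations o, latent states s, generative model p, approximate posterior q) follow common convention rather than quoting any particular chapter:

    \mathcal{F} = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
                = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o)

Since the Kullback-Leibler divergence is non-negative, minimizing \mathcal{F} both drives the approximate posterior towards the true posterior and provides an upper bound on the negative log model evidence, which is what licenses reading free energy minimization as approximate Bayesian inference.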
The thesis is split into three main parts, in which we apply methods and insights from
the free energy principle to questions of perception, action, and learning in turn.
Specifically, in the first part, we focus on the theory of predictive coding,
a neurobiologically plausible process theory derived from the free energy principle
under certain assumptions, which argues that the primary function of the brain is to
minimize prediction errors. We focus on scaling up predictive coding architectures,
simulating large-scale predictive coding networks for perception on machine learning
benchmarks; we investigate predictive coding's relationship to classical filtering
algorithms; and we demonstrate that many biologically implausible aspects of current
predictive coding models can be relaxed without unduly harming their performance,
allowing for a potentially more literal translation of predictive coding theory into
cortical microcircuits.
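
To make the process theory concrete, here is a minimal sketch of the kind of hierarchical predictive coding network studied in this part: top-down predictions, inference by gradient descent on the summed squared prediction errors, and purely local, Hebbian-like weight updates. The layer widths, tanh nonlinearity, learning rates, and clamping scheme are illustrative assumptions, not the exact architectures evaluated in the thesis.

    # Minimal predictive coding sketch (illustrative, not the thesis's exact model).
    # Top-down predictions, inference by gradient descent on prediction errors,
    # and local Hebbian-like weight updates.
    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [10, 32, 32, 4]      # layer widths, bottom (data) to top; assumed values
    L = len(sizes) - 1
    W = [0.1 * rng.standard_normal((sizes[l], sizes[l + 1])) for l in range(L)]

    f = np.tanh
    df = lambda v: 1.0 - np.tanh(v) ** 2

    def errors(x):
        # prediction error at every layer that receives a top-down prediction
        return [x[l] - W[l] @ f(x[l + 1]) for l in range(L)]

    def infer(x_data, x_top, n_steps=50, lr_x=0.1):
        """Relax hidden activations to minimize F = 0.5 * sum_l ||eps_l||^2."""
        x = [x_data] + [np.zeros(s) for s in sizes[1:-1]] + [x_top]
        for _ in range(n_steps):
            eps = errors(x)
            for l in range(1, L):   # data (bottom) and top layer stay clamped
                x[l] = x[l] - lr_x * (eps[l] - df(x[l]) * (W[l - 1].T @ eps[l - 1]))
        return x, errors(x)

    def learn(x, eps, lr_w=0.01):
        """Weight update using only quantities local to each connection."""
        for l in range(L):
            W[l] += lr_w * np.outer(eps[l], f(x[l + 1]))

    x, eps = infer(rng.standard_normal(10), np.eye(4)[2])  # clamp data and a one-hot target
    learn(x, eps)

The important structural point is that every update depends only on activations and errors available at the connection being modified, which is what makes the scheme a candidate for implementation in cortical microcircuits.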
In the second part of the thesis, we focus on the application of methods deriving from the
free energy principle to action. We study the extension of 'active inference', a
neurobiologically grounded account of action through variational message passing, to
utilize deep artificial neural networks, allowing these methods to scale up and become
competitive with state-of-the-art deep reinforcement learning methods. Additionally, we
show that these active-inference-inspired methods can bring conceptual clarity and novel
perspectives to deep reinforcement learning. We show how active inference reveals the
importance of deep generative models and model-based planning for adaptive action, as
well as of information-seeking exploration, which arises naturally within the active
inference framework. Finally, we provide a mathematically principled framework for
understanding and deriving many information-seeking exploration objectives through the
lens of a dichotomy between 'evidence' and 'divergence' objectives. We show that this
distinction is crucial for understanding and relating the many exploratory objectives
found in the reinforcement learning, active inference, and cognitive science
communities, and that it provides a general mathematical framework for specifying the
objectives underlying intelligent, adaptive behaviour.
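
To indicate the shape of this dichotomy, the expected free energy of a policy \pi in standard active inference formulations decomposes into an extrinsic, evidence-seeking term and an epistemic, divergence-seeking term. The decomposition below is the conventional one, written under the usual approximations rather than as a verbatim excerpt from the thesis:

    G(\pi) = \mathbb{E}_{q(o, s \mid \pi)}\big[\ln q(s \mid \pi) - \ln \tilde{p}(o, s)\big]
           \approx -\underbrace{\mathbb{E}_{q(o \mid \pi)}\big[\ln \tilde{p}(o)\big]}_{\text{extrinsic (evidence) value}}
                   - \underbrace{\mathbb{E}_{q(o \mid \pi)} D_{\mathrm{KL}}\big[q(s \mid o, \pi) \,\|\, q(s \mid \pi)\big]}_{\text{epistemic (divergence) value}}

Minimizing G thus selects policies that both realize preferred observations and maximize expected information gain, which is the sense in which goal-directed and information-seeking behaviour fall out of a single objective.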
In the third and final part, we focus on applications of the free energy principle to
questions of learning, where we attempt to understand how credit assignment can take
place in the brain. First, we demonstrate that, under certain conditions, the predictive
coding algorithm can closely approximate the backpropagation-of-error algorithm, which
underlies the training of essentially all contemporary machine learning architectures,
along arbitrary computation graphs, thus indicating a potential path to the direct implementation of machine
learning algorithms in neural circuitry. Second, we explore other algorithms for
biologically plausible credit assignment in the brain and present Activation Relaxation,
a novel algorithm that can approximate backprop using only local learning rules
substantially simpler than those required for predictive coding. We additionally show
that some of the relaxations that apply to predictive coding also work for the
Activation Relaxation algorithm, producing an elegant and effective algorithm for
locally approximating backprop in the brain.
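
The correspondence underlying these results can be stated compactly. For a computation graph with vertex activities v_i and loss L, predictive coding pairs each vertex with an error unit \varepsilon_i = v_i - \hat{v}_i, where \hat{v}_i is the prediction of v_i from its parents; the claim, stated here informally and under the conditions made precise in the thesis, is that at the equilibrium of the inference dynamics these errors converge to exactly the gradients backprop would compute:

    \varepsilon_i^{*} = \frac{\partial L}{\partial v_i} \quad \text{for every vertex } i,

so that the subsequent local, Hebbian-like weight updates reproduce the backprop weight gradients without any non-local transport of error information.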
In sum, we believe we have demonstrated the theoretical utility of the free energy
principle by showing how methods inspired by it can interface productively with other
fields, specifically neuroscience and machine learning, to develop and improve existing
methods, as well as inspire novel advances, across all three areas of perception,
action, and learning. Moreover, throughout this thesis, we implicitly demonstrate the
theoretical benefit of the FEP's unified treatment of these seemingly disparate
processes under the rubric of free energy minimization.