Relation learning and reasoning on computational models of high-level cognition
Puebla Ramírez, Guillermo Antonio
Relational reasoning is central to many cognitive processes, ranging from “lower” processes such as object recognition to “higher” processes such as analogy-making and sequential decision-making. The first chapter of this thesis gives an overview of relational reasoning and the computational demands it imposes on any system that performs it. These demands are characterized in terms of the binding problem in neural networks. There is a longstanding debate in the literature over whether neural network models of cognition are, in principle, capable of relation-based processing. In the second chapter I investigated the relational reasoning capabilities of the Story Gestalt model (St. John, 1992), a classic connectionist model of text comprehension, and of a sequence-to-sequence (Seq2Seq) model, a deep neural network for text processing (Bahdanau, Cho, & Bengio, 2015). In both cases I found that the models’ purportedly relational behavior was explainable by the statistics of their training datasets. I propose that both models fail at relational processing because of the binding problem in neural networks. In the third chapter I present an updated version of the DORA architecture (Doumas, Hummel, & Sandhofer, 2008), a symbolic-connectionist model of relation learning and inference that uses temporal synchrony to solve the binding problem, and I use this model to perform relational policy transfer between two Atari games. Finally, in the fourth chapter I present a relational reinforcement learning model that selects relevant relations, from a potentially large pool of applicable relations, to characterize a problem and learns simple rules from the reward signal, helping to bridge the gap between reinforcement learning and relational reasoning.