Representational principles of function generalization
dc.contributor.advisor
Lucas, Christopher
dc.contributor.advisor
Ramamoorthy, Subramanian
dc.contributor.author
León-Villagrá, Pablo
dc.date.accessioned
2021-09-17T11:31:49Z
dc.date.available
2021-09-17T11:31:49Z
dc.date.issued
2020-11-30
dc.description.abstract
Generalization is at the core of human intelligence. When the relationship between continuous-valued data is generalized, generalization amounts to function learning. Function learning is important for understanding human cognition, as many everyday tasks and problems involve learning how quantities relate and subsequently using this knowledge to predict novel relationships. While function learning has been studied in psychology since the early 1960s, this thesis argues that questions regarding representational characteristics have not been adequately addressed in previous research.

Previous accounts of function learning have often proposed one-size-fits-all models that excel at capturing how participants learn and extrapolate. In these models, learning amounts to acquiring the details of the presented patterns. Instead, this thesis presents computational and empirical results arguing that participants often learn abstract features of the data, such as the type of function or the variability of its features, rather than the details of the function itself.

While previous work has emphasized domain-general inductive biases and learning rates, I propose that these biases are more flexible and adaptive than previously suggested. Given contextual information that sequential tasks share the same structure, participants can transfer knowledge from previous training to inform their generalizations.

Furthermore, this thesis argues that function representations can be composed to form more complex hypotheses, and that humans are sensitive to, and sometimes generalize according to, these compositional features. Previous accounts of function learning had to postulate a fixed set of candidate functions forming a participant's hypothesis space, and ultimately struggled to account for the variety of extrapolations people can produce. In contrast, this thesis's results suggest that a small set of broadly applicable functions, in combination with compositional principles, can produce flexible and productive generalization.
en
dc.identifier.uri
https://hdl.handle.net/1842/38071
dc.identifier.uri
http://dx.doi.org/10.7488/era/1342
dc.language.iso
en
en
dc.publisher
The University of Edinburgh
en
dc.relation.hasversion
Generalizing Functions in Sparse Domains. León-Villagrá, P., and Lucas, C.G. Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019
en
dc.relation.hasversion
Exploring the Representation of Linear Functions. León-Villagrá, P., Klar, V.S., Sanborn, A.N., and Lucas, C.G. Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019
en
dc.relation.hasversion
Data Availability and Function Extrapolation. León-Villagrá, P., Preda, I., and Lucas, C.G. Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018
en
dc.subject
generalization
en
dc.subject
function learning
en
dc.subject
patterns
en
dc.subject
extrapolation
en
dc.subject
domain-general inductive biases
en
dc.subject
contextual information
en
dc.title
Representational principles of function generalization
en
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Doctoral
en
dc.type.qualificationname
PhD Doctor of Philosophy
en
Files
Original bundle
- Name: León Villagrá2020.pdf
- Size: 5.42 MB
- Format: Adobe Portable Document Format