Computational framework of human causal generalization
Abstract
How do people decide how general a causal relationship is, in terms of the entities or
situations it applies to? How can people make these difficult judgments in a fast, efficient
way? To address these questions, I designed a novel online experiment interface
that systematically measures how people generalize causal relationships, and developed
a computational modeling framework that combines program induction (about the hidden
causal laws) with non-parametric category inference (about their domains of influence)
to account for distinctive patterns in human causal generalization. In particular, by
introducing adaptor grammars to standard Bayesian-symbolic models, this framework
formalizes conceptual bootstrapping as a general online inference algorithm that gives
rise to compositional causal concepts.
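To give a flavour of what combining these two components involves, the following is a minimal, hypothetical Python sketch, not the thesis implementation: it scores a small hand-written space of causal-law "programs" jointly with a Chinese-restaurant-process partition of objects into categories. The law names, prior weights, and trial data are invented purely for illustration.

# Minimal, hypothetical sketch: joint inference over symbolic causal-law
# programs and a CRP-based partition of objects into categories.
# All law names, weights, and data are invented for illustration only.
from math import log

# Toy causal-law programs: each maps (agent category, recipient category)
# to the probability that the effect occurs.
LAWS = {
    "agents_in_cat0_cause":     lambda a, r: 0.9 if a == 0 else 0.1,
    "recipients_in_cat0_block": lambda a, r: 0.1 if r == 0 else 0.9,
    "always_causes":            lambda a, r: 0.9,
    "never_causes":             lambda a, r: 0.1,
}
LAW_PRIOR = {name: 1.0 / len(LAWS) for name in LAWS}  # uniform prior over programs

def crp_log_prob(partition, alpha=1.0):
    # Log probability of a category assignment under a Chinese restaurant process.
    counts, logp = {}, 0.0
    for i, z in enumerate(partition):
        n = counts.get(z, 0)
        logp += log(alpha if n == 0 else n) - log(i + alpha)
        counts[z] = n + 1
    return logp

def partitions(n):
    # Enumerate all partitions of n objects as canonical label sequences.
    if n == 0:
        yield []
        return
    for rest in partitions(n - 1):
        for z in range(max(rest, default=-1) + 2):
            yield rest + [z]

# Observed trials: (agent object, recipient object, did the effect occur?)
DATA = [(0, 2, True), (1, 2, False), (0, 3, True)]
N_OBJECTS = 4

def log_likelihood(law, partition):
    logp = 0.0
    for agent, recipient, effect in DATA:
        p = law(partition[agent], partition[recipient])
        logp += log(p if effect else 1.0 - p)
    return logp

# Unnormalised joint posterior over (causal law, object categorisation).
posterior = {
    (name, tuple(z)): log(LAW_PRIOR[name]) + crp_log_prob(z) + log_likelihood(law, z)
    for name, law in LAWS.items() for z in partitions(N_OBJECTS)
}
best_law, best_partition = max(posterior, key=posterior.get)
print("MAP causal law:", best_law, "with categories:", best_partition)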
Chapter 2 investigates one-shot causal generalization, where I find that participants’
inferences are shaped by the order of the generalization questions they are asked. Chapter
3 looks into few-shot cases, and finds an asymmetry in the formation of causal categories:
participants preferentially tie causal laws to features of the agent objects rather
than of the recipients, but this asymmetry disappears when visual cues to causal agency
are challenged. The proposed modeling approach can explain both the generalization-order
effect and the causal asymmetry, outperforming a naïve Bayesian account while
providing a computationally plausible mechanism for real-world causal generalization.
Chapter 4 further extends this framework with adaptor grammars, equipping the model with
a dynamic conceptual repertoire that is enriched over time and allowing it to cache and later
reuse elements of earlier insights. This model predicts systematically different learned
concepts when the same evidence is processed in different orders, and across four experiments
people’s learning outcomes closely resembled this model’s predictions and differed
significantly from those of alternative accounts.
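To make the caching-and-reuse idea concrete, here is a second minimal, hypothetical sketch, again not the thesis model: concepts adopted on earlier trials are stored with reuse counts, and the prior over later hypotheses mixes those cached counts with the base grammar weights, so the concepts a learner settles on can depend on the order in which the same evidence arrives. All concept names, weights, and trials are invented for illustration.

# Minimal, hypothetical sketch of adaptor-grammar-style caching and reuse.
# Concept names, base weights, and trial data are invented for illustration only.
from math import log

# Base "grammar": primitive causal concepts with fixed base weights.
BASE_CONCEPTS = {
    "stripes_cause": 1.0,
    "dots_cause": 1.0,
    "stripes_and_dots_cause": 0.25,  # composite concept gets a lower base weight
}

cache_counts = {}  # adaptor: how often each concept has been adopted so far

def concept_log_prior(name, alpha=1.0):
    # CRP-style mixture of cached reuse counts and base grammar weights.
    total_cached = sum(cache_counts.values())
    base_total = sum(BASE_CONCEPTS.values())
    reuse = cache_counts.get(name, 0)
    base = alpha * BASE_CONCEPTS[name] / base_total
    return log((reuse + base) / (total_cached + alpha))

def log_likelihood(name, trial):
    # Toy likelihood: does the concept predict the observed effect?
    features, effect = trial
    predicted = {
        "stripes_cause": "stripes" in features,
        "dots_cause": "dots" in features,
        "stripes_and_dots_cause": {"stripes", "dots"} <= set(features),
    }[name]
    return log(0.9 if predicted == effect else 0.1)

def learn(trial):
    # Pick the MAP concept for one trial, then cache it for future reuse.
    scores = {c: concept_log_prior(c) + log_likelihood(c, trial)
              for c in BASE_CONCEPTS}
    winner = max(scores, key=scores.get)
    cache_counts[winner] = cache_counts.get(winner, 0) + 1
    return winner

# Concepts cached on earlier trials bias inference on later ones,
# so the same trials presented in a different order can yield different concepts.
trials = [({"stripes"}, True), ({"stripes", "dots"}, True)]
print([learn(t) for t in trials])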