Trading Spaces: Connectionism and the Limits of Uninformed Learning
It is widely appreciated that the difficulty of a particular computation varies according to how the input data are presented. What is less well understood is the effect of this computation/representation trade-off within familiar learning paradigms. We argue that existing learning algorithms are often poorly equipped to solve problems involving a certain type of important and widespread regularity, which we call 'type-2' regularity. The solution in these cases is to trade achieved representation against computational search. We investigate several ways in which such a trade-off may be pursued, including simple incremental learning, modular connectionism, and the developmental hypothesis of 'representational redescription'. In addition, the most distinctive features of human cognition (language and culture) may themselves be viewed as adaptations enabling this representation/computation trade-off to be pursued on an even grander scale.
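The contrast between the two kinds of regularity can be made concrete with a small sketch (an illustration of our own, not an example drawn from the target article). Here the target depends on a relation between two inputs (equality), so each raw input taken alone carries no statistical signal about the output; recoding the input as the derived feature |x − y| turns the same problem into one a simple statistical learner could solve.

```python
# Sketch of a 'type-2'-style regularity: the target depends on a
# *relation* between inputs (equality), which leaves no trace in the
# statistics of either raw input taken on its own.
data = [(x, y) for x in range(8) for y in range(8)]
labels = [1 if x == y else 0 for x, y in data]

def correlation(xs, ys):
    """Pearson correlation, hand-rolled to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Uninformed, direct statistics: each raw input is uncorrelated with the target.
r_x = correlation([x for x, _ in data], labels)
r_y = correlation([y for _, y in data], labels)

# Trading representation against search: recode the input as |x - y|.
# The relational regularity now shows up as a simple threshold
# (label 1 exactly when the derived feature is 0), i.e. a 'type-1' problem.
recoded = [abs(x - y) for x, y in data]
r_recoded = correlation(recoded, labels)

print(round(r_x, 3), round(r_y, 3), round(r_recoded, 3))
```

The point of the sketch is that no amount of extra search over the raw inputs reveals the regularity; a single step of re-representation does.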