Efficacy, Content and Levels of Explanation
Let’s consider the following paradox (Fodor , Jackson and Pettit , Dretske , Block , Lepore and Loewer , Lewis , Segal and Sober ): (i) the intentional content of a thought (or any other intentional state) is causally relevant to its behavioural (and other) effects; (ii) intentional content is nothing but the meaning of internal representations; but (iii) internal processors are sensitive only to the syntactic structures of internal representations, not to their meanings. It seems, then, that if we want to defend the idea (absolutely plausible from an intuitive point of view) that mental or intentional states are causally responsible for behavioural outputs, and we want to do so on the physicalist basis of any scientific methodology, we will have to give up the conviction that such intentional states qua intentional, i.e. as having a particular meaning, are the ones causally responsible for our behaviour. The path that takes us to mental epiphenomenalism is clear: (1) the causal powers of any event are completely determined by its physical properties; (2) although intentional properties supervene on physical properties, they can’t be identified with them; (3) therefore intentional properties, as intentional, are not causally responsible for behaviour, because they don’t contribute to the causal powers of the states to which they belong, i.e., intentional properties are epiphenomenal.

Let’s consider now a different yet parallel position to the one just described. There is an important debate in cognitive science about whether the class of mechanisms to which we belong, and to which the computational modelling project of cognitive processes refers, is best represented by classical or connectionist approaches (McClelland, Rumelhart et al. , Smolensky , Fodor and Pylyshyn , Pinker and Prince , Clark , Ramsey, Stich and Rumelhart , Clark and Karmiloff-Smith [forthcoming]). In classical, serial processing models, information is encoded in terms of rules that have a linguistic character.
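The syntax-sensitivity claim in (iii), and the classical picture of linguistically structured rules, can be made concrete with a toy sketch (the rule, tokens and function names here are hypothetical illustrations, not drawn from any model in the literature): a processor that applies modus ponens purely as a syntactic rewrite, firing on the shape of its symbol structures and never on what the symbols mean.

```python
def modus_ponens(knowledge):
    """Derive new tokens from a set of symbol structures.

    The only thing the processor inspects is syntactic form: a triple
    ('if', p, q) together with the token p licenses adding q. Nothing
    in the computation depends on any interpretation of the tokens.
    """
    derived = set(knowledge)
    changed = True
    while changed:
        changed = False
        for item in list(derived):
            if isinstance(item, tuple) and len(item) == 3 and item[0] == 'if':
                _, p, q = item
                if p in derived and q not in derived:
                    derived.add(q)
                    changed = True
    return derived

# A "meaningful" knowledge base and a meaningless one: same computation.
print('wet' in modus_ponens({('if', 'rain', 'wet'), 'rain'}))   # True
print('Q7' in modus_ponens({('if', 'X29', 'Q7'), 'X29'}))       # True
```

Swapping interpreted tokens for arbitrary ones leaves the processing unchanged, which is exactly the sense in which the processor is sensitive to syntax rather than meaning.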
In connectionist or parallel distributed processing (PDP) models, the causal relationships among the units that constitute the system determine how the information is processed by the network, although these units don’t have a direct semantic interpretation. In both classical and connectionist models, however, all the computations can be explained without any reference to the content of the processed information, i.e., in both cases the properties that seem to be responsible for the system’s behaviour are ultimately physical properties, not intentional ones. This situation thus mirrors, within cognitive science, the philosophical discussion concerning the causal efficacy of semantic properties.

Now, if in this debate we opt for the classical paradigm, there is a way of finding a solution to the computational version of the epiphenomenalism paradox. This solution is based mainly on the notion of supervenience or, more precisely, on the notion of mereological supervenience (Kim )¹. The idea of intentional properties supervening on physical properties makes sense within the classical context because there exists an easily isolable supervenience base comprising the syntactic items in the so-called language of thought. But what happens if we opt for the connectionist paradigm? The situation here doesn’t seem to favour the use of the same supervenience strategy. For it has been argued (Ramsey, Stich and Garon ) that beliefs, desires, and other mental states are not, in the connectionist paradigm, individuable as weight or activation states of the system. This is because information is encoded by the network in distributed and superpositional representations, i.e., there are no straightforwardly isolable vehicles at the physical level that can be identified as the articulated supervenience base on which the semantic properties supervene².
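The superpositional point can be illustrated with a minimal linear associator, a standard Hebbian sketch (the patterns below are arbitrary illustrations, not a model of any particular content): two input–output associations are stored in one and the same weight matrix, so no separable part of the matrix is "the" vehicle of either association.

```python
import numpy as np

# Two arbitrary input->output associations, with orthogonal input cues.
a_in  = np.array([1., -1.,  1., -1.])
a_out = np.array([1.,  1., -1., -1.])
b_in  = np.array([1.,  1., -1., -1.])
b_out = np.array([-1., 1., -1.,  1.])

# Hebbian storage: each association contributes an outer product, and the
# contributions are summed into a single weight matrix. Every weight is a
# blend of terms from BOTH associations; no subset of weights realizes
# just one of them.
W = np.outer(a_out, a_in) + np.outer(b_out, b_in)

# Recall: because the input cues are orthogonal, each cue retrieves its
# own output pattern from the shared, superposed store.
recall_a = W @ a_in / a_in.dot(a_in)
recall_b = W @ b_in / b_in.dot(b_in)
print(np.allclose(recall_a, a_out))  # True
print(np.allclose(recall_b, b_out))  # True
```

Both memories are perfectly recoverable, yet the storage is distributed and superposed: there is no weight, unit or region one could point to as the isolable physical vehicle of the first association as opposed to the second. This is the structural feature behind the worry that the connectionist has no articulated supervenience base to offer.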
If this is true, then the connectionist not only loses the battle against epiphenomenalism but, more drastically, seems to offer a standing invitation to eliminativism, since talk of beliefs, desires, etc. now seems to be floating free of any acceptable scientific underpinning; that is, she has lost the theoretical apparatus necessary for supporting the intuitive idea that propositional attitudes (beliefs, desires and any mental states with semantic content) are physically realized. This second line of argumentation doesn’t take us to a paradox but to a dilemma: either we accept eliminativism, if the connectionist hypotheses are correct, or we defend the causal efficacy of mental states with semantic content by showing that, after all, connectionist networks are not plausible cognitive models (Davies ). From my point of view, however, both lines of argumentation need to be revised. The aim of this paper is to find an account of the causal efficacy of content that avoids the aforementioned epiphenomenalist objections and that doesn’t require the discovery of inner symbols in the computational modelling of such contentful mental states. In short, the aim is to find a meeting point where a philosophical story about content and cause and the connectionist computational model can be brought together (cf. Clark ).