Thinking animal: from automatic-nonconscious processes to controlled-conscious thought
Authors
Coelho Do Nascimento, Rafael Augusto
Abstract
In this dissertation, I present a model of cognition which intends to explain higher-order cognitive abilities, usually defined as conscious and controlled, taking into account the picture emerging from the cognitive sciences, according to which most—or all—cognitive processes are nonconscious and automatic. The first two chapters of the dissertation constitute the negative part of the argument, motivating an alternative view. The last three chapters gradually build the positive view, which shall not only explain the general mannerisms of higher-order cognition (the “Easy” problems of cognition) but ultimately solve “The Hard Problem of Cognition”, bridging the persistent gap between the general automaticity of cognitive processes and the personal level control some cognitive phenomena seem to entail.
The negative part starts with an analysis of Dual-Process Theories (DPTs). These conceptualize the cognitive mind by dividing it in two. On the one hand, we have Type 1 processing, in which autonomous and nonconscious processes (might) result in conscious perceptual or intuitive outputs. On the other hand, we have Type 2 processes, which use Working Memory, are effortful and serial, and are, at the least, more conscious and somehow controlled. This framework has faced strong criticism, but it remains one of the most prominent theories in psychology. In the last few years, many authors (e.g., Evans & Stanovich, 2013; Carruthers, 2015) have abandoned the “received version”, in which a long list of features distinguished the two “systems”, in favour of a different kind of approach: most features constitute correlations, and only a few work as defining features. In Evans & Stanovich’s case, for instance, the defining features are 1) the autonomy of Type 1 processes and 2) the use of Working Memory by Type 2 processes. In order to analyse this new variation of the dual typology, I start by distinguishing two categories of Dual-Process Theories (DPTs), disambiguating the meaning of “Type 2 processes”: for instrumental DPTs, Type 2 processes are personal level ones; for substantial DPTs, Type 2 processes are subpersonal level ones (according to Dennett’s distinction). As it happens, most dual-process theorists (including the ones I focus on, that is, Evans & Stanovich and Carruthers) endorse a substantial version of the duality (in contrast with, e.g., Frankish). This means that, when they claim Type 2 processes exist, they mean there is some kind of subpersonal mechanism present in Type 2 processing that 1) is absent in Type 1 processing and 2) does not align with the definition of a Type 1 process.
I show that the arguments used to defend a substantial DPT only support an instrumental version and that, as such, there are no reasons to hold the belief that qualitatively different processes are involved in the personal level activity of thinking consciously. First, although the use of Working Memory by Type 2 processes and their controlled nature—as opposed to autonomous—might support the existence of a different level of processing, it does not support the existence of a different type of subpersonal process, as the mechanisms involved in the personal level activity of thinking consciously might be the same ones active in intuition and perception. Second, I also show that the activities usually ascribed to Working Memory’s executive component (attention, manipulation of information, inhibition, etc.) are also better understood as Type 1 processes, as they are present in Type 1 processing and align with the definition of a Type 1 process. Then, we are left with a challenge: explaining the workings, mannerisms and evolution of Type 2 cognition without resorting to any additional cognitive process—and with no “centre” to perform it (this includes the absence of subpersonal Type 2 processes both in terms of Working Memory and control and in terms of the processes usually taken as deployed by the executive part of Working Memory in response to its contents). I thus present my positive proposal as an alternative to the present conceptualization, taking this question as the starting point: How can we make sense of a virtual system approach in order to explain higher cognition without relying on subpersonal Type 2 processes? Or, if we invert the question using the classical terminology: how can we make sense of Type 2 reflective processing relying only on Type 1 processes?
To build an alternative, I start by presenting a principle for access-conscious contents that, while implicitly and widely accepted for Type 1 processing, is somehow questioned for higher cognition. As such, this principle makes no assumptions about consciousness (conscious processes that would exist in Type 2 processing), the existence of further processes (Type 2), or contentious psychological constructs (Working Memory and its executive component). The principle can be divided into two sides of the same coin: on one side, every cognitive process is nonconscious and autonomous (Type 1); on the other, every access-conscious state is, by definition and nature, an output of nonconscious-autonomous processes.
In order to explain higher-order cognition starting from these assumptions—which are widely accepted for Type 1 processing—we just need two rules: 1) the systems that read globally available states are the same ones that write them; 2) Type 2 processing is a loop produced by the interaction between nonconscious, autonomous processes and conscious states (their outputs, which thereby work as inputs as well). Type 2 processing therefore happens when the outputs of nonconscious-autonomous processes become inputs to the same nonconscious-autonomous processes—in accordance with Global Workspace Theory, according to which these states are conscious in virtue of being globally accessible. This results in a feedback loop in which chains of related conscious events arise as “simulations” of action and perception. After presenting the model, juxtaposing it with several existing theories that align with its parts, I will present some evidence for its simulatory nature and show how it can explain and integrate 1) the feature correlations of the DPTs’ received version; 2) the data used to object to DPTs (thus explaining and integrating both sides of the discussion); and 3) the fact that the so-called System 2 appears to be an “Opaque Story-Teller”—“opaque” because there is nearly no epistemic access to thought processes and reasons, “storyteller” because Type 2 processing seems to have a confabulatory and argumentative nature (see Nisbett and Wilson, 1977; Mercier and Sperber, 2019).
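The two rules above describe a computational architecture, and their joint effect can be illustrated with a minimal toy sketch. The following Python fragment is purely illustrative (the function names and the association table are my assumptions, not part of the dissertation): a single autonomous "Type 1" process reads a globally available state and writes its output back to the same workspace, so each output becomes the input to the next cycle, yielding a chain of related states.

```python
def associate(state):
    # A stand-in for a nonconscious, autonomous (Type 1) process:
    # it maps a globally available content to a related content.
    # The table below is an arbitrary illustrative example.
    links = {"smoke": "fire", "fire": "danger", "danger": "flee"}
    return links.get(state)

def type2_loop(initial, steps=3):
    """Chain of 'conscious' states produced only by Type 1 processes:
    each output is broadcast back as the next input (rules 1 and 2)."""
    workspace = initial            # globally available state
    chain = [workspace]
    for _ in range(steps):
        output = associate(workspace)   # autonomous, subpersonal step
        if output is None:              # the sequence can stop
            break
        workspace = output              # output re-enters as input
        chain.append(workspace)
    return chain

print(type2_loop("smoke"))  # ['smoke', 'fire', 'danger', 'flee']
```

Note that no second type of process and no controller appears anywhere in the sketch: the "Type 2" character of the run lies entirely in the looping of outputs back into the same reader-writer system.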
Lastly, I confront the mentioned gap—the “hard problem”. On the one side, we know that cognition is fundamentally nonconscious and automatic (e.g., Strawson, 2003; Lieberman, 2007; Wegner, 2002; Sklar et al., 2021; Nisbett & Wilson, 1977; Bargh & Chartrand, 1999). On the other, “It is undeniable that deliberation contains many elements that are straightforwardly intentionally controlled” (Vierkant, 2022: 68; see also Wu, 2013; Jennings, 2022; Brent, 2023), without which—and without an explanation of which—the notion of agency is undermined (e.g., Wu, 2013; Schroeter, 2004). This creates a dissonance between the observation that most of our cognitive processing is automatic and the requirement that we explain how intentional, controlled cognitive processing is possible—and how it might emerge from its (automatic) foundations.
In my account, control is relative to a level. Without relying on any controlling entity—at the personal or at the subpersonal level—I start with the claim that, while automatic processes happen at the subpersonal level, they deliver outputs at the personal level, as conscious states or events, which behave as “voices”. If we generalize this principle, which is taken for granted in automatic processing (perception, intuition), then deliberation consists of a loop in which those outputs work as inputs to the same systems. At the personal level, this loop creates a “virtual” hub in which a personal level “process” emerges. This process, although its contents are automatically generated, is relatively controlled—the sequence of states, from the personal level perspective, can be stopped or interrupted—and controlling—each state, being a state of the person, not of the personal level system, controls the generation of the next state by working as an input. Metaphorically, if we take the suprapersonal (social) level into account and imagine a conversation between people, each voice is a suprapersonal level state—one that only exists as such—and, from the perspective of that level (e.g., a room), the emerging conversation is controlled, while each person is an autonomous black box that receives inputs and delivers outputs. I also distinguish intentional control as an instance of relative personal level control in which the main controlling factor connecting thoughts (e.g., semantics, affects), being isolated by attention and language, increases the sense of agency and the predictability of the next thought’s content. Motivating the proposal by refuting the attribution of control to controllers—personal or subpersonal—and uncoupling the terms “personal” and “personal level”, I explain how the model may contribute to unifying the concepts of automaticity and control in cognition.
Being able to explain the available data, this model accounts for controlled conscious cognition and for how it might emerge from its automatic, subpersonal foundations, without relying on questionable controlling centres, selves or psychological constructs. We already knew there was something like controlled processes, even though the mind seems to be, on the one side, entirely automatic and, on the other, devoted to a central controller; now we have a better grasp of what that might mean and how it might be possible. Since the mind is both the subject and the object of study, there is no way—nor is it my goal—to completely dispel its secrets. But I hope to shed some light on one of its central mysteries.