dc.description.abstract | Deep Learning has revolutionized artificial intelligence (AI) over the past decade (LeCun et al.
2015). This has led to the creation of ‘neurally inspired’ deep-neural-networks (DNNs). DNNs are
claimed to be biologically realistic in the sense of incorporating key mechanistic or architectural
features of the brain, such as a hierarchical structure. Interestingly, they are also said to exhibit
similar behavioral capacities, as they are capable of performing at 'near human-level' on a variety of
behavioral tasks (Kriegeskorte 2015). These similarities have led researchers to propose DNNs as
biologically realistic models of behavior and brain function (ibid.). In this paper, I argue that there
are at least two concerns in relating DNNs to the brain. First, I suggest that DNNs might not, as
they stand, exhibit biologically realistic behavior. Second, I argue that there are key mechanistic
dissimilarities between biological and artificial neural networks that impede a so-called model-mechanism-mapping relationship. Thus there is a mismatch between the two systems at both the
behavioral and implementational levels. The explanatory status of DNNs is accordingly called into
question. To defend this position, I draw on the mechanistic philosophy of science, a framework
within which to assess a model's explanatory status (Craver 2007). | en |