On the granting of moral standing to artificial intelligence: a pragmatic, empirically-informed, desire-based approach
Date: 26/07/2020
Author: Novelli, Nicholas Alexander
Abstract
Increasingly complex AI technology, with ever more impressive capabilities, is being
introduced into society. As AI advances, it will become harder to
tell whether machines are relevantly different from human beings in terms of the
moral consideration they are owed. This is a significant practical concern. As more
advanced AIs become part of our daily lives, we could face moral dilemmas in which we
are forced to choose between harming a human and harming one or several of these
machines. Given these possibilities, we cannot withhold judgement about AI moral
standing until we achieve logical certainty; we need guidance to make decisions now. I
will present a pragmatic framework that will enable us to have sufficient evidence for
decision-making, even if it does not definitively prove which entities have moral
standing.
First, I defend adopting a welfarist moral theory, on which having the capacity for well-being is what determines whether a being has moral standing. I then argue that a desire-based
theory of welfare is acceptable to a wide range of positions and should be adopted. It
is therefore necessary to articulate a theory of desire, and I demonstrate by reference
to discourse in ethics that a phenomenological conception of desire is most
compatible with the way ethical theory has been discussed.
From there, we need to establish a test for possessing the capacity for
phenomenological desire. This can be accomplished by finding observed cases where
a lack of specific morally-relevant phenomenal states inhibits the performance of a
certain task in humans. If a machine can consistently exhibit the behaviour in
question, we have evidence that it has the phenomenal states necessary for moral
standing. With reference to recent experimental results, I present clear and testable
criteria such that if an AI were to succeed at certain tasks, we would have a reason to
treat it as though it had moral standing, and demonstrate that modern-day AI
has so far given no evidence that it has the phenomenal experiences that would give
it moral standing. The tasks in question are tests of moral and social aptitude.
Success at these tests would not be certain proof of moral standing, but it would be
sufficient to base our decisions on, which is the best we can hope for at the moment.
Finally, I examine the practical consequences of these conclusions for our future
actions. Adopting this particular criterion has significant and interesting implications,
and may substantially change our assessment of whether applications of this research
are worth their costs and risks.