Edinburgh Research Archive

On the granting of moral standing to artificial intelligence: a pragmatic, empirically-informed, desire-based approach

View/Open
Novelli2020.pdf (858.9 KB)
Date
26/07/2020
Author
Novelli, Nicholas Alexander
Abstract
Increasingly complex AI technology is being introduced into society, with ever more impressive capabilities. As AI technology advances, it will become harder to tell whether machines are relevantly different from human beings in terms of the moral consideration they are owed. This is a significant practical concern. As more advanced AIs become part of our daily lives, we could face moral dilemmas in which we are forced to choose between harming a human and harming one or more of these machines. Given these possibilities, we cannot withhold judgement about AI moral standing until we achieve logical certainty; we need guidance to make decisions now. I present a pragmatic framework that will give us sufficient evidence for decision-making, even if it does not definitively prove which entities have moral standing. First, I defend adopting a welfarist moral theory, on which having the capacity for well-being is what gives a being moral standing. I then argue that a desire-based theory of welfare is acceptable to a wide range of positions and should be adopted. It is therefore necessary to articulate a theory of desire, and I demonstrate by reference to discourse in ethics that a phenomenological conception of desire is most compatible with the way ethical theory has been discussed. From there, we need a test for possessing the capacity for phenomenological desire. This can be accomplished by finding observed cases in which a lack of specific morally relevant phenomenal states inhibits the performance of a certain task in humans. If a machine can consistently exhibit the behaviour in question, we have evidence that it has the phenomenal states necessary for moral standing. With reference to recent experimental results, I present clear and testable criteria such that, if an AI were to succeed at certain tasks, we would have reason to treat it as though it had moral standing, and I show that modern-day AI has so far given no evidence of the phenomenal experiences that would confer moral standing. The tasks in question are tests of moral and social aptitude. Success at these tests would not be certain proof of moral standing, but it would be a sufficient basis for our decisions, which is the best we can hope for at the moment. Finally, I examine the practical consequences of these conclusions for our future actions. Adopting this particular criterion has significant and interesting implications for whether applications of this research are worth the costs and risks.
URI
https://hdl.handle.net/1842/37297

http://dx.doi.org/10.7488/era/583
Collections
  • Philosophy PhD thesis collection
