Edinburgh Research Archive

University of Edinburgh

  • ERA Home
  • Social and Political Sciences, School of
  • Science Technology and Innovation Studies
  • Science Technology and Innovation Studies thesis and dissertation collection

Seeing affect: knowledge infrastructures in facial expression recognition systems

View/Open
Catanzariti2023.pdf (2.975 MB)
Date
16/06/2023
Author
Catanzariti, Benedetta
Abstract
Efforts to process and simulate human affect have come to occupy a prominent role in Human-Computer Interaction as well as in the development of machine learning systems. Affective computing applications promise to decode human affective experience and provide objective insights into usersʼ affective behaviors, ranging from frustration and boredom to states of clinical relevance such as depression and anxiety. While these projects are often grounded in psychological theories that have been contested in both scholarly and public domains, practitioners have remained largely agnostic to this debate, focusing instead on developing applicable technical systems or advancing the fieldʼs state of the art. I take this controversy as an entry point to investigate the tensions related to the classification of affective behaviors and how practitioners validate these classification choices. This work offers an empirical examination of the discursive and material repertoires ‒ the infrastructures of knowledge ‒ that affective computing practitioners mobilize to legitimize and validate their practice. I build on feminist studies of science and technology to interrogate and challenge the claims of objectivity on which affective computing applications rest. By looking at research practices and commercial developments of Facial Expression Recognition (FER) systems, the findings unpack the interplay of knowledge, vision, and power underpinning the development of machine learning applications of affective computing. The thesis begins with an analysis of historical efforts to quantify affective behaviors and of how these are reflected in modern affective computing practice.
Here, three main themes emerge that will guide and orient the empirical findings: 1) the role that framings of science and scientific practice play in constructing affective behaviors as “objective” scientific facts, 2) the role of human interpretation and mediation required to make sense of affective data, and 3) the prescriptive and performative dimensions of these quantification efforts. This analysis forms the historical backdrop for the empirical core of the thesis: semi-structured interviews with affective computing practitioners across the academic and industry sectors, including the data annotators labelling the modelsʼ training datasets. My findings reveal the discursive and material strategies that participants adopt to validate affective classification, including forms of boundary work to establish credibility as well as the local and contingent work of human interpretation and standardization involved in the process of making sense of affective data. Here, I show how, despite their professed agnosticism, practitioners must make normative choices in order to ʻseeʼ (and teach machines how to see) affect. I apply the notion of knowledge infrastructures to conceptualize the scaffolding of data practices, norms and routines, psychological theories, and historical and epistemological assumptions that shape practitionersʼ vision and inform FER design. Finally, I return to the problem of agnosticism and its socio-ethical relevance to the broader field of machine learning. Here, I argue that agnosticism can make it difficult to locate the technologyʼs historical and epistemological lineages and, therefore, obscure accountability. I conclude by arguing that both policy and practice would benefit from a nuanced examination of the plurality of visions and forms of knowledge involved in the automation of affect.
URI
https://hdl.handle.net/1842/40685

http://dx.doi.org/10.7488/era/3446
Collections
  • Science Technology and Innovation Studies thesis and dissertation collection

Library & University Collections Home | University of Edinburgh Information Services Home
Privacy & Cookies | Takedown Policy | Accessibility | Contact