Double-edged sword of AI & big data: a socio-technical examination of fairness in alternative lending
Authors
Kim, Savina Dine
Abstract
In recent years, algorithmic fairness has emerged as a significant social concern,
particularly as AI and automated decision-making systems are employed in high-stakes environments. These systems play a growing role in the distribution of crucial resources
and opportunities, and can ultimately affect an individual’s livelihood. However,
the literature on fairness in AI has developed according to several problematic
assumptions and agendas, namely: 1) it generalizes and abstracts away from the social
and application-specific context in which the system is deployed and 2) it often
assumes fairness is solely a technical problem with a technical solution. By contrast,
this thesis proposes an alternative agenda, using an empirical approach that draws
cross-disciplinary synergies from the computational and social
sciences, combined with expert domain knowledge in credit lending. The thesis
investigates how to create fairer outcomes in practice for a specific industry, i.e.,
financial services, considering its rich history of combatting discrimination,
regulatory requirements, and pre-established norms and methods. Specifically, it
undertakes this necessary reconceptualization of algorithmic fairness and provides a
socio-technical analysis of alternative lending by means of three studies.
The first part of the thesis is a 40-year systematic literature review which
engages with the state-of-the-art in fair ML research specific to the credit domain,
covering multiple disciplines, geographies, technological eras, and phases in the
application process. Its major contribution is an innovative systematic framework
representing the multi-dimensional knowledge structure. It also identifies five key
gaps in the literature, three of which are addressed in the subsequent experiments.
Next is the first empirical study, which engages with the question of Open Banking
and the use of bank transaction data: how it can be used for good (i.e.,
predicting financial vulnerability), but also how it harbors a risk of indirect
discrimination against marginalized and disadvantaged segments of society. The
second empirical study builds on the first, empowered with post-lending
default behavior as well as an expanded set of alternative credit data types. It addresses
issues of fairness and intersectionality, drawing inspiration from social studies and noting
that individuals can, and often do, hold multiple, overlapping identities.
Taken together, the three studies contribute to responsible and ethical AI
initiatives by translating the proposed normative concepts into good practices which
practitioners can ultimately adopt. The thesis provides demonstrated examples of how
algorithmic fairness can materialize, be examined, and ultimately, safeguarded.
Furthermore, it draws out implications for discrimination policy, critiquing the limits of
current regulation, which is quickly becoming outdated and ineffective in a
changing technological environment. Greater volumes of digitalized data have
enabled easier triangulation of sensitive information, making certain individuals more
vulnerable to discriminatory harms. In other words, the technology designed to make
those invisible in contemporary financial markets visible is the same technology
that can precisely identify them, for better or for worse. Practical implications
are provided for financial services and its heads of function, computer scientists,
product managers, as well as regulators and policy makers.