Interdisciplinary Lunchtime Seminar

Why Explainable AI Needs Feminist Philosophy: Epistemic Injustice, Situated Knowledge, and the Amelioration of Algorithmic Bias

Dr. Linus Huang
(Society of Fellows in the Humanities, HKU)

Date/Time: September 7, 2021, 12:00 noon – 1:00 pm (HK time)
Venue: Conducted via Zoom
Enquiry: ihss@hku.hk

Overview

Title: Why Explainable AI Needs Feminist Philosophy: Epistemic Injustice, Situated Knowledge, and the Amelioration of Algorithmic Bias

Speaker: Dr. Linus Huang (Society of Fellows in the Humanities, HKU)

Date/Time: September 7, 2021, 12:00 noon – 1:00 pm (HK time)

Venue: Conducted via Zoom

Language: English

Enquiry: ihss@hku.hk

Abstract

Artificial intelligence (AI) systems are increasingly adopted to make decisions in business, criminal justice, education, and beyond. However, such algorithmic decision systems can exhibit pervasive algorithmic biases against marginalized social groups and thereby undermine social justice. Explainable artificial intelligence (XAI) is a recent development that aims to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against a dominant view in XAI for bias reduction, which we call Technical XAI. According to Technical XAI, the identification and interpretation of algorithmic bias can be handled more or less entirely by technical experts who specialize in XAI methods. Drawing on resources from feminist philosophy of science and feminist epistemology, we show why Technical XAI is mistaken: the proper identification and interpretation of algorithmic bias require rich background knowledge and interpretive resources, which can only be made available by involving a diverse group of stakeholders in the relevant processes. We also suggest how feminist theories can help shape such social-epistemic processes in XAI to facilitate the amelioration of algorithmic bias.

About the Speaker

Linus Huang is a philosopher of cognitive science, technology, and artificial intelligence. He received his Ph.D. in History and Philosophy of Science from the University of Sydney and held postdoctoral fellowships at Academia Sinica, Taiwan, and the University of California, San Diego. His research program explores the implications of computational cognitive neuroscience for the nature of agency and the human mind, and his work has been published in Synthese, Philosophical Psychology, and Philosophy & Technology. At HKU, Linus is conducting a new research project entitled ‘Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias’. Implicit bias, in which one unintentionally acts on the basis of stereotypes or prejudice concerning social identities such as ethnicity, sexuality, and class, causes harm; reducing it is therefore a key issue in promoting social justice. Current approaches to intervention largely assume a dualistic framework that studies cognition detached from its bodily and environmental contexts. Incorporating insights from embodied approaches can rectify these shortcomings and open up a large area of the solution space for promising interventions that can accelerate social progress. The project will expand AI’s potential for promoting equity, contributing to debates in the philosophy of mind, the psychology of implicit social cognition, and critical technology studies.
