Public Writing, Policy, and Media
Public Writing
1. "Technology Can't Fix Algorithmic Injustice".
Boston Review, January 9, 2020.
Co-authored with Elena Di Rosa and Hochan "Sonny" Kim. Link.
This essay is featured on a number of philosophy syllabi around the world. If you are using our text in your teaching, please let me and my co-authors know.
Abstract: There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them. Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?” Ultimately, we must resist the apocalypse-saturated discourse on AI that encourages a mentality of learned helplessness—and citizens must come to view issues surrounding AI as a collective problem for all of us rather than a technical problem just for them.
2. "AI Ethics: Seven Traps".
Freedom to Tinker, the Center for Information Technology Policy's blog (Princeton University). Co-authored with Bendert Zevenbergen. March 25, 2019. Link. Download.
Abstract: The pursuit of AI ethics is subject to a range of possible pitfalls, which have recently led to a worrying trend of industry practitioners and policy-makers dismissing ethical reasoning about AI as futile ('Is ethical AI even possible?'). Much of the public debate on the ethical dimensions of machine learning systems does not actively include ethicists or experts in relevant adjacent disciplines, such as political and legal philosophers. As a result, a number of inaccurate assumptions about the nature of ethics, and about its usefulness for evaluating the larger social impact of AI, have permeated the public debate. We outline seven 'AI ethics traps': the reductionism trap, the simplicity trap, the relativism trap, the value alignment trap, the dichotomy trap, the myopia trap, and the rule of law trap. In doing so, we hope to provide a resource for readers who want to navigate the public debate on the ethics of AI in an informed and nuanced way, and who want to think critically and constructively about ethical considerations in science and technology more broadly.
Policy

1. "Written evidence submission to the UK Public Bill Committee: Voyeurism (Offences) (No. 2) Bill, 'Upskirting'".
July 10, 2018. Co-authored with Alice Schneider.
DOI: 10.13140/RG.2.2.35120.81925. Link. Download.
Abstract: We welcome the addition of the proposed section 67A to the Sexual Offences Act 2003 in an effort to tackle the practice of 'upskirting' in a comprehensive, conceptually clear, and victim-centered way, instead of relying on the option of prosecuting upskirting perpetrators under the more general offence of outraging public decency. However, we argue that the current draft of 67A relies on an overly restrictive picture of the relevant purposes of upskirting. In addition, we draw the Committee's attention to upskirting-adjacent practices of image-based online sexual harassment currently not covered by 67A. Lastly, we provide a number of critical feminist reflections on defining particular areas of persons' bodies as explicitly sexualised, an approach that fails to take into account important cultural and religious differences and might thus constitute an obstacle to the adequate legal protection of minorities.
Audio & Video

1. Podcast: The Verdict: Law & Society.
Audio recording of the Law & Justice Forum "AI and the Criminal Law," King's College London (2019).
My comments on a panel with Roger Brownsword and Sylvie Delacroix, chaired by John Tasioulas. My comments begin at 50:00 and end at 1:12:00. Topics include my research on how wrongful treatment can compound over time in iterative decisions, why choices about technological design are choices of political significance, and what kinds of challenges arise for determining appropriate sentencing constraints when we make predictive assessments in a criminal justice context. Link. Download.
2. Video Interview, Humanising Machine Intelligence, ANU (2019).
An interview about my research on the ethics of risk and uncertainty in the context of artificial intelligence, machine learning, and algorithmic decision-making. Filmed August 13, 2019, for the Humanising Machine Intelligence research project at the Australian National University. [Link forthcoming]
3. Radio Interview, WPRB Princeton 103.3 FM (2019).
An interview about the democratic implications of AI ethics and about algorithmic injustice in the criminal justice system.
Aired April 9, 2019, on These Vibes Are Too Cosmic, a science and music show on WPRB Princeton 103.3 FM. The interview starts at 49:00. Link.
4. Radio Interview, Uptown Radio, Columbia University (2019).
An interview about facial recognition technology and surveillance.
Aired May 3, 2019, on Uptown Radio (Columbia University), "Facial Recognition in NYC Apartments: Ethics and Results". Link.