Mind-reading machines: The rise of ‘empathic’ AI
‘Empathic’ AI (EAI) describes technologies that make inferences about human emotion and mood. This project aims to:
Identify the founding assumptions EAI makes about the nature of emotion, with a particular focus on the controversial claim that emotions form ‘natural kinds’;
Assess the prospects for designing transparent and trustworthy EAI suitable for screening, diagnosis, and adjunctive treatment of mental illness;
Contribute to larger debates about the legal and regulatory frameworks needed to ensure EAI is not used for purposes that foster discrimination and enable human rights violations.
Work from this project has been presented at the Australian National University’s Humanising Machine Intelligence group (2022), at the Centre for Technomoral Futures, University of Edinburgh (2023), at the Inaugural Sino-Australian Philosophy of Life Sciences Network Conference (SAPoLSN), hosted by Macquarie University and the University of Sydney, and at the International Society for Research on Emotion’s 2024 Conference at Queen’s University Belfast.
The project is supported by an AEGiS career development grant from the University of Wollongong (2022–2023).
Above: Empathy is a moral and intellectual virtue. It helps us act with concern for others by giving us knowledge and understanding of another’s inner states and motivations. Could machines ever truly empathise? And if they could, should we ‘delegate’ our empathy to them, or would doing so diminish our capacity to be good people and to understand one another?
Image from Indeed/Getty, iBrandify via Noun Project.