The Ethics of AI and Emotional Intelligence (2020.07.30)
Hello, this is Mitsuhiko Maruyama (丸山満彦).
A paper on "The Ethics of AI and Emotional Intelligence" has been published by the Partnership on AI [wikipedia]... It came out on July 30, so about a month ago...
It cites many related papers and other sources, so it looks like a useful reference. Since emotion is deeply tied to privacy, it seems we will need thorough discussion to build consensus...
・2020.07.30 (Paper) The Ethics of AI and Emotional Intelligence
・[pdf] Full paper [Downloaded]
-----
Contents
Executive Summary
Introduction
Affective Computing Definition and Language Problems
- Affective computing and AI
- Language problems in public discussions
Sensors, Inputs, Inferences and Applications
- Breadth of signals in use
- Examples of affective computing input data sources
Inferences and Applications
- Detecting, treating or assisting with disease or disability
- Agriculture
- Social robots
- Education and audience engagement
- Gaming, movies and entertainment
- Advertising and retail
- Hiring and employment
- Chatbots, call centers, and home/auto voice assistants
- Wearables and stress relief
- Automotive and industrial safety
- Threat detection/intervention and law enforcement
- Communities, politics, and social networks
Questions for Discussion
- Thinking big
- How does affective computing fit in existing frameworks?
- Human vs. machine
- Accuracy and inclusivity
- Privacy and other rights
- Autonomy and best interest
- Transparency and communications
Question Exploration as a Tool for Evaluating Ethics Risk
- Who has access to the inferences in depression detection and why does it matter?
- What are the biggest opportunities and benefits of affective computing for society?
- What are the greatest risks of affective computing for society?
COVID-19, Black Lives Matter Protests, and Affective Computing
- AI could alleviate COVID-19 mental health problems
- Social robots could help reduce human proximity and disease transmission
- The shift to remote work and education raises issues of privacy and control and may increase demand for affective computing technology
- Pandemic health monitoring and affective computing share the privacy risks of tracking biometric, health, and location data
- The Black Lives Matter protests and affective computing raise overlapping bias issues
- The Black Lives Matter protests are changing the law around law enforcement camera use and facial analysis
Conclusion
Acknowledgments
Introduction
This report is about artificial intelligence that targets emotions or other affective states and the ethical issues that it raises. It is for anyone who is thinking about how, as a society, we want to develop and use AI, what data we should collect, what inferences we should make, and what safeguards we should put in place.
Artificial intelligence designed to recognize, influence, and simulate emotion, interest, and engagement has increasingly been in the marketplace and in the news. Policymakers around the world are considering how to protect privacy, reduce bias, protect workers’ and citizens’ rights, and ensure that the development and deployment of artificial intelligence is done in a responsible way that benefits society. Individual scientists and companies are thinking about what best practices should be, how their work might be misused,[12] and whether there are applications that should be avoided. Human rights advocates and ethicists are joining the conversation, asking how artificial intelligence is affecting and will affect society.
These discussions have been happening more generally about artificial intelligence, ethics, and policy, and they have also been happening specifically about artificial intelligence related to emotion and affect.[13] This report was informed and inspired by conversations the author had with more than 200 individuals from industry, academia, government, civil society, and news media organizations, while leading the work at Partnership on AI on affective computing - computing related to emotions or other affective states[14] - and ethics, over twelve months in 2019 and 2020.[15] It became clear, in the course of those conversations and the affective computing and ethics convenings led by the author in the U.S. and the U.K. in 2019, that a resource was needed that would provide a common starting point. Participants with diverse backgrounds needed a shared understanding of how affective computing is being used, to be able to think about how it should be used. This report creates a foundation for multi-stakeholder conversations, policy making, and public debate about the development, use, benefits, and risks of affective computing. Specifically, it presents:
- Definition and categories of affective computing
- Types of human expressions, sensors, and data types used in affective computing
- Categories and examples of current applications of affective computing
- A collection of questions to use as tools for developing best practices for the ethical and beneficial development and deployment of affective computing
- Sources for further reading: research, reports, news, polls, and legislation
-----
12 For example, starting in 2020, a leading machine learning research conference, NeurIPS, is requiring societal impact statements to accompany technical paper submissions. Johnson, K. (2020, February). NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest. VentureBeat.
https://venturebeat.com/2020/02/24/neurips-requires-ai-researchers-to-account-for-societal-impact-and-financial-conflicts-of-interest/
13 Smaller discussions of affective computing and ethics started years ago, with the founding of the modern field of affective computing.
See Picard, R. W. (1997). Chapter 4: Potential concerns, Affective computing. The MIT Press.
14 “Affective computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena.” Picard, R. W. (1997). Affective computing. The MIT Press.
15 A limitation of the research was geographic diversity. Most of the conversations were in the U.S. or the U.K., and all were conducted in English. The author led convenings in the U.S. and the U.K. and had many smaller discussions in both countries. Conversations, including those at the ACII conference, the Harvard Berkman Klein Center, the Canadian consulate in NYC, and in the international human rights community, did include some participants beyond the U.K. and the U.S., but planned meetings in Hong Kong and Brazil were cancelled due to COVID-19.
Conclusion
In the middle of 2020, many things are unclear. 2019 might have been the year of emotion recognition. It might have marked the beginning of widespread deployment. It might lead to a future of remarkable technical achievement. It might only have been the high point for hype. It is too soon to say.
It is not yet clear how effective AI will get at recognizing, influencing, and simulating human emotion and affect. What is clear is that if it improves enough, it will be a very powerful tool, however we choose to wield it. Before we walk further down that path, we should think hard about what it would mean if it worked really well and what the implications could be. We should also think hard about what it would mean if it does not work very well, but we use it anyway.
We should think about which questions will help us the most in exploring the ethical issues and unintended consequences of creating and deploying AI connected to emotion and affect. We should direct our ethical analysis to the applications that are now coming into the marketplace and that are just behind them in the research labs. We should ask how affective computing overlaps with our most pressing needs today. We should look further ahead and more broadly, asking what are the best and worst foreseeable uses of this kind of AI and asking how its widespread use might create societal changes and problems that any single use would not. As a society, we should ask together, if, when, and how we want to develop and use AI to sense, recognize, influence, and simulate human emotion and affect.