Emotions, Dignity, & Concerns about Robots and Autonomous systems
Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A. (2020) Gathering expert opinions for social robots' ethical, legal, and societal concerns: Findings from four international workshops. International Journal of Social Robotics, 12(4), 959-972.
Social robots, those that exhibit personality and communicate with us using high-level dialogue and natural cues, will soon be part of our daily lives. In this paper, we gather expert opinions from different international workshops exploring the ethical, legal, and societal (ELS) concerns associated with social robots. In contrast to literature that examines specific challenges, often from a particular disciplinary angle, our contribution provides a holistic overview of the ELS discussion, shaped by active deliberation with a multitude of experts across four major international workshops (ERF, NewFriends, JSAI-isAI) held between 2015 and 2017. It also explores pathways to address the identified challenges. Our contribution is in line with the latest European robot regulatory initiatives but covers an area of research that recent AI and robot governance strategies have scarcely addressed. Specifically, we highlight challenges to the use of social robots from a user perspective, including issues such as privacy, autonomy, and the dehumanization of interactions, and from a worker perspective, including issues such as the possible replacement of jobs by robots. The paper also compiles the recommendations the experts deem appropriate for mitigating these compounding risks. By then contrasting these challenges and solutions with recent AI and robot regulatory strategies, we hope to inform the policy debate and set the scene for further research.
In this article, we provide an overview of the literature on chilling effects and corporate profiling, while also connecting the two topics. We start by explaining how profiling, in an increasingly data-rich environment, creates substantial power asymmetries between users and platforms (and corporations more broadly). Inferences and the increasingly automated nature of decision-making, both based on user data, are essential aspects of profiling. We then connect chilling effects theory and the relevant empirical findings to corporate profiling. First, we stress the relationship and similarities between profiling and surveillance. Second, we describe chilling effects as a result of state and peer surveillance specifically. Third, we show the interrelatedness of corporate and state profiling, and finally we spotlight the customization of behavior and behavioral manipulation as particularly significant issues in this discourse. This is complemented with an exploration of the legal foundations of profiling through an analysis of European and US data protection law. We find that while Europe has a clear regulatory framework in place for profiling, the US relies primarily on a patchwork of sector-specific or state laws. Further, there are attempts to regulate the differential impacts of profiling via anti-discrimination statutes, yet few policies address the generalized harms of profiling, such as chilling effects. Finally, we devise four concise propositions to guide future research on the connection between corporate profiling and chilling effects.
Zardiashvili, L. & Fosch-Villaronga, E. (2020) ‘Oh, dignity too?’ said the robot. Human Dignity as the Basis of the Governance of Robotics. Minds and Machines, 30, 121–143.
Fosch-Villaronga, E. (2019) “I love you,” said the robot. Boundaries of the use of emotions in human-robot interaction. In Ayanoglu, H., and Duarte, E. (eds.) (2019) Emotional Design in Human Robot Interaction: Theory, Methods, and Application. Human-Computer Interaction Series, Springer, 93-110.
This chapter reflects upon the ethical, legal, and societal (ELS) implications of the use of emotions by robot technology. The first section introduces different cases where emotions play a role in human-robot interaction (HRI) contexts. The chapter draws particular attention to disparities in the recent technical literature concerning the appropriateness of using emotions in HRI. These examples, coupled with the lack of guidelines on requirements, boundaries, and the appropriate use of emotions in HRI, give rise to a vast number of ELS implications, which the second section addresses. Recent regulatory initiatives in the European Union (EU) aim at mitigating the risks posed by robot technologies. However, these may not suffice to adequately frame the questions that the use of emotions in these contexts entails.
Søraa, R. A., Fosch-Villaronga, E., Quintas, J., Dias, J., Tøndel, G., Sørgaard, J., … & Serrano, J. A. (2020). Mitigating isolation and loneliness with technology through emotional care by social robots in remote areas. In Ray, K. P., Nakashima, N., Ahmed, A., Ro, S. C., Shoshino, Y. (eds.) Mobile Technologies for Delivering Healthcare in Remote, Rural or Developing Regions, 255.
This book chapter explores how experiences of isolation and loneliness in remote areas can be mitigated through social robots' emotional care. We first discuss the concept of being social and how that notion is changing with rapid digitalization. The research for this chapter zooms in on the context of remote regions, characterized by vast geographical distances between cities, public services, and people's homes, specifically in northern and southern European regions. The chapter then discusses the Scandinavian term 'welfare technology' and investigates different technological advances aimed at bridging the gap loneliness poses. We propose the use and development of social robots equipped with emotional care support as a way of mitigating loneliness, provided that users do not experience the integration of the technology into their daily lives as paternalistic. We close the chapter with reflections on the consequences of such a sensitive choice.