Reflections on Robots & Autonomous Systems
This essay provides a legal-economic canvas of the institutional co-evolutionary dynamics that are disrupting the legal foundations of digital markets, in order to make sense of contested 'commodification bets' from personal data to AI. Drawing from the experience of information capitalism, we make a general argument about the anomalous coordination between legal, technological, and economic dynamics, which is reinforced by the over-reliance on co-regulatory strategies proposed in current regulatory measures (e.g., the European AI Act proposal) as a solution to regulating disruptive technologies. We argue that the co-regulatory model exacerbates legal instability in emerging markets, as it is subject to moral hazard dynamics, all too favored by an over-reliance on the goodwill of private actors as well as on the enforcement priorities of Member States' DPAs. Stalling strategies and regulatory capture can flourish within such a model to the advantage of economic agents' commodification bets, thus prolonging what we call the 'extended legal present', in which a plurality of possible legal futures compete with each other and which is fraught with economic and political uncertainty. The first part briefly recounts how the fundamental commodification bets over personal data at the core of the digital economy have been deployed and partly rejected by the EU judiciary and Data Protection Authorities, pointing to the poor adaptation of economic agents' legal practices despite the emerging legal fault lines at the heart of European digital markets. The second part outlines a theoretical framework for the functioning of the 'market for legal rules' that the co-regulatory model enables, in order to make sense of and explain the current legal-economic drifts. It also highlights the presence in the legal domain of anomalies and distortions typically associated with market exchanges in general.
The third part juxtaposes such a theoretical framework with the current flares of legal uncertainty surrounding the commercial release of AI-based products to highlight the sense of déjà vu in the creation of the legal foundations for AI and to call for a course correction in the over-reliance on co-regulation. It concludes with several policy considerations, including institutional diversification as a resilient strategy in the face of legal uncertainty to avoid possible "legal bubbles" in the making.
EU law and emotion data
Häuselmann, A., Sears, A., Zardiashvili, L. & Fosch-Villaronga, E. (2023) EU law and emotion data. ACII ‘23 Affective Computing + Intelligent Interaction. MIT Media Lab, Cambridge, MA, USA, Sept. 10–13, forthcoming. arXiv link.
This article sheds light on the legal implications and challenges surrounding emotion data processing within the EU's legal framework. Despite the sensitive nature of emotion data, the GDPR does not categorize it as special data, resulting in a lack of comprehensive protection. The article also discusses the nuances of different approaches to affective computing and their relevance to the processing of special data under the GDPR. Moreover, it points to potential tensions with data protection principles, such as fairness and accuracy. Our article also highlights some of the consequences, including harm, that processing of emotion data may have for the individuals concerned. Additionally, we discuss how the AI Act proposal intends to regulate affective computing. Finally, the article outlines the new obligations and transparency requirements introduced by the DSA for online platforms utilizing emotion data. Our article aims to raise awareness among the affective computing community about the applicable legal requirements when developing AC systems intended for the EU market, or when working with study participants located in the EU. We also stress the importance of protecting the fundamental rights of individuals even when the law struggles to keep up with technological developments that capture sensitive emotion data.
Sanz-Urquijo, B., Lopez-Belloso, M., & Fosch-Villaronga, E. (2022) The disconnect between the goals of trustworthy AI for law enforcement and the EU research agenda. AI & Ethics, 1-12.
In this paper, we investigate whether AI deployment for law enforcement will enable or impede the exercise of citizens' fundamental rights by juxtaposing the promises and policy goals with the crude reality of practices, funded projects, and practicalities of law enforcement. To this end, we map the projects funded by H2020 in AI for law enforcement and juxtapose them to the goals and aims of the EU in terms of Trustworthy AI and fundamental rights. We then bring forward existing research stressing that AI implementation in sensitive domains such as defense and law enforcement does not come without drawbacks, especially regarding discrimination, surveillance, data protection, and human dignity. We thoroughly analyze and assess, through a human-centric and socially driven lens, the risks and threats of using AI in law enforcement from an ethical, legal, and societal perspective (ELSA), including organizational and gender concerns.
This chapter provides an introduction to this book (Law and Artificial Intelligence: Regulating AI and Applying it in Legal Practice) and an overview of all the chapters. The book deals with the intersection of law and Artificial Intelligence (AI). Law and AI interact in two different ways, which are both covered in this book: law can regulate AI and AI can be applied in legal practice. AI is a new generation of technologies, mainly characterized by being self-learning and autonomous. This means that AI technologies can continuously improve without (much) human intervention and can make decisions that are not pre-programmed. Artificial Intelligence can mimic human intelligence, but not necessarily so. Similarly, when AI is implemented in physical technologies, such as robots, it can mimic human beings (e.g., socially assistive robots acting like nurses), but it can also look completely different if it has a more functional shape (e.g., like an industrial arm that picks boxes in a factory). AI without a physical component can sometimes be hardly visible to end users, but evident to those that created and manage the system. In all its different shapes and sizes, AI is rapidly and radically changing the world around us, which may call for regulation in different areas of law. Relevant areas in public law include non-discrimination law, labour law, humanitarian law, constitutional law, immigration law, criminal law and tax law. Relevant areas in private law include liability law, intellectual property law, corporate law, competition law and consumer law. At the same time, AI can be applied in legal practice. In this book, the focus is mostly on legal technologies, such as the use of AI in legal teams, law-making, and legal scholarship. 
This introductory chapter concludes with an overview of the structure of this book, containing introductory chapters on what AI is, chapters on how AI is (or could be) regulated in different areas of both public and private law, chapters on applying AI in legal practice, and chapters on the future of AI and what these developments may entail from a legal perspective.
Drukarch, H. & Fosch-Villaronga, E. (2022). X. In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Regulating AI and Applying AI in Legal Practice. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague, 345–364.
The rapid emergence of increasingly autonomous AI systems within corporate governance and decision-making is unprecedented. AI-driven boardrooms bring about legal challenges within the field of corporate law, mainly due to the expanding autonomy and capabilities AI has to support corporate decisions. Recurrent legal questions revolve around the attribution of legal personhood to autonomous systems and who is responsible if something goes wrong due to a decision taken thanks to the power of AI. This chapter introduces autonomy levels for AI in the boardroom and discusses potential legal and regulatory challenges expected from a corporate law frame of reference. Building on existing literature and other related examples from the automotive and medical sectors, this chapter presents a six-layered model depicting the changing roles and responsibilities among human directors and AI systems as the latter become increasingly autonomous. This research shows that although boardrooms appear to move towards autonomous corporate governance and decision-making without human responsibility, this is not true in practice. What this does indicate, however, is that the more autonomous and powerful AI systems become, the more decision-making processes shift from human-based to AI-powered. This shift raises a number of concerns from a corporate law perspective tied to the role of autonomy in the boardroom, especially with respect to responsibility and liability. This chapter alerts corporations about the potential consequences of using increasingly autonomous AI systems in the boardroom, helps policymakers understand and address the potential responsibility gap that may arise from this development, and lays a basis for further research and regulatory initiatives in this regard.
Sætra, H., Nordahl-Hansen, A., Fosch-Villaronga, E. & Dahl, C. (2022) Normativity assumptions in the design and application of social robots for autistic children. 18th Scandinavian Conference on Health Informatics, August 22–24, 2022.
Opportunities and risks of artificial intelligence in the criminal process
Fosch-Villaronga, E. & Lopez-Belloso, M. (2023) Oportunidades y riesgos de la inteligencia artificial en el proceso penal [Opportunities and risks of artificial intelligence in the criminal process]. In: Chilean Supreme Court (2023) 20 Years of Criminal Procedure Reform in Chile, available online.
Büchi, M., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A., & Velidi, S. (2021). Making sense of algorithmic profiling: user perceptions on Facebook. Information, Communication & Society, 1-17.
Algorithmic profiling has become increasingly prevalent in many social fields and practices, including finance, marketing, law, cultural consumption and production, and social engagement. Although researchers have begun to investigate algorithmic profiling from various perspectives, socio-technical studies of algorithmic profiling that consider users’ everyday perceptions are still scarce. In this article, we expand upon existing user-centered research and focus on people’s awareness and imaginaries of algorithmic profiling, specifically in the context of social media and targeted advertising. We conducted an online survey geared toward understanding how Facebook users react to and make sense of algorithmic profiling when it is made visible. The methodology relied on qualitative accounts as well as quantitative data from 292 Facebook users in the United States and their reactions to their algorithmically inferred ‘Your Interests’ and ‘Your Categories’ sections on Facebook. The results illustrate a broad set of reactions and rationales to Facebook’s (public-facing) algorithmic profiling, ranging from shock and surprise, to accounts of how superficial – and in some cases, inaccurate – the profiles were. Taken together with the increasing reliance on Facebook as critical social infrastructure, our study highlights a sense of algorithmic disillusionment requiring further research.
Putting children and their rights at the forefront of the artificial intelligence revolution
Fosch-Villaronga, E., van der Hof, S., Lutz, C., & Tamò-Larrieux, A. (2021). Toy story or children story? Putting children and their rights at the forefront of the artificial intelligence revolution. AI & Society, 1–20.
Policymakers need to start considering the impact smart connected toys (SCTs) have on children. Equipped with sensors, data processing capacities, and connectivity, SCTs targeting children increasingly pervade children's personal environments. The network of SCTs forms the Internet of Toys (IoToys) and often increases children's engagement and playtime experience. Unfortunately, children, and most of the time their parents too, are often unaware of SCTs' far-reaching capacities and limitations. The capabilities and constraints of SCTs create severe side effects at the technical, individual, and societal level. These side effects are often unforeseeable and unexpected. They arise from the technology's use and the interconnected nature of the IoToys, without necessarily involving malevolence from their creators. Although existing regulations and new ethical guidelines for artificial intelligence provide remedies to address some of the side effects, policymakers did not develop these redress mechanisms with children and SCTs in mind. This article provides an analysis of the arising side effects of SCTs and contrasts them with current regulatory redress mechanisms. We thereby highlight misfits and needs for further policymaking efforts.
In this article we look at the home as an arena for care by exploring how care robots and technological care systems can become part of older adults' lives. We investigate the domestication of robot technology in the context of what in Scandinavia is called "welfare technology" (relating to the terms "gerontechnology" and "Active Assisted Living"), which especially aims to mitigate the challenges older adults face in living in their own homes. Through our case study, we investigate a system called eWare, where a flowerpot robot called "Tessa" works in symbiosis with a sensor technology, "SensaraCare." Together, they create a socio-technical ecosystem involving older adult end-users living at home, formal caregivers (e.g., healthcare workers), and informal caregivers (normally family members). We analyze our ethnographic fieldwork through the theoretical concept of "domestication of technology," focusing on an established three-dimensional model that includes practical, symbolic, and cognitive levels of analysis. We found that social bonds and different ways of using the same technology ecosystem were crucial, and so we supplement this model by suggesting a fourth dimension, which we term the social dimension of the domestication of technology.
Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A. (2020) Gathering expert opinions for social robots' ethical, legal, and societal concerns. Findings from Four International Workshops. International Journal of Social Robotics, 12(4), 959-972.
Social robots, those that exhibit personality and communicate with us using high-level dialogue and natural cues, will soon be part of our daily lives. In this paper, we gather expert opinions from different international workshops exploring ethical, legal, and social (ELS) concerns associated with social robots. In contrast to literature that looks at specific challenges, often from a certain disciplinary angle, our contribution to the literature provides an overview of the ELS discussions in a holistic fashion, shaped by active deliberation with a multitude of experts across four workshops held between 2015 and 2017 at major international venues (ERF, NewFriends, JSAI-isAI). It also explores pathways to address the identified challenges. Our contribution is in line with the latest European robot regulatory initiatives but covers an area of research that the latest AI and robot governance strategies have scarcely covered. Specifically, we highlight challenges to the use of social robots from a user perspective, including issues such as privacy, autonomy, and the dehumanization of interactions; or from a worker perspective, including issues such as the possible replacement of jobs through robots. The paper also compiles the recommendations the experts deem appropriate to mitigate the compounding risks of these ELS issues. By then contrasting these challenges and solutions with recent AI and robot regulatory strategies, we hope to inform the policy debate and set the scene for further research.
In this article, we provide an overview of the literature on chilling effects and corporate profiling, while also connecting the two topics. We start by explaining how profiling, in an increasingly data-rich environment, creates substantial power asymmetries between users and platforms (and corporations more broadly). Inferences and the increasingly automated nature of decision-making, both based on user data, are essential aspects of profiling. We then connect chilling effects theory and the relevant empirical findings to corporate profiling. In this article, we first stress the relationship and similarities between profiling and surveillance. Second, we describe chilling effects as a result of state and peer surveillance, specifically. We then show the interrelatedness of corporate and state profiling, and finally spotlight the customization of behavior and behavioral manipulation as particularly significant issues in this discourse. This is complemented with an exploration of the legal foundations of profiling through an analysis of European and US data protection law. We find that while Europe has a clear regulatory framework in place for profiling, the US primarily relies on a patchwork of sector-specific or state laws. Further, there is an attempt to regulate differential impacts of profiling via anti-discrimination statutes, yet few policies focus on combating generalized harms of profiling, such as chilling effects. Finally, we devise four concise propositions to guide future research on the connection between corporate profiling and chilling effects.
Zardiashvili, L. & Fosch-Villaronga, E. (2020) ‘Oh, dignity too?’ said the robot. Human Dignity as the Basis of the Governance of Robotics. Minds and Machines, 30, 121–143.
Fosch-Villaronga, E. (2019) “I love you,” said the robot. Boundaries of the use of emotions in human-robot interaction. In Ayanoglu, H., and Duarte, E. (eds.) (2019) Emotional Design in Human Robot Interaction: Theory, Methods, and Application. Human-Computer Interaction Series, Springer, 93-110.
This chapter reflects upon the ethical, legal, and societal (ELS) implications of the use of emotions by robot technology. The first section introduces different cases where emotions play a role in human-robot interaction (HRI) contexts. This chapter draws particular attention to disparities found in recent technical literature relating to the appropriateness of the use of emotions in HRIs. These examples, coupled with the lack of guidelines on requirements, boundaries, and the appropriate use of emotions in HRI, give rise to a vast number of ELS implications that the second section addresses. Recent regulatory initiatives in the European Union (EU) aim at mitigating the risks posed by robot technologies. However, these may not entirely suffice to frame adequately the questions the use of emotions entails in these contexts.
Søraa, R. A., Fosch-Villaronga, E., Quintas, J., Dias, J., Tøndel, G., Sørgaard, J., … & Serrano, J. A. (2020). Mitigating isolation and loneliness with technology through emotional care by social robots in remote areas. In Ray, K. P., Nakashima, N., Ahmed, A., Ro, S. C., Shoshino, Y. (2020) Mobile Technologies for Delivering Healthcare in Remote, Rural or Developing Regions, 255
This book chapter explores how experiences of isolation and loneliness in remote areas can be mitigated through social robots' emotional care. We first discuss the concept of being social and how that notion is changing with rapid digitalization. The research for this chapter zooms in on the context of remote regions, characterized by vast geographical distances between cities, public services, and people's homes, specifically in northern and southern European regions. The chapter then discusses the Scandinavian term 'welfare technology' and investigates different technological advances aimed at bridging the gap loneliness poses. We propose the use and development of social robots equipped with emotional care support as a way of mitigating loneliness, provided that users do not experience the implementation of the technology in their daily lives as paternalistic. We close the chapter with reflections on the consequences of such a sensitive choice.
Sætra, H. S. & Fosch-Villaronga, E. (2021) Healthcare Digitalisation and the Changing Nature of Work and Society. Healthcare, 9(8), 1007, 1–15.
Digital technologies have profound effects on all areas of modern life, including the workplace. Certain forms of digitalisation entail simply exchanging digital files for paper, while more complex instances involve machines performing a wide variety of tasks on behalf of humans. While some are wary of the displacement of humans that occurs when, for example, robots perform tasks previously performed by humans, others argue that such tasks should have been carried out by robots in the very first place and never by humans. Understanding the impacts of digitalisation in the workplace requires an understanding of the effects of digital technology on the tasks we perform, and these effects are often not foreseeable. In this article, the changing nature of work in the health care sector is used as a case to analyse such change and its implications on three levels: the societal (macro), organisational (meso), and individual (micro) level. Analysing these transformations by using a layered approach is helpful for understanding the actual magnitude of the changes that are occurring and creates the foundation for an informed regulatory and societal response. We argue that, while artificial intelligence, big data, and robotics are revolutionary technologies, most of the changes we see involve technological substitution and not infrastructural change. Even though this undermines the assumption that these new technologies constitute a fourth industrial revolution, their effects on the micro and meso level still require both political awareness and proportional regulatory responses.