Privacy & Transparency

Robots, AI, Transparency, and Privacy

The research line Robots, AI, Transparency, and Privacy explores the tension between transparency and privacy as normative ideals and the challenges of translating them into practical applications such as robots, algorithmic decision-making processes, and AI systems.

EU law and emotion data

Häuselmann, A., Sears, A. M., Zardiashvili, L., & Fosch-Villaronga, E. (2023) EU law and emotion data. ACII '23: International Conference on Affective Computing and Intelligent Interaction, MIT Media Lab, Cambridge, MA, USA, Sept. 10–13, forthcoming. arXiv preprint available.

This article sheds light on the legal implications and challenges surrounding emotion data processing within the EU's legal framework. Despite the sensitive nature of emotion data, the GDPR does not categorize it as a special category of data, resulting in a lack of comprehensive protection. The article also discusses the nuances of different approaches to affective computing and their relevance to the processing of special categories of data under the GDPR. Moreover, it points to potential tensions with data protection principles, such as fairness and accuracy. Our article also highlights some of the consequences, including harm, that the processing of emotion data may have for the individuals concerned. Additionally, we discuss how the AI Act proposal intends to regulate affective computing. Finally, the article outlines the new obligations and transparency requirements introduced by the DSA for online platforms utilizing emotion data. Our article aims to raise awareness among the affective computing community about the legal requirements applicable when developing affective computing (AC) systems intended for the EU market, or when working with study participants located in the EU. We also stress the importance of protecting the fundamental rights of individuals even when the law struggles to keep up with technological developments that capture sensitive emotion data.


Büchi, M., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A., & Velidi, S. (2021) Making sense of algorithmic profiling: User perceptions on Facebook. Information, Communication & Society, 1-17.

Algorithmic profiling has become increasingly prevalent in many social fields and practices, including finance, marketing, law, cultural consumption and production, and social engagement. Although researchers have begun to investigate algorithmic profiling from various perspectives, socio-technical studies of algorithmic profiling that consider users’ everyday perceptions are still scarce. In this article, we expand upon existing user-centered research and focus on people’s awareness and imaginaries of algorithmic profiling, specifically in the context of social media and targeted advertising. We conducted an online survey geared toward understanding how Facebook users react to and make sense of algorithmic profiling when it is made visible. The methodology relied on qualitative accounts as well as quantitative data from 292 Facebook users in the United States and their reactions to their algorithmically inferred ‘Your Interests’ and ‘Your Categories’ sections on Facebook. The results illustrate a broad set of reactions and rationales to Facebook’s (public-facing) algorithmic profiling, ranging from shock and surprise, to accounts of how superficial – and in some cases, inaccurate – the profiles were. Taken together with the increasing reliance on Facebook as critical social infrastructure, our study highlights a sense of algorithmic disillusionment requiring further research.

Newlands, G., Lutz, C., Tamò-Larrieux, A., Fosch-Villaronga, E., Scheitlin, G., and Harasgama, R. (2020) Innovation under Pressure: Implications for Data Privacy during the Covid-19 Pandemic. Big Data & Society, SAGE, 7(2), 1-14

The global Covid-19 pandemic has resulted in social and economic disruption unprecedented in the modern era. Many countries have introduced severe measures to contain the virus, including travel restrictions, public event bans, non-essential business closures, and remote work policies. While digital technologies help governments and organizations enforce protection measures, such as contact tracing, their rushed deployment and adoption also raise profound concerns about surveillance, privacy, and data protection. This article presents two critical cases on digital surveillance technologies implemented during the Covid-19 pandemic and delineates their privacy implications. We explain the contextual nature of privacy trade-offs during a pandemic and explore how regulatory and technical responses are needed to protect privacy in such circumstances. By providing a multi-disciplinary conversation on the value of privacy and data protection during a global pandemic, this article reflects on the implications digital solutions have for the future and raises the question of whether expedited privacy assessments could anticipate and help mitigate the adverse privacy implications these technologies may have on society.

Fosch-Villaronga, E., Kieseberg, P., and Li, T. (2018) Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten. Computer Law & Security Review, 34(2), 304-313

This article examines the problem of AI memory and the Right to Be Forgotten. First, this article analyzes the legal background behind the Right to Be Forgotten, in order to understand its potential applicability to AI, including a discussion of the antagonism between the values of privacy and transparency under current EU privacy law. Next, the authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine learning context, in order to understand whether and how the law should address the Right to Be Forgotten in a post-AI world. The authors discuss the technical problems faced when adhering to a strict interpretation of data deletion requirements under the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.

Felzmann, H., Fosch-Villaronga, E., Lutz, C. and Tamò-Larrieux, A. (2020) Towards Transparency by Design for Artificial Intelligence. Science and Engineering Ethics, 1-29

In this article, we develop the concept of Transparency by Design, which serves as practical guidance to help promote the beneficial functions of transparency while mitigating its challenges in automated decision-making (ADM) environments. With the rise of artificial intelligence (AI) and the ability of AI systems to make automated and self-learned decisions, a call for transparency regarding how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different promises that struggle to be realized in concrete applications. Indeed, the complexity of transparency for ADM reveals a tension between transparency as a normative ideal and its translation to practical application. To address this tension, we first conduct a review of transparency, analyzing its challenges and limitations concerning automated decision-making practices. We then look at the lessons learned from the development of Privacy by Design as a basis for developing the Transparency by Design principles. Finally, we propose a set of nine principles to cover relevant contextual, technical, informational, and stakeholder-sensitive considerations. Transparency by Design is a model that helps organizations design transparent AI systems by integrating these principles in a step-by-step manner and as an ex-ante value, not as an afterthought.

Felzmann, H., Fosch-Villaronga, E., Lutz, C. and Tamò-Larrieux, A. (2019) Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns, Big Data & Society, SAGE, 1-14

Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that the human-computer interaction and human-robot interaction literature does not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies, due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation may in itself be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.

Felzmann, H., Fosch-Villaronga, E., Lutz, C. and Tamò-Larrieux, A. (2019) Robots and transparency: Understanding the multiple dimensions of the transparency requirement of the General Data Protection Regulation for robot technologies. IEEE Robotics & Automation Magazine, 26(2), 71-78

Transparency is often seen as a means to provide accountability and show that something is done with due diligence. This approach to transparency regards it as a remedy to hidden (potentially malevolent) practices. We, therefore, require transparency from manufacturers of products and services. While the outcry for more transparency often occurs in response to a particular scandal (e.g., the various controversies surrounding Facebook in 2018), the European Union's General Data Protection Regulation (GDPR) includes transparency as a proactive requirement for information technologies that process personal data.

Ienca, M. and Fosch-Villaronga, E. (2019) Privacy and Security Issues in Assistive Technologies for Dementia. In: Jotterand, F., Ienca, M., Wangmo, T. and Elger, B. (2019) Assistive Technologies for Dementia Care. Oxford University Press, 221-239

The collection of a large volume and variety of physiological and behavioral data is critical for the effective development, deployment, and implementation of intelligent assistive technologies (IATs) and for the subsequent effective support of older adults with dementia. Yet it raises privacy and security issues. This chapter reviews the major privacy and security implications associated with the use of three major families of IATs for dementia: ambient assisted living systems, wearable devices, and service robotics, especially telepresence robots. After exploring a number of both category-specific and cross-categorical ethical and legal implications, the chapter proposes a list of policy recommendations with the purpose of maximizing the uptake of IATs while minimizing possible adverse effects on the privacy and security of end-users.

Fosch-Villaronga, E., Felzmann, H., Pierce, R. L., de Conca, S., de Groot, A., Ponce del Castillo, A., & Robbins, S. (2018) “Nothing comes between my robot and me”: Privacy and human-robot interaction in robotized healthcare. In: Leenes, R., van Brakel, R., Gutwirth, S., and de Hert, P. (2018) Computers, Privacy, and Data Protection 2018 – The Internet of Bodies. Hart Publishing.

The integration of cyber-physical robotic systems in healthcare settings is accelerating, with robots used as diagnostic aids, mobile assistants, physical rehabilitation providers, cognitive assistants, social and cognitive skills trainers, or therapists. This chapter investigates the still underexplored privacy and data protection issues in the use of robotic technologies in healthcare, focusing on privacy issues that are specifically related to human engagement with robots as cyber-physical systems in healthcare contexts. It addresses six relevant privacy concerns and analyses them with regard to the European context:

1. the distinctive privacy impacts of subconsciously incentivised disclosure in human-robot interaction;
2. the complexity of consent requirements, including consent for data processing as well as consent for robotic care provision, both governed by different norms and user expectations;
3. privacy challenges and opportunities arising from conversational approaches to privacy management with robots;
4. the application of data portability requirements in the context of a person’s substantive reliance on robots;
5. the privacy risks related to robot-based data collection in the workplace;
6. the need to go beyond simpler Privacy by Design approaches, which reduce privacy to data protection, towards designing robots for privacy in a wider sense.

We argue that communication and interaction with robots in healthcare contexts affects not only data protection concerns but also wider privacy values, and that these privacy concerns pose challenges that need to be considered during robot design and implementation in healthcare settings.

Strengthening the link between cybersecurity and safety in the context of care robots

Fosch-Villaronga, E., & Mahler, T. (2021). Cybersecurity, safety and robots: Strengthening the link between cybersecurity and safety in the context of care robots. Computer Law & Security Review, 41, 105528

This paper addresses the interplay between robots, cybersecurity, and safety from a European legal perspective, a topic under-explored in current technical and legal literature. The legal framework, together with technical standards, is a necessary parameter for the production and deployment of robots. However, European law does not regulate robots as such, and there exist multiple and overlapping legal requirements focusing on specific contexts, such as product safety and medical devices. Besides, the recently enacted European Cybersecurity Act establishes a cybersecurity certification framework, which could be used to define cybersecurity requirements for robots, although concrete cyber-physical implementation requirements are not yet prescribed. In this article, we illustrate cybersecurity challenges and their subsequent safety implications with the concrete example of care robots. These robots interact in close, direct contact with children, the elderly, and persons with disabilities, and a malfunction or a cybersecurity threat may affect the health and well-being of these users. Moreover, care robots may process vast amounts of data, including health and behavioral data, which are especially sensitive in the healthcare domain. Security vulnerabilities in robots thus raise significant concerns, not only for manufacturers and programmers, but also for those who interact with them, especially in sensitive applications such as healthcare. While the latest European policymaking efforts on robot regulation acknowledge the importance of cybersecurity, many details, and their impact on user safety, have not yet been addressed in depth. Our contribution aims to answer the question of whether the current European legal framework is prepared to address the cyber and physical risks of care robots and ensure safe human-robot interactions in such a sensitive context. Cybersecurity and physical product safety legal requirements are governed separately in a dual regulatory framework, presenting a challenge for the uniform and adequate governance of cyber-physical systems such as care robots. We conceptualize and discuss the challenges of regulating the security of cyber-physical systems within the current dual framework, particularly the lack of mandatory certifications. We conclude that policymakers need to consider cybersecurity as an indissociable aspect of safety to ensure robots are truly safe to use.


Nišević, M., Sears, A. M., Fosch-Villaronga, E., & Custers, B. (2022) Chapter 17: Understanding the legal bases for automated decision-making under the GDPR. In: Kosta, E., Leenes, R. & Kamara, I. (2022) Research Handbook on EU Data Protection Law. Edward Elgar Publishing, 435–454.
This chapter explores the rules of the General Data Protection Regulation (GDPR) for automated decision-making (ADM) stipulated under Article 22. In doing so, this chapter briefly investigates automated decision-making systems, including the concept of ADM, in order to reveal which forms of ADM are covered by Article 22 of the GDPR. After a short clarification of the rules laid down in Article 22 of the GDPR, this chapter more closely examines the conditions required under Article 22(2)(a) and (c) of the GDPR concerning contract and consent, respectively, aiming to provide a deeper understanding of ADM's legal bases. Finally, this chapter provides an overview of the state of current thinking in the field of European data protection law concerning issues revolving around consent and contract as adequate legal bases for ADM in an increasingly algorithmic society.


Custers, B., Fosch-Villaronga, E., van der Hof, S., Schermer, B., Sears, A. M., & Tamò-Larrieux, A. (2022) The role of consent in an algorithmic society: Its evolution, scope, failings, and re-conceptualization. In: Kosta, E., Leenes, R., & Kamara, I. (2022) Research Handbook on EU Data Protection Law. Edward Elgar Publishing, 455–473.
Consent has been enshrined in data protection law since its inception and is today an important lawful ground under the GDPR for the processing of personal data by the private sector. Given the main objectives of consent, namely user empowerment, informational self-determination, and autonomy, this specific legal basis seems to be very user-oriented. Consent, combined with the information requirements of the GDPR, can be useful in managing the (privacy) expectations of a data subject. However, there are many shortcomings in consent mechanisms. Complex data processing (particularly in algorithmic systems), manipulative design (such as dark patterns), and the cognitive limitations of individuals (such as bounded rationality) make it difficult, or even impossible, for data subjects, adults and children alike, to understand how data is processed, what the potential risks are, and how to balance these risks with the benefits of data processing. There are serious doubts about the ability of data subjects to make sensible decisions and, hence, about consent as an effective legal mechanism for the control of personal data. Therefore, legal scholars are proposing re-conceptualizations of consent that should mitigate the current problems, particularly in algorithmic systems. Some of the solutions put forward fit within the current data protection framework (e.g., a right to customization); others require a reform of that framework. One option is to abandon the self-determination approach altogether and focus more on regulating the design of technologies in ways that contribute to data protection.