Privacy & Transparency
Newlands, G., Lutz, C., Tamò-Larrieux, A., Fosch-Villaronga, E., Scheitlin, G., and Harasgama, R. (2020), Innovation under Pressure: Implications for Data Privacy during the Covid-19 Pandemic. Big Data & Society, SAGE, 7(2), 1-14
The global Covid-19 pandemic has resulted in social and economic disruption unprecedented in the modern era. Many countries have introduced severe measures to contain the virus, including travel restrictions, public event bans, non-essential business closures, and remote work policies. While digital technologies help governments and organizations enforce protection measures such as contact tracing, their rushed deployment and adoption also raise profound concerns about surveillance, privacy, and data protection. This article presents two critical cases of digital surveillance technologies implemented during the Covid-19 pandemic and delineates their privacy implications. We explain the contextual nature of privacy trade-offs during a pandemic and explore how regulatory and technical responses are needed to protect privacy in such circumstances. By providing a multi-disciplinary conversation on the value of privacy and data protection during a global pandemic, this article reflects on the implications digital solutions have for the future and asks whether expedited privacy assessments could anticipate and help mitigate the adverse privacy implications such solutions may have for society.
Fosch Villaronga, E., Kieseberg, P., and Li, T. (2018) Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten. Computer Law & Security Review, 34(2), 304-313
This article examines the problem of AI memory and the Right to Be Forgotten. First, this article analyzes the legal background behind the Right to Be Forgotten, in order to understand its potential applicability to AI, including a discussion on the antagonism between the values of privacy and transparency under current E.U. privacy law. Next, the authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine learning context, in order to understand whether and how the law should address the Right to Be Forgotten in a post-AI world. The authors discuss the technical problems faced when adhering to strict interpretation of data deletion requirements under the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.
Felzmann, H., Fosch-Villaronga, E., Lutz, C. and Tamò-Larrieux, A. (2020) Towards Transparency by Design for Artificial Intelligence. Science and Engineering Ethics, 1-29
In this article, we develop the concept of Transparency by Design, which serves as practical guidance for promoting the beneficial functions of transparency while mitigating its challenges in automated decision-making (ADM) environments. With the rise of artificial intelligence (AI) and the ability of AI systems to make automated and self-learned decisions, a call for transparency about how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different promises that struggle to be realized in concrete applications. Indeed, the complexity of transparency for ADM reveals a tension between transparency as a normative ideal and its translation into practical application. To address this tension, we first conduct a review of transparency, analyzing its challenges and limitations concerning automated decision-making practices. We then look at the lessons learned from the development of Privacy by Design as a basis for developing the Transparency by Design principles. Finally, we propose a set of nine principles covering relevant contextual, technical, informational, and stakeholder-sensitive considerations. Transparency by Design is a model that helps organizations design transparent AI systems by integrating these principles in a step-by-step manner and as an ex-ante value, not as an afterthought.
Felzmann, H., Fosch-Villaronga, E., Lutz, C. and Tamò-Larrieux, A. (2019) Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns, Big Data & Society, SAGE, 1-14
Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that human–computer interaction and human-robot interaction literature do not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation in itself may be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.
Felzmann, H., Fosch-Villaronga, E., Lutz, C. and Tamò-Larrieux, A. (2019) Robots and transparency. Understanding the multiple dimensions of the transparency requirement of the General Data Protection Regulation for robot technologies. IEEE Robotics & Automation Magazine, 26(2), 71-78
Transparency is often seen as a means to provide accountability and show that something is done with due diligence. This approach to transparency regards it as a remedy to hidden (potentially malevolent) practices. We, therefore, require transparency from manufacturers of products and services. While the outcry for more transparency often occurs in response to a particular scandal (e.g., the various controversies surrounding Facebook in 2018), the European Union's General Data Protection Regulation (GDPR) includes transparency as a proactive requirement for information technologies that process personal data.
Ienca, M. and Fosch-Villaronga, E. (2019) Privacy and Security Issues in Assistive Technologies for Dementia. In: Jotterand, F., Ienca, M., Wangmo, T. and Elger, B. (2019) Assistive Technologies for Dementia Care. Oxford University Press, 221-239
The collection of a large volume and variety of physiological and behavioral data is critical for the effective development, deployment, and implementation of intelligent assistive technologies (IATs) and for the subsequent effective support of older adults with dementia. Yet it raises privacy and security issues. This chapter reviews the major privacy and security implications associated with the use of three major families of IATs for dementia: ambient assisted living systems, wearable devices, and service robotics, especially telepresence robots. After exploring a number of both category-specific and cross-categorical ethical and legal implications, the chapter proposes a list of policy recommendations with the purpose of maximizing the uptake of IATs while minimizing possible adverse effects on the privacy and security of end-users.
Fosch-Villaronga, E., Felzmann, H., Pierce, R. L., de Conca, S., de Groot, A., Ponce del Castillo, A. and Robbins, S. (2018) “Nothing comes between my robot and me”: Privacy and human-robot interaction in robotized healthcare. In: Leenes, R., van Brakel, R., Gutwirth, S., and de Hert, P. (2018) Computers, Privacy, and Data Protection 2018 – The Internet of Bodies. Hart
The integration of cyber-physical robotic systems in healthcare settings is accelerating, with robots used as diagnostic aids, mobile assistants, physical rehabilitation providers, cognitive assistants, social and cognitive skills trainers, or therapists. This chapter investigates still underexplored privacy and data protection issues in the use of robotic technologies in healthcare, focusing on privacy issues specifically related to human engagement with robots as cyber-physical systems in healthcare contexts. It addresses six relevant privacy concerns and analyzes them with regard to the European context:
1. The distinctive privacy impacts of subconsciously incentivized disclosure in human-robot interaction.
2. The complexity of consent requirements, including consent for data processing as well as consent for robotic care provision, each governed by different norms and user expectations.
3. Privacy challenges and opportunities arising from conversational approaches to privacy management with robots.
4. The application of data portability requirements in the context of a person’s substantive reliance on robots.
5. The privacy risks related to robot-based data collection in the workplace.
6. The need to go beyond simpler Privacy by Design approaches, which reduce privacy to data protection, towards designing robots for privacy in a wider sense.
We argue that communication and interaction with robots in healthcare contexts affects not just data protection concerns but wider privacy values, and that these privacy concerns pose challenges that need to be considered during robot design and implementation in healthcare settings.