Robot & AI Governance

The Robot and AI Governance line of research investigates dissonances between robot and AI development, on the one hand, and public and private regulatory development, on the other, and works towards a science for robot policy. The ERC Starting Grant SAFE & SOUND works to that end: it aims to connect the policy cycle with data generated in robot testing zones to support evidence-based policymaking for robot technologies.

Scientific contributions to the governance of robots

Prifti, K., & Fosch-Villaronga, E. (2024). Towards experimental standardization for AI governance in the EU. Computer Law & Security Review, 52, 105959. 

The EU has adopted a hybrid governance approach to address the challenges posed by Artificial Intelligence (AI), emphasizing the role of harmonized European standards (HES). Despite advantages in expertise and flexibility, HES processes face legitimacy problems and struggle with epistemic gaps in the context of AI. This article addresses the problems that characterize HES processes by outlining the conceptual need, theoretical basis, and practical application of experimental standardization, which is defined as an ex-ante evaluation method that can be used to test standards for their effects and effectiveness. Experimental standardization is based on theoretical and practical developments in experimental governance, legislation, and innovation. Aligned with ideas and frameworks like Science for Policy and evidence-based policymaking, it enables co-creation between science and policymaking. We apply the proposed concept in the context of HES processes, where we submit that experimental standardization contributes to increasing throughput and output legitimacy, addressing epistemic gaps, and generating new regulatory knowledge. 
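To make the idea of testing standards for their effects more concrete, here is a minimal Python sketch, with all data, limits, and names invented for illustration: two candidate requirement variants are compared against the same set of testbed observations before either is adopted, which is the kind of evidence experimental standardization would generate.

```python
# Illustrative sketch only: comparing two hypothetical requirement variants
# against the same testbed observations, as an ex-ante evaluation of a
# candidate standard. All numbers and names are invented.
import statistics

def compliance_rate(limit: float, observations: list[float]) -> float:
    """Share of trials that stay within a candidate requirement limit."""
    return sum(obs <= limit for obs in observations) / len(observations)

# e.g., measured protective-stop times in seconds (invented data)
observations = [0.8, 1.1, 0.9, 1.4, 1.0, 0.7]

for label, limit in [("variant A", 1.0), ("variant B", 1.2)]:
    rate = compliance_rate(limit, observations)
    print(f"{label}: {rate:.0%} of trials within the {limit}s limit")

print("median observed stop time:", statistics.median(observations), "s")
```

Aggregated over many trials and testbeds, such comparisons could inform which requirement level a standard should adopt.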

Fosch-Villaronga, E., Calleja, C., Drukarch, H., & Torricelli, D. (2023) How can ISO 13482:2014 account for the ethical and social considerations of robotic exoskeletons? Technology in Society, 102387, 1-21, https://doi.org/10.1016/j.techsoc.2023.102387

This paper analyzes and classifies regulatory gaps and inconsistencies in ISO 13482:2014 ('Safety Requirements for Personal Care Robots'), specifically regarding robotic lower-limb exoskeletons used as personal care robots for everyday activities. Following a systematic literature review, our findings support the conclusion that, even though ISO 13482:2014 has proven to be a substantial step towards regulating that type of wearable robot, it fails to address safety sufficiently and comprehensively. That failure means critical legal, ethical, and social considerations are generally overlooked when designing robots, with the consequence that seemingly safe systems might nonetheless harm end-users. Notwithstanding those limitations and impediments to the development of safe technologies, to date, there has been no thorough assessment of how the standard regulates the development of exoskeletons and whether it requires any improvement in light of ethical, legal, and societal considerations. To bridge this gap, we compile relevant areas for improvement in ISO 13482:2014 fueled by these considerations. We do so in an accessible manner and provide concrete recommendations to help decision-makers overcome the standard's drawbacks.

Experimenting with competing legal standards for robotics

There are legitimacy and discriminatory issues relating to overreliance on private standards to regulate new technologies. On the legitimacy plane, standards shift regulation from public democratic processes to private ones that are not subject to rule-of-law guarantees, reviving the discussion on balancing the legitimacy and effectiveness of techno-legal solutions and further aggravating this complex panorama. On the discriminatory plane, incentive issues exacerbate discriminatory outcomes for often marginalized communities. Indeed, standardization bodies have no incentive to involve and focus on minorities and marginal groups, because 'unanimity' in voting means unanimity only among those sitting at the table, and there are no accountability mechanisms to turn this around. In this letter, we put forward ideas on how to devise an institutional framework such that standardization bodies invest in anticipating and preventing harm to people's fundamental rights.

A legal sustainability approach to align the order of rules and actions in the context of digital innovation

Fosch-Villaronga, E., Drukarch, H., & Giraudo, M. (2023). A legal sustainability approach to align the order of rules and actions in the context of digital innovation. In: Sætra, H. (2023) Technology and Sustainable Development. The Promise and Pitfalls of Techno-Solutionism. Routledge, 127-143.

While the pace of digitization and its impacts on society and markets have become an independent topic of research and debate, far less is clear on how the traditional regulatory functions of governments should evolve with these transformative changes. In this article, we explain how technology disrupts the legal ecosystem and how an uncontrolled legal environment may provide carte blanche to techno-solutionism and cause disruptions that affect practices and society. In the face of uncertainty regarding the implications for fundamental rights and liberties at the core of liberal democracies, we outline two possible legal sustainability approaches, weak and strong, respectively. We do so by borrowing from the economic literature on environmental externalities and sustainability paradigms of economic growth. In this frame, we present a three-step process to align the order of rules with that of actions to create better conditions for a smooth and sustainable co-evolution between technological ecosystems and the prevailing institutions. Such a process aims at bridging information asymmetries by generating policy-relevant data, sharing knowledge among stakeholders to understand and make sense of such information, and creating opportunities for those ideas to turn into an “action” in the world of actions. In doing so, we strive to broker knowledge between economic agents and regulators, since only from a shared common understanding of the state of affairs can a shared normative view on the matter follow.


Calleja, C., Drukarch, H., and Fosch-Villaronga, E. (2022). Harnessing robot experimentation to optimize the regulatory framing of emerging robot technologies. Data & Policy, Cambridge University Press, 1-15.

From exoskeletons to lightweight robotic suits, wearable robots are changing dynamically and rapidly, challenging the timeliness of laws and regulatory standards that were not prepared for robots that would help wheelchair users walk again. In this context, equipping regulators with technical knowledge about these technologies could solve information asymmetries between developers and policymakers and avoid the problem of regulatory disconnection. This article introduces Pushing Robot Development for Lawmaking (PROPELLING), a financial support to third parties (FSTP) project under the Horizon 2020 EUROBENCH project that explores how robot testing facilities could generate policy-relevant knowledge and support optimized regulations for robot technologies. With ISO 13482:2014 as a case study, PROPELLING investigates how robot testbeds could be used as data generators to improve the regulation of lower-limb exoskeletons. Specifically, the article discusses how robot testbeds could help regulators tackle hazards like fear of falling and instability in collisions, or define safe scenarios that avoid adverse consequences from abrupt protective stops. The article’s central point is that testbeds offer a promising setting to bring policymakers closer to research and development and make policies more attuned to societal needs. In this way, these approximations can be harnessed to unravel an optimal regulatory framework for emerging technologies, such as robots and artificial intelligence, based on science and evidence.
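As an illustration of the testbed-as-data-generator idea, the sketch below records trial results and flags them against a candidate safety requirement; the metrics, values, and thresholds are hypothetical placeholders, not EUROBENCH outputs.

```python
# Hypothetical sketch: logging exoskeleton testbed trials and flagging them
# against candidate requirement limits, so that aggregated results could feed
# back into standard revision. All metrics and numbers are invented.
from dataclasses import dataclass

@dataclass
class TrialResult:
    scenario: str   # e.g., "collision" or "abrupt protective stop"
    metric: str     # e.g., "peak contact force (N)"
    value: float    # measured value in this trial
    limit: float    # candidate requirement limit under evaluation

    def within_limit(self) -> bool:
        return self.value <= self.limit

trials = [
    TrialResult("collision", "peak contact force (N)", 132.0, 140.0),
    TrialResult("abrupt protective stop", "trunk sway (deg)", 9.4, 8.0),
]

for t in trials:
    status = "within limit" if t.within_limit() else "exceeds limit"
    print(f"{t.scenario}: {t.metric} = {t.value} ({status})")
```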

Drukarch, H., Calleja, C., and Fosch-Villaronga, E. (2023). An iterative regulatory process for robot governance. Data & Policy, Cambridge University Press, 5:e8, 1-22.

There is an increasing gap between the policy cycle’s speed and that of technological and social change. This gap is becoming broader and more prominent in robotics, that is, movable machines that perform tasks either automatically or with a degree of autonomy. This is because current legislation was unprepared for machine learning and autonomous agents. As a result, the law often lags behind and does not adequately frame robot technologies. This state of affairs inevitably increases legal uncertainty. It is unclear what regulatory frameworks developers have to follow to comply, often resulting in technology that does not perform well in the wild, is unsafe, and can exacerbate biases and lead to discrimination. This paper explores these issues and considers the background, key findings, and lessons learned of the LIAISON project, which stands for “Liaising robot development and policymaking” and aims to devise an alignment model for robots’ legal appraisal that channels robot policy development from a hybrid top-down/bottom-up perspective to solve this mismatch. As such, LIAISON seeks to uncover to what extent compliance tools could be used as data generators for robot policy purposes to unravel an optimal regulatory framing for existing and emerging robot technologies.

Drukarch, H., Calleja, C., and Fosch-Villaronga, E. (2022). LIAISON: Liaising robot development and policymaking to reduce the complexity in robot legal compliance. In: Pons, J. L. (2022) Interactive Robotics: Legal, Ethical, Social and Economic Aspects. Biosystems & Biorobotics, vol. 30, Springer, 212-219, https://doi.org/10.1007/978-3-031-04305-5_37
The relationship between robots and policy development is complex. Technology and regulation evolve, but not always simultaneously or in the same direction. At the same time, robot developers struggle to find suitable safeguards in the existing norms applicable to them. This often results in disconnections between both worlds. New robots and applications may not fit into existing (robot) categories (a robotic garbage collector, or a robotic wheelchair with a robotic arm with a feeding function). Also, regulations may be hard to follow for developers concerned about their particular case, because legislation (private and public policymaking) may be outdated, contain confusing categories (such as the ‘personal care robots’ without a medical purpose in ISO 13482:2014), or be technology-neutral. Since legal responsiveness does not always follow as a consequent step in response to technology’s dramatic pace, we initiated the LIAISON research project. LIAISON follows the ideal that lawmaking ‘needs to become more proactive, dynamic, and responsive’ to achieve its desired policy goals and explores to what extent compliance tools could be used as data generators for robot policy purposes to reduce the complexity in emerging robot governance and unravel an optimal regulatory framing for existing and emerging robot technologies.

Calleja, C., Drukarch, H., and Fosch-Villaronga, E. (2022). Towards Evidence-Based Standard-Making for Robot Governance. In: Pons, J. L. (2022) Interactive Robotics: Legal, Ethical, Social and Economic Aspects. Biosystems & Biorobotics, vol. 30, Springer, 220-227, https://doi.org/10.1007/978-3-031-04305-5_36
Despite the growing body of literature highlighting the legal and ethical questions robots raise, robot developers struggle to incorporate aspects other than mere physical safety into robot design to make robots comprehensively safe. The chemical, food, and pharmaceutical industries established evidence-based frameworks years ago that ensure the safety of their products EU-wide. However, such evidence-based frameworks have yet to be seen for robot technology. As a result, current robot technology raises many legal and ethical issues. The PROPELLING project aims to investigate how robot testbeds can be harnessed as data generators for standard-makers. To this end, the project focuses on testing safety requirements for lower-limb exoskeletons to understand whether standards, particularly ISO 13482:2014, address safety sufficiently and comprehensively, using the H2020 EUROBENCH testbeds and data as a means to appraise the standard. We suggest that linking experimentation settings with standard-making processes could speed up the creation, revision, or discontinuation of norms governing robot technology.

Concepts, definitions, and considerations for healthcare robot governance

Fosch-Villaronga, E. & Drukarch, H. (2021). On Healthcare Robots. Concepts, definitions, and considerations for healthcare robot governance. arXiv preprint, 1-87, https://arxiv.org/abs/2106.03468

Although healthcare is a remarkably sensitive domain of application, and systems that exert direct control over the world can cause harm in a way that humans cannot necessarily correct or oversee, it is still unclear whether and how healthcare robots are currently regulated or should be regulated. Existing regulations are largely unprepared to provide guidance for such a rapidly evolving field and to accommodate devices that rely on machine learning and AI. Moreover, the field of healthcare robotics is very rich and extensive, but it is still very much scattered and unclear in terms of definitions, medical and technical classifications, product characteristics, purpose, and intended use. As a result, these devices often navigate between the medical device regulation and other non-medical norms, such as the ISO personal care standard. Before regulating the field of healthcare robots, it is therefore essential to map the major state-of-the-art developments in healthcare robotics, their capabilities and applications, and the challenges we face as a result of their integration within the healthcare environment. This contribution fills this gap and addresses the lack of clarity currently experienced within healthcare robotics and its governance by providing a structured overview of, and further elaboration on, the main categories now established, their intended purpose, use, and main characteristics. We explicitly focus on surgical, assistive, and service robots to rightfully match the definition of healthcare as the organized provision of medical care to individuals, including efforts to maintain, treat, or restore physical, mental, or emotional well-being. We complement these findings with policy recommendations to help policymakers unravel an optimal regulatory framing for healthcare robot technologies.


Setting a research agenda to mitigate overtrust in automation

Aroyo, A. M., de Bruyne, J., Dheu, O., Fosch-Villaronga, E., Gudkov, A., Hoch, H., Jones, S., Lutz, C., Sætra, H., Solberg, M., & Tamò-Larrieux, A. (2021) Overtrusting Robots: Setting a Research Agenda to Mitigate Overtrust in Automation. Paladyn Journal of Behavioral Robotics 12(1), 1-14.

There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited through in-depth conversations at a multidisciplinary workshop on trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a basis for a common understanding of overtrust in the context of HRI.

Fosch-Villaronga, E. and Heldeweg, M. A. (2018) "Regulation, I Presume?" Said the Robot. Towards an Iterative Regulatory Process for Robot Governance. Computer Law & Security Review, 34(6), 1258-1277

This article envisions an iterative regulatory process for robot governance. We argue that what robot governance lacks is a feedback mechanism that can coordinate and align robot and regulatory developers. To solve that problem, we present a theoretical model that represents a step forward in the coordination and alignment of robot and regulatory development. Our work builds on previous literature and explores modes of alignment and iteration towards greater closeness in the nexus between research and development (R&D) and the regulatory appraisal and channeling of robotics’ development. To illustrate practical challenges and solutions, we explore different examples of (related) types of communication processes between robot developers and regulatory bodies. These examples help illuminate the lack of formalization of the policymaking process, and the time and resources lost when knowledge generated for future robot governance instruments is wasted. We argue that initiatives that fail to formalize the communication process between different actors, and that propose the mere creation of coordinating agencies, risk being seriously ineffective. We propose an iterative regulatory process for robot governance that combines an ex ante robot impact assessment for legal/ethical appraisal, evaluation settings as data generators, and an ex post legislative evaluation instrument that eases the revision, modification, and update of the normative instrument. In all, the model embodies the concept of dynamic evidence-based policies that can serve as a temporary benchmark for future and/or new robot uses and developments. Our contribution seeks to provide a thoughtful proposal that avoids the current mismatch between existing governmental approaches and what is needed for effective ethical/legal oversight, in the hope that this will inform the policy debate and set the scene for further research.
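As a rough illustration of that loop, the Python sketch below pairs an ex-ante impact assessment with an ex-post evaluation that revises the norm when evaluation data reveals uncovered risks; all function and field names are our own invention, not the article's formal model.

```python
# Toy sketch of the proposed iterative loop: ex-ante appraisal, evaluation
# data as feedback, ex-post revision of the norm. Names are illustrative.

def ex_ante_assessment(robot: dict) -> list[str]:
    """Flag legal/ethical risk areas before deployment."""
    risks = []
    if robot.get("collects_personal_data"):
        risks.append("data protection")
    if robot.get("physical_contact"):
        risks.append("physical safety")
    return risks

def ex_post_evaluation(incident_reports: list[str], norm_version: int) -> int:
    """Bump the norm version when evaluation data shows uncovered risks."""
    return norm_version + 1 if incident_reports else norm_version

norm_version = 1
robot = {"collects_personal_data": True, "physical_contact": True}
print("ex-ante risk areas:", ex_ante_assessment(robot))

# Evaluation settings act as data generators feeding the ex-post step.
incidents = ["overtrust-related misuse not covered by the current norm"]
norm_version = ex_post_evaluation(incidents, norm_version)
print("norm revised to version", norm_version)
```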

Fosch-Villaronga, E., & Heldeweg, M.A. (2020) “Meet me halfway,” said the robot to the regulation. Linking ex-ante technology impact assessments to legislative ex-post evaluations via shared data repositories for robot governance. In: Pons-Rovira, J.L. (2020) Inclusive Robotics for a Better Society. INBOTS 2018. Biosystems & Biorobotics, vol. 25., Springer, Cham., 113-119.

Current legislation may apply to new developments. Even so, these developments may raise new challenges that call into question the applicability of that legislation. This paper explains what currently happens to new robot technologies in this respect. We argue that there is no formal communication process between robot developers and regulators from which policies could learn. To bridge this gap, we propose a model that links technology impact assessments to legislative ex-post evaluations via shared data repositories.

Martinetti, A., Chemweno, P., Nizamis, K., & Fosch-Villaronga, E. (2021) Redefining safety in light of human-robot interaction: a critical review of current standards and regulations. Frontiers in Chemical Engineering.

Policymakers need to consider the impacts that robots and artificial intelligence (AI) technologies have on humans beyond physical safety. Traditionally, the definition of safety has been interpreted to apply exclusively to risks with a physical impact on persons' safety, such as, among others, mechanical or chemical risks. However, the integration of AI in cyber-physical systems such as robots increases interconnectivity with several devices and cloud services and intensifies human-robot interaction, challenging the current, rather narrow conceptualisation of safety. Thus, to address safety comprehensively, AI demands a broader understanding of safety, extending beyond physical interaction to cover aspects such as cybersecurity and mental health. Moreover, the expanding use of machine learning techniques will more frequently demand evolving safety mechanisms to safeguard against the substantial modifications taking place over time as robots embed more AI features. In this sense, our contribution brings forward the different dimensions of the concept of safety, including interaction (physical and social), psychosocial, cybersecurity, temporal, and societal. These dimensions aim to help policy and standard makers redefine the concept of safety in light of robots and AI's increasing capabilities, including human-robot interactions, cybersecurity, and machine learning.
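To show how such a multidimensional notion of safety could be operationalized, here is a toy Python sketch: the dimension labels follow the article, while the descriptions and function names are our own assumptions. It treats the dimensions as a checklist and reports which ones a given standard leaves unaddressed.

```python
# Toy sketch: the article's safety dimensions as a coverage checklist.
# Labels follow the article; descriptions are illustrative assumptions.
SAFETY_DIMENSIONS = {
    "physical interaction": "mechanical, electrical, collision risks",
    "social interaction": "deception, attachment, social pressure",
    "psychosocial": "mental health and well-being of users",
    "cybersecurity": "unauthorized access, data integrity",
    "temporal": "behavior drift as learning systems change over time",
    "societal": "discrimination, exclusion, labor impacts",
}

def uncovered_dimensions(addressed: set[str]) -> list[str]:
    """Return the safety dimensions a given standard leaves unaddressed."""
    return [dim for dim in SAFETY_DIMENSIONS if dim not in addressed]

# Example: a standard covering only physical risks, as the article critiques.
print(uncovered_dimensions({"physical interaction"}))
```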

Fosch-Villaronga, E. (2015) Creation of a Care Robot Impact Assessment. WASET, International Science Journal of Social, Behavioral, Educational, Economic and Management Engineering, 9(6), 1817-1821.

Fosch-Villaronga, E. (2019). Robot Impact Assessment. In: Fosch-Villaronga, E. (2019). Healthcare, Robots, & the Law. Regulating Automation in Personal Care, 39-56.

A roboticist building a robot that interacts with humans may be clueless about which regulations they need to follow, whether the robot's behavior needs to be regulated by design or afterwards, or whether they are in charge of it. The Robot Impact Assessment (ROBIA) methodology helps robot engineers identify the legal aspects associated with their technology. A ROBIA involves setting the context, describing the robot, identifying the relevant framework, identifying the risks, and mitigating them. Following ROBIA, robot engineers may identify relevant safeguards to make robots safe in the full sense of the word.
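The following minimal sketch walks through the ROBIA steps listed above; the mapping rules, categories, and function names are hypothetical illustrations, not the book's actual instrument.

```python
# Hedged sketch of the ROBIA steps: set the context, describe the robot,
# identify the relevant framework, identify risks, and mitigate them.
# The mapping rules below are invented for illustration.
def robia(robot: dict) -> dict:
    context = {"domain": robot.get("domain", "unspecified")}

    frameworks = ["ISO 13482:2014"]  # personal care robots baseline
    if context["domain"] == "healthcare":
        frameworks.append("medical device regulation")

    risks = ["physical harm"]
    if robot.get("has_sensors"):
        risks.append("privacy and data protection")

    mitigations = {risk: "safeguard to be defined by design" for risk in risks}
    return {"context": context, "frameworks": frameworks,
            "risks": risks, "mitigations": mitigations}

print(robia({"domain": "healthcare", "has_sensors": True}))
```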


Fosch-Villaronga, E. and Golia, A. Jr. (2019) Robots, Standards and the Law. Rivalries between private standards and public policymaking for robot governance. Computer Law & Security Review, 35(2), 129-144

This article explains the complex intertwinement between public and private regulators in the case of robot technology. Public policymaking ensures a broad multi-stakeholder protected scope, but its abstractness often fails in intelligibility and applicability. Private standards, on the contrary, are more concrete and applicable, but most of the time they are voluntary and reflect industry interests. The ‘better regulation’ approach of the EU may increase the use of evidence to inform policy and lawmaking, and the involvement of different stakeholders. Current hard-lawmaking instruments do not appear to take advantage of the knowledge produced by standard-based regulations, virtually wasting their potential benefits. This fact affects legal certainty in a fast-paced changing environment like robotics. In this paper, we investigate the challenges of overlapping public/private regulatory initiatives that govern robot technologies in general, and healthcare robot technologies in particular. We question to what extent robotics should be governed only by standards. We also reflect on how public policymakers could increase their technical understanding of robot technology to devise an applicable and comprehensive framework for this technology. In this respect, we propose different ways to integrate technical know-how into policymaking (e.g., collecting the data/knowledge generated from impact assessments in shared data repositories and using it for evidence-based policies) and to strengthen the legitimacy of standards.

Fosch-Villaronga, E. and Golia, A. Jr. (2019) The Intricate Relationships between Private Standards and Public Policymaking in the Case of Personal Care Robots. Who Cares More? In: Barattini, P., Vicentini, F., Virk, G. S., & Haidegger, T. (Eds.). (2019). Human-Robot Interaction: Safety, Standardization, and Benchmarking. CRC Press Taylor & Francis Group

One of the consequences of the fast development of technology is that public policymakers struggle to develop policies that adequately frame the technology impacts, challenges and opportunities in time. This favours private actors, who develop their own standards, decentralizing the power to regulate. Industrial standards have been developed to govern industrial robot technology and one type of service robots – personal care robots. This chapter explains the complex intertwinement between public and private regulators in the case of robot technology.

We argue that while safety requirements have been set for these types of robots, legal principles and values deeply embedded in the social environment where these are implemented – privacy, dignity, data protection, cognitive safety, autonomy or ethics – have been disregarded, leaving the users’ (fundamental) rights ignored. Moreover, there are currently no safety requirements for other types of robots, including surgical, therapeutic, rehabilitation, educational and sexual; specific requirements for different types of users are missing; and a code of practice for robot developers, even if being discussed, is currently unavailable.

Public policymaking can ensure a broader multi-stakeholder protected scope, but its abstractness often fails in intelligibility and applicability. On the other side, whereas private standards may be much more concrete, most of the time they are made through voluntary work with no juridical guarantees, normally reflecting industry interests. To give comprehensive protection to robot users without losing technical concreteness, we advocate for a better intertwinement between private standard-setting and public lawmaking approaches.

This chapter explains the distinction between two different modes of regulation, i.e. standard-setting and lawmaking, highlighting their respective features and their reciprocal interrelationships. One of the consequences of the inability to keep up with technology is that industry, and more generally private actors, usually take the lead and develop their own standards. The problem with such testing regimes is that they are conceived under the idea that new technology may need new testing zones, overlooking the fact that machine learning capabilities and more “real” robots may challenge that. Lawmaking does not necessarily relate to “the achievement of the optimum degree of order in a given context,” but rather to the achievement of consensus and agreement among the relevant members of the community on the substantive content of the rules themselves. The chapter thus sets a clear distinction between soft law and hard law, between standard-setting and lawmaking, which is further shown by their intertwinement and reciprocal cross-references, justified by factual and technical needs.

A self-guiding tool to conduct research with embodiment technologies responsibly

Aymerich-Franch, L., & Fosch-Villaronga, E. (2020). A self-guiding tool to conduct research with embodiment technologies responsibly. Frontiers in Robotics and AI, Perspective, 7, 22.

The extension of the sense of self to the avatar during experiences of avatar embodiment requires thorough ethical and legal consideration, especially in light of potential scenarios involving physical or psychological harm caused to, or by, embodied avatars. We provide researchers and developers working in the field of virtual and robot embodiment technologies with a self-guidance tool based on the principles of Responsible Research and Innovation (RRI). This tool will help them engage in ethical and responsible research and innovation in the area of embodiment technologies in a way that guarantees all the rights of the embodied users and their interactors, including safety, privacy, autonomy, and dignity.

Aymerich-Franch, L. and Fosch-Villaronga, E. (2019) What we learned from mediated embodiment experiments and why it should matter to policymakers. Presence: Teleoperators and Virtual Environments, MIT Press Journals, 27:1

When people embody a virtual or a robotic avatar, their sense of self extends to the body of that avatar. We argue that, as a consequence, if the avatar gets harmed, the person embodied in that avatar suffers the harm in the first person. Potential scenarios involving physical or psychological harm caused to avatars give rise to legal, moral, and policy implications that need to be considered by policymakers. We maintain that the prevailing distinction in law between “property” and “person” categories compromises the legal protection of embodied users. We advocate for the inclusion of robotic and virtual avatars in a double category, property–person, as the property and the person mingle in one: the avatar. This hybrid category is critical to protecting users of mediated embodiment experiences both from potential physical or psychological harm and from property damage.

Sætra, H. S. & Fosch-Villaronga, E. (2021) Research in AI has Implications for Society: How do we Respond? The Balance of Power between Science, Ethics, and Politics. Morals & Machines, 1(1), 62-75, https://doi.org/10.5771/2747-5182-2021-1-62.

Artificial intelligence (AI) offers previously unimaginable possibilities, solving problems faster and more creatively than before, representing and inviting hope and change, but also fear and resistance. Unfortunately, while the pace of technology development and application dramatically accelerates, the understanding of its implications does not follow suit. Moreover, while mechanisms to anticipate, control, and steer AI development to prevent adverse consequences seem necessary, the current power dynamics within which society should frame such development cause much confusion. In this article, we ask whether AI advances should be restricted, modified, or adjusted based on their potential legal, ethical, and societal consequences. We examine four possible arguments in favor of subjecting scientific activity to stricter ethical and political control and critically analyze them in light of the perspective that science, ethics, and politics should strive for a division of labor and balance of power rather than a conflation. We argue that the domains of science, ethics, and politics should not conflate if we are to retain the ability to adequately assess the appropriate course of action in light of AI's implications. Such conflation could lead to uncertain and questionable outcomes, such as politicized science or ethics washing, ethics constrained by corporate or scientific interests, insufficient regulation, and political activity driven by a misplaced belief in industry self-regulation. As such, we argue that the different functions of science, ethics, and politics must be respected to ensure AI development serves the interests of society.

Fosch-Villaronga, E. and Millard, C. (2019) Cloud Robotics Law and Regulation. Challenges in the Governance of Complex and Dynamic Cyber-Physical Ecosystems. Robotics and Autonomous Systems 119, 77-91

This paper assesses some of the key legal and regulatory questions arising from the integration of physical robotic systems with cloud-based services, also called “cloud robotics.” The literature on legal and ethical issues in robotics has a strong focus on the robot itself, but largely ignores the background information processing. Conversely, the literature on cloud computing rarely addresses human–machine interactions, which raise distinctive ethical and legal concerns. In this paper, we investigate, from a European legal and regulatory perspective, the growing interdependence and interactions of tangible and virtual elements in cloud robotics environments. We highlight specific problems and challenges in regulating such complex and dynamic ecosystems and explore potential solutions. To illustrate practical challenges, we consider several examples of cloud robotics ecosystems involving multiple parties, various physical devices, and various cloud services. These examples illuminate the complexity of interactions between relevant parties. By identifying pressing legal and regulatory issues in relation to cloud robotics, we hope to inform the policy debate and set the scene for further research.

Policy and Standards Contributions

Contribution to the revision of the General Product Safety Directive

From January to December 2020, Eduard Fosch-Villaronga served as an expert at the Consumer Safety Network, advising the European Commission on the revision of the General Product Safety Directive in the Sub-Group on Artificial Intelligence (AI), connected products and other new challenges in product safety.

CWA 17835:2022 on Guidelines for the development and use of safety testing procedures in human-robot collaboration

As a result of his participation in LIAISON and the H2020 COVR project, Dr. Eduard Fosch-Villaronga contributed to the CEN Workshop Agreement CWA 17835:2022 on Guidelines for the development and use of safety testing procedures in human-robot collaboration.
