Interacting with Robots and AI
November 28, 2019 from 13h-17h
Lorentzzaal, Steenschuur 25, 2311 ES Leiden, Netherlands
eLaw Center for Law and Digital Technologies at Leiden University
Because these practices and their impacts are so new, the development of robot and AI technologies may give rise to unclear rules and areas of legal ambiguity. Robots process vast amounts of data and can learn from experience and self-improve their performance, challenging the applicability of existing regulations that were not designed for progressive, adaptive, and autonomous behavior. Moreover, these systems increasingly interact with children, older adults, and persons with disabilities in private, professional, and public settings, although it is not always clear what safeguards are needed to ensure safe interaction.
The eLaw Center for Law and Digital Technologies at Leiden University welcomes leading international scholars with interdisciplinary backgrounds who address how humans interact with robots and AI-driven technologies. The seminar aims to build bridges between disciplines investigating different aspects of robot and AI technologies, and to establish in-depth conversations reflecting on the use and development of such technologies.
Co-Designing the Behavior of Agentic Things with Children
The perception of agency and the way we explain an agent's behavior play a role in our interactions with agentic things, such as robotic objects and intelligent playthings. Children readily imbue robotic objects and playthings with agency and explain their behavior in psychological terms, attributing intelligence based on their perception of agency. These phenomena can also affect child-robot interactions negatively, causing deception and a mismatch of expectations. However, human-robot interaction researchers and practitioners often do not address in the design process how children make sense of a robot and its agency, or how children explain a robot's behavior and intelligence. In this talk, I will shed light on these phenomena and their implications for child-robot interactions. I will argue that responsible human-centred design should play a more prominent role in the human-robot interaction design cycle, helping us understand how children perceive a robot's agency and explain a robot's behavior. I will present PeerPlay, a co-design technique based on perspective-taking and role-play theories that enables children to play with and reflect on the agency of robotic objects and intelligent playthings. PeerPlay helps children express their tacit knowledge about how a robot should behave to convey agency without causing deception or expectation mismatch.
Cristina Zaga is a researcher and lecturer in the Human-Centred Design Group (Design and Production Management department) and at The DesignLab at the University of Twente. At the Human-Centred Design Group, she contributes to teaching and research, focusing on design methods for embodied AI and interactive agency. Cristina's interests lie in interaction design and related methodologies for technological products and systems that aim at social good, especially physically embodied AI-driven systems (e.g., robots, IoT devices). At the DesignLab, she investigates methodologies, methods, tools, and techniques to connect science, technology, and society through responsible design. Previously, Cristina worked at the Human-Media Interaction Group (University of Twente) and the Robots in Groups Lab (Cornell University, USA) on her doctoral research on "robothings": everyday robotic objects and toys that promote children's prosocial behaviors in collaborative play. Her award-winning work in human-robot interaction has received many accolades, and she is regularly invited to speak at events (e.g., TEDx), symposia, and conferences. Cristina was selected as a Google Women Techmakers Scholar 2018 for the quality of her research and her work to empower women and children in STEM.
Roger Andre Søraa
Social robots mitigating loneliness in rural areas
How can isolation and loneliness be mitigated through social robots' emotional care? This presentation problematizes the concept of 'being social' and how that notion is changing with rapid digitalization, including in the remote Northern regions on which I base my case study.
I analyse this through the Scandinavian term 'welfare technology', while also situating loneliness in Northern Scandinavian regions. The use and development of social robots equipped with emotional care support can be a way of mitigating loneliness, but what are the consequences of such choices?
Dr Roger A. Søraa is a Researcher at the Department of Interdisciplinary Studies of Culture at NTNU Norwegian University of Science and Technology. He has a PhD in Studies of Technology and Society, and leads a research group on Digitalization and robotization of society. He leads the project My Robot Friend, and works on large EU projects such as eWare – Early Warning Accompanies Robotics Excellence and LIFEBOTS Exchange. He is also the Editor in chief of the Nordic Journal of Science and Technology Studies (NJSTS).
Maria Luce Lupetti
Reframing HRI challenges through Embodied Manifestos
As we are increasingly surrounded by a multiplicity of autonomous systems that perform daily-life tasks with us or instead of us, the risks stemming from their judgment of situations and actions are also increasing. From the unintended reinforcement of human biases to security risks and potential malicious uses, the diffusion of autonomous systems calls for reflection on the ethical implications these might have for society. To this end, a growing body of research is working on methods and metrics for ensuring that the design, engineering, and use of such systems embrace human values. Due to the complexity of such systems, however, unintended consequences are often difficult to predict and prevent.
So, how can we account for such complexity when designing autonomous systems? Critical design approaches provide an alternative perspective in which ethical reflections are addressed not prescriptively but as dialogic interventions. Critical artefacts are developed and used to explore our understanding of ethical implications and to reframe the challenges we should address when designing autonomous systems. In particular, Embodied Manifestos, a type of critical design artefact, are deliberate, tangible manifestations of design ideas that can invite public audiences to reflect and act on ethical principles related to the coexistence of humans and autonomous systems.
Dr. Maria Luce Lupetti is a postdoctoral design researcher in the AiTech research group at TU Delft. Her work focuses on the intersection of design, AI, and robotics. She holds a PhD cum laude in "Production, Management and Design" from Politecnico di Torino, Italy (2018). Her doctoral research, focused on human-robot interaction and play for children, was supported by the Italian telecommunications company TIM. Prior to this position, she was a Research Fellow at the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) (2018-2019) and a visiting scholar at X-Studio, Academy of Arts and Design, Tsinghua University, Beijing, China (2016-2017).
Ethical challenges regarding implementation of care robots
Dr. Felzmann is a lecturer in Philosophy/Ethics in the discipline of Philosophy at the School of Humanities, NUI Galway. She is affiliated with the Centre of Bioethical Research and Analysis (COBRA) at NUIG. Her area of specialisation is bioethics, with a focus on research ethics, healthcare ethics, and information ethics, especially robot ethics. She also works in moral theory, with a focus on theory in/of bioethics and feminist bioethics.
Dr. Felzmann has been involved in a number of European research projects:
- Partner (with Dympna Casey) in ERASMUS+ project PROSPERO on education for the use of robots in the caring professions (since 2018)
- WG4 lead on ELS issues in Wearable Robotics and STSM coordinator for the COST Action on Wearable Robotics (CA16116) (since 2017)
- WG3 member of the COST Action RANCARE (CA14208) on Rationing in Nursing Care (since 2016)
- Leader of ethics deliverables and chair of the Ethics Board of a H2020 project (MARIO) on assistive robotics for older persons with dementia (2015-2018)
- Chair of COST Action CHIPME (IS1303) on ELS issues in genetic testing, "Citizen's Health through public-private Initiatives: Public health, Market and Ethical perspectives" (2015-2017);
- Subject expert in research ethics on a capacity building project (UNIVERSITARIA) for Romanian Universities (2015)
In addition, she has also served as:
- Ethics reviewer for REA (European Commission)
- Member of the National Forum for Teaching and Learning Expert Group on Ethics, Law and Policy in Learning Analytics (2017)
- Member of a HEA funded project on digitally mediated interprofessional learning (IPL) in education in the health care professions (2015)
Emotions and transparency in human-robot interaction
In this talk, I will present intelligent machine abilities that relate to conscious processing and agency in natural minds: emotions and rational explanations of behavior. I will explain how in artificial beings such abilities can be engineered. I will focus on two recent research lines: explanation of robot actions based on the robot's beliefs and desires, and transparency of robot behavior by simulation of emotion during the learning process.
Dr. Joost Broekens is an assistant professor of Affective Computing at the Leiden Institute of Advanced Computer Science (LIACS) of Leiden University, and co-founder and CTO of Interactive Robotics. His research includes computational modeling of emotion (mood and appraisal, applied in games, robots, and agents, as well as theoretical work), human-robot interaction, and reinforcement learning. He is a member of the executive board of the Association for the Advancement of Affective Computing (AAAC), an associate editor of the Adaptive Behavior journal, and a member of the steering committee of the IEEE Affective Computing and Intelligent Interaction Conference. He has organized interdisciplinary workshops on topics including computational modelling of emotion (Lorentz, Leiden, 2011), grounding emotion in adaptation (IROS, 2016), and emotion as feedback signals (Lorentz, Leiden, 2016), and has edited special issues on these topics (in, e.g., Springer LNAI, IEEE Transactions on Affective Computing, and Adaptive Behavior). His research interests include emotions in reinforcement learning, computational models of cognitive appraisal, emotion psychology, emotions in games, human perception of and the effects of emotions expressed by virtual agents and robots, emotional and affective self-report, and human-robot interaction. He is a frequent public speaker on these topics and makes regular media appearances.
Robin L. Pierce
What Human Rights Require of Dementia Care Robots
Robin Pierce is at the Tilburg Institute for Law, Technology, and Society (TILT) in the Netherlands. She obtained a law degree (Juris Doctor) from the University of California, Berkeley and a PhD from Harvard University, where her work focused on genetic privacy. Currently, her work focuses on AI in medicine, addressing translational challenges (e.g., research ethics and regulation, data protection, privacy, medical decision-making) for the development and integration of AI-based applications for healthcare. She has published across disciplines in such journals as the European Data Protection Law Review, Social Science and Medicine, and The Lancet Neurology. She serves on the editorial boards of the Journal of Bioethical Inquiry and The Journal of Technology Regulation.
Designing for human rights in AI
In the age of Big Data, companies and governments increasingly use algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. AI systems can help us make evidence-driven, efficient decisions, but they can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without the critical input of the societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers on how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps, and the deep divide between abstract value language and technical requirements, is essential to facilitate nuanced, context-dependent design choices that support moral and social values. In this work, we bridge this divide through the framework of Design for Values, introducing a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
Evgeni Aizenberg is a post-doctoral researcher at AiTech (TU Delft), leading the project on human rights-based design in AI. He is developing a multidisciplinary design framework rooted in human rights and proactive engagement of societal stakeholders, in order to address a broad scope of ethical and social challenges arising from algorithms and AI, such as discrimination, unjustified action, data determinism, and job market effects of automation. He is also the author of the Data Crow Watch blog, in which he engages with the public on these topics.
The objective of this short presentation is to introduce the VIROS project at the University of Oslo, Norway. This five-year project, which started in 2019, facilitates a collaboration between researchers in robot engineering, law, and the social sciences.
The increasing deployment of robots and artificial intelligence (AI) systems is introducing new layers of critical infrastructure in various areas of our society. This, in turn, creates new digital vulnerabilities and poses novel legal and regulatory questions. The VIROS project investigates the challenges and solutions in regulating robotics, legally and technically, particularly with respect to addressing the safety, security, and privacy concerns such systems raise. The impact of the project will be ensured by involving multiple relevant stakeholders: the Norwegian public sector, consumer advocates, three robotics companies (two Norwegian and one Japanese), and leading international roboticists.
Prof. Tobias Mahler teaches law at the Faculty of Law at the University of Oslo. He is a professor at the Norwegian Research Center for Computers and Law (NRCCL), specializing in information and communications technology law. His research interests cover a broad range of legal issues arising in the context of (i) robots, particularly those with artificial intelligence capabilities, (ii) Internet governance (especially the domain name system), and (iii) cybersecurity and privacy. This focus on legal issues is complemented by research interests in legal informatics more closely related to computer science. The latter line of research has focused on software applications for legal practice, such as legal risk management and visual representations of legal reasoning. He holds a PhD from the University of Oslo, an LLM degree in legal informatics from the University of Hannover, and a German law degree (first state exam). He has practised law in Norway as a corporate lawyer in the automotive industry, primarily working with international commercial contracts. He teaches primarily robot regulation, cybersecurity regulation, legal tech and artificial intelligence, as well as the Norwegian and German law of obligations. Tobias is the deputy director of the NRCCL and the director of the centre's LLM programme.