Diversity, Robots & AI

The research line on Diversity, Robots & AI investigates, from an intersectional approach, how robots, AI, and algorithmic decision-making processes impact different communities, including the LGBTQIA+ community, the elderly, and under-resourced communities. This research line also covers how diversity, equity, and inclusion considerations are taken into account in robot, AI, and algorithm development.

Poulsen, A., Fosch-Villaronga, E., & Søraa, R. A. (2020) Queering Machines. Nature Machine Intelligence, 2, 152 (Correspondence).


Verhoef, T. & Fosch-Villaronga, E. (2023) Towards affective computing that works for everyone. ACII '23: International Conference on Affective Computing and Intelligent Interaction, MIT Media Lab, Cambridge, MA, USA, Sept. 10–13, forthcoming. Preprint available on arXiv.

Missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. A literature review reveals how affective computing systems may work differently for different groups due to, for instance, mental health conditions impacting facial expressions and speech, or age-related changes in facial appearance and health. Our work analyzes existing affective computing datasets and highlights a disconcerting lack of diversity regarding race, sex/gender, age, and (mental) health representation. By emphasizing the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, this paper provides recommendations and calls for greater attention to inclusivity and to the societal consequences of affective computing research, in order to promote ethical and accurate outcomes in this emerging field.
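As an illustration of what "standardized documentation of demographic factors in datasets" could translate into in practice, here is a minimal sketch that audits how well a dataset's metadata covers a handful of demographic attributes. The attribute names and the toy data are assumptions for illustration only; they are not taken from the paper.

```python
# Minimal sketch of a demographic-coverage audit for an affective computing
# dataset. Attribute names (age_group, sex_gender, ethnicity,
# mental_health_status) are hypothetical placeholders, not from the paper.
import pandas as pd

DEMOGRAPHIC_COLUMNS = ["age_group", "sex_gender", "ethnicity", "mental_health_status"]

def coverage_report(df: pd.DataFrame, columns=DEMOGRAPHIC_COLUMNS) -> pd.DataFrame:
    """Summarize how each demographic attribute is represented in the dataset."""
    rows = []
    for col in columns:
        if col not in df.columns:
            # Attribute was never recorded: a documentation gap in itself.
            rows.append({"attribute": col, "recorded": False,
                         "missing_share": 1.0, "smallest_group_share": 0.0})
            continue
        shares = df[col].value_counts(dropna=True, normalize=True)
        rows.append({
            "attribute": col,
            "recorded": True,
            "missing_share": df[col].isna().mean(),
            "smallest_group_share": shares.min() if not shares.empty else 0.0,
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Tiny synthetic metadata table standing in for a real dataset's documentation.
    df = pd.DataFrame({
        "age_group": ["18-29", "18-29", "30-44", None, "18-29"],
        "sex_gender": ["woman", "man", "man", "man", "man"],
        "ethnicity": ["white", "white", "white", "white", "black"],
        # "mental_health_status" intentionally absent to expose a documentation gap
    })
    print(coverage_report(df))
```

A report like this makes under-documented or under-represented groups visible before a model is ever trained on the data.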

Queering the Ethics of AI

Fosch-Villaronga, E. & Malgieri, G. (2024). Queering the ethics of AI. In: Gunkel, D. J. (ed.), Handbook on the Ethics of Artificial Intelligence. Edward Elgar Publishing, forthcoming.

This book chapter delves into the pressing need to "queer" the ethics of AI in order to challenge and re-evaluate the normative suppositions and values that underlie AI systems. The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination, including binarism, and to amplify existing inequalities due to the lack of representative datasets and to the affordances and constraints that come with different levels of technology readiness. The chapter argues that a critical examination of the neoliberal conception of equality that often underpins non-discrimination law is necessary, and stresses the need for alternative interdisciplinary approaches that consider the complex and intersecting factors shaping individuals' experiences of discrimination. By exploring such approaches, centered on intersectionality and vulnerability-informed design, the chapter contends that designers and developers can create more ethical AI systems that are inclusive, equitable, and responsive to the needs and experiences of all individuals and communities, particularly those who are most vulnerable to discrimination and harm.


Mitigating Diversity Biases of AI in the Labor Market 

Rigotti, C., Puttick A., Fosch-Villaronga, E., and Kurpicz-Briki, M. (2023) The BIAS project: Mitigating diversity biases of AI in the labor market. European Workshop on Algorithmic Fairness (EWAF '23), Winterthur, Switzerland, June 7-9, 2023. 

In recent years, artificial intelligence (AI) systems have been increasingly utilized in the labor market, with many employers relying on them in the context of human resources (HR) management. However, this increasing use has been found to have potential implications for perpetuating bias and discrimination. The BIAS project kicked off in November 2022 and is expected to develop an innovative technology (hereinafter: the Debiaser) to identify and mitigate biases in the recruitment process. For this purpose, an essential step is to gain a nuanced understanding of what constitutes AI bias and fairness in the labor market, based on cross-disciplinary and participatory approaches. What follows is a preliminary overview of the design and expected implementation of the project, as well as of how it aims to contribute to the existing literature on law, AI, bias, and fairness.
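The paper describes the Debiaser only at the design stage. As a rough, hypothetical illustration of the kind of measurement such a tool might build on, the sketch below computes a simplified WEAT-style association score between occupation terms and gendered word lists in a pretrained word-embedding model; the model and the word lists are our assumptions, not details from the BIAS project.

```python
# Rough sketch of a WEAT-style association score over word embeddings,
# illustrating one kind of bias measurement a recruitment "Debiaser" might
# start from. Model name and word lists are illustrative assumptions only.
import numpy as np
import gensim.downloader as api

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(model, word, group_a, group_b):
    """Mean cosine similarity to group_a minus mean similarity to group_b."""
    sim_a = np.mean([cosine(model[word], model[w]) for w in group_a if w in model])
    sim_b = np.mean([cosine(model[word], model[w]) for w in group_b if w in model])
    return sim_a - sim_b

if __name__ == "__main__":
    model = api.load("glove-wiki-gigaword-50")  # small pretrained embeddings
    male, female = ["he", "man", "his"], ["she", "woman", "her"]
    for occupation in ["engineer", "nurse", "manager", "receptionist"]:
        score = association(model, occupation, male, female)
        print(f"{occupation:>14}: {score:+.3f}  (positive = leans toward male terms)")
```

Scores like these only surface associations in the embedding space; deciding whether and how to mitigate them in a hiring pipeline is the harder, participatory part the project targets.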


Fair and equitable AI in biomedical research and healthcare: social science perspectives

Baumgartner, R., Arora, P., Bath, C., Burljaeva, D., Ciereszko, K., Custers, B., Ding, J., Ernst, W., Fosch-Villaronga, E., Galanos, V., Gremsl, T., Hendl, T., Kropp, C., Lenk, C., Martin, P., Mbelu, S., Morais dos Santos Bruss, S., Napiwodzka, K., Nowak, E., Roxanne, T., Samerski, S., Schneeberger, D., Tampe-Maik, K., Vlantoni, K., Wiggert, K., & Williams, R. (2023). Fair and equitable AI in biomedical research and healthcare: social science perspectives. Artificial Intelligence in Medicine, 144, 102658, 1-9, https://doi.org/10.1016/j.artmed.2023.102658.

Artificial intelligence (AI) offers opportunities but also challenges for biomedical research and healthcare. This position paper shares the results of the international conference “Fair medicine and AI” (online 3–5 March 2021). Scholars from science and technology studies (STS), gender studies, and ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare. AI systems and solutions, which are being rapidly developed and applied, may have undesirable and unintended consequences including the risk of perpetuating health inequalities for marginalized groups. Socially robust development and implications of AI in healthcare require urgent investigation. There is a particular dearth of studies in human-AI interaction and how this may best be configured to dependably deliver safe, effective and equitable healthcare. To address these challenges, we need to establish diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable and transparent manner. We formulate the importance of including social science perspectives in the development of intersectionally beneficent and equitable AI for biomedical research and healthcare, in part by strengthening AI health evaluation.


Fosch-Villaronga, E. & Poulsen, A. (2022). Diversity and Inclusion in Artificial Intelligence. In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Regulating AI and Applying AI in Legal Practice. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague, 109–134.

Discrimination and bias are inherent problems of many AI applications, as seen in, for instance, face recognition systems not recognizing dark-skinned women and content moderation tools silencing drag queens online. These outcomes may derive from limited datasets that do not fully represent society as a whole or from the AI scientific community's western-male configuration bias. Although this is a pressing issue, understanding how AI systems can replicate and amplify inequalities and injustice among underrepresented communities is still in its infancy in social science and technical communities. This chapter contributes to filling this gap by exploring the research question: what do diversity and inclusion mean in the context of AI? It reviews the literature on diversity and inclusion in AI to unearth the underpinnings of the topic and identify key concepts, research gaps, and evidence sources to inform practice and policymaking in this area. Attention is directed to three different levels of the AI development process: the technical, the community, and the target user level. The latter is expanded upon with concrete examples of communities usually overlooked in the development of AI, such as women, the LGBTQ+ community, senior citizens, and disabled persons. Sex and gender diversity considerations emerge as the most at risk in AI applications and practices and are therefore the focus here. To help mitigate the risks that missing sex and gender considerations in AI could pose for society, this chapter closes by proposing gendering algorithms, more diverse design teams, and more inclusive and explicit guiding policies. Overall, this chapter argues that by integrating diversity and inclusion considerations, AI systems can be created to be more attuned to all-inclusive societal needs, respect fundamental rights, and represent contemporary values in modern societies.

Fosch-Villaronga, E., Drukarch, H., Khanna, P., Verhoef, T., & Custers, B. (2022). Accounting for diversity in AI for medicine. Computer Law and Security Review, 47, 105735.

In healthcare, gender and sex considerations are crucial because they affect individuals' health and disease differences. Yet, most algorithms deployed in the healthcare context do not consider these aspects and do not account for bias detection. Missing these dimensions in algorithms used in medicine is a major point of concern, as neglecting them will inevitably produce far from optimal results and generate errors that may lead to misdiagnosis and potential discrimination. This paper explores how current algorithmic-based systems may reinforce gender biases and affect marginalized communities in healthcare-related applications. To do so, we bring together notions and reflections from computer science, queer media studies, and legal insights to better understand the magnitude of failing to consider gender and sex differences in the use of algorithms for medical purposes. Our goal is to illustrate the potential impact that algorithmic bias may have on inadvertent discriminatory, safety, and privacy-related concerns for patients in increasingly automated medicine. This is necessary because, by rushing the deployment of AI technologies that do not account for diversity, we risk even more unsafe and inadequate healthcare delivery. By promoting attention to privacy, safety, diversity, and inclusion in algorithmic developments with health-related outcomes, we ultimately aim to inform the global Artificial Intelligence (AI) governance landscape and practice on the importance of integrating gender and sex considerations in the development of algorithms, to avoid exacerbating existing prejudices or creating new ones.
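One concrete way to "account for bias detection" in medical algorithms is to report performance disaggregated by sex/gender rather than as a single aggregate figure. The sketch below illustrates this idea with synthetic data; the group labels, data, and error pattern are placeholders, not material from the paper.

```python
# Minimal sketch of disaggregated evaluation: reporting a classifier's error
# rates separately per sex/gender group instead of one aggregate score.
# All data below are synthetic placeholders, not results from the paper.
import numpy as np
from sklearn.metrics import confusion_matrix

def per_group_error_rates(y_true, y_pred, groups):
    """Return false-negative and false-positive rates for each group."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
        report[g] = {
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else float("nan"),
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
            "n": int(mask.sum()),
        }
    return report

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    groups = rng.choice(["female", "male"], size=1000)
    y_true = rng.integers(0, 2, size=1000)
    # Hypothetical model that misses positive cases more often for one group.
    y_pred = np.where((groups == "female") & (y_true == 1),
                      rng.choice([0, 1], size=1000, p=[0.4, 0.6]), y_true)
    for group, stats in per_group_error_rates(y_true, y_pred, groups).items():
        print(group, stats)
```

A gap in false-negative rates between groups is exactly the kind of signal that an aggregate accuracy number hides and that disaggregated reporting makes visible.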

Fosch-Villaronga, E., & Drukarch, H. (2023) Accounting for diversity in robot design, testbeds, and safety standardization. International Journal of Social Robotics.

Science has started highlighting the importance of integrating diversity considerations in medicine and healthcare. However, there is little research into how these considerations apply to, affect, and should be integrated into concrete healthcare innovations such as rehabilitation robotics. Robot policy ecosystems are also oblivious to the vast landscape of gender identity understanding, often ignoring these considerations and failing to guide developers in integrating them to ensure they meet user needs. While this ignorance may stem from the traditionally heteronormative configuration of the medical, technical, and legal worlds, the end result is that roboticists fail to consider these aspects in robot development. Missing diversity, equity, and inclusion considerations can, however, result in robotic systems that compromise user safety, discriminate against users, and fail to respect their fundamental rights. Although mentioned in the literature, these concerns have not yet been reflected in industry standards: ISO 13482:2014 on safety requirements for personal care robots briefly acknowledges that future editions might include more information about different kinds of people, yet more than seven years after its approval, and after undergoing revision, those requirements are still missing. This paper explores the impact that overlooking gender and sex considerations in robot design has on users. We focus on ISO 13482:2014 and zoom in on lower-limb exoskeletons. Based on tests with robotic exoskeletons carried out as part of the H2020 EUROBENCH FSTP PROPELLING project, we argue that being oblivious to differences in gender and medical conditions, or following a one-size-fits-all approach, hides important distinctions and increases the exclusion of specific users. Our findings signal that ISO 13482:2014 has significant gaps concerning intersectional aspects such as sex, gender, age, and health conditions and that, because of this, developers are creating robot systems that, despite adhering to the standard, can still cause harm to users. In short, our observations show that robotic exoskeletons operate intimately with users' bodies, exemplifying how gender and medical conditions might introduce dissimilarities in human-robot interaction that, as long as they remain ignored in regulations, may compromise user safety. We conclude the article by putting forward particular recommendations to update ISO 13482:2014 to better reflect the broad diversity of users of personal care robots.

Singh, D. K., Kumar, M., Fosch-Villaronga, E., Singh, D., & Shukla, J. (2022). Ethical Considerations from Child-Robot Interactions in Under-Resourced Communities. International Journal of Social Robotics, 1-17.

Recent advancements in socially assistive robotics (SAR) have shown significant potential for using social robots to improve cognitive and affective outcomes in education. However, deployments of SAR technologies also bring ethical challenges to the fore, especially in under-resourced contexts. While previous research has highlighted various ethical challenges that arise in SAR deployment in real-world settings, most of it has been centered in resource-rich contexts, mainly in developed countries in the 'Global North,' and work specifically in educational settings is limited. This research aims to evaluate and reflect upon the potential ethical and pedagogical challenges of deploying a social robot in an under-resourced context. We base our findings on a 5-week in-the-wild user study conducted with 12 kindergarten students at an under-resourced community school in New Delhi, India. We analyzed video recordings of the study using interaction analysis within the context of learning, education, and ethics. Our findings highlight four primary ethical considerations that should be taken into account when deploying social robotics technologies in educational settings: (1) language and accent as barriers in pedagogy, (2) effects of malfunctioning and (un)intended harms, (3) trust and deception, and (4) ecological viability of the innovation. Overall, our paper argues for assessing these ethical and pedagogical constraints and bridging the gap left by the near-absence of literature from such contexts to better evaluate the potential use of such technologies in under-resourced settings.

Fosch-Villaronga, E., Poulsen, A., Søraa, R. A., & Custers, B. H. M. (2021) Gendering Algorithms in Social Media. ACM SIGKDD Explorations Newsletter, 23(1), 24–31.

Social media platforms employ inferential analytics methods to guess user preferences, and these inferences may include sensitive attributes such as race, gender, sexual orientation, and political opinions. These methods are often opaque, but they can have significant effects, such as predicting behaviors for marketing purposes, influencing behavior for profit, serving attention economics, and reinforcing existing biases such as gender stereotyping. Although two international human rights treaties include express obligations relating to harmful and wrongful stereotyping, these stereotypes persist both online and offline, and platforms often appear to fail to understand that gender is not merely a binary of being a 'man' or a 'woman' but is socially constructed. Our study investigates the impact of algorithmic bias on inadvertent privacy violations and the reinforcement of social prejudices of gender and sexuality through a multidisciplinary perspective including legal, computer science, and queer media viewpoints. We conducted an online survey to understand whether and how Twitter inferred the gender of users. Beyond Twitter's binary understanding of gender and the inevitability of gender inference as part of Twitter's personalization trade-off, the results show that Twitter misgendered users in nearly 20% of the cases (N=109). Although not apparently correlated, only 8% of the straight male respondents were misgendered, compared to 25% of gay men and 16% of straight women. Our contribution shows how the lack of attention to gender in gender classifiers exacerbates existing biases and affects marginalized communities. With our paper, we hope to promote accounting for privacy, diversity, and inclusion online and advocate for the freedom of identity that everyone should have online and offline. Also available on ResearchGate.
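For illustration only, the toy sketch below shows how misgendering rates disaggregated by gender and sexual orientation, like the percentages reported above, could be computed from survey responses. The column names and records are made up for the example and are not the study's data.

```python
# Toy sketch: computing overall and per-group misgendering rates from survey
# responses. Column names and records are invented for illustration; they are
# not the data from the Gendering Algorithms study.
import pandas as pd

responses = pd.DataFrame([
    # self-identified gender, sexual orientation, gender inferred by the platform
    {"gender": "man",   "orientation": "straight", "inferred": "man"},
    {"gender": "man",   "orientation": "gay",      "inferred": "woman"},
    {"gender": "woman", "orientation": "straight", "inferred": "man"},
    {"gender": "woman", "orientation": "lesbian",  "inferred": "woman"},
])

responses["misgendered"] = responses["gender"] != responses["inferred"]

overall = responses["misgendered"].mean()
by_group = responses.groupby(["orientation", "gender"])["misgendered"].mean()

print(f"overall misgendering rate: {overall:.0%}")
print(by_group)
```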

Gender inferences in social media

Fosch-Villaronga, E., Poulsen, A., Søraa, R. A., & Custers, B. H. M. (2021) A little bird told me your gender: Gender inferences in social media. Information Processing and Management, 58(3), 102541.


Poulsen, A., Fosch-Villaronga, E., & Burmeister, O. K. (2020) Cybersecurity, value sensing robots for LGBTIQ+ elderly, and the need for revised codes of conduct. Australasian Journal of Information Systems, 24, 1-16.

Until now, each profession has developed its professional codes of conduct independently. However, the use of robots and artificial intelligence is blurring professional delineations: aged care nurses work with lifting robots, tablet computers, and intelligent diagnostic systems, and health information system designers work with clinical teams. While robots assist medical staff in extending the professional service they provide, it is not clear how professions adhere to and adapt to this new reality. In this article, we reflect on how the insertion of robots may shape codes of conduct, in particular with regard to cybersecurity. We do so by focusing on the use of social robots to help LGBTIQ+ elderly cope with loneliness and depression. Using robots in such a delicate domain of application changes how care is delivered, as alongside the caregiver there is now a cyber-physical health information system that can learn from experience and act autonomously. Our contribution stresses the importance of including cybersecurity considerations in codes of conduct for both robot developers and caregivers, as it is the human and not the machine who is responsible for ensuring the system's security and the user's safety.

Søraa, R. A., & Fosch-Villaronga, E. (2020) Exoskeletons for all: The interplay between exoskeletons, inclusion, gender and intersectionality. Paladyn, Journal of Behavioral Robotics, 11(1), 217-227.

In this article, we investigate the relation between gender and exoskeleton development through the lens of intersectionality theory. Exoskeleton users come in a wide variety of shapes, sizes, and genders. However, wearable robot engineers often do not develop such devices primarily on the premise that the product should fit as many end users as possible. Instead, designers tend to take a one-size-fits-all approach, a design choice that seems legitimate from a return-on-investment viewpoint but that may not do justice to end users. Intended users of exoskeletons must meet a series of criteria, including height, weight, and, in the case of rehabilitation, health condition. The more rigid the inclusion criteria for who the intended user of the technology can be, the more the exclusion criteria grow in parallel. The implications and deep-rootedness of gender and diversity considerations in practices and structural systems have been largely disregarded. Mechanical and robot technology were historically seen as part of a distinctly male sphere, and the criteria used today to develop new technology may reflect biases from another time that should no longer be valid. To make this technology available to all, we suggest some tools for designers and manufacturers to help them think beyond their target market and be more inclusive.