AI used in the labour market needs to be trustworthy and socially responsible

Welcome to BIAS

A four-year project funded by the European Union's Horizon Europe Research and Innovation programme that empowers the Artificial Intelligence (AI) and Human Resources Management (HRM) communities by addressing and mitigating algorithmic biases.

Background & Mission

Artificial Intelligence (AI) is increasingly deployed in the labor market to recruit, train, and engage employees, or to monitor for infractions that can lead to disciplinary proceedings. One type of AI is Natural Language Processing (NLP)-based tooling, which analyzes text to make inferences or decisions. However, NLP-based systems inherit the implicit biases of the models they build upon. Such bias may already be encoded in the data used for machine learning training, which reflects our society's stereotypes, and can thus propagate into the models and their decision-making.
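To make the mechanism concrete, the toy sketch below shows one common way such encoded stereotypes are measured in NLP models: comparing how strongly a word's vector associates with different demographic terms. The word vectors here are invented for illustration; in practice they would come from an embedding model trained on real text, which is precisely where societal stereotypes can surface.

```python
import math

# Hypothetical toy word vectors, for illustration only.
# In a real NLP system these would come from a trained embedding model.
vectors = {
    "engineer": [0.9, 0.1, 0.3],
    "nurse":    [0.2, 0.8, 0.4],
    "he":       [0.8, 0.2, 0.3],
    "she":      [0.3, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def association_bias(word, group_a, group_b):
    """Difference in similarity of `word` to two demographic terms.
    A value far from zero signals an asymmetric, potentially
    stereotyped, association learned from the training data."""
    return cosine(vectors[word], vectors[group_a]) - cosine(vectors[word], vectors[group_b])

for job in ("engineer", "nurse"):
    print(f"{job}: bias(he vs. she) = {association_bias(job, 'he', 'she'):+.3f}")
```

With these invented vectors, "engineer" associates more strongly with "he" and "nurse" with "she", mirroring the kind of asymmetry that bias-auditing methods probe for in real embedding models.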

This can lead to biased decisions that run contrary to the European Pillar of Social Rights goals concerning work and employment and to the United Nations' Sustainable Development Goals.

Despite a strong desire in Europe to ensure equality in employment, most studies of European labor markets have concluded that discrimination persists across many factors, such as gender, nationality, or sexual orientation. Addressing how AI used in the labor market contributes to, or can help mitigate, this discrimination is therefore essential. That is the primary concern of the BIAS project.


Mitigating Diversity Biases of AI in the Labor Market 

Rigotti, C., Puttick, A., Fosch-Villaronga, E., and Kurpicz-Briki, M. (2023). The BIAS project: Mitigating diversity biases of AI in the labor market. European Workshop on Algorithmic Fairness (EWAF '23), Winterthur, Switzerland, June 7-9, 2023.

In recent years, artificial intelligence (AI) systems have been increasingly utilized in the labor market, with many employers relying on them in the context of human resources (HR) management. However, this increasing use has been found to have potential implications for perpetuating bias and discrimination. The BIAS project kicked off in November 2022 and is expected to develop an innovative technology (hereinafter: the Debiaser) to identify and mitigate biases in the recruitment process. For this purpose, an essential step is to gain a nuanced understanding of what constitutes AI bias and fairness in the labor market, based on cross-disciplinary and participatory approaches. What follows is a preliminary overview of the design and expected implementation of the project, as well as how the project aims to contribute to the existing literature on law, AI, bias, and fairness.

More information

Find all the latest information on our website.

Funded by the European Union. The Associated Partner Bern University of Applied Sciences has received funding from the Swiss State Secretariat for Education, Research and Innovation (SERI).