KeepA(n)I: Tackling Social Stereotypes in AI

Dr. Jahna Otterbacher Talks About Identifying AI Bias, Human-in-the-Loop Evaluation, and Enhancing Fairness and Transparency in AI Systems

In the following interview, Dr. Jahna Otterbacher delves into the groundbreaking KeepA(n)I project, an initiative under the CYENS Centre of Excellence aimed at identifying and mitigating social stereotypes in artificial intelligence applications.

Dr. Otterbacher explains how KeepA(n)I diverges from traditional Fair ML approaches by focusing on the expression of social stereotypes and utilizing a human-in-the-loop methodology to dynamically and culturally assess AI decisions. She provides concrete examples of how biases can manifest in AI systems and outlines the project's potential to provide developers with tools to enhance the fairness and transparency of AI technologies.

Photo: Project Coordinator Dr. Evgenia Christoforou and fAIre (Fairness and Ethics in AI - Human Interaction) MRG Leader Dr. Jahna Otterbacher

Please tell us about the KeepA(n)I project and its objectives.

The KeepA(n)I project is an exciting initiative under the CYENS Centre of Excellence. It aims to identify social stereotypes in artificial intelligence applications. Given the potential of algorithmic systems to influence the social world, our goal is to develop a structured, methodological approach that helps developers and machine learning practitioners detect social bias in both input datasets and output data.

How does KeepA(n)I differ from other approaches in the Fair ML community?

Most existing methods in the Fair ML community focus on evaluating group and individual fairness in datasets and algorithmic results, often attempting to reduce or mitigate bias.

KeepA(n)I takes a different approach by concentrating on the expression of social stereotypes—such as those based on gender, race, or socio-economic status—and how these stereotypes are reflected in biases shared by groups of people interacting with the system. Our project involves a human-in-the-loop approach, engaging diverse individuals in the evaluation process to achieve a dynamic and culturally aware assessment of social norms.

Can you give us a simple example of how social stereotypes might appear in AI applications and how KeepA(n)I would identify them?

Certainly. Take, for example, an AI system used by a company to screen job applicants. If the training data for this AI includes historical hiring data, it might inadvertently learn to prefer candidates similar to those who were hired in the past, which could perpetuate existing biases. For instance, if there was a historical bias towards hiring more men for tech roles, the AI might continue this trend, disadvantaging female applicants.

KeepA(n)I aims to identify such biases by looking for patterns in how different groups are treated by the AI. We would involve a diverse group of people in evaluating the AI’s decisions, helping to spot where and how stereotypes are influencing the outcomes. By doing this, we can give developers actionable insights into what prevents their models from being fair and inclusive.
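To make the idea concrete, here is a minimal Python sketch, not the KeepA(n)I toolchain itself, of one simple check a developer could run: comparing a screening model's selection rates across applicant groups. The data, group labels, and the 80% threshold are purely illustrative assumptions.

```python
# Minimal sketch (illustrative only): compare selection rates across groups.
from collections import defaultdict

# Hypothetical screening decisions: (applicant_gender, model_selected)
decisions = [
    ("female", False), ("female", True), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", True),
]

totals = defaultdict(int)
selected = defaultdict(int)
for gender, was_selected in decisions:
    totals[gender] += 1
    if was_selected:
        selected[gender] += 1

rates = {g: selected[g] / totals[g] for g in totals}
for g, r in rates.items():
    print(f"{g}: selection rate {r:.2f}")

# A common heuristic: flag the model for review if the lowest group rate
# falls below 80% of the highest group rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: selection rates differ enough to warrant a bias review.")
```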

How does the human-in-the-loop approach work in this context?

By involving humans through crowdsourcing, KeepA(n)I will evaluate social norms dynamically and diversely across different cultures and contexts. For example, people from various backgrounds might review the AI's decisions and highlight instances where they see unfair treatment or bias. This approach allows us to expose social stereotypes methodically and reduce their negative impact, potentially enhancing people’s access to opportunities and resources when interacting with AI applications. Our focus will initially be on computer vision applications that analyze people-related media, such as image content analysis, gender or age recognition from profile photos, and other areas with significant implications for high-risk applications like job applicant screening or dating apps.
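As a rough illustration of the idea, and not the project's actual crowdsourcing pipeline, the sketch below aggregates hypothetical crowd judgements of an AI system's outputs, broken down by the annotators' backgrounds, to show where perceptions of unfairness diverge. The output IDs, regions, and judgements are invented for the example.

```python
# Minimal sketch (illustrative only): aggregate crowd judgements of AI outputs
# by annotator background to see where perceptions of unfairness diverge.
from collections import defaultdict

# Hypothetical crowd data: (output_id, annotator_region, flagged_as_unfair)
judgements = [
    ("img_01", "EU", True),  ("img_01", "EU", True),  ("img_01", "NA", False),
    ("img_02", "EU", False), ("img_02", "NA", False), ("img_02", "NA", False),
    ("img_03", "EU", True),  ("img_03", "NA", True),  ("img_03", "NA", True),
]

counts = defaultdict(lambda: [0, 0])  # (output_id, region) -> [flags, total]
for output_id, region, flagged in judgements:
    counts[(output_id, region)][1] += 1
    if flagged:
        counts[(output_id, region)][0] += 1

for (output_id, region), (flags, total) in sorted(counts.items()):
    print(f"{output_id} [{region}]: {flags}/{total} annotators flagged this output as unfair")
```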

That sounds incredibly impactful. What are some of the expected outcomes of the KeepA(n)I project?

We expect KeepA(n)I to provide developers and practitioners with robust tools to identify and address social biases in AI systems. By doing so, we hope to reduce the negative impacts of these biases and foster a more inclusive interaction with AI technologies.

For example, if an image tagging system tends to label images with women as "nurse" more often than "doctor," we can identify and correct that bias. Ultimately, our aim is to enhance the fairness and transparency of AI applications, ensuring they serve all individuals equitably and ethically.
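For readers curious about what such an audit could look like in code, here is a minimal, hypothetical sketch: it simply counts how often an image tagger's occupation labels co-occur with the perceived gender of the person depicted. The tags and proportions are made up for illustration.

```python
# Minimal sketch (illustrative only): count occupation labels by perceived gender.
from collections import Counter

# Hypothetical tagger output: (perceived_gender, assigned_label)
tags = [
    ("woman", "nurse"), ("woman", "nurse"), ("woman", "doctor"),
    ("man", "doctor"), ("man", "doctor"), ("man", "nurse"),
]

by_group = Counter(tags)
for gender in ("woman", "man"):
    total = sum(count for (g, _), count in by_group.items() if g == gender)
    for label in ("nurse", "doctor"):
        share = by_group[(gender, label)] / total
        print(f"{gender} -> {label}: {share:.0%} of images")
```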

The funding for this project is quite substantial. Can you tell us about the sources of this funding and the key collaborators involved?

Yes, the project has been awarded €200,000 under the Cyprus Research and Innovation Foundation Excellence Hubs programme. It is coordinated by CYENS, specifically by Dr. Evgenia Christoforou, a CYENS research associate and adjunct lecturer at OUC. We are also partnering with Algolysis Ltd, which helps us address the project's technological objectives. The Open University of Cyprus, as one of the founding members of CYENS, plays a crucial role in this initiative, particularly in carrying out the project's scientific objectives.

It’s clear that KeepA(n)I has the potential to make a significant impact. What inspired you to pursue this line of research?

My passion for the role of artificial intelligence in society and its ethical use has always driven my work.

Innovating AI auditing processes rooted in transparency and accountability is crucial to promoting inclusion and fairness. This project aligns perfectly with my research interests and the goals of the Fairness and Ethics in AI – Human Interaction Group (fAIre), formerly known as the Transparency in Algorithms Group (TAG) at CYENS.

Thank you for sharing your insights and the exciting details about the KeepA(n)I project. We look forward to seeing the positive changes it will bring to the field of AI.

Thank you. I’m excited about the future of this project and the impact it will have.

What: KeepA(n)I

Who: Jahna Otterbacher is Associate Professor at the Open University of Cyprus (OUC), where she directs the Cyprus Center for Algorithmic Transparency (CyCAT), devoted to basic and applied research that aims to make data-driven technologies more transparent and useful to everyone. Otterbacher holds a concurrent appointment at the CYENS CoE, where she co-leads the Fairness and Ethics in AI – Human Interaction (fAIre) team. She has successfully coordinated both European (e.g., CyCAT, a 5-partner consortium) and national (e.g., DESCANT, a 3-partner consortium) R&I projects, and has served as faculty mentor for young (post-doc) project coordinators (e.g., KeepA(n)I). In Horizon Europe, she serves as a reviewer and ethics auditor for proposals and projects. With over 70 publications on topics across human-centered data science, she has received over 3,500 citations on her work and is included in Elsevier's 2023 list of top-cited scientists in the area of Artificial Intelligence, based on standardized citation indicators.
