KeepA(n)I: Tackling Social Stereotypes in AI
Dr. Jahna Otterbacher Talks About Identifying AI Bias, Human-in-the-Loop Evaluation, and Enhancing Fairness and Transparency in AI Systems
In the following interview, Dr. Jahna Otterbacher delves into the groundbreaking KeepA(n)I project, an initiative under the CYENS Centre of Excellence aimed at identifying and mitigating social stereotypes in artificial intelligence applications.
Dr. Otterbacher explains how KeepA(n)I diverges from traditional Fair ML approaches by focusing on the expression of social stereotypes and utilizing a human-in-the-loop methodology to dynamically and culturally assess AI decisions. She provides concrete examples of how biases can manifest in AI systems and outlines the project's potential to provide developers with tools to enhance the fairness and transparency of AI technologies.
Photo: Project Coordinator Dr. Evgenia Christoforou and fAIre (Fairness and Ethics in AI - Human Interaction) MRG Leader Dr. Jahna Otterbacher
The KeepA(n)I project is an exciting initiative under the CYENS Centre of Excellence. It aims to identify social stereotypes in artificial intelligence applications. Given the potential of algorithmic systems to influence the social world, our goal is to develop a structured, methodological approach that helps developers and machine learning practitioners detect social bias in both input datasets and output data.
Most existing methods in the Fair ML community focus on evaluating group and individual fairness in datasets and algorithmic results, often attempting to reduce or mitigate bias.
KeepA(n)I takes a different approach by concentrating on the expression of social stereotypes—such as those based on gender, race, or socio-economic status—and how these stereotypes are reflected in biases shared by groups of people interacting with the system. Our project involves a human-in-the-loop approach, engaging diverse individuals in the evaluation process to achieve a dynamic and culturally aware assessment of social norms.
Certainly. Take, for example, an AI system used by a company to screen job applicants. If the training data for this AI includes historical hiring data, it might inadvertently learn to prefer candidates similar to those who were hired in the past, which could perpetuate existing biases. For instance, if there was a historical bias towards hiring more men for tech roles, the AI might continue this trend, disadvantaging female applicants.
KeepA(n)I aims to identify such biases by looking for patterns in how different groups are treated by the AI. We would involve a diverse group of people in evaluating the AI’s decisions, helping to spot where and how stereotypes are influencing the outcomes. By doing this, we can give developers actionable insights into what is preventing their models from being fair and inclusive.
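To illustrate the kind of check Dr. Otterbacher describes, here is a minimal sketch using hypothetical screening decisions; the group labels, data, and threshold-free metric are illustrative assumptions, not the project's actual tooling.

```python
# Minimal sketch (hypothetical data, not KeepA(n)I's tooling): compare how often
# a screening model shortlists applicants from different demographic groups,
# a common first signal that the model has learned a historical bias.
from collections import defaultdict

# Hypothetical model outputs: (applicant_group, was_shortlisted)
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, shortlisted in decisions:
    totals[group] += 1
    selected[group] += int(shortlisted)

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Disparate-impact ratio: rate of the least-favoured group over the most-favoured.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")  # values well below 1.0 flag a disparity
```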
By involving humans through crowdsourcing, KeepA(n)I will evaluate social norms dynamically and diversely across different cultures and contexts. For example, people from various backgrounds might review the AI's decisions and highlight instances where they see unfair treatment or bias. This approach allows us to expose social stereotypes methodically and reduce their negative impact, potentially enhancing people’s access to opportunities and resources when interacting with AI applications. Our focus will initially be on computer vision applications that analyze people-related media, such as image content analysis, gender or age recognition from profile photos, and other areas with significant implications for high-risk applications like job applicant screening or dating apps.
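The human-in-the-loop idea can be sketched in a few lines; the regions, item identifiers, and vote format below are hypothetical assumptions used only to show how culturally divergent judgments might be surfaced.

```python
# Minimal sketch (hypothetical data): crowd workers from different backgrounds
# rate whether an AI decision looks biased, and we summarise how those
# perceptions vary by annotator region.
from collections import defaultdict

# Each judgment: (item_id, annotator_region, perceived_as_biased)
judgments = [
    (1, "EU", True), (1, "EU", True), (1, "Asia", False),
    (2, "EU", False), (2, "Asia", True), (2, "Asia", True),
]

by_item_region = defaultdict(list)
for item_id, region, biased in judgments:
    by_item_region[(item_id, region)].append(biased)

for (item_id, region), votes in sorted(by_item_region.items()):
    share = sum(votes) / len(votes)
    print(f"item {item_id} | {region}: {share:.0%} of annotators flagged bias")

# Large gaps between regions for the same item suggest culturally dependent norms,
# which is exactly what a dynamic, diverse evaluation is meant to surface.
```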
We expect KeepA(n)I to provide developers and practitioners with robust tools to identify and address social biases in AI systems. By doing so, we hope to reduce the negative impacts of these biases and foster a more inclusive interaction with AI technologies.
For example, if an image tagging system tends to label images with women as "nurse" more often than "doctor," we can identify and correct that bias. Ultimately, our aim is to enhance the fairness and transparency of AI applications, ensuring they serve all individuals equitably and ethically.
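A simple way to make that image-tagging example concrete is to tally occupation labels by the perceived gender of the photo subject; the tags and counts below are hypothetical and do not come from any real tagging system.

```python
# Minimal sketch (hypothetical tagger output): compare how often occupation
# labels are assigned to photos of women versus men.
from collections import Counter

# Hypothetical output: (perceived_gender_of_subject, assigned_tag)
tagged = [
    ("woman", "nurse"), ("woman", "nurse"), ("woman", "doctor"),
    ("man", "doctor"), ("man", "doctor"), ("man", "nurse"),
]

counts = {"woman": Counter(), "man": Counter()}
for gender, tag in tagged:
    counts[gender][tag] += 1

for gender, tags in counts.items():
    total = sum(tags.values())
    for tag in ("doctor", "nurse"):
        print(f"{gender}: tagged '{tag}' in {tags[tag] / total:.0%} of images")

# A systematic skew (e.g. women tagged 'nurse' far more often than 'doctor')
# is the kind of stereotyped output the project aims to expose and correct.
```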
Yes, the project has been awarded €200,000 under the Cyprus Research and Innovation Foundation Excellence Hubs programme. It is coordinated by CYENS, specifically by Dr. Evgenia Christoforou, a CYENS research associate and adjunct lecturer at OUC. We are also partnering with Algolysis Ltd to help us address the project's technological objectives. The Open University of Cyprus, as one of the founding members of CYENS, plays a crucial role in this initiative, especially in helping to fulfil the project's scientific objectives.
My passion for artificial intelligence and its ethical use has always driven my work.
Innovating AI auditing processes rooted in transparency and accountability is crucial to promoting inclusion and fairness. This project aligns perfectly with my research interests and the goals of the Fairness and Ethics in AI – Human Interaction Group (fAIre), formerly known as the Transparency in Algorithms Group (TAG) at CYENS.
Thank you. I’m excited about the future of this project and the impact it will have.