Introduction to AI Fairness
For a long time, we have been stuck with outdated notions of what fairness and bias mean. According to Divya Siddarth, founder and executive director of the Collective Intelligence Project, "We have to be aware of differences, even if that becomes somewhat uncomfortable." This is especially important for artificial intelligence (AI), which is deployed in so many contexts that it must grapple with the real complexities of society.
The Complexity of Fairness in AI
The work by Wang and her colleagues is a step in the right direction. As Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology, notes, "AI is used in so many contexts that it needs to understand the real complexities of society, and that’s what this paper shows." Simply trying to force fairness into AI models without considering these complexities will miss important nuances and fail to address the harms that people are worried about.
New Benchmarks for Fairness
Benchmarks like the ones proposed in the Stanford paper could help teams better judge fairness in AI models, but actually fixing those models will likely require other techniques. One approach is to invest in more diverse datasets, though developing them can be costly and time-consuming. As Siddarth says, "It is really fantastic for people to contribute to more interesting and diverse datasets." Feedback from people saying "Hey, I don’t feel represented by this. This was a really weird response" can be used to train and improve later versions of models.
Mechanistic Interpretability
Another exciting avenue to pursue is mechanistic interpretability: studying the internal workings of an AI model. As Isabelle Augenstein, a computer science professor at the University of Copenhagen, notes, "People have looked at identifying certain neurons that are responsible for bias and then zeroing them out." This approach could help identify and address biases inside AI models rather than only at their outputs.
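To make the idea concrete, here is a minimal toy sketch of the "identify and zero out" procedure. Everything in it is illustrative: the synthetic activations, the single planted "biased" neuron, and the correlation threshold are assumptions for the example, not a description of how any real interpretability work is done.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 200 examples, 8 hidden "neurons" (shapes and names are
# illustrative only). `group` stands in for a protected attribute.
n_examples, n_neurons = 200, 8
group = rng.integers(0, 2, size=n_examples)

# Most neurons carry noise; we plant a leak of the group attribute
# into neuron 3 so there is something to find.
acts = rng.normal(size=(n_examples, n_neurons))
acts[:, 3] += 2.0 * group

# Step 1: flag neurons whose activations correlate strongly with the
# protected attribute. The 0.5 threshold is an arbitrary illustration.
corrs = np.array(
    [abs(np.corrcoef(acts[:, j], group)[0, 1]) for j in range(n_neurons)]
)
biased = corrs > 0.5

# Step 2: "zero them out" by masking the flagged neurons before any
# downstream computation would consume these activations.
mask = np.where(biased, 0.0, 1.0)
debiased_acts = acts * mask

print("flagged neurons:", np.flatnonzero(biased))
```

In a real model the flagged units would be located by causal probing rather than a simple correlation, but the ablation step, multiplying the offending activations by zero, has the same shape as this sketch.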
The Role of Humans in AI Fairness
Another camp of computer scientists believes that AI can never really be fair or unbiased without a human in the loop. As Sandra Wachter, a professor at the University of Oxford, says, "The idea that tech can be fair by itself is a fairy tale. An algorithmic system will never be able, nor should it be able, to make ethical assessments in the questions of ‘Is this a desirable case of discrimination?’" Law, on this view, is a living system that reflects what we currently believe is ethical, and it should move with us.
The Challenge of Cultural Differences
Deciding when a model should or shouldn’t account for differences between groups can quickly get divisive. Since different cultures have different and even conflicting values, it’s hard to know exactly which values an AI model should reflect. One proposed solution is "a sort of a federated model, something like what we already do for human rights," says Siddarth—that is, a system where every country or group has its own sovereign model.
Conclusion
Addressing bias in AI is going to be complicated, no matter which approach people take. However, giving researchers, ethicists, and developers a better starting place seems worthwhile. As Wang says, "Existing fairness benchmarks are extremely useful, but we shouldn’t blindly optimize for them. The biggest takeaway is that we need to move beyond one-size-fits-all definitions and think about how we can have these models incorporate context more."
FAQs
- Q: What is the main challenge in addressing bias in AI?
  A: AI needs to understand the complexities of society, and different cultures hold different, even conflicting, values that a model might be expected to reflect.
- Q: How can we improve the fairness of AI models?
  A: By investing in more diverse datasets, using mechanistic interpretability to find and remove internal sources of bias, and keeping humans in the loop.
- Q: Can AI ever be completely fair and unbiased?
  A: Some computer scientists believe it cannot be without a human in the loop, since ethical judgments, like law, form a living system that moves with what we currently believe is right.
- Q: What is the proposed solution to address cultural differences in AI models?
  A: A federated model, in which every country or group has its own sovereign model.