Introduction to AI Fairness
For a long time, we have been stuck with outdated notions of what fairness and bias mean. According to Divya Siddarth, founder and executive director of the Collective Intelligence Project, we need to be aware of differences, even if that becomes somewhat uncomfortable. The work by Wang and her colleagues is a step in that direction: it shows that AI needs to understand the real complexities of society.
The Complexity of Fairness in AI
AI is used in many different contexts, and it needs to grasp the real complexities of society. As Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology, notes, "Just taking a hammer to the problem is going to miss those important nuances and [fall short of] addressing the harms that people are worried about." Benchmarks like the ones proposed in the Stanford paper could help teams better judge fairness in AI models, but actually fixing those models may require other techniques.
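To make that concrete, here is a minimal sketch of what judging fairness against a benchmark can look like in practice. This is a generic disaggregated-accuracy check, not the benchmark from the Stanford paper, and the example items and the `model_predict` stub are invented placeholders.

```python
from collections import defaultdict

# Invented placeholder items: each has a prompt, a reference answer, and a group label.
examples = [
    {"prompt": "question about group 1", "expected": "yes", "group": "group_1"},
    {"prompt": "question about group 2", "expected": "no", "group": "group_2"},
]

def model_predict(prompt: str) -> str:
    # Stub standing in for a call to the model under evaluation.
    return "yes"

def accuracy_by_group(items):
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item["group"]] += 1
        if model_predict(item["prompt"]) == item["expected"]:
            correct[item["group"]] += 1
    # Large gaps between groups are a signal worth investigating, not a verdict.
    return {group: correct[group] / total[group] for group in total}

print(accuracy_by_group(examples))  # e.g. {'group_1': 1.0, 'group_2': 0.0}
```

A score broken out by group like this only flags where a model behaves differently; deciding whether a given difference is appropriate still requires the kind of context the researchers emphasize.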
Techniques for Improving AI Fairness
One technique may be to invest in more diverse data sets, though developing them can be costly and time-consuming. Feedback from people saying "Hey, I don’t feel represented by this. This was a really weird response" can be used to train and improve later versions of a model. Another promising avenue is mechanistic interpretability, or studying the internal workings of an AI model: researchers have experimented with identifying the specific neurons responsible for biased behavior and zeroing them out.
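As a loose illustration of that last idea, the sketch below registers a forward hook that zeroes out a handful of hidden units during inference. The toy model, the layer chosen, and the neuron indices are all invented for the example; in real interpretability work the targets would come from a careful analysis of a much larger model.

```python
import torch
import torch.nn as nn

# Toy stand-in model; a real case would be a large language model whose
# suspect neurons were identified through interpretability analysis.
model = nn.Sequential(
    nn.Linear(16, 32),  # hidden layer whose units we will intervene on
    nn.ReLU(),
    nn.Linear(32, 2),
)

# Hypothetical indices of hidden units believed to encode the biased feature.
suspect_neurons = [3, 17, 21]

def zero_out_neurons(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output,
    # so we hand back a copy with the suspect activations set to zero.
    ablated = output.clone()
    ablated[:, suspect_neurons] = 0.0
    return ablated

# Attach the ablation to the hidden layer; every forward pass now runs with
# those units silenced.
hook = model[0].register_forward_hook(zero_out_neurons)

with torch.no_grad():
    x = torch.randn(4, 16)      # dummy inputs
    ablated_logits = model(x)   # predictions with the suspect neurons zeroed

hook.remove()  # detach the hook to compare against the model's original behavior
```

Comparing the model's outputs with and without the hook attached is one way to test whether those units actually drive the behavior in question.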
The Role of Human Judgment in AI Fairness
Another camp of computer scientists believes that AI can never really be fair or unbiased without a human in the loop. As Sandra Wachter, a professor at the University of Oxford, notes, "The idea that tech can be fair by itself is a fairy tale. An algorithmic system will never be able, nor should it be able, to make ethical assessments in the questions of ‘Is this a desirable case of discrimination?’" Deciding when a model should or shouldn’t account for differences between groups can quickly get divisive, however.
Addressing Cultural Differences in AI Fairness
Since different cultures have different and even conflicting values, it’s hard to know exactly which values an AI model should reflect. One proposed solution is "a sort of a federated model, something like what we already do for human rights" – a system where every country or group has its own sovereign model. This approach acknowledges the complexity of cultural differences and the need for context-specific solutions.
Conclusion
Addressing bias in AI is going to be complicated, no matter which approach people take. However, giving researchers, ethicists, and developers a better starting place seems worthwhile. As Wang notes, "Existing fairness benchmarks are extremely useful, but we shouldn’t blindly optimize for them. The biggest takeaway is that we need to move beyond one-size-fits-all definitions and think about how we can have these models incorporate context more."
FAQs
- Q: What is the main challenge in achieving fairness in AI?
  A: The main challenge is that AI needs to understand the real complexities of society, which are nuanced and context-dependent.
- Q: How can AI models be improved to address bias?
  A: Techniques such as investing in diverse data sets, mechanistic interpretability, and human judgment can help improve AI models.
- Q: Can AI ever be completely fair and unbiased?
  A: Some experts believe that AI can never be completely fair and unbiased without human judgment and oversight.
- Q: How can cultural differences be addressed in AI fairness?
  A: A proposed solution is to use a federated model, where every country or group has its own sovereign model that reflects their unique values and context.