The Shift in Digital Rights and AI
The US government’s withdrawal from funding global digital rights work has significant implications for civil society organizations worldwide. At the RightsCon conference in Taiwan, attendees grappled with the consequences of this loss, particularly in the context of American tech companies and their operations beyond US borders.
Impact on Tech Companies
The Trump administration’s rapid overhaul of the US government has had knock-on effects for American tech companies, which serve users around the world. Attendees at RightsCon reported that these companies are becoming less willing to engage with, and invest in, communities with smaller user bases, especially non-English-speaking ones.
Reconsidering Reliance on US-Based Tech
As a result, policymakers and business leaders, particularly in Europe, are reevaluating their reliance on US-based tech and exploring alternative, homegrown solutions. This is especially true for artificial intelligence (AI), where the need for community-driven approaches is becoming increasingly apparent, both within and beyond social media.
Social Media and Content Moderation
Social media platforms are struggling to detect and address problems like gender-based violence in non-English-speaking countries such as India, South Africa, and Brazil. The growing use of large language models (LLMs) for content moderation is exacerbating the problem: these models are trained primarily on English-language data and moderate poorly outside it.
Limitations of Large Language Models
LLMs perform less well in local languages and contexts, leading to errors and the amplification of problematic content. Even multilingual language models struggle with non-Western languages, as evidenced by ChatGPT’s poor performance in Chinese and Hindi. This highlights the need for more targeted, community-driven approaches to AI and content moderation.
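To make the pattern concrete, here is a rough sketch of the kind of prompt-based check a platform might run against a general-purpose LLM. The model name, prompt, and label set are illustrative assumptions rather than any platform’s actual pipeline; the point made above is that this kind of check is least reliable exactly where it is needed most, on vernacular or code-mixed comments.

```python
# Rough sketch of prompt-based moderation with a general-purpose LLM.
# The model, prompt, and label set are illustrative assumptions, not any
# platform's real pipeline.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LABELS = ["ok", "harassment", "gender-based violence", "hate speech"]

def moderate(comment: str) -> str:
    """Ask the LLM to assign one coarse moderation label to a comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model could be swapped in
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a content moderation assistant. Classify the "
                    f"user's comment as exactly one of: {', '.join(LABELS)}. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip()

# Code-mixed or vernacular text (e.g. Hinglish) is exactly where
# English-centric training data makes labels like these unreliable.
print(moderate("yeh aurat log ko toh ghar pe hi rehna chahiye"))
```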
Community-Driven Solutions
Many attendees at RightsCon advocated for small language models, chatbots, and data sets designed for specific languages and cultural contexts. These solutions could be trained to recognize local slang, slurs, and reclaimed language, addressing the limitations of current LLMs. Examples of such initiatives include startups like Shhor AI, which is developing a content moderation API focused on Indian vernacular languages.
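Shhor AI’s actual API is not documented here, so the following is a generic sketch of the shape such a service could take: a small, language-specific classifier served behind a simple HTTP endpoint. The model name, labels, and endpoint are hypothetical placeholders.

```python
# Generic sketch of a community-built moderation API backed by a small,
# language-specific classifier. The model name is a hypothetical placeholder
# standing in for a checkpoint fine-tuned on local slang, slurs, and
# reclaimed language; this is not Shhor AI's actual implementation.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

# Hypothetical fine-tuned checkpoint for Indian vernacular languages.
classifier = pipeline(
    "text-classification",
    model="example-org/vernacular-abuse-classifier",  # placeholder name
)

app = FastAPI()

class Comment(BaseModel):
    text: str

@app.post("/moderate")
def moderate(comment: Comment) -> dict:
    """Return the top label and confidence score for a single comment."""
    prediction = classifier(comment.text)[0]  # e.g. {"label": "abusive", "score": 0.97}
    return {"label": prediction["label"], "score": round(prediction["score"], 3)}

# Run locally with:  uvicorn moderation_api:app --reload
# then POST {"text": "..."} to http://127.0.0.1:8000/moderate
```

The appeal of this shape is that a small, purpose-built classifier can be audited and retrained by the community that uses it, which is the core of the argument above.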
Conclusion
The shift in digital rights and AI is prompting a reevaluation of the role of US-based tech companies and the need for more community-driven solutions. As the world becomes increasingly digital, it is essential to prioritize diversity, inclusivity, and cultural sensitivity in AI development to ensure that these technologies serve the needs of all communities.
FAQs
- Q: What is the impact of the US government’s withdrawal from funding global digital rights work?
A: The withdrawal has significant implications for civil society organizations worldwide, affecting the operations and policies of American tech companies and their engagement with non-English-speaking communities.
- Q: Why are policymakers reconsidering their reliance on US-based tech?
A: The rapid changes in the US government and their effects on tech companies have raised concerns about whether these companies can meet the needs of diverse, global communities, prompting a search for alternative, homegrown solutions.
- Q: What are the limitations of large language models in content moderation?
A: LLMs are trained primarily on English-language data, so they moderate less reliably in local languages and contexts, leading to errors and the amplification of problematic content.
- Q: What are community-driven solutions in AI and content moderation?
A: These include small language models, chatbots, and data sets designed for specific languages and cultural contexts, trained to recognize local nuances and address the limitations of current LLMs.