AI's Possible Threat to Democracy: Why the G20 Must Act on Copyright

Based on insights from Anya Schiffrin, Columbia University.

The rapid advancement of artificial intelligence has brought both remarkable opportunities and significant challenges to our global society. While AI promises to revolutionise everything from healthcare to education, it also poses potential threats to democratic institutions and the media ecosystem that underpins informed public discourse. As AI companies continue to train their models on copyrighted news content without compensation, the very foundation of quality journalism faces an existential crisis that demands attention.

The Copyright Crisis Undermining Journalism

At the heart of this democratic threat lies a fundamental issue of intellectual property rights. AI companies have been systematically using copyrighted news content from media organisations to train their large language models, often without permission or payment. This practice essentially amounts to what critics describe as "stealing quality news from media companies".

The implications extend far beyond simple copyright infringement. Major publishers like Ziff Davis have filed lawsuits against OpenAI for "intentionally and relentlessly" using copyrighted content, while several Indian news publishers have joined forces in a copyright battle, arguing that AI companies present "a clear and present danger to the valuable copyrights" of news organisations.

The economic model of journalism depends on the ability of news organisations to monetise their content. When AI systems can provide instant summaries and answers based on journalistic work without compensating the original creators, it undermines the financial sustainability of quality news production. This creates a vicious cycle: fewer resources for journalism lead to less investigative reporting, reduced fact-checking, and ultimately, a less informed citizenry.

Democracy Depends on Quality Information

The relationship between journalism and democracy has never been more critical. Democratic societies rely on an informed electorate to make decisions about governance, policy, and leadership. Quality journalism serves as the cornerstone of this informed discourse, providing fact-based reporting, investigative analysis, and diverse perspectives essential for democratic debate.

However, AI's current trajectory threatens to hollow out this information ecosystem. As AI systems become more sophisticated at generating human-like responses, users increasingly turn to these tools for news and information rather than visiting news websites directly. This shift in consumption patterns deprives news organisations of the web traffic and engagement that traditionally support their advertising models and subscription services.

Moreover, AI threatens not just election integrity but also democratic norms and the rule of law, as misinformation and AI-generated content can spread faster than fact-checkers can debunk false claims. Without robust journalism to serve as a counterbalance, democratic discourse becomes increasingly vulnerable to manipulation and decay.

A Global Problem Requires Global Solutions

The challenge of regulating AI transcends national boundaries. AI companies operate globally, their models are trained on international datasets, and their impacts affect democracies worldwide. This global reach means that piecemeal national regulations will prove insufficient to address the scope of the problem.

The G20, representing the world's most powerful economies, emerges as the natural forum to begin these crucial discussions. These nations collectively host the major AI companies, control significant portions of global internet infrastructure, and have the economic leverage necessary to enforce meaningful regulations. Their coordinated action could establish international norms that protect both innovation and democratic institutions.

Current efforts remain fragmented. AI copyright jurisprudence is set to have a big year in 2025, with various courts addressing different aspects of the problem. However, legal battles in individual jurisdictions cannot address the systemic nature of the challenge. What's needed is a comprehensive framework that acknowledges both the transformative potential of AI and the fundamental importance of compensating content creators.

Learning from Music Rights: A Blueprint for Journalism

The solution doesn't require stifling AI innovation. Instead, it demands the establishment of fair compensation mechanisms that ensure news organisations and other content creators receive appropriate payment for their intellectual property when it is used to train AI models.

The music industry offers a proven blueprint for this approach. When streaming services like Spotify or Apple Music use copyrighted songs, they don't simply take the content without permission. Instead, they operate under established royalty systems where artists, songwriters, and publishers receive compensation every time their work is played. Performance rights organisations like ASCAP and BMI have created sophisticated tracking and payment systems that ensure creators are credited and compensated for their intellectual property.

A similar model could revolutionise how AI companies use journalistic content. Just as musicians receive royalties when their songs are streamed, journalists and news organisations could receive ongoing compensation when their articles are used to train AI models or when AI systems reference their work in responses to users. This would create sustainable revenue streams that could help fund quality journalism while allowing AI companies to access the high-quality content they need.

Several promising models have already emerged. Amazon has signed deals to license New York Times content for AI-related use, including real-time display of summaries, while other publishers have pursued similar arrangements. These licensing agreements demonstrate that mutually beneficial relationships between AI companies and news organisations are possible.

However, such voluntary arrangements remain insufficient. Just as the music industry required regulatory frameworks to establish fair royalty rates and collection mechanisms, the journalism sector needs similar protection. Smaller news organisations lack the negotiating power of major publishers, and AI companies have little incentive to proactively seek licensing agreements when they can train models on copyrighted content with minimal legal consequences. Regulatory frameworks that require compensation for content use could level the playing field and ensure sustainable funding for journalism across the media landscape.

The Path Forward for the G20

The G20 should prioritise several key areas in addressing AI's impact on democracy. First, establishing clear international norms around copyright protection in the AI era, modelled on successful frameworks in the music industry. This could include creating journalism rights organisations similar to ASCAP or BMI that would track AI usage of news content and distribute royalties to journalists and publishers.

Second, developing frameworks for AI transparency and accountability, particularly regarding training data sources and model outputs. When AI systems provide information based on journalistic work, they should be required to provide attribution—much like how streaming services display artist and songwriter credits. This transparency would not only ensure proper crediting but also help users understand the sources behind AI-generated responses.

The G20 could also establish standardised royalty rates for different types of content usage, from training data inclusion to real-time content referencing. Just as mechanical royalties in music are set through established formulas, journalism royalties could follow similar principles, ensuring consistent and fair compensation across the industry.

Additionally, the G20 should consider creating international mechanisms for monitoring AI's impact on information ecosystems and democratic processes. This could include establishing standards for AI-generated content disclosure and developing rapid response capabilities for addressing AI-enabled misinformation campaigns.

The stakes could not be higher. As AI capabilities continue to advance, the window for establishing protective frameworks grows narrower. The choice facing the G20 gathering in South Africa is clear: act now to preserve the information ecosystem that democracy requires, or risk watching it erode under the weight of unchecked technological disruption.

Conclusion

The intersection of AI and democracy represents one of the defining challenges of our time. While artificial intelligence offers tremendous potential benefits, its current trajectory threatens to undermine the economic foundations of quality journalism and, by extension, democratic discourse itself.

The solution requires international cooperation, fair compensation mechanisms, and regulatory frameworks that balance innovation with the protection of democratic institutions. The G20 has both the opportunity and responsibility to lead this effort, establishing norms that will shape the relationship between AI and democracy for generations to come.

The complexity and contentiousness of these discussions should not deter action. The cost of inaction—the gradual erosion of the information ecosystem that democracy depends upon—is simply too high. The time for global cooperation on AI governance is now, and the stakes are nothing less than the future of democratic society itself.
