The EU against freedom of speech in the US (II): Surrogate censorship
The clash between this project and the First Amendment is neither a cultural accident nor a legal misunderstanding. It is an irresolvable contradiction between two worldviews: one that treats freedom of speech as an absolute right that the state must protect even when it inconveniences the powers that be, and another that treats it as a privilege that the state administers.

Ursula von der Leyen and Donald Trump during negotiations on a trade agreement.
On Aug. 12, 2024, Thierry Breton, European Commissioner for the Internal Market, published an open letter addressed to Elon Musk, just hours before Musk was to interview Donald Trump on X. Was the letter a threat? Breton wrote that he was monitoring "potential risks in the EU associated with the dissemination of content that may incite violence, hatred and racism in conjunction with major political or social events in the world, including debates and interviews in the context of elections," and that the EU was prepared to "make full use of our toolbox" under the DSA, including interim measures, to protect European citizens from "serious harm."
The target was American. The platform was American. The interviewee was an American presidential candidate. The conversation would take place on American soil.
None of that mattered. The commissioner was, in effect, demanding the censorship of political content. His conduct was promptly condemned by the U.S. House Judiciary Committee in a letter dated Aug. 15, 2024. X CEO Linda Yaccarino called the episode "an unprecedented attempt to extend a law designed for Europe to political activities in the United States." Musk responded with a meme. But beyond the anecdote, the Breton-Musk episode is the sharpest image of what this part of the report documents: The European Union didn't just build a censorship machine for domestic use. It exported it. And it did so with method, with funding and with an identifiable political agenda.
One of the most opaque and dangerous provisions of the DSA is the trusted flaggers system. Under this scheme, the commission and national regulators accredit civil society organizations, academic institutions and government bodies as entities whose content complaints must be processed by platforms with priority and enhanced diligence.
In practice, this creates a parallel network of outsourced censorship that operates outside any direct democratic control.
The system is perverse precisely because it keeps European governments one step removed from direct censorship. It is not the European Commission that deletes the tweet: It is the platform, which acts on the priority complaint of an accredited NGO, which in turn operates under guidance from the commission, which threatens million-dollar fines if the platform does not demonstrate sufficient diligence. The chain of responsibility fragments until it becomes opaque. No one "censors": Everyone simply "complies."
More than 50 European NGOs pointed out that the broad terms of "systemic risks," "disinformation" and "illegal content," combined with the activist role of "trusted flaggers," could violate freedom of expression and information under Article 11 of the EU Charter of Fundamental Rights. The following month, more than 100 free speech organizations warned of global online censorship under the DSA in a letter to the commission coordinated by ADF International.
Who are these trusted flaggers, and what is their orientation? The Judiciary Committee documents provide concerning answers: In multiple European election rounds, trusted flagger status was granted to government bodies and NGOs with defined political orientations, coordinated by Commission officials in closed meetings prior to the polls. The pattern was not neutral.
How can a European law affect what a citizen in Houston, Phoenix or Atlanta can read or post on social networks?
The answer lies in the very architecture of digital platforms and in the logic of the Brussels Effect described in Part I of this article.
The big platforms do not operate with a different set of rules for each country. They operate with global content moderation policies that apply more or less uniformly to all their users around the world, for reasons of scale, cost and operational consistency. Maintaining 27 different versions of the content rules, one for each EU member state, is technically possible but enormously expensive. Maintaining one global version adjusted to the most restrictive standard available is much simpler.
As Robert Williams wrote for the Gatestone Institute in "Welcome to the 'EUSSR': Unpopular European Regimes Grasping for Power Crack Down on Dissent":
The DSA forces platforms to change the content moderation policies that apply in the U.S. and to impose EU-mandated standards on content posted by U.S. citizens. The threat to American speech is clear: European regulators define political speech, humor and other First Amendment-protected content as disinformation and hate speech, and then demand that platforms change their overall content moderation policies to censor it.
Although nominally applicable only to speech in the EU, the DSA, as written, can limit or restrict constitutionally protected speech by Americans in the United States. Companies that censor an insufficient amount of "misleading or deceptive" speech face fines of up to 6% of their annual global revenues, which would amount to billions of dollars for many American companies.
The First Amendment guarantees of the U.S. Constitution cannot coexist with the DSA.
The mechanism is called "jawboning": indirect censorship. European governments cannot directly suppress speech protected by the First Amendment. But they can pressure private intermediaries to do so instead, with economic threats so massive that resistance becomes commercially unviable. FCC Commissioner Brendan Carr argued that the regulation is "incompatible with the American tradition of free speech." Vice President JD Vance stated in his speech at the Munich Security Conference in February 2025 that EU content moderation policies amount to "authoritarian censorship." In other words: Unelected European officials, in closed-door meetings, instructed platforms with billions of users worldwide on how to write their global content policies.
If the extraterritorial reach of the DSA is worrisome in the abstract, what the documents in the Judiciary Committee's second report reveal about its concrete application during electoral processes is of a different order of gravity. We are dealing with a deliberate strategy, executed methodically and with identifiable results, aimed at suppressing specific political narratives at electorally decisive moments.
The Judiciary Committee report documents that the commission has convened more than 90 meetings since 2020 under frameworks such as the Code of Practice on Disinformation, which evolved into binding obligations under the DSA. At those meetings, Commission officials issued "election guidelines" that, while presented as voluntary, functioned as a de facto compliance floor, accompanied by warnings that non-compliance could trigger fines of up to 6% of global revenues.
The countries and elections specifically singled out in the report include:
- Ireland (2024 and 2025): Irish regulator Coimisiún na Meán allegedly hosted "DSA election roundtables" with commission officials and fact-checkers the report deems biased, creating what it calls "censorship pressure."
- Slovakia (2023): Under European pressure, platforms censored statements such as "there are only two genders" or criticisms of certain gender policies as "hate speech." The direct effect was the suppression of positions held overwhelmingly by conservative and populist parties.
- Netherlands (2023 and 2025): Coordination included granting trusted flagger status to government bodies, allowing accelerated removal of content critical of the political establishment.
- France (2024): Demands prior to parliamentary elections focused on moderating political discourse critical of government positions, disproportionately affecting the Rassemblement National and right-wing parties.
- Romania (2024): The report questions the annulment of the initial presidential results based on allegations of primarily Russian foreign interference, noting that TikTok reportedly found "no evidence" of foreign coordinated campaigns, while suggesting that EU-driven moderation shaped online narratives.
Poland, Spain, Belgium and Germany are similarly mentioned in the context of pressures prior to their respective electoral processes, with the same pattern: closed meetings, ideologically aligned trusted flaggers, pressure on content critical of the establishment.
In all documented cases, the content suppressed or moderated under European pressure corresponded to narratives contrary to the EU agenda.
The 160-plus-page report argues that this activity, which has intensified since the DSA came into force in 2023, undermines democratic fairness.
The connection to the United States is not direct in the sense that there are documents proving explicit coordination between the European Commission and the Democratic Party. But the mechanism that operated in Europe operated in parallel in America via the Brussels Effect: The same platforms, with the same globalized moderation policies, applied in the U.S. the same criteria that suppressed conservative discourse in Europe. The Breton-Musk case is the clearest evidence that this extension was not accidental but intentional.
The central value of the Judiciary Committee's work lies in the fact that these are not inferences or hypotheses but concrete documents, including email communications between commission staff and technology companies about "voluntary" codes of conduct and internal documents on the May 2025 DSA workshop that the Commission hosted with platforms behind closed doors. What those documents reveal exceeds what any analysis of the legal text would anticipate. The May 2025 workshop exercises show the gap between the commission's public discourse and its private expectations. The Judiciary Committee demanded information about the European Commission's efforts to intimidate, threaten or coerce Elon Musk in connection with the Trump interview; its efforts to use EU law to force American companies to censor American speech; and any communications the European Commission had with the Biden-Harris administration about using EU law as a way to circumvent the First Amendment. This last line of inquiry is the most politically explosive and the least publicly documented so far.
In February 2025, Judiciary Committee Chairman Jim Jordan issued subpoenas to 10 of the world's largest technology companies. In July he issued the first report. In September he convened a hearing titled "Europe's Threat to American Free Speech and Innovation." The committee took its critique directly to Dublin, London and Brussels, meeting in London with the UK regulator responsible for implementing the Online Safety Act and the digital minister, and in Brussels with Commissioner Henna Virkkunen. In February 2026, it published the second report with thousands of new documents and the denunciation of interference in eight electoral processes.
The executive branch matched this with a diplomatic escalation unprecedented for a regulatory dispute. Secretary of State Rubio, Vice President Vance and Ambassador Puzder publicly attacked the fine against X by name, in terms that left no room for ambiguity.
This is not a technical dispute over account verification but a conflict over who controls political discourse in the Western world. ADF International announced that its lawyers in the U.S. and internationally are providing support to X's legal team as it challenges the unprecedented fine.
On the legislative front, debate is advancing over the GRANITE Act (General Retaliation Against Non-Transparent International Net Censorship Act), which would allow American companies and individuals to sue foreign entities in U.S. courts for acts of censorship. Musk shared it on X. Several states are studying it. Its federal passage would turn every European fine on an American company into the starting point of litigation on American soil. The underlying complaint is that the DSA chills speech, raises privacy concerns and puts American startups at a disadvantage in entering the European market. Six of the seven gatekeepers identified under European digital law are American. That fact is not minor: The DSA and its sister law, the Digital Markets Act, target almost exclusively American companies. Not because they are the only large platforms in the world, but because they dominate the European market and, crucially, represent the free speech ecosystem that Brussels considers a threat to its ability to control the narrative.
The European Union built an instrument of global reach: a system of political control and interference beyond its borders, with a particular obsession with the U.S. The European Union considers freedom of expression a privilege that bureaucrats grant or take away. When Brussels fines X for not censoring enough, when a European commissioner threatens an American citizen for interviewing a presidential candidate, when closed-door workshops redesign the global policies of platforms with audiences of billions, when European regulators coordinate with aligned fact-checkers to suppress dissenting narratives before elections, we are faced with the exercise of political power to control discourse on a planetary scale.
The clash between that project and the First Amendment is neither a cultural accident nor a legal misunderstanding. It is an irresolvable contradiction between two worldviews: one that treats free speech as an absolute right that the state must protect even when it inconveniences power, and another that treats it as a privilege that the state administers.
Between these two visions there is no possible agreement.