August 06, 2025
by Babette De Naeyer
Remember 2003? “Metrosexual” was the word of the year, Outkast’s “Hey Ya!” was everywhere, and “Finding Nemo” captivated younger audiences. In the legal world, 2003 also saw the European Court of Human Rights (ECtHR, the Court) deliver its landmark Appleby a.o. v. UK ruling. In Appleby, the Court had to balance a shopping mall’s property rights with individuals’ rights to expression and assembly. It concluded that freedom of expression does not guarantee access to a particular forum: property rights should only yield if access restrictions prevent ‘any effective exercise’ of expression or destroy ‘the essence of the right’ (Appleby, §47).
Appleby’s framework, crafted for the brick-and-mortar public sphere, endures two decades later. It thus remains the leading reference for scholars considering how the Court might address users’ freedom of expression claims against online platforms’ house rules. Scholars increasingly argue that private platforms look and act more like public forums, with greater duties to protect their users’ expressive rights. Judge Pavli’s recent concurrence in Google LLC a.o. v. Russia (8 July 2025) brings these questions back to the ECtHR’s doorstep. This blog explores how Google v. Russia cracks open the forum doctrine for the digital age.
Google v. Russia revolved around two Russian court proceedings against Google’s various entities. The first was administrative: in 2020, Russia empowered its telecom regulator to fine platforms that ignored take-down requests (TDRs) for content it deemed unlawful under Section 15.3 of the Information Act. By early 2021, Google complied with some TDRs, removing content it accepted was unlawful, but drew the line at requests targeting political speech. Eight further TDRs targeted videos critical of the government’s COVID-19 response or supportive of opposition figures and constitutional reform. Google geo-blocked five but refused to block three it deemed legitimate political dissent. Non-compliance led to escalating fines calculated from Google’s global revenue, which soon reached hundreds of millions of euros (§5-9).
Separately, Google had suspended the YouTube account of Tsargrad TV – an Orthodox conservative media outlet – following EU sanctions on its owner for supporting Russia’s annexation of Crimea. This led to a civil suit in which Tsargrad claimed the suspension was unlawful, and the Russian courts agreed. Facing enormous fines that mounted daily, Google reinstated Tsargrad’s account, though it remained demonetised (§15-20). Tsargrad’s victory sparked a flood of “copycat” claims by other state-affiliated media, resulting in fines exceeding USD 16 trillion. Eventually, Google Russia filed for bankruptcy (§33-36).
Google brought both cases to Strasbourg, alleging that the proceedings violated Article 10 ECHR. The ECtHR unanimously found that Russia’s actions, particularly the ‘grossly disproportionate fines’ (§100), violated Google’s right to freedom of expression, especially given the political nature of much of the targeted content (§82).
While the judgment has its own merits, this blog post will mainly reflect on Judge Pavli’s concurring opinion, which goes beyond the specific facts of Google v. Russia.
Judge Pavli’s opinion emphasized the clear pivot that is Google v. Russia: for the first time, Strasbourg explicitly addressed ‘the rights and responsibilities of a major online platform under Article 10’ (§2). He retraced the path to this moment. In Delfi AS v. Estonia (2015, see analysis here), liability rules for news portals didn’t necessarily apply to social media; Sanchez v. France (2023, see analysis here and here) introduced the idea of ‘shared liability’ between platforms and users. Pavli argued the time had come to further define what these platform obligations should entail (§3-4).
Regarding the first complaint, he welcomed the Court’s confirmation that Article 10 applies to both platforms and their users (Tamiz v. UK, 2017, §90; see analysis here). The majority held that Russia’s pressure on Google undermined its role as a ‘provider of a platform for the free exchange of ideas and information’ (§66). The concurring judge endorsed the Court’s first step in clarifying the rights and duties of online platforms: when platforms manage, curate, or moderate content, including through algorithms, their influence on public debate creates ‘duties of care and due diligence’ (Concurring Opinion, §7; Decision, §79). But Pavli pushed further, arguing that major platforms are not “mere” intermediaries, but key gatekeepers within the online information environment. He pointed to the EU’s Digital Services Act (DSA) as a reference point for platforms’ ‘responsible practices’ (§8). While accepting that states may, in principle, require platforms to prevent large-scale harm, the concurring opinion noted that the real danger in the case at hand was the state’s coercion of private platforms to suppress lawful speech (§9).
Pavli then turned to Google v. Russia’s second Article 10 complaint. He faulted the majority for sidestepping the heart of the issue, users’ rights vis-à-vis online platforms, by focusing on the excessiveness of Russia’s penalties. This approach risks implying that if the sanctions were proportionate, the interference might have been justified (§11).
The concurrence called for the majority to explicitly recognize that users have minimum procedural protections – notice, reasons, and appeal – when platforms remove content or suspend and demonetise accounts. The judge pointed to the EU’s DSA and some national courts’ rulings (Meta v. Vandendriessche and Danny Mekić v. X), which required platforms to give ‘users some degree of due process’ (§13).
Pavli argued that the ECtHR should join this challenge by revisiting Appleby’s forum doctrine. He found that the Appleby principles no longer suit the online environment. In his view, unlike old ‘brick-and-mortar shopping malls’, major platforms today operate squarely in the information business. Most importantly, the concurring opinion maintained that the question of whether genuine alternatives for expression exist is now far more complex (§15).
While he leaves it an open question whether private platforms should be deemed the kind of ‘public spaces to which everyone must have unhindered access,’ Pavli insisted that, at a minimum, users should have ‘basic due-process safeguards’ to guard against ‘arbitrary exclusion from the marketplace of ideas’ (§16).
It’s been a long time coming, but here it is: Strasbourg’s first judgment directly addressing the rights and responsibilities of major content-sharing platforms – like Facebook, Instagram, Twitter/X, and YouTube – under Article 10 ECHR. A belated but necessary move at the complex crossroads of user expression, platforms’ property and expression rights, and states’ interests in regulating illegal content. The two Article 10 complaints each address a different side of the problem.
The first complaint, concerning Russia’s TDRs targeting dissent, was the most straightforward Article 10 violation. When a government cannot censor directly, it often pressures companies like Google to silence critics on its behalf. The practice of indirect government censorship by coercing private entities has been coined “jawboning” in U.S. legal scholarship. Google v. Russia could have been a good opportunity to introduce the term in Strasbourg.
Anyway, the current case shows us why jawboning is hot and happening: despite resisting, Google still ended up geo-blocking five of eight flagged videos (§7). No big surprises here: long before Big Tech, governments knew that enough financial pressure makes private companies yield.
However, the new Google case shows that a legal system allowing TDRs against illegal content works properly only if two key conditions are met. First, so-called ‘orders to act against illegal content’ (as Article 9 DSA calls them) are acceptable only if robust safeguards, such as an independent judiciary, exist to prevent governments from targeting legitimate dissent.
Second, if platforms tend to bend under government pressure, users’ procedural safeguards against platform governance become even more pressing. Indeed, when governments use platforms to censor users, platforms need to inform users of any (government-induced) action taken against their content. In such cases, platform transparency becomes crucial in shedding light on government censorship.
Both the majority (§97) and Judge Pavli (§12) spotlighted Russia’s hypocritical policy: forcing Google to silence some users, while demanding it protect Tsargrad’s expression rights. But this is hardly new: authoritarian regimes have always paired silencing dissent with promoting propaganda. They go together like apple pie and ice cream, so of course Russia wants to have its slice and eat it too.
Still, Russia’s cynicism shouldn’t distract from the heart of the matter. The second complaint was the Court’s moment to clarify what rights users can enforce against platforms, but it declined to do so. Instead, the majority focused on the wildly disproportionate fines, as Pavli criticized (§8–11). This is understandable, given that they truly were ‘grossly disproportionate’ (§100) – or as the BBC headlined: ‘Russia fines Google more money than there is in [the] entire world.’ But it meant the Court sidestepped more nuanced questions around platform obligations on users’ expression.
Actually, some elements of the Russian civil courts’ ruling against Google were quite reasonable – even a broken clock is right twice a day! The Russian court claimed Tsargrad’s account termination was unlawful because: the suspension came six years after EU sanctions against its owner; Google didn’t specify which sanctions it was applying; and the company violated its own contract requiring 60 days’ notice before termination (§18). This last argument actually mirrors EU law, which requires notice-and-appeal before suspending or terminating business accounts (Article 4(1) and (2) P2B Regulation and Recital 50 EMFA). None of these valid arguments were addressed by the Strasbourg Court.
The second complaint also showed that putting your money where your mouth is gets tough when financial pressure boils over: faced with mounting fines, Google caved and restored Tsargrad’s account. However, monetization features that allowed the account to generate revenue remained disabled (§99, see ‘demonetisation’).
The Russian courts had argued that monetization was essential, but the ECtHR held that this went beyond the original court order (§99). But is that really true? For many business users, like news outlets, monetization of their content is central to the platform’s service. Google v. Russia would have been a good opportunity for the Court to elaborate, particularly given that for many news outlets – “democracy’s public watchdog,” as the Court likes to call them – revenue from online services is becoming increasingly vital.
The Court likely avoided this issue for two reasons. First, Google, not Tsargrad TV, was the applicant. Second, the Court would likely limit indirect Article 10 protection to good-faith news providers, not EU-sanctioned disinformation outlets. But the continued avoidance of dealing with the underlying issues is precisely the problem: symbolic Russian cases could be used to lay down standards for future case law in which the impacted party is not a Kremlin mouthpiece. News outlets’ dependence on platforms is highly relevant outside of authoritarian Russia as well. Across Europe, legitimate (business) users face content removals and account suspensions without proper recourse all the time. So, what is the Court going to do about this?
Judge Pavli rightly criticised the majority for not engaging with the triangular freedom of expression dynamic between users, platforms, and governments. Yes, this was a Russian case, and the Court could comfortably condemn Russia for yet another Article 10 violation. But that’s exactly why the Court should have gone further by using symbolic cases to shape case law for upcoming, harder questions in more democratic settings (I made a similar argument in the context of online disinformation here).
Indeed, platform due diligence issues are not going to disappear anytime soon, as users across Europe increasingly challenge platforms over arbitrary moderation practices. The concurring opinion mentioned a Belgian and a Dutch case regarding “shadowbanning”, where users successfully claimed compensation for the opaque reduction of their content’s reach. When it comes to adjudicating content moderation practices, visibility reduction measures, like shadowbans, present trickier problems than more straightforward suspensions or removals. But to handle such complex cases, the Court must first return to basics: (i) clarify whether users have procedural safeguards – such as notice, reasoning, and appeal – against platform actions, and (ii) determine whether the Appleby doctrine extends to online platforms or reconsider its applicability. Pavli’s concurring opinion called upon the Court to do just that. Let’s see whether his proposal makes its way into a majority decision next time.
The Court still seems unwilling to clearly recognise that users enjoy procedural rights vis-à-vis platforms. Yes, the issue is novel and still developing, but it is not exactly groundbreaking either. The EU’s DSA already enshrines basic procedural safeguards for users, following years of expert warnings about the urgent need to protect freedom of expression online.
Strasbourg’s hesitation couldn’t come at a worse moment. The Court is already under fire for ‘walking back on human rights’, especially in polarizing areas like migration, abortion, and same-sex marriage. Yet, in the digital sphere, regulations like the DSA show that due diligence safeguards for users may be one of the few areas where broad European consensus is actively emerging. The DSA isn’t perfect, as academics often note, but hope now rests on national courts to clarify and apply its standards. When national courts need to interpret laws that impact freedom of expression, Strasbourg’s Article 10 principles usually help guide them. However, on users’ procedural rights against platforms, the Court still refuses to take a clear stance, leaving a gap where leadership is needed most.
If the ECtHR wants to remain ‘a court that matters’, it can’t simply tread water as others surge ahead. As Dory would have said in ‘03, Strasbourg needs to ‘just keep swimming’ – or risk being left behind, as the EU’s Court of Justice and national courts set the current and sail away.
2 Comments
I’d disagree that the ECtHR is not trying to lay down lines for the future in this judgment. I’d say the main direction of its effort is to protect companies providing social media services from states in two ways – first by defining proportionality, and second by allowing legal sophisms that work in favour of corporations.
There are two ways of defining proportionality – the least onerous means necessary to achieve a legitimate objective versus limits on permissible coercion even if it means a legitimate objective is not achieved. All legal systems start with the premise that decisions following their legal procedures are just and correct. From the Russian legal system’s point of view, then, Google is a recidivist that repeatedly fails to follow legal orders and even judicial decisions. Indeed, Google was so contemptuous of the initial 3-8 million rouble fines it didn’t even try to appeal them, which clearly contributed to the decision to move to a percentage scheme that at least gained Google’s attention. Similarly, the rapidly ballooning penalty concerning Tsargrad is clearly intended to reward quick compliance and punish delay – there would be no disproportionate punishment there had Google executed the judicial decision in good faith.
Nevertheless, the ECtHR came down hard in defending the second view of proportionality and avoided even discussing the need to enforce compliance as a factor in proportionality calculations. And while it does retain the possibility that a lower penalty might pass muster, historically the ECtHR has at times interpreted proportionality such that in practice no penalty can in fact be inflicted – the large number of administrative-offence cases in Russia where the Court purports to accept the legitimacy of the values being defended, yet insists the inflicted penalty is excessive and not “necessary”, is a path the Court could choose to replicate here.
For the second point, the Court clearly realizes that one of Google’s points – specifically the one related to its many legal personalities – is formalistic legal sophistry. To be blunt, all of them are really Google, formally subdivided as recommended by their legal team with the objective of evading liability as much as possible. Entities nominally controlling controversial services such as YouTube are assigned as few assets as feasible, so they have little to lose. The Russians merely applied the principle from Civil Code Article 170 concerning sham transactions and ignored the sham to evaluate the entities as they really are. The ECtHR is also not shy to remind us how shallow Google’s claim is, for example by pointing out that some Google entities are entirely owned by other Google entities (¶ 3), or that Google cannot even be bothered to ensure the consistency of its legal documents with its claim to separate legal personalities (¶ 12).
Having volunteered to show that at least on this point the ruling against Google can hardly be said to be substantively unfair, the Court nevertheless chose to find unfairness on procedural grounds. Bringing up this substantively weak point is not even necessary to find the Article 6(1) violation – they can just find it using the content in paragraph 106.
I perceive this as a signal from the Court: while they can hardly say it is impossible to have a sham company, they will make maximum effort to prioritize the legal fiction.
Between these two points – states having to be proportionate even at the cost of ineffectiveness, and only being able to target the decoy shells corporations leave behind – the result is that European states would not be able to apply meaningful coercion against large providers like Google. They can issue demands, and they can find violations, but the company can just shrug them off as a business expense.
If I’m to make a criticism of this judgment, it is the handling of the Tsargrad case. First, one can argue this part should be dismissed for failure to exhaust domestic remedies. Granted, Google did seem to go to all the relevant courts. However, I must note that Google seems to have presented completely different arguments to the ECtHR than to the Russian authorities. In Russia, they pleaded the competing duty of complying with sanctions issued by their countries of registration (¶ 18). To the ECtHR, they say it is an infringement of their freedom of expression (¶ 86).
The difference between “It’s our will” and “It’s NOT our will. We were forced” is not a minor detail but fundamentally contradictory. It means the pleading to the ECtHR had not been presented to the Russian courts at all – so can one say they “exhausted domestic remedies”?
Emphasizing the case as it was defended in the Russian courts makes it interesting. The sanctions are themselves an interference by states with freedom of speech, issued both by states within and outside the ECHR. To what extent can an ECHR state be allowed to protect its own citizens from sanctions when those sanctions result in interference with freedom of speech (or worse, have such interference as their purpose)? Obviously, as far as the sanctioned state is concerned, any sanctions against it have no legitimacy (the Russian court’s ruling hardly seems unusual in this regard). And any high penalties applied also double as a way for the company to defend its compliance with the judicial order against any critics back home.
Does the ECtHR really want to say states can’t protect their citizens from foreign sanctions? This is what the Court currently implies by refusing to even include this factor in its proportionality calculation, which only considers the low monetary losses to Tsargrad while excluding all other factors (¶ 98). Are they not letting sanctions become a way for free speech rights to be effectively circumvented?
Finally, I’d suggest there is no internal inconsistency in Russia’s approach here, at least not more than in any country that has issued take-down requests on any pretext. Any speech that’s not illegal should be permitted. Also, paragraph 97 can be criticized for treating every Russian state organ as a monolith engaged in coordinated action, when that is not necessarily the case.
Thank you for this detailed and thought-provoking comment. It raises an interesting point about how the Court’s reasoning in Google v. Russia can be read as protecting corporations, particularly through its handling of proportionality and the treatment of Google’s corporate structure. I agree this dimension deserves more attention, since it highlights how judgments in this field inevitably carry implications not only for freedom of expression, but also for the effective sanctioning of Big Tech by the state. Yet, I understand why the Court did not focus on this line of reasoning, since in this case the state was not rightfully enforcing TDRs against illegal content on online platforms, but trying to silence legitimate dissent. It would be interesting to see how the Court treats proportionality in a more nuanced, grey-area speech case, where state authorities issued TDRs against Covid misinformation, for example.
That said, your perspective underscores a larger issue: the Court is navigating not just individual rights but also the political economy of online speech and the enforcement of state authority. This makes the possible changes to the forum doctrine I highlighted in my post all the more relevant, because these structural questions about who can be held accountable, and on what basis, will only become more pressing.