Strasbourg Observers

Sanchez v France: The Expansion of Intermediary Liability in the Context of Online Hate Speech

July 17, 2023

by Jacob van de Kerkhof

Self-confident people are usually not too concerned about what other people post on their social media pages. But they should be. On 15 May 2023, the Grand Chamber of the European Court of Human Rights (‘ECtHR’ or ‘the Court’) delivered its judgment in the case of Sanchez v France, following an earlier Chamber judgment which found no violation of Article 10 of the European Convention on Human Rights (ECHR). In a lengthy decision, the Grand Chamber explored the limits of freedom of expression in the context of hate speech in an election-related setting and intermediary liability for actors on the internet. Building on earlier case law on hate speech in a political setting (Perinçek, Jersild and Soulas and Others), the Court significantly expanded intermediaries’ liability for third-party content on the internet compared to its earlier case law in Delfi v Estonia. This contribution argues that Sanchez v France takes an untenable position regarding third-party liability for content on social media.

Facts

The applicant, Julien Sanchez, is a politician for the National Rally (a far-right party in France) and mayor of Beaucaire. While running for election to Parliament for the party in the Nîmes constituency, he posted a message about one of his political opponents, F.P., on his publicly accessible Facebook wall, which he ran personally. The post itself was not inflammatory, and only his Facebook friends could comment on it. Two third parties, S.B. and L.R., added a number of comments under the post, referring to F.P.’s partner Leila T. and expressing dismay at the presence of Muslims in Nîmes. Leila T. confronted S.B., whom she knew personally, and he deleted his comment later that day.

The next day, Leila T. lodged a criminal complaint against the applicant, Mr. Sanchez, as well as against those who had written the offending comments. The Nîmes Criminal Court found them all guilty of incitement to hatred or violence against a group or an individual on account of their origin or their belonging, or not belonging, to a specific ethnic group, nation, race or religion. The Nîmes Court concluded that, by creating a public Facebook page, Mr. Sanchez had set up, on his own initiative, a service for communication with the public by electronic means for the purpose of exchanging opinions. By leaving the offending comments visible on his wall, he had failed to act promptly to stop their dissemination and was guilty as the principal offender (para 18). In its decision, the Nîmes Criminal Court noted that only ‘friends’ could comment on the applicant’s Facebook wall and that, as a political actor, he had to be more thorough in monitoring the comments on it, since he was more likely to attract polemical content.

This decision was upheld by the Nîmes Court of Appeal, which held that the comments had clearly defined the group concerned (Muslims), associating them with crime and insecurity in the city in a provocative way. The Court of Appeal also noted that, by knowingly making his Facebook ‘wall’ public, the applicant had assumed responsibility for the offending content. Mr. Sanchez’s appeal to the Court of Cassation on points of law was rejected. He then turned to the ECtHR, alleging that his criminal conviction for incitement to hatred violated Article 10.

The Chamber majority found that no violation had occurred.

Grand Chamber Judgment

The Court had to decide whether the applicant’s criminal liability for hosting third-party comments containing hate speech on his social media page interfered with his right to freedom of expression, and if so, whether that interference was lawful, served a legitimate aim and was necessary in a democratic society. It was undisputed that the imposition of criminal liability interfered with his freedom of expression. As to the lawfulness of the interference, the Court observed that the applicant was held liable as a ‘producer’ of the comments under section 93-3 of Law no. 82-652. The concept of a ‘producer’ was sufficiently defined in French case law. It remains undecided at what point a ‘producer’ must have had knowledge of unlawful remarks in order to be held liable for them. Contrary to the applicant’s arguments, and in line with Delfi v Estonia, domestic law imposes no requirement of prior notification of the producer; knowledge of illegal comments therefore has to be decided on a case-by-case basis. Even though social media account holders had not been held liable for illegal comments before, this did not violate the requirements of accessibility and foreseeability in law.

Further, the Court found that the interference served a legitimate aim: the protection of the rights and reputation of others (Perinçek). Although political speech calls for an elevated level of protection, it may still be subject to restrictions and penalties, even during electoral periods. Politicians in particular have a responsibility not to incite hatred and must avoid advocating racial discrimination (Féret). In order to assess the necessity of the interference, the Court reiterated its findings in Delfi and outlined the factors relevant to attributing liability to an internet intermediary for speech by third parties: (i) the context of the comments, (ii) the measures applied by the applicant to prevent or remove defamatory comments, (iii) the liability of the actual authors of the comments as an alternative, and (iv) the consequences of the domestic proceedings for the applicant.

Applying these criteria, the Court considered the comments in this case to qualify as hate speech, targeting a specific group (Muslims) with insulting and hurtful language. The electoral setting in which the comments were made meant that they were more likely to spread, creating a larger risk of xenophobia. The comments were made on the applicant’s social media page. The Court disagreed with labelling the applicant a ‘large professional Internet news portal,’ but found that his social media presence fell into the category of ‘fora on the internet where third-party comments can be disseminated’ (Delfi). The French criterion of ‘producer’ creates a shared liability regime, allowing both original posters and intermediaries to be held liable. This requires a level of content moderation from the intermediary. The required level is highly context-dependent; domestic courts are best situated to determine it.

In the Court’s view, the Nîmes Court was correct in finding that the applicant must have been aware of the risk that an open profile would pose for the dissemination of illegal content in election times. With his original post, the applicant had opened a dialogue of which the litigious comments formed part. His liability was not based on any single comment, but on the dialogue as a whole. The French courts were justified in finding that, by virtue of his public office and his choice to make his Facebook wall public during an election period, he had failed in his responsibilities by starting a dialogue that resulted in the spread of hateful content and by failing to monitor that content appropriately. The sanction was proportionate in light of the relatively modest financial penalty imposed on the applicant (EUR 4,000). As to the possibility of holding only the original posters liable for the litigious comments, the Court found that the applicant and the original posters were not held liable for the same offence: the applicant was liable for failing in his duties as a ‘producer’, whereas the commenters were directly criminally liable for hate speech. The Convention does not prescribe who is to be held liable for which offence; the Court only reviews whether that criminal liability leads to an unjustifiable interference with an applicant’s freedom of expression.

Commentary

Three points stand out in this decision. Firstly, the Court found that the litigious comments were ‘clearly unlawful’ due to their hateful nature. Given the lack of a clear definition of hate speech, which the Court itself acknowledges, this is a difficult position to defend. The decision clarifies that the comments clearly single out one minority group (Muslims) and even one person (Leila T.). The applicant argued that the comments were made in an electoral context, meriting greater protection of freedom of expression; the Court found the opposite. During an election, the impact of xenophobic and racist discourse is greater, especially on the internet, given its greater potential for dissemination (paras 153, 156). The vulgar tone of the comments, as well as the context in which they were made, rendered them ‘clearly unlawful’. The manifest unlawfulness of the comments, particularly in a sensitive electoral setting, is up for debate, but very much dependent on the local setting. I agree with the Court that local courts, such as the Nîmes Court, are best equipped to deal with such questions (para 189).

The consequence of the ‘clear unlawfulness’ of the comments is that the applicant is responsible for removing them with sufficient expedience. The Court finds that, by virtue of his political office and his public Facebook timeline, the applicant has more responsibilities than the ordinary internet user in monitoring content on his social media page; thus, he has to remove clearly unlawful content with sufficient expedience. In my opinion, the French courts jumped through a number of hoops in inferring liability from these circumstances by labelling the applicant a ‘producer’ under Law no. 82-652. This ruling creates a scenario in which every person who owns a public Facebook profile, depending on their public role and the accessibility of their social media presence, might bear responsibility for the comments posted on it. For larger Facebook profiles especially, that is problematic, as monitoring all traffic requires significant resources. The tension grows when the Court stretches the notion of ‘clearly unlawful’, or ‘manifestly unlawful’, content to include borderline cases of hate speech, which may be difficult for private individuals to identify, especially if they are not well-versed in the intricacies of free speech law. The resources required also depend on the timeframe within which the Court expects illegal content to be removed. In this case, the Court mentions that requiring the applicant to remove content within any timeframe shorter than 24 hours after posting would be excessive; it does not, however, specify an alternative timeframe. If the Court maintains the 24-hour period, that could prove burdensome for individuals. In comparison, under Germany’s NetzDG (Section 3(2)(2)), manifestly unlawful content must be removed within 24 hours, or intermediaries face liability. That regulation, however, is aimed at large internet intermediaries with sufficient resources to tackle illegal content online as best they can. Holding private individuals to the same standard seems excessive. One possible consequence of this approach would be a chilling effect on public profiles, pushing all those with a social media presence to make it ‘private’ in order to avoid unwanted comments from strangers.

However, the most important factor missing from this decision is the role of the social media platform. If one equates every social media profile with a little forum in itself (which has been done in cases concerning government officials, see for example Knight v Biden), too much responsibility is placed on the individual who has a social media profile, and not enough on the social media platform facilitating the connectivity of those profiles (see José van Dijck, The Culture of Connectivity). In this case, the applicant had 1,800 friends, not such a big social media presence as to warrant terming it part of the public sphere. If it is, then it would be good to clarify size thresholds for such responsibilities, as the Digital Services Act (Regulation 2022/2065) has recently done for very large online platforms (more than 45,000,000 users). The platform is better equipped to judge the ‘clearly unlawful’ nature of comments, and certainly has better (even if still insufficient) resources for monitoring heavy traffic. Placing the responsibility on the owner of the social media profile is a liability regime that will lead to nothing but self-censorship.

Conclusion

Sanchez v France takes a significant step in the establishment of intermediary liability for clearly unlawful content. Firstly, the Court took a bold step in upholding the French courts’ labelling of the litigious comments as ‘clearly unlawful,’ requiring their expedient removal by the intermediary. This can be very burdensome, especially in light of the nature of the comments in this case, which were, in my opinion, not so clearly unlawful that an uninformed individual, such as a person monitoring their own Facebook profile, would be able to spot their unequivocal unlawfulness. Secondly, the decision dramatically expands the range of people and entities that need to worry about being held liable as an internet intermediary, to the point where this could extend to everyone with a social media presence. Thirdly, the responsibilities falling on those potential internet intermediaries require significant legal prowess and resources, which are unattainable for most private individuals. The Court has understandably been accommodating to the French courts holding the applicant liable under the French liability regime, but from the perspective of freedom of expression, the resulting decision is awkward at best… and untenable at worst.

