September 02, 2025
By Jerome Joseph
The case of Vyacheslavova and Others v. Ukraine stems from an incident of civil unrest which occurred in Odesa on 2 May 2014 and which claimed 48 lives. Delivered in March 2025, the judgment appears to have escaped the level of academic attention that followed Ukraine and the Netherlands v. Russia (see Khachatryan and Milanovic). Nevertheless, like Ukraine and the Netherlands, it reveals the potentially devastating consequences of foreign interference and state-sponsored disinformation. As will be shown below, given the narrow way in which the applicants’ complaints were framed, the Court stopped short of engaging with how Convention principles could be applied to online disinformation operations – even though ‘aggressive and emotional’ disinformation played an important role during the events in Odesa (para. 336). Even so, the mass riots of May 2014, and the ongoing threat posed by online disinformation today, force us to grapple with how the Convention system should respond to these technological challenges.
It is from this angle that this post explores the Court’s judgment. Given the post’s narrow scope, the Court’s findings regarding the procedural aspect of Article 2 will not be addressed. Instead, the post discusses what impact online disinformation should have on the state’s duty to put in place an adequate framework and to take operational measures in response to real and immediate threats to life.
The events in Odesa stemmed from the pro-unity ‘Maidan protests’, which followed the decision by President Yanukovych to suspend association negotiations with the EU (para.5). These protests were marked by violent clashes and were followed by the Russian Federation obtaining effective control of the Crimean Peninsula and the Donetsk and Luhansk Regions.
Odesa itself had not been immune to the events affecting the country. By May 2014, it had already experienced numerous violent incidents (para.11), with both pro-unity and ‘anti-Maidan’ protestors forming ‘self-defence’ groups (para.13). These events coincided with an ‘aggressive and emotional’ propaganda campaign on the part of pro-Russian elements, designed to undermine Maidan supporters and the new interim Ukrainian government that had replaced President Yanukovych following the outbreak of the Maidan protests. Much of this campaign took place online (para.25).
Against this backdrop, supporters of the Odesa Chornomorets and Kharkiv Metalist football clubs announced their intention to hold a joint rally in favour of a united Ukraine. The plan was for the protestors to march from Soborna Square to Odesa’s football stadium, where a match had been scheduled (para.24). However, the march was marked by violent clashes almost as soon as it began on 2 May 2014. Fuelled by online speculation that the pro-unity protestors were ‘Nazis’ determined to attack anti-Maidan activists (para.25), pro-Russian activists attacked the pro-unity protesters, firing short-barrelled weapons and throwing Molotov cocktails, which led to casualties. Anti-Maidan protesters subsequently took refuge in a trade union building (para.62).
The scene was characterised by escalating violence on both sides, with pro-unity activists throwing Molotov cocktails at the trade union building and anti-Maidan activists firing at pro-unity protestors from the roof. At 7.45 p.m. a fire broke out, forcing protestors trapped inside the building to escape by jumping from the upper-floor windows (para.67). By the time fire engines eventually arrived, however, the fire had claimed forty-two lives (para.73).
The applicants in the case were individuals whose next of kin had died during the events of 2 May 2014, as well as survivors who had suffered burns and other injuries during the fire at the trade union building (para.287). The Court found various failures under the substantive limb of Article 2.
Firstly, the Court found that the Ukrainian authorities had been furnished with information ‘sufficient to confirm the existence of a real and immediate risk to life’. This information consisted of intelligence about the possible risk of mass clashes, as well as social media posts concerning mass riots identified by the Ukrainian cybercrime authorities (paras 341 and 342). Despite this, no action had been taken ‘with a view to ensuring enhanced security in the specified public areas’ (para.342). Instead, the police had ignored the available intelligence and the relevant warning signs and had prepared for an ordinary football match on 2 May 2014 (para.343). The police remained passive at several crucial moments, notably when the anti-Maidan activists approached and attacked the pro-unity march for no obvious reason and when the first fatal firearm injury was inflicted on a pro-unity activist (para.348). Finally, the failure of the police to make any meaningful attempt to stop the initial wave of violence directed against the pro-unity protesters, together with clear indications of possible collusion between the police and the anti-Maidan activists, played a crucial escalatory role in the violence (para.349). On account of these omissions, the Court concluded that Ukraine had failed to do everything that could reasonably be expected of it to prevent the violence, thereby breaching the substantive limb of Article 2 (para.362).
The riots in Odesa are a clear illustration of the threat posed by online disinformation and state-sponsored information campaigns. This threat has been widely studied (see the following report by the European Parliament); however, it is often conceptualised in overly technocratic terms as harming democratic processes and institutions (see, for example, Parliamentary Assembly of the Council of Europe Resolution 2593 (2025), para.1). The Court’s findings testify to the fact that, quite apart from interfering with elections and referendum campaigns, online disinformation operations can be a proximate cause of physical harm in the offline world. Indeed, the Court confirmed as much when it noted that ‘aggressive and emotional disinformation and propaganda messages about the new Ukrainian government and Maidan supporters’ might have had an impact on the tragic events in question (para.336). Vyacheslavova thus joins a growing body of evidence concerning the role played by algorithmically amplified disinformation in exacerbating tense social and political contexts.
In what follows, I: (1) highlight some of the doctrinal questions arising from online disinformation that were unaddressed in this judgment; (2) place the judgment within the wider policy and academic discussions regarding online disinformation, showing why there is a need for clarity on these doctrinal issues; and (3) suggest a path forward on how Article 2 can be interpreted so as to accommodate the novel threat posed by online information operations.
The Court’s treatment of the role played by online disinformation during the events of May 2014 was relatively shallow. This is not to say that the Court downplayed the role of foreign interference: it acknowledged the ‘need for recognising and exposing Russian disinformation and propaganda warfare’ and highlighted that social media posts concerning the threat of mass riots formed part of the evidence confirming the existence of an immediate threat to life. This suggests that it was the posts’ virulent and incendiary nature that triggered the State’s operational duty to act under Article 2 (see para.336). However, the Court concluded that its task was ‘limited to examining the applicants’ complaints, which concern the acts and omissions of the Ukrainian authorities’ (para.328). It therefore dealt primarily with the passivity and slow physical reaction of the Ukrainian police in the face of the escalating violence. This in turn meant that the Court did not engage in any deeper discussion of the role that social media business models might have played in exacerbating social tensions, or of what the Ukrainian authorities’ obligations should have been vis-à-vis the incendiary disinformation circulating online. There are also important doctrinal questions regarding how Article 2 applies to digital activities which were not addressed in this ruling. For instance, at what point does online disinformation give rise to a ‘real and immediate’ threat to life and a duty to take operational measures? Can inflammatory rhetoric that stops short of open incitement to violence also trigger the duty to take operational measures under Article 2? Although the Court was constrained to a certain extent by the scope of the applicants’ complaints, a discussion of these issues would be useful for the reasons set out below.
Firstly, the impact of disinformation and propaganda warfare has attracted prominent political attention (see, for instance, the European Parliament resolution (no. 2016/2030(INI)) cited by the Court at para.281). As recently as June of this year, the UK Parliament’s Science and Technology Committee heard evidence that ‘foreign influence operations may have played a role’ in the unrest that followed the tragic killings in Southport. According to the Committee’s report into these events (para. 56): ‘Some state actors, such as Russia and China, invest heavily in online information campaigns and influence operations, disseminating false or polarising content to widen social divides’ and ‘technology such as bots is used to amplify messages through social media recommendation algorithms’. This was compounded by the fact that several social media companies appeared slow, unwilling or unable to act against incendiary content, and may even have profited from the increased engagement garnered by such material (Science and Technology Committee report, para.13). Such findings illustrate the danger of leaving it entirely to social media firms to design their own crisis-response mechanisms. Accordingly, there is an important debate to be had about the kinds of operational measures that governments can legitimately bring to bear on social media services when harmful content is contributing to unrest in real time and such companies are failing to act with due diligence.
Secondly, there is a growing academic debate about how international law applies to foreign information campaigns (Milanovic and Schmitt, Lahmann, Baade). However, this debate typically centres on the law relating to interstate relations. For instance, scholars have noted that traditional concepts such as the prohibition on interference in the internal affairs of other states cannot easily be applied to online disinformation (see Corn), thereby creating protective gaps. Corn notes that the rule prohibiting intervention in the internal affairs of other states is triggered ‘only when a state seeks to overcome the free will of another state by use of coercive measures aimed at depriving or substantially impairing the targeted state’s freedom of choice’. However, ‘efforts to sow societal division and distrust do not readily lead to a finding of coercion’ since ‘the citizenry is free to accept or disavow the disinformation being disseminated’. There is therefore a useful discussion to be had over whether human rights law can be used to help fill protective gaps in general public international law.
One way of filling this gap is to argue that Article 2 of the Convention imposes a duty to put in place and enforce an adequate framework to act against disinformation. Indeed, as the Court pointed out in Vyacheslavova (para.319), Article 2 imposes not only a positive obligation to offer personal protection to an individual under threat from a third party, but also an obligation to afford general protection to society at large. The Court has typically applied the latter obligation to more tangible, physical threats such as the management of dangerous activities and the mitigation of natural hazards (see the cases cited at para.319). Of course, online disinformation cannot readily be compared to industrial activities or foreseeable natural disasters since, unlike such activities, social media posts are not inherently harmful. Furthermore, online disinformation does not always consist of clear incitement to violence or hate speech; it can manifest in subtler ways, such as deliberately distorted narratives of historical events (see the European Parliament resolution (no. 2016/2030(INI)) cited above).
That being said, there is now growing evidence that online disinformation is not simply a threat to state sovereignty or state interests, but can equally be framed as a public health and safety issue. As pointed out by the European Centre of Excellence for Countering Hybrid Threats, online disinformation campaigns are deliberately designed to amplify divisive narratives, thereby heightening tensions between different societal groups and creating ‘a conflictual environment in which dialogue and compromise become increasingly difficult to achieve’. Disinformation operations are often described as ‘parasitic in nature’ since they deliberately prey upon historical grievances likely to inflame tensions (Report by the Parliamentary Assembly of the Council of Europe, ‘Foreign interference: a threat to democratic security in Europe’, para.42). Disinformation campaigns also deliberately exploit social media algorithms that prioritise content likely to trigger strong emotional reactions such as anger and outrage, since such content maximises user engagement. In already tense political situations, marked by riots and public disorder, such content increases the prospect of violence (see the evidence given to the UK Science and Technology Committee). In this way, a disinformation operation deliberately designed to sow social discord can, in the right circumstances, be characterised as a real and immediate threat to life, thereby triggering the duty to take operational measures. There are growing signs that such an argument would not be novel, and that it would chime with the mood on the Strasbourg bench. In the case of Google v Russia (for a full discussion, see De Naeyer), the Court correctly noted that the role of social media firms in ‘facilitating and shaping public debate engenders duties of care and due diligence’ (para. 79). Framing the use of social media by bad actors as a public safety issue is thus gaining traction.
Undoubtedly, any operational measures would need to be carefully tailored so as not to interfere disproportionately with users’ right to freedom of expression under Article 10. However, it is important to point out that disinformation operations are ‘seldom politically aligned’. According to the European Centre of Excellence for Countering Hybrid Threats (cited above), their strategy is to amplify opposing sides of political debates simultaneously in order to undermine social cohesion in the target state. Online accounts contributing to disinformation operations thus do not genuinely contribute to debates on questions of public interest. Furthermore, there is a range of creative ways of tackling online disinformation that stop short of removing vast swathes of online content, such as requiring social media firms to give users more control over the content they see, or demoting harmful content so that it reaches a smaller audience. Finally, the Court has repeatedly noted that remarks directed against the Convention’s underlying values, notably justice, peace, tolerance and non-discrimination, may be removed from the protection of Article 10 by virtue of Article 17 (see the guide on Article 10 and hate speech). Given what is now known about the destabilising effects of disinformation campaigns in Council of Europe Member States, the public safety element of the equation should be given significant weight.
Although the facts giving rise to the Court’s judgment are now over ten years old, the threat posed by state-sponsored disinformation campaigns remains current. With such campaigns causing real-world harm in Council of Europe Member States, the prospect of an applicant invoking Article 2 in light of a state’s failure to act effectively against online disinformation is not far-fetched. There is a need to reframe the debate on regulating online disinformation so that it is not simply a matter of users’ freedom of expression versus national security. There is equally a need to bring in the voices of victims of online harms and to highlight the range of other Convention interests engaged by these threats. Though the Court in this case went some way towards highlighting the destabilising offline effects of online disinformation, a more nuanced discussion of how such challenges engage rights beyond Article 10 will have to await another set of facts and another applicant.