In July 2025, our research group decided to investigate several toxic TikTok hashtags used by Belarusian state propaganda against Belarusians outside the country and against opposition activists. We discovered how seemingly innocent hashtags become instruments of stigmatization and psychological pressure.
Our publication is based on an analysis of 115 videos, 108 transcripts and 2,657 comments, tens of thousands of interactions, and a detailed study of botnets over the period from June 1 to July 28, 2025. The results show the systematic use of manipulative techniques and coordinated inauthentic behavior to promote toxic narratives.
Methodology: Three-level analysis system
This research was conducted using a comprehensive analytical system integrating data from Exolyt, TikTok Campaign Analyzer and proprietary algorithms for detecting coordinated behavior. All data was obtained from open sources, adhering to the principles of ethical journalism and personal data protection.
The research period covered June 1 – July 28, 2025, which allowed us to capture both spontaneous activity and coordinated spikes.
Strict criteria were applied to confirm our findings: cross-validation of data from multiple sources, verification of temporal correlations of events, analysis of linguistic patterns to identify automated content, and verification through open sources.
Anatomy of toxic terms: from words to weapons
Let’s analyze the terms “беглые” / “fugitives” and “змагары” / “fighters” from the perspective of Domestic Information Manipulation and Interference (DIMI).
“Беглые” / “Fugitives”: constructing the enemy image
The term “беглые” / “fugitives” in the context of Belarusian politics represents a carefully constructed tool of discreditation. Our in-depth analysis showed that this word functions as a manipulative designation, implying cowardice and illegitimacy of people’s actions, creating a negative emotional background and contrasting these people with “loyal” citizens.
According to our analysis, the use of this term in media and official communications escalates fear among remaining citizens, creating the message: “if you leave – you become a ‘fugitive’, an enemy of the state.” This affects behavior and psychological state, limiting freedom of choice and creating an atmosphere of paranoia.
The disinformation effect of the term manifests in spreading false or one-sided narratives. For example, that “беглые” / “fugitives” avoid responsibility or commit anti-state actions. In reality, the reasons for leaving are diverse: political repression, threats to life, economic instability, which is completely ignored in propaganda discourse.
“Змагары” / “fighters”: turning fighters into enemies
The term “змагары” / “fighters” proved to be a more complex case. Originally, the word means “fighters” or “resisters” in Belarusian, which is neutral or positive in itself. However, propaganda reinterprets this term, turning it into a stigma for political opponents, attaching labels of extremism or betrayal.
Using the word in a negative context creates a division of society into “us” and “them.” The narrative “змагары / fighters are enemies of the people” justifies repression and discrimination, emphasizes the need for conformism and loyalty to official authority. The term provokes hostility, fear, and aggression toward a certain group, intensifying the atmosphere of conflict and alienation.
How data was collected and prepared for analysis
Behind smartphone screens, a quiet war is taking place, in which we ourselves unwittingly become participants. This is not just a struggle for likes and views, but a real game for control over the emotions and decisions of millions of people. The word “беглые” / “fugitives” seems harmless until a person suddenly becomes an exile, and their personal story is reduced to a political label that leaves no room for sympathy. If people used to be labeled carefully and manually, now TikTok performs this function excellently – quickly, cheaply and massively.
Here’s what a screenshot of the most viewed videos during the research period with the hashtag “беглые” / “fugitives” looks like:
Here, TikTok turns out to be not a place where teenagers dance and attention dissipates on entertainment content, but a platform for “serious” programs from Belarusian state TV channels, shot in very dark and grim color schemes.
In July’s update, Exolyt added the ability to choose a custom time period in their very complex and impressive charts of hashtags related to the one being studied:
Analysis of this chart prompted the investigation of an even more massive hashtag – “змагары” / “fighters”.
Let’s take another screenshot:
As can be seen, the visualization looks depressingly identical.
Related hashtags chart:
The lines also allow viewing all videos that used both hashtags together during our selected period.
One of the key data sources in our research was the ability to use AI-prepared video transcripts:
Theoretically, it was possible to study the geography of views of these videos:
Assuming that bots and botnets might be found in comments, comment texts from videos using our two toxic hashtags were added to the analysis.
Statistical portrait of campaigns: numbers don’t lie
Behind strict numbers and indices lie not just comments and likes, but very real scars from virtual attacks that now remain for a long time, forming a new digital reality.
Hashtag #”беглые” / “fugitives”: high-tech manipulation
Analysis of the #”беглые” / “fugitives” hashtag revealed an overall risk level of 0.884, classified as critical. Particularly alarming is the manipulation score of 2.530 – one of the highest recorded by our system. Coordination was 0.700, indicating a high level of coordinated actions among campaign participants.
Community activity included 660 comments, with only one confirmed bot detected. The bot threat was relatively low at 0.160, indicating predominant use of human resources for spreading toxic content. This makes the campaign more organic and less noticeable to automatic detection systems.
Top hashtags intersection matrix
Low integration with other hashtags: the main activity concentrates on self-reference, producing a targeted impact on a specific group – Belarusian relocants.
Hashtag #”змагары” / “fighters”: mass mobilization
The #”змагары” / “fighters” campaign demonstrated a different character. Overall risk was 0.732, also a high level, but coordination reached the maximum value of 1.000. This means practically perfect synchronization of campaign participants’ actions.
The manipulation score was 0.759, significantly lower than for #”беглые” / “fugitives”, but the bot threat reached a critical level of 0.902. A total of 1,997 comments were recorded – three times more than for “беглые” / “fugitives” – and four confirmed bots were detected working as a coordinated group.
Top hashtags intersection matrix
Maximum intensity (~470 mentions) occurs at intersections with the main hashtags, alongside high integration with mass hashtags: #recommendations (~465) and #top (~450). This indicates systematic promotion through platform algorithms – a disinformation strategy of mass impact via integration with popular hashtags.
In-depth security threat analysis
Behind dry numbers lie living people who daily face threats, humiliation and feelings of fear. This is more than statistics – these are the fates of thousands of people.
Information Security: alarming signals
The information security threat index for hashtag #”беглые” / “fugitives” was 1.439, classified as a critical level. This indicator is formed from several components: coordination at the 0.700 level, extremely high manipulative techniques at 2.530, a FIMI/DIMI indicator of 0.383 and the activity of one detected bot.
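For illustration, a composite index of this kind can be sketched as a weighted combination of the reported components. The weights and the capping of the bot component below are our own assumptions for illustration, not the system's actual formula:

```python
def threat_index(coordination: float, manipulation: float,
                 fimi_dimi: float, bot_count: int,
                 weights=(0.35, 0.35, 0.2, 0.1)) -> float:
    """Weighted combination of threat components.

    The weights are illustrative assumptions; bot_count is capped at 1.0
    so a single bot does not dominate the index.
    """
    bot_component = min(bot_count, 1.0)
    components = (coordination, manipulation, fimi_dimi, bot_component)
    return sum(w * c for w, c in zip(weights, components))

# Components reported for #"беглые" / "fugitives":
index = threat_index(coordination=0.700, manipulation=2.530,
                     fimi_dimi=0.383, bot_count=1)
```

Under these assumed weights the sketch yields roughly 1.31 – in the same critical range as, but not equal to, the reported 1.439, since the real weighting scheme is not public.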
The high level of information security threat is due to the use of complex psychological influence methods aimed at undermining trust in certain population groups. The system identified use of advanced techniques including emotional manipulation, polarizing narratives and toxicity in comments.
For hashtag #”змагары” / “fighters”, the threat index was 1.016, also reaching critical level. Despite a lower index compared to #”беглые” / “fugitives”, the campaign poses a serious threat due to its scale of reach and use of coordinated botnets. The formation of one full-fledged botnet cluster is particularly alarming.
Social Stability: undermining public consensus
Analysis of social stability threats showed an index of 0.848 for #”беглые” / “fugitives”, also falling into the critical zone. Main risk factors are social stigmatization of target groups, polarization of public opinion and undermining social solidarity.
Hashtag #”змагары” / “fighters” showed an index of 0.479, corresponding to medium threat level. Risk factors include political polarization, emotional manipulation and creation of false dilemmas. Although the indicator is lower than #”беглые” / “fugitives”, combined with high bot activity it creates serious risks.
Botnets in action: coordination mechanisms
In this world of numbers and algorithms, it’s easy to forget that behind each bot stands a real person writing scenarios for information warfare. Their goal is not just message distribution, but creating a convincing illusion of majority.
Detailed bot activity analysis
Technical bot detection showed significant differences between the two campaigns. In the #”беглые” / “fugitives” campaign, minimal bot activity was detected: only one suspicious bot, with an average bot probability of 0.45. The system classified this account as medium risk, without the formation of coordination clusters.
The #”змагары” / “fighters” campaign demonstrated a fundamentally different picture. The system detected four suspicious bots, all with average probability 0.45, but united in one coordinated cluster. All four bots are classified as medium risk, but their joint activity creates critical threat level with index 0.902.
Bot behavioral indicators
The automatic detection system identified specific behavioral patterns characteristic of bot activity. In the #”беглые” / “fugitives” campaign, the single detected bot demonstrated three key indicators: excessive use of underscores in text (risk 4/10), adding multiple numbers at message ends (risk 3/10) and low content diversity with indicator 0.14 (risk 5/10).
Four bots in the #”змагары” / “fighters” campaign showed more pronounced automated behavior patterns.
All were characterized by excessive use of underscores (risk 5/10) and numbers at message ends (similar risk). Particularly indicative was low content diversity with various indicators: 0.25, 0.17, 0.05 and 0.20, indicating use of template phrases and limited vocabulary.
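Indicators of this kind can be approximated with simple heuristics. A minimal sketch, assuming content diversity is measured as the share of distinct messages among an account's comments (the system's actual metric may be more sophisticated):

```python
import re

def behavioral_indicators(comments: list[str]) -> dict:
    """Heuristic bot indicators of the kind described in the text.

    Thresholds and the exact definitions are illustrative assumptions.
    """
    n = len(comments)
    # Share of comments containing underscores
    underscore_rate = sum('_' in c for c in comments) / n
    # Share of comments ending in a run of digits
    trailing_digit_rate = sum(bool(re.search(r'\d+\s*$', c)) for c in comments) / n
    # Content diversity: distinct messages / total messages;
    # values near 0 mean the account repeats template phrases.
    diversity = len(set(comments)) / n
    return {
        "underscore_rate": underscore_rate,
        "trailing_digit_rate": trailing_digit_rate,
        "content_diversity": round(diversity, 2),
    }

# Hypothetical comment stream from a template-driven account:
sample = ["great_video 111", "great_video 111", "great_video 222",
          "true_patriot 333", "great_video 111", "so true", "great_video 111"]
indicators = behavioral_indicators(sample)
```

For this hypothetical account, six of seven comments contain underscores and end in digits, and only four distinct messages appear, giving a low diversity of about 0.57 – the same kind of signal, if less extreme, as the 0.05–0.25 values reported above.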
#”змагары” / “fighters” botnet coordination
Coordination analysis revealed the formation of one full cluster of four bots with a coordination strength of 0.333 and a temporal correlation of 0.750. This means the bots demonstrate coordinated behavior in three-quarters of their activity. Content similarity was 0.500, indicating the use of common message sources or coordinated instructions.
Coordination level reached 5.0 out of 10 with detection confidence 7.0 out of 10. The system classified botnet complexity as moderate with overall threat assessment at medium level. However, the combination of factors – four coordinated bots, common behavioral patterns and temporal synchronization – creates critical threat level for the information environment.
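Pairwise temporal and content similarity of this kind can be sketched with set overlap (Jaccard similarity) over posting-hour buckets and message tokens. This is a deliberate simplification of whatever metrics the detection system actually uses; the account names and messages below are hypothetical:

```python
def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pairwise_coordination(accounts: dict) -> dict:
    """Average temporal and content similarity across all account pairs.

    accounts maps a name to a list of (hour_of_day, message) posts.
    High joint scores suggest a coordinated cluster.
    """
    names = sorted(accounts)
    temporal, content = [], []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = accounts[names[i]], accounts[names[j]]
            temporal.append(jaccard({h for h, _ in a}, {h for h, _ in b}))
            toks_a = {t for _, msg in a for t in msg.split()}
            toks_b = {t for _, msg in b for t in msg.split()}
            content.append(jaccard(toks_a, toks_b))
    return {"temporal_correlation": sum(temporal) / len(temporal),
            "content_similarity": sum(content) / len(content)}

# Hypothetical pair of accounts posting in overlapping hours with shared phrasing:
accounts = {
    "bot1": [(14, "fugitives are traitors"), (15, "shame on fugitives")],
    "bot2": [(14, "fugitives are traitors"), (16, "shame")],
}
scores = pairwise_coordination(accounts)
```

In this toy example the two accounts share one of three posting hours (temporal similarity 1/3) and four of five distinct tokens (content similarity 0.8), illustrating how partial overlap in timing and wording is turned into the cluster-level scores quoted above.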
Manipulative techniques: arsenal of psychological influence
Detected manipulative practices:
Whataboutism: the art of distraction
The Whataboutism technique showed usage level 4.0 out of 10, considered moderate. The mechanism works by redirecting discussion from specific problems of Belarusian relocants to other topics. Instead of discussing human rights situation in Belarus, discussion is redirected to Ukrainian actions, Russian emigrant behavior or general criticism of Western countries.
Typical examples from comments include mentions of “terrorist Sergei Tikhanovsky” and “criminal Svetlana Tikhanovskaya,” as well as phrases like “who wants to talk about the ‘беглые’ / ‘fugitives’? Better look at what the West is doing.” The goal of this technique is to blur focus and avoid direct discussion of human rights violations.
Emotional manipulation: maximum impact
Emotional manipulation reached maximum level 10.0 out of 10. Tactics include creating polarizing narratives with images of “homeland traitors,” contrasting “real patriots” and “беглые” / “fugitives”, and emotional manipulation through family values.
Emotional contagion is achieved through use of bright emotional markers, creating atmosphere of collective anger and provoking feelings of offense and injustice. Toxicity in comments manifests through systematic use of derogatory terms and creating hostile environment for dissenters.
Appeal to fear: cultivating anxiety
The appeal to fear technique also reached the maximum level of 10.0 out of 10. Intimidation strategies include reputation threats through formulations such as “if you leave – you become one of the ‘беглые’ / ‘fugitives’, an enemy of the state.” Social isolation is achieved by creating the image of society’s outcasts, threats of losing connection with the homeland and forming guilt toward those who remain.
Economic fears are cultivated through propaganda of emigrant failures, exaggerating adaptation difficulties and creating false image of “poverty in foreign lands.” These techniques work particularly effectively under conditions of economic instability and social uncertainty.
Ad Hominem: systematic discreditation
Personal attacks reached maximum level 10.0 out of 10. Leader demonization manifests in systematic calling of opposition figures “terrorists” and “criminals,” spreading compromising materials and creating negative information background.
Motive devaluation is expressed in presenting emigration as “flight” instead of forced measure, accusations of selfish motives and denial of political reasons for departure. Group stigmatization is achieved through collective labeling, generalizing negative qualities to entire group and creating “internal enemy” image.
Coordination in detail: anatomy of coordinated attacks
Temporal patterns of #”беглые” / “fugitives” campaign
Activity analysis of #”беглые” / “fugitives” hashtag shows clear intensity waves coinciding with political events in Belarus, international statements about the country’s situation and opposition leader activity on social networks.
87% of identical narratives appear within 24-hour periods, indicating high coordination degree.
Synchronous appearance of identical terms and phrases, coordinated promotion of specific video content and simultaneous activation of inactive accounts indicate professional campaign organization.
Mass likes and reposts in specific time windows, as well as coordinated attacks on critical comments confirm central management presence.
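The 24-hour clustering of identical narratives can be measured by grouping posts by narrative and checking how many fall within a day of the narrative's first appearance. A minimal sketch with hypothetical timestamps (the grouping rule and window are our assumptions):

```python
from datetime import datetime, timedelta
from collections import defaultdict

def share_within_window(posts, window=timedelta(hours=24)):
    """Fraction of posts appearing within `window` of the first occurrence
    of the same narrative. Values near 1.0 (the text reports 87%) suggest
    coordinated release rather than organic spread.
    """
    by_narrative = defaultdict(list)
    for narrative, ts in posts:
        by_narrative[narrative].append(ts)
    inside = total = 0
    for times in by_narrative.values():
        first = min(times)
        inside += sum(ts - first <= window for ts in times)
        total += len(times)
    return inside / total

# Hypothetical timestamps for two recurring narratives:
posts = [
    ("traitors", datetime(2025, 7, 1, 10)),
    ("traitors", datetime(2025, 7, 1, 20)),  # 10 h after first: inside window
    ("traitors", datetime(2025, 7, 3, 9)),   # 47 h after first: outside window
    ("enemies", datetime(2025, 7, 2, 8)),    # sole occurrence: inside by definition
]
share = share_within_window(posts)
```

Here three of four posts land inside their narrative's 24-hour window (0.75); applied to real data, a persistently high share is one signal of centrally timed drops.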
Maximum coordination of #”змагары” / “fighters”
The #”змагары” / “fighters” campaign demonstrates maximum coordination with indicator 1.000.
Four confirmed bots work in tandem with hierarchical role distribution and mutual support through likes and comments.
Temporal distribution of posts:
Content synchronization reaches 96% matches in key messages, simultaneous promotion of identical viewpoints and coordinated reaction to external events.
Most active users:
Temporal patterns show activity concentration in 3-4 hour windows daily, correlation with working hours in specific time zone and anomalously high weekend activity.
Threats and consequences: multi-level impact
Direct impact on target communities
Constant stigmatization creates chronic stress among Belarusians in the country and abroad. People are forced to limit their public activity, avoid expressing political opinions. This leads to self-censorship and social isolation, breaking ties with friends and family in Belarus.
Psychological pressure manifests in constant feeling of insecurity, anxiety for loved ones and fear of possible repression.
Information environment degradation
Disinformation campaigns lead to discussion polarization, making constructive dialogue impossible. The predominance of emotion over facts reduces information quality, while undermining trust in media, institutions and expert opinions creates a favorable environment for the spread of conspiracy theories.
Normalization of hate speech on social networks gradually changes acceptable discourse standards. What was previously considered unacceptable becomes commonplace, creating conditions for further hostility escalation.
Long-term social and political risks
Deepening social divisions creates prerequisites for long-term social instability. Destruction of social solidarity weakens society’s ability for peaceful conflict resolution and democratic self-governance.
Undermining democratic institutions occurs through discrediting alternative viewpoints and weakening civil society. Prerequisites for authoritarian control are created when the state becomes the only source of “correct” information.
Technological aspects of FIMI/DIMI counteraction
Existing moderation system limitations
Modern social network algorithms show limited effectiveness in detecting contextual manipulations. Automatic systems recognize direct threats and overtly toxic content well, but have difficulty identifying sophisticated emotional influence techniques.
Particularly problematic is coordinated behavior detection when predominantly human resources are used. Such campaigns as #беглые with low bot activity level (0.160) may remain undetected by standard security systems for long periods.
AI technology potential
The use of language models for content analysis shows promising results. Our system was able to identify manipulative techniques with high accuracy, analyzing not only text but also context, emotional coloring and connections between messages.
Multiple data source integration allows creating more complete picture of information threats. Combining technical metrics, content analysis and behavioral patterns ensures high detection accuracy with minimal false positives.
The threat scale extends far beyond Belarusian context. Identified methods can be easily adapted for other social groups, political movements and election campaigns. This requires immediate attention from international community and coordinated counteraction efforts.
Conclusion
Our methodology proved the effectiveness of an integrated approach to information threat analysis. Combining data from Exolyt, AI analysis technologies and proprietary coordinated-behavior detection algorithms allows identifying both traditional botnets and new forms of information influence.
This approach can be successfully applied for election campaign monitoring, social stability analysis and information threat identification in various countries and contexts. Only through joint efforts of technology platforms, researchers, journalists and civil society can we protect democratic values in the era of information wars.
This publication was developed by a research team under the leadership of Mikhail Doroshevich, PhD.