TikTok is under scrutiny for hosting a network of roughly 200 accounts that openly promote Nazi ideology. These accounts disseminate hate speech, glorify historical atrocities, and employ sophisticated evasion tactics, including coded language, image obfuscation, cross-platform coordination, and off-platform recruitment, to slip past moderation.
The content posted by these accounts has gained significant traction, with tens of millions of views on videos that include Holocaust denial, adulation of Hitler and Nazi Germany, and support for white supremacist violence.
The investigation by the Institute for Strategic Dialogue began with the discovery of a single pro-Nazi account, which led to the identification of a broader network of similar accounts. These accounts were then analysed, focusing on those that explicitly promoted Nazism through videos, usernames, profile pictures, and other indicators.
The perpetrators use TikTok to reach a wide audience, including new users with minimal followers. The platform’s algorithm, which quickly recommends similar content, exacerbates the problem by amplifying these hateful messages.
Researchers found that TikTok’s response to reported violations was lacklustre. Out of 50 accounts reported for breaking community guidelines, none were initially taken down. These accounts collectively garnered over 6.2 million monthly views.
Even after a month, only half of the reported accounts were banned, allowing harmful content to proliferate and gain a significant audience before any action was taken.
One of the most concerning aspects of the investigation is the role of TikTok’s algorithm in promoting extremist content. Researchers created dummy accounts for the study and received recommendations for Nazi propaganda after minimal engagement with similar content. This suggests that TikTok’s content recommendation system inadvertently facilitates the spread of hate content.

“ISD watched 10 videos from the network of pro-Nazi users, occasionally clicking on comment sections but without engaging (e.g. liking, commenting or bookmarking), and viewed 10 accounts’ pages. After this superficial interaction, ISD scrolled through TikTok’s FYP, which almost immediately recommended Nazi propaganda,” explained researchers. “Within just three videos, the algorithm suggested a video featuring a World War II-era Nazi soldier overlayed with a chart of the US murder rate, broken down by race.”
The investigation also highlighted the use of generative artificial intelligence by these networks to modernise Nazi propaganda. AI-generated translations of Hitler’s speeches and other extremist content help these groups evade detection and reach a broader audience. This technological twist makes it even more challenging for moderation efforts to keep up.
The spread of Nazi propaganda on TikTok is not an isolated phenomenon. Researchers found that self-identified Nazi activists are coordinating efforts across various platforms, including Telegram, to amplify their reach. These activists use TikTok’s video format to promote extremist documentaries and share tactics for evading content moderation.

They also found that pro-Nazi materials, such as the film Europa: The Last Battle, are being distributed via these platforms.
“We found countless promotions for ‘Europa: The Last Battle’, including several videos with more than 100k views. TikTok searches for the film, as well as minor variations on its title, yield dozens of videos. Some posted clips using tweaked tags like #EuropaTheLastBattel,” researchers said.
The extent of this coordination is evident in how quickly banned accounts recover: after one TikTok account was banned, its replacement reached nearly 100k views within three days.
Researchers also discovered that these accounts offer ‘follow-for-follow’ exchanges, a common tactic used by accounts with few followers to grow their audience quickly.
The accounts analysed in the study also serve as recruitment tools, directing followers to off-platform channels where more extreme content and mobilisation efforts occur. Instructions for making weapons and explosives were shared alongside these materials, illustrating the real-world dangers posed by these online networks.
The investigation reveals a troubling gap in TikTok’s ability to moderate and curb the spread of extremist content. Researchers found the platform’s current measures insufficient given the campaign’s networked nature: when accounts are banned, their operators move to other prominent platforms such as Telegram and direct followers to their new accounts.
Furthermore, researchers found that TikTok’s policy of issuing final warnings gives account operators time to create backup accounts and resume posting from them before a ban takes effect.
“The network of accounts explored in this study, using follow-for-follow tactics and organizing their efforts off-platform, indicates a level of sophistication that individual account reviews will likely miss. This approach enables faster amplification of their content than they would otherwise achieve. Additionally, providing users with final warnings has repeatedly allowed accounts to activate their back-ups in advance and move their followers to new, warning-free accounts,” researchers concluded.