Microsoft has warned India, South Korea, and the United States that threat actors originating from China and North Korea may target the upcoming elections in these countries.
In a report titled ‘Same targets, new playbooks: East Asia threat actors employ unique methods,’ the company sheds light on the evolving tactics employed by threat actors in East Asia, particularly the Chinese Communist Party (CCP)-affiliated actors and North Korean cyber operations.
One of the most concerning findings is China’s use of fake social media accounts to conduct polls aimed at understanding and exploiting divisions among US voters. These deceptive CCP-affiliated accounts pose contentious questions on sensitive US domestic issues, likely in an effort to gather intelligence that could be used to influence the upcoming US election.
Additionally, China has significantly increased its use of AI-generated content to spread misinformation and sow discord in the United States and globally. The country has launched influence operation (IO) attacks targeting the South Pacific islands, the South China Sea region, and the United States defence industrial base.
Chinese state actors began testing social media influence campaigns during the 2022 US midterm elections. After the elections, the accounts’ activity gradually declined. However, as the presidential election approaches, these accounts have resurfaced.

The researchers also found that these accounts use videos, memes, and infographics to provoke public reaction. The accounts post on a diverse range of topics, including global warming, border policies, racial tensions, immigration issues, and prevalent drug use.
The report also highlights North Korea’s aggressive cyber activities, focusing on cryptocurrency heists and supply chain attacks to fund its military ambitions and intelligence collection efforts.
Notably, the researchers found that North Korea has embraced AI technologies to enhance the effectiveness and efficiency of its cyber operations.

Threat actors like Storm-1376, attributed to China, have been actively spreading AI-generated content to influence events and elections. During the Taiwanese presidential elections in January 2024, Storm-1376 deployed AI-generated content, including fake audio and memes, to manipulate public opinion and influence the election outcome.
This marks a concerning trend in which nation-state actors with vast resources leverage AI in their malicious activities, posing significant challenges for cybersecurity efforts.
The report also warns that China will likely ramp up its use of AI-generated content while North Korea will continue to fund its military with cybercrime activities.
With India and South Korea holding elections on April 19 and April 10, respectively, understanding the Chinese modus operandi is key to grasping how elections could be manipulated.
Social media platforms have already taken steps to counter fake imagery, both AI-generated and otherwise. To this end, Elon Musk-owned X has expanded the Community Notes feature to India ahead of the elections. Similarly, Google has suspended Gemini’s ability to answer election queries, as such queries may return inaccurate information.