Hundreds of fake Twitter accounts linked to China sowed disinformation prior to the US election - report

28 January 2021

A sophisticated China-linked social media operation played a key role in spreading disinformation during and after the US election, a report from Cardiff University concludes.

The study, from the Crime and Security Research Institute (now known as the Security, Crime, and Intelligence Innovation Institute), shows evidence of the network’s activities reaching a wide audience, most successfully through a now-debunked viral video, later shared by Eric Trump, son of former US President Donald Trump, which falsely claimed to show ballots being burned on election day.

Researchers also found evidence of the same network spreading anti-US propaganda which amplified calls for violence before and after the Capitol riot in Washington on 6 January 2021.

Professor Martin Innes, Director of the Crime and Security Research Institute, who leads the Open Source Communications, Analytics Research (OSCAR) team said: “Although only Twitter can fully certify an attribution, our analysis using open-source traces strongly suggests multiple links to China. Our initial findings suggested that the operation was not especially complex, but as we have dug deeper into the network, we have had to substantially revise our original view. The behaviour of the accounts was sophisticated and disciplined, and seemingly designed to avoid detection by Twitter’s counter-measures. There is at least one example of these accounts helping to propagate disinformation that went on to receive more than a million views."

The network appears designed to run as a series of almost autonomous ‘cells’, with minimal links connecting them. This structure protects the network as a whole if one ‘cell’ is discovered, and suggests a degree of planning and forethought. It marks the network as a significant attempt by foreign actors to influence the trajectory of US politics.

Professor Martin Innes Co-Director (Lead) of the Security, Crime and Intelligence Innovation Institute

On US election day (03/11/20), a misleading video of a man filming himself allegedly burning ballots cast for Trump in Virginia Beach was detected circulating across several platforms. Although the ballots were later revealed to be samples, the video quickly went viral when Eric Trump’s official Twitter page shared a link to it a day later, with this version alone receiving more than 1.2 million views.

Initially, the video was widely assumed to originate from a QAnon-associated account, but the Cardiff University investigation has uncovered evidence that two China-linked accounts, one of which has since been suspended by Twitter, shared the video prior to this. Researchers believe this led to the content, which continues to be shared today, gaining significant spread.

OSCAR’s initial research into this network began seven days before the US election. The team uncovered more than 400 accounts engaging in suspicious activities. These were forwarded to Twitter, which suspended them within a few days.

The team’s analysis revealed a number of additional accounts associated with the network which are still operational, suggesting it is more complex and resilient than previously estimated. Their findings show the operators reacted quickly to the events in the Capitol on 6 January, introducing a new range of high-quality, English-language propaganda videos targeting the US within hours of the violence taking place.

There is strong evidence of links to China: posts use the Chinese language and focus on topics suited to Chinese geopolitical interests. More recent analysis shows the accounts were active solely during Chinese office hours, activity was limited during a Chinese national holiday, and the English-language content appears to have been produced using machine translation tools.

Professor Innes said: “Since notifying Twitter about the 400 accounts that they subsequently suspended, there is evidence of them having identified additional activity associated with the network, with further suspicious accounts being taken offline. But there are still many active accounts which continue to spread potentially harmful content – as demonstrated by their engagement with the recent violent events in Washington DC.

“The behaviour patterns of this operation are unusual and appear to have been designed to try and avoid detection by Twitter. For example, the signals of co-ordination were frequently quite subtle and there was clearly a deliberate avoidance of using hashtags.

“This raises an intriguing question about how the operators had learned what was and was not likely to ‘trip’ Twitter’s detection algorithms. This could have been gleaned from experience and learning from past disinformation operations but could also plausibly derive from other sources. Further investigation is urgently needed.”
