EU funding for new project on disinformation
17 April 2024
Professor Martin Innes, Co-Director of the Security, Crime and Intelligence Innovation Institute and member of the School of Social Sciences, has secured more than £250,000 in funding from Horizon Europe.
The 3-year project, Attribution Data Analysis Countermeasures Interoperability (ADAC), will build on existing research into Foreign Information Manipulation and Interference (FIMI), enhancing understanding of how it can be detected, categorised, analysed, shared, and countered.
FIMI, more popularly known as disinformation, is a significant political and security issue in Europe and around the world.
The European External Action Service (EEAS), the EU’s diplomatic service, defines FIMI as a “pattern of behaviour that threatens or has the potential to negatively impact values, procedures and political processes. Such activity is manipulative in character, conducted in an intentional and coordinated manner. Actors of such activity can be state or non-state actors, including their proxies inside and outside of their own territory”.
The project aims to improve technical standards and to develop research on attribution impact, linguistic and visual analysis, cross-platform manipulation, and gendered disinformation.
We spoke to Professor Innes to better understand the project’s aims, how the work will unfold and to discuss the intended outcomes.
Q: How would you, in simple terms, explain what this project involves?
There has been a lot of public and political concern about disinformation and associated information disorders over the past few years, especially since the discovery of the St Petersburg-based Internet Research Agency trying to interfere in the 2016 US presidential election. In response, there has been an extensive research effort, spanning governments, NGOs, social media platforms and the academy, to understand the causes and consequences of disinforming communications. As part of this, an awful lot of attention has been directed to finding bad actors and their malign influence attempts, but rather less effort has gone towards determining the efficacy of our societal responses to the challenges posed by disinformation and information manipulation. Perhaps the key innovation of the ADAC project is to try and contribute to the development of a more evidence-informed perspective about ‘what works’ in countering these kinds of problems.
Q: Why should we be concerned about FIMI and what are some of its recent impacts?
Disinformation and malign information manipulation are a significant social problem in their own right, but also because of the shaping effects they have more widely across areas such as public health, democratic elections, climate change and conflict. Pretty much every major public controversy or high-profile event is now acting as a magnet for disinforming and deceptive communications. This applies to everything from the Covid pandemic through to the ongoing wars in Ukraine and Gaza.
This year we are especially worried about how different nation states and their proxies might seek to use disinforming, distorting and deceptive tactics to influence the large numbers of elections taking place across the world. In 2019 and 2020, my research team were involved in identifying foreign state-backed efforts to interfere in the elections in the UK and USA respectively. Today, in 2024, the tools, techniques and technologies that enable covert manipulation are a lot more sophisticated and powerful.
Q: How will your work align with other projects and strands of activity focused on the same theme?
Our team at Cardiff University were ‘early adopters’ in terms of spotting the potential threat that disinformation and allied information disorders might pose to social order. As a result, we have now built up a significant back catalogue of evidence and expertise in terms of understanding how information manipulation campaigns are organised and conducted, and the methodologies used to try and influence public understanding and political decision-making. The aim is to leverage this background knowledge to engage with several important issues around how to construct ‘smarter’ and more impactful counter-measures.
Q: Who will be working with you on the project? Are they existing partners/collaborators or are you forging new networks?
The project is being led by Lund University in Sweden and involves partners in Lithuania and Poland, amongst others. We have worked with a couple of individuals involved in the project over the years, but for the most part these are new partners for our team.
Q: What outcomes do you anticipate the project delivering? And in what ways will the findings be used to combat disinformation and interference?
Research on disinformation and information campaigns has grown so rapidly over the past few years that some of our key understandings and responses to the problem are based on assumptions and anecdotes, rather than robust and rigorous evidence. For example, an awful lot of effort is put into the careful ‘attribution’ of responsibility for malign information manipulation, especially where the involvement of foreign states is suspected. However, we do not know whether attributing in this way makes any difference to public understanding. Does it make a difference whether an attribution is made by government intelligence agencies, social media platforms or independent researchers? This is just one of the critical questions ADAC hopes to address.
Q: How do we best counteract the impacts of FIMI and disinformation practices? And what role can your research play in a consolidated response?
At this current moment in time, there are lots of things that can be done, and are being done, to counteract the impacts of information manipulation and disinformation. The trouble is we really don’t know what is working, when and why, and what is not. Indeed, in findings from an earlier study published a couple of years ago, we were able to show that ‘de-platforming’ some social media accounts for spreading disinformation actually saw them grow their number of followers. One role research can play is helping avoid unintended consequences such as this, where the proposed ‘cure’ actually results in more harm being caused.
Q: For those unfamiliar with the work of the Security, Crime and Intelligence Innovation Institute, could you describe your work and some of your current focuses (in addition to disinformation)?
The Security, Crime and Intelligence Innovation Institute was set up a couple of years ago as one of Cardiff’s flagship innovation institutes, with a particular accent upon the role of a global civic mission and the importance of having impact. The underpinning idea is to adopt an inter-disciplinary, challenge-led approach to researching and understanding how new forms of technology and data are creating new security threats, whilst also enabling creative responses to these. Pivoting around this focus, we have active externally funded research programmes on topics of crime, violence and policing, as well as how artificial intelligence technologies are transforming the defence and security sectors. In addition to our own research, we are also seeking to grow the University’s capacity and capability in this field, in terms of the skills, partnerships and infrastructure needed to support sustained work in this area.
Q: This project is funded by Horizon Europe. Why is it so important that the UK government agreed its association last autumn?
We are already living in a historical moment where the world around us feels more uncertain, unstable and unsafe, and all the indicators suggest this is likely to be the prevailing condition for some time to come. Many of these security challenges that confront us are ‘wicked problems’ in the sense that there are rarely simple straightforward solutions to them, but rather a series of trade-offs, and risks to be balanced and negotiated. We cannot and should not think that we can tackle them alone. Working closely with our friends and partners across Europe and further afield really is the best way in which our research can make a difference, but also provides an opportunity for us to learn from the valuable experiences and insights of others, who have often been tackling similar challenges already.