A new EU initiative will help tackle the rising problem of disinformation, a Brussels conference was told.
The event, part of a series focusing on disinformation, heard from several experts who each called for more transparency from online platforms in addressing the issue.
It coincided with the publication by the European Commission of its strengthened Code of Practice on Disinformation.
One of the speakers, Siim Kumpas, a policy officer at the European External Action Service, told the virtual conference that the Code had 34 signatories, including platforms, tech companies and civil society.
It took into account the “lessons learnt” from the COVID-19 crisis and the conflict in Ukraine.
“The reinforced Code builds on the first Code of 2018, which has been widely acknowledged as a pioneering framework globally – a ground-breaker,” he noted.
The new Code sets out extensive and precise commitments by platforms and industry to fight disinformation and marks another important step for a more transparent, safe and trustworthy online environment, said Kumpas.
The webinar on 16 June, part of a series launched two months ago, was organised by the European Foundation for Democracy and the U.S. Mission to the EU.
Kumpas told the event, “There is a positive side but there are also many problems for online platforms.”
He focused on what the EU has done to rein this in, including, most recently, the new Code, which he said is about the EU “showing the way to the rest of the world.”
The strengthened Code of Practice is an essential part of the Commission’s toolbox for fighting the spread of disinformation in the EU, he said.
“It is ground-breaking and addresses the points raised at this meeting as problematic. This includes transparency, something the Code takes into account.”
One aim, he said, is to cut financial incentives for those who spread disinformation, for example, so that people cannot benefit from advertising revenues.
“This,” he said, “will hopefully cover a large share of the business model for disinformation purveyors.”
Many of those responsible are not governments but companies or individuals “who are just in it for the money.”
The Code makes “big steps” on transparency, for example on political advertising.
“The code seeks to ensure that users, be they journalists, researchers or others, can easily tell the difference between political ads and other types of adverts.
“It provides a robust framework and the platforms themselves have committed to conduct research into the problem of disinformation.”
Another important element of the Code is that its signatories commit to supporting fact-checking, and to this being done “in all languages,” he said.
A transparency centre will also be set up with a permanent task force to have dialogue with Code signatories and platforms.
“This is a complex problem and the Code is a self-regulatory tool which sets up stricter rules for online platforms. We must mitigate the risks and one way of doing this is with this Code.”
Another speaker was Marwa Fatafta, Middle East and North Africa Policy and Advocacy Manager at the campaign group Access Now, an organisation that seeks to defend digital rights around the world.
She spoke about how disinformation impacts human rights and is used to target the likes of human rights defenders and journalists.
She said, “Social media platforms have become a weaponised space by many governments in our region and the online ecosystem has become the target of disinformation campaigns to harm human rights defenders and journalists.”
One example, she said, was the Tunisian government recently sacking 57 judges who then went on strike. The judges were then targeted by an online campaign with the aim of harming them.
Journalists, she noted, have also been wrongly accused of rape, undermining national security and extramarital affairs in order to secure their arrest and detention and tarnish their reputations.
“This shows how important it is to look at how state media has been used to spread disinformation.”
She also highlighted how disinformation was used to influence the outcome of elections, adding that the pandemic “has exacerbated the problem with disinformation widely disseminated.”
“It is a big problem and there is a big need to tackle it.”
Turning to the response from online platforms, she said, their business model “is geared to amplifying disinformation and influencing public opinion.”
She also addressed the issue of non-English-language platforms, saying these often lack clear content moderation policies and suffer from poor enforcement.
Resources are not being allocated effectively, for example to the labelling of inappropriate content, she argued.
“So, where do we go from here? Well, it is important to remind policymakers that passing a new law is not always the way to go. Instead, the aim should be to focus more on transparency, enforcement of existing policies, better training and for platforms to invest in tackling the problem.”
Raquel Miguel Serrano, a researcher and writer at EU DisinfoLab which tracks “inauthentic behaviour” and helps investigators unearth disinformation, also spoke and focused on the “mechanics” of disinformation and the need to talk about the issue.
She defined disinformation as “manipulative”, typified by deceptive behaviour that can, potentially, cause harm. Perpetrators might typically buy adverts to spread their message and generate income, or masquerade as representatives of the media.
Often, the main goals are financial gain, to push a political agenda and to spread influence.
She said, “We are not just talking about foreign influence but domestic campaigns.”
“This is a very complex issue, so I also want to highlight the need for transparency. We need to understand how these people operate so that we can devise methods to counter them.”
In a Q&A session, the three speakers were asked about tackling content moderation and defining the “intent” to deceive.
Serrano said, “It is difficult to assess this but misinformation can be just as dangerous as disinformation so we must fight both of them.”
Fatafta replied, “Distinguishing between misinformation and disinformation is not easy and finding out about the intent of the speaker is very difficult.
“But the harm caused by both is probably equal, regardless of intention.”
Kumpas said, “It is like a car crash. If you get hit, it doesn’t matter if the driver intended to hit you: the harm is the same. The same applies to disinformation and misinformation.”
He said the Commission now prefers to use another term, “foreign information manipulation and interference”, and to focus on behaviour, not just intent.