Peter Cunliffe-Jones is a visiting scholar at the University of Westminster and founder of Africa Check.
When it comes to technological regulation, what happens in Europe does not usually stay in Europe. Legislation developed in Brussels is poised to become a de facto standard for governments around the world seeking out-of-the-box solutions to the challenges of the digital age.
With the European Union’s historic proposal on combating disinformation – the Digital Services Act (DSA) and its accompanying code of practice on disinformation – this is bad news. The approach taken by Brussels simply does not work, in Europe or elsewhere. Not only does it fail to remedy the damage caused by misinformation, our research suggests that it risks causing real damage itself.
The poster child of the EU’s technological clout is its data privacy law, the General Data Protection Regulation. “Since the adoption of the GDPR, we have seen the start of a race to the top for the adoption or upgrade of data protection laws around the world,” according to Estelle Massé, policy analyst at the digital rights group Access Now.
South Africa’s Protection of Personal Information Act (POPIA), which came into effect in 2020, is often compared to the GDPR and is expected to align with it more closely over time. Kenya’s data privacy law is also “largely modeled on” the EU regulation, analysts say – though not closely enough for its critics.
In the case of the GDPR, where the model is generally sound, this influence is a good thing. It would be a disaster if the same happened with the DSA and its code of practice.
Our research into how misinformation causes damage suggests that the two will fall short of one of their primary goals: reducing the negative effects of misinformation.
Although the DSA is a complex piece of legislation, its approach is straightforward: it amounts to ordering tech companies to quickly remove “illegal content” once it has been identified or reported to them, or face significant fines if they fail to do so.
The accompanying code, first introduced in 2018 and in the process of being updated, applies the same principle to disinformation: forcing companies to remove or downgrade content deemed false and the accounts that promote it.
The problems with the DSA, the code of practice and similar models for combating misinformation and disinformation are threefold.
The first is the responsibility – or license – that such laws give private technology companies to decide, behind closed doors, what constitutes harmful content. Few people oppose the removal of child sexual abuse material, terrorism-related content or hate speech. But those categories are reasonably well defined. What counts as misinformation or disinformation is not, and determining what is harmful is harder still.
Second, even if tech companies identify harmful disinformation in a way the public would agree with, simply forcing them to remove it after the fact does not reverse the harm done in the way that more proactive approaches, such as teaching people to verify information, can.
Third, while the DSA and the code are touted as a solution to harmful disinformation, they offer no answer to the broader problem of information disorder (a technical term for the sharing of falsehoods, whether deliberate or not). As Christine Czerniak, technical lead for the World Health Organization team battling the Covid-19 “infodemic”, put it, the problem is “much more than misinformation or disinformation”. She added that she was speaking in general terms, not specifically about the DSA or other proposals.
Any approach to disinformation must first take into account the reasons why people share it.
Beyond not working, the takedown approach can itself be harmful. Czerniak noted that it has the potential to intensify polarization and to make it harder for public health teams to hear and respond to people’s concerns.
Bad laws designed to stop disinformation can be used to limit public debate. When we looked at the laws of 11 African countries, we found that the number of laws targeting “false” news doubled between 2016 and 2020, spurred on by – and often borrowing the language of – legislation in Europe and elsewhere.
These laws gave vague or nonexistent definitions of what counts as “false” or how “harm” must be proven, yet imposed heavy fines or jail terms on those who transgressed. Not surprisingly, most of those punished under them were political opponents and journalists.
If the EU is serious about tackling harmful disinformation, at home or abroad, our research suggests it should take a different approach.
First, the EU and national governments should agree on a transparent approach to content moderation, with common definitions and standards of evidence, and a preference for correcting disinformation rather than mere censorship.
Second, European education systems need to rethink how they teach media literacy. The broad media education taught today in much of Europe is less effective at reducing susceptibility to false information than targeted misinformation literacy: teaching how media and misinformation work, together with the specific skills needed to identify it.
Third, national governments must act to counter false statements made by national politicians in an official capacity – one of the most damaging forms of disinformation. The EU cannot impose practices on national parliaments, but the European Parliament could lead by example by requiring MEPs and officials to correct misleading statements they make in parliament. This is not far-fetched: it is already required of ministers in the United Kingdom, for example.
Finally, the best way to counter disinformation is information. On subjects particularly vulnerable to disinformation, it is crucial that official sources give citizens a place where they can find reliable facts.
If the EU put in place a measured and effective approach to tackling disinformation, it would have the potential to do the world a lot of good.