EU Regulators Seek to Formalize “Disinformation Code” Under Censorship Law

By Didi Rankovic – Reclaim The Net

In the wake of the EU elections, those at the helm of the bloc are working to make the already contentious Digital Services Act (DSA) – which critics say is a sweeping censorship law – even more controversial.

Namely, EU regulators now want to make what was previously a set of “voluntary” guidelines implemented by online platforms – the Code of Practice on Disinformation – a formal part of the DSA.

This is the stance taken by the Digital Services Board, which just issued a report calling for that “voluntary code” to be “swiftly converted” so that it becomes subject to the DSA. The Board consists of coordinators from the EU’s member states.

The report said the Board’s demand to bring the Code under the DSA umbrella is supported by the European Commission (EC).

When the Code was launched in 2022, it had 34 signatories; that number currently stands at 44. Among them are Adobe, Google, Meta, Microsoft, Twitch, and TikTok, but also several journalist and research groups, “fact-checking” organizations, and the World Federation of Advertisers (WFA).

When it was announced, the EU said the Code represented a “strengthened” version of the 2018 original, and noted that it was the result of the EC’s guidance issued in 2021.

Even before this latest initiative, it was unclear to what degree the rules were “voluntary” in practice, given that on many occasions top EU bureaucrats made no secret of the fact that the Code existed to make sure online platforms “self-regulate” – or the EU would regulate them instead.

If the Code is folded into the DSA, that is exactly what will have happened.

“In view of the important added value of the Code regarding mitigating systemic risks, the Commission considers a swift conversion of the Code as crucial, with the aim to finish this process in the coming months,” the report said.

The text of the 2022 Code states that its purpose is, among other things, to increase scrutiny of ad placements – which includes the demonetization of “disinformation” and “cooperation with relevant players” – while the “integrity of services” section covers both a “common understanding of impermissible manipulative behavior” and “transparency obligations for AI systems.”

The Code further seeks to “empower” both users and the research community. For the first category, signatories were expected to focus on “enhanced media literacy,” “better equipping users to identify misinformation,” and “functionality to flag harmful false and/or misleading information.”

Meanwhile, researchers were supposed to be provided with the signatories’ data for the purpose of “researching disinformation.”
