Brussels, Europe Brief News – Some of the world’s major online platforms, including Facebook owner Meta, Microsoft, Google, Twitter, Twitch, and TikTok, have signed up to new EU rules to combat online disinformation.
These companies and others must do more to stop the spread of false news and propaganda on their platforms and share more data with EU member states. The European Commission said the new “Code of Practice on Disinformation” was shaped by “lessons learned from COVID-19 and Russia’s actions in Ukraine.”
EU Rules to Fight Misinformation
Vera Jourová, the Commission’s vice president for values and transparency, stated, “This new anti-misinformation Code comes at a time when Russia is weaponizing disinformation as part of its military action against Ukraine.”
The agreement contains 44 “commitments” for companies to counter the harms of disinformation. These include:
- Create libraries of political ads
- Cut off advertising revenue to bogus news sites
- Reduce botnets and fake accounts that spread disinformation
- Flag falsehoods and point users to “authoritative sources”
- Give researchers “greater, broader data access”
- Collaborate with independent fact-checkers
Many US tech companies, including Facebook and Twitter, have already adopted similar measures under political and regulatory pressure. Still, the EU argues its new code of practice will allow better supervision and tighter enforcement.
Despite the breadth of the anti-disinformation code, several notable signatories are missing. Apple has not joined, despite its booming advertising business and the code’s aim of demonetizing falsehoods by cutting off ads. Telegram, a significant propaganda battleground during Russia’s invasion of Ukraine, is also absent.
The 2018 Code of Practice on Disinformation was voluntary; the new code will be backed by the EU’s Digital Services Act (DSA).
The New Code of Practice
To be credible, the new Code of Practice will be backed by the DSA, including heavy, dissuasive sanctions, said EU internal market commissioner Thierry Breton. Large platforms that repeatedly breach the Code and fail to take adequate risk-mitigation measures face fines of up to 6% of their worldwide revenue.
Although the EU presents the code as a powerful deterrent against misinformation with clear enforcement procedures, it is difficult even to quantify the impact of disinformation, much less reduce its harmful effects.
Take the code’s 31st commitment: signatories pledge to “integrate, display, or routinely utilize fact-checkers’ work.” Platforms signing up to this part of the code will have to disclose statistics about fact-checkers’ work on their platforms, including the “number of fact-check articles published, reach of fact-check articles, number of content pieces examined by fact-checkers.”
Such statistics will provide fresh information, but they cannot capture the full scope of fact-checkers’ work. Facebook has partnered with fact-checkers since 2016 but has been criticized for relying on politically affiliated organizations (such as the Daily Caller-affiliated Check Your Fact team) to verify claims.