Disinformation is a powerful weapon in Russia’s war against Ukraine. With deepfake videos, misleading “facts” and outright lies going viral, some media outlets have called it the first social media war. But how do Big Tech companies profit from spreading disinformation about the war? And what can we do to combat disinformation online? Patrick Breyer MEP explains how the EU’s Digital Services Act plays an important role in the fight against disinformation.
As the old adage goes, truth is the first casualty of war. Since the start of Russia’s war against Ukraine, Big Tech platforms (like Facebook or Instagram, both owned by mega-corporation Meta) have hosted posts that deny, glorify and justify war crimes. Online platforms have a responsibility to identify and stop disinformation. And yet, these malicious posts continue to spread.
The truth is that disinformation is a profitable business for online platforms. The more outrageous and provocative content is, the longer we stay on their apps and websites. And then there are automated ‘recommender systems’: algorithms that decide which content appears in your Facebook feed or YouTube watch list. In their desperation to keep us clicking, those algorithms often resort to showing us conspiracy theories, disinformation and polarising content.
Disinformation poses a severe threat to European societies. Extremist groups and authoritarian governments can use these recommender systems to spread lies and manipulate their followers. Stumble onto one YouTuber with extremist views? Here are five more you can follow. Right now, the Russian government is deceiving Russian citizens with disinformation to justify the aggression against Ukraine.
Read on to find out how the EU’s new legislation regulating online platforms – the EU Digital Services Act – could play an important role in the fight against disinformation.
The EU vs. Big Tech: who should decide what constitutes disinformation?
What is Big Tech?
Big Tech is used to describe the four or five largest and most dominant technology companies, usually Alphabet (which controls Google), Amazon, Apple, Meta (which owns Facebook, WhatsApp and Instagram) and Microsoft. These companies represent a formidable economic force: Big Tech accounted for a fifth of all earnings accrued by the S&P 500 by 2023. Because they dominate the tech market and are used by billions of people, these companies also wield immense influence over the way we communicate, work and do business online.
In the battle against disinformation, it could be tempting to put the responsibility entirely in the hands of the tech companies. Big Tech is already policing some of what people post on its social networks. On 26th February 2022, Meta took the decision to restrict access to Russia Today and Sputnik, two Russian media outlets, across Europe. Meanwhile, Twitter has added extra labels to “Tweets that share links to Russian state-affiliated media websites”, and now also labels “accounts and Tweets sharing links of state-affiliated media outlets in Belarus.”
Who should rule the internet?
However, the CEOs of Big Tech should not be encouraged to make backroom decisions over which content is visible. This only entrenches their control over what users in the European Union get to see and which information is deemed credible. Governments making direct calls to Google and Meta represents a threat to democracy.
Filtering, removing or demoting legal content is the wrong approach. It is prone to abuse and censorship, and will drive people towards uncensored and unmoderated channels that often present them with even more extreme content. A better approach is to help users assess the credibility of information themselves, through fact-checking, warnings, background information and user rating (or flagging) systems. We can’t solve this problem with the quick-fix sticking plaster of censorship and bans. In the long term, we need an approach that encourages critical thinking, media literacy and media diversity to build a society that is more resilient to the spread of disinformation.
Why is disinformation a profitable business for platforms?
In a study commissioned by the Greens/EFA, “The Future of Online Advertising”, Duncan McCann, Will Stronge and Phil Jones exposed how platforms manipulate our personal data for profit. The study highlights how disinformation is a very profitable business: according to a 2020 report from the Global Disinformation Index, advertisers pay over $76 million to disinformation sites every single year.
Big Tech’s business models rely on ‘surveillance-based advertising’. Everything from our web searches to our clicks to our personal details is tracked. Our private data is used to choose which online ads to show us. But online platforms also profit from spreading and amplifying disinformation through their ‘recommender systems’. Algorithmic recommender systems curate which content users see while scrolling, based on content that they or their friends have interacted with before.
Recommender systems – why are they profit making machines for Big Tech?
During a hearing at the European Parliament, Frances Haugen, a whistle-blower from Meta (previously Facebook), revealed that algorithmic recommender systems actually favour disinformation and violence over factual content. Extreme content and disinformation are more likely to keep users scrolling on their social media feeds, and therefore generate more ad income for the Big Tech platforms.
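To make the mechanism concrete, here is a minimal, purely illustrative sketch of an engagement-maximising feed. All names, fields and weights are hypothetical, not taken from any real platform; the point is only that a system ranking posts by predicted engagement will surface provocative content ahead of sober, factual content, because provocation is what keeps people scrolling.

```python
# Hypothetical sketch: a feed ranked purely by predicted engagement.
# Real platforms use trained machine-learning models; this simple
# weighted sum stands in for such a model.

def predicted_engagement(post):
    # Approximate "how likely is this to keep users scrolling?"
    # using past clicks, shares and comments.
    return (0.5 * post["past_clicks"]
            + 0.3 * post["shares"]
            + 0.2 * post["comments"])

def rank_feed(posts):
    # Highest-scoring posts go first, with no regard for accuracy.
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "sober-fact-check", "past_clicks": 10, "shares": 2, "comments": 1},
    {"id": "outrage-bait",     "past_clicks": 80, "shares": 40, "comments": 30},
]

feed = rank_feed(posts)
# The provocative post tops the feed, because provocation drives engagement.
```

Because the objective is engagement rather than truthfulness, nothing in this loop ever penalises disinformation; that is the business-model problem the Digital Services Act tries to address.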
So long as this remains the business model, problematic content will thrive. In fact, without the lucrative revenue from surveillance-based ads, disinformation sites would be less prolific, and we could see less radicalisation and polarisation in our society.
We cannot tolerate that companies are making profit from the promotion of hatred and disinformation. Read on to find out how the Greens/EFA Group will continue the fight for EU-wide protection from hate speech and disinformation.
Information over profit – How the Greens/EFA want to stop disinformation
Russia’s war on Ukraine has caused a sudden spike in online disinformation, as the Kremlin scrambles to manipulate ordinary Russians into supporting the war. Manipulated photos, deepfake videos, fabricated news stories, unofficial social media accounts and outright lies have cropped up on all online platforms.
This has proven the urgency for the EU to step in and regulate online platforms and their algorithms. A new piece of EU legislation, the Digital Services Act (DSA), has been negotiated to do just that. The Digital Services Act aims to create a better and safer internet, protect our private data and give more power to people online. This is the perfect chance to crack down on disinformation.
How we will fight toxic algorithms and the spread of disinformation with the EU’s Digital Services Act:
- Banning surveillance advertising. We have to ban platforms from targeting ads at people based on profiling, and from tracking people using sensitive data (such as health, sexual orientation or religion).
- Having fair choices. Users should have a fair choice to say no to tracking-based advertising. It shouldn’t be possible to trick internet users by making it harder to say no than yes: switching it off should be easy.
- Tackling manipulative algorithms and Big Tech’s divisive business models. We need to introduce clear and meaningful transparency rules and control over recommender systems and algorithms. Users should have the right to opt out of commercial recommender algorithms. Tech corporations should not be allowed to decide on their own what appears in the timelines of users and what does not.
- Ensuring researchers and non-governmental organisations (NGOs) get access to the right data. The Digital Services Act will give researchers and NGOs the opportunity to analyse platform data and how profit-driven algorithms spread disinformation. This way we can make better laws to protect ourselves from it.
- Making sure Big Tech doesn’t get too powerful. The Digital Services Act gives the EU Commission strong, centralised supervisory powers when it comes to the obligations for very large online platforms.
On 20th January 2022, the European Parliament voted on its position on the Digital Services Act. In April 2022, ministers from EU governments and representatives of the European Parliament negotiated the final text of the law. The European Parliament is expected to vote on the final text in the coming months.