Why we need an internet that works for people and is led by people

New study on “community-led online platforms”

03.05.2021

In our new study on “community-led online platforms”, we’ve found that strong online communities and a transparent, bottom-up approach to moderating content are best for our online safety and our digital rights.

Online democracy visual

Wikipedia is a household name. We’ve all used it: whether it’s at 2AM and we’re reading about the world’s most baffling unsolved crimes, or it’s in the middle of a family argument about which actor starred in that 90s television programme. Over the last twenty years, Wikipedia has become one of the most popular “community-led online platforms”. Anyone with an account can help to edit, moderate, and manage the content posted in its articles. This way, Wikipedia taps into the knowledge and expertise of people around the world. All posts are written by real people (rather than auto-generated by bots) and online communities decide openly whether a post should be removed or not. As our new study has found, this community spirit is also great at combating hate speech and cyberbullying.

The trouble with TikTok: where did my content go?

All of this is in stark contrast to companies such as Facebook, Twitter, Instagram, TikTok and Snapchat, which take a top-down approach, removing posts with the help of the latest technology – like advanced AI (artificial intelligence). As these companies have grown, they have developed complex technologies to avoid having to personally examine and curate the swathes of data that are posted to their platforms every day. Their approach relies heavily on the automated removal of harmful and otherwise undesirable content. However, there are growing concerns about the impact of these AI-automated decisions on freedom of expression and the digital rights of individuals online.
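
To make the problem concrete, here is a minimal, purely hypothetical sketch of what threshold-based automated moderation can look like. The keyword list, the removal threshold and the example posts are all invented for illustration; real platforms use far more sophisticated – and far more opaque – machine-learning systems, but the failure modes are the same.

```python
# Hypothetical sketch of threshold-based automated moderation.
# The "toxicity score" below is a stand-in for whatever opaque model a platform
# might run; the keyword list, threshold and example posts are all invented.

REMOVAL_THRESHOLD = 0.7  # assumed cut-off; affected users never see this value


def toxicity_score(text: str) -> float:
    """Toy stand-in for an ML classifier: counts 'risky' keywords."""
    risky_keywords = {"hate", "attack", "kill"}
    words = text.lower().split()
    hits = sum(1 for word in words if word.strip(".,!?") in risky_keywords)
    return min(1.0, hits / 3)


def moderate(post: str) -> str:
    """Remove the post automatically if its score crosses the threshold."""
    if toxicity_score(post) >= REMOVAL_THRESHOLD:
        # The post is removed with no explanation sent to the user --
        # exactly the transparency gap described above.
        return "removed"
    return "kept"


posts = [
    "We will kill it at the bake sale and attack the hate with kindness!",  # likely false positive
    "Subtle harassment phrased politely slips right through.",              # likely false negative
]
for post in posts:
    print(f"{moderate(post):>7}: {post}")
```

Even this toy pipeline produces the two outcomes described above: a harmless post is removed because it happens to contain “risky” words, a genuinely harmful one slips through because it doesn’t, and in neither case does the user learn which rule was applied or why.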

The rules that govern the responsibility of platforms to remove and moderate content are murky. Companies face unduly short time frames for content removal and the threat of heavy fines for non-compliance. Big tech platforms frequently over-comply with legal demands and swiftly remove large amounts of online content without transparency or public scrutiny. The sheer volume of requests inevitably leads to erroneous takedowns, while too often hateful content, especially content targeting minority groups, remains online. Most online platforms do not divulge how they reach content moderation decisions and resort to deleting content without providing the user with an explanation. This lack of transparency and accountability leaves us powerless in the face of these huge tech companies. Users are left scratching their heads about why their content was removed.

Social media icon / CC0 alexander-shatov

The internet is my social life: how can I build a strong online community?

This all has a very real effect on our daily lives, particularly as we’ve been spending more and more time online due to the COVID-19 pandemic. Online platforms aren’t just for memes and cat pictures. For many people, they now form the basis of our social lives. They are public spaces that influence public debate and human behaviour.

We need to make sure that the rules are clear so that everyone can participate confidently and fairly. We need to make sure that our online communities are strong so that we’re all safe from hate speech and discrimination. And we need to make sure that our free speech – and the free speech of minority groups – won’t be violated by an automated system running an algorithm built on biased or discriminatory code.

We’re better than ever at building caring communities online. Great alternatives to the giant tech companies already exist, like Mastodon, Diaspora and Wikipedia. And they’re working. These decentralised, “community-led” platforms are human powered. They give the power back to the users. They try to strengthen their communities so that content moderation becomes considerably less necessary. And when they do have to remove or censor content, they share common goals:

  • They improve transparency by providing explanations to users about why their content was removed.
  • They make decisions for the community, not business decisions based on preserving advertising revenue.
  • They encourage diversity in content moderation. Members of society can safely participate, regardless of skin colour, gender identity or sexual orientation.

Our study showed clearly that automating content moderation is not a solution. Automated tools are prone to making mistakes, which could jeopardize users’ trust in the moderation system. Meaningful human involvement in content moderation decisions helps build trust and promote good behaviour. Community-led platforms are the best and fairest way to combat disinformation and hate speech, and preserve the privacy and free expression of people in the online world.

If you want to know more about community-led platforms and how they can protect our digital rights in the future, read the summary of the study below.

As part of our My Content, My Rights campaign, the Greens/EFA want to focus on putting rights and freedoms at the heart of the Digital Services Act, while promoting an internet that works for people and is led by people.

The Digital Services Act is a chance for policy-makers to draw inspiration from the content governance systems of smaller, alternative, federated or decentralised platforms, in order to establish best practices and create incentives in future legislation.

*********

SUMMARY OF THE STUDY

REIMAGINING CONTENT MODERATION AND SAFEGUARDING FUNDAMENTAL RIGHTS:

A STUDY ON COMMUNITY-LED PLATFORMS

SYNOPSIS

The internet is an empowering tool that allows us to communicate globally, to meet each other, to build networks and join forces, to access information and culture, and to express and spread political opinions. Unfortunately, platforms such as YouTube, Instagram, Twitter and TikTok filter and moderate content with a lot of collateral damage: too often, hateful content, especially content targeting minority groups, remains online. At the same time, legitimate posts, videos, accounts and ads are removed, and the platforms make these decisions difficult to contest. A few very large tech companies are the dominant actors in our everyday social interactions – their content governance mechanisms, largely based on unilaterally imposed automated tools, have serious implications not only for rights and freedoms online but also for the establishment and development of innovative and alternative models in Europe.

Wikipedia webpage / CC0 luke-chesser

OBJECTIVES OF THE STUDY

The study aims at:

  • Promoting platforms controlled by user communities instead of big corporations, by explaining the sustainability of existing models and how they work in practice;
  • Analysing and summarising governance models of alternative, community-led, federated and other decentralised platforms;
  • Documenting the benefits and challenges of the governance models and content moderation policies of existing community-led platforms;
  • Providing policy recommendations specifically addressed to EU lawmakers, with the goal of informing the ongoing debate on the proposed EU Digital Services Act.

METHODOLOGY: HOW THE RESEARCH WAS CARRIED OUT

This study relies on a qualitative research design. The researchers used qualitative methods, notably case studies, to investigate community-led online platforms applying alternative content governance models.

The study presents five case studies of community-led social media platforms employing alternative approaches to content governance and moderation. The five platforms investigated in more detail are:

  1. Wikipedia,
  2. diaspora,
  3. Mastodon,
  4. Der Standard,
  5. and Slashdot.

Multiple methods were used to gather data for the case studies: 16 semi-structured interviews (with moderators and administrators of different online platforms, and with researchers working on the topic) and a review of existing publications. The interviews covered alternative content moderation practices and content governance models. Whereas the case studies showcase some aspects of alternative content governance models, the additional material discusses other alternatives and different perspectives on them.

Not secure visual / CC0 visuals

WHAT DID THE STUDY FIND OUT?

  • Deleting content is not a solution. It is simply a ‘Band-Aid’ for an already existing problem.
  • Automating content moderation is not a solution. Rather, automation can support certain limited areas of content moderation and content governance. Unduly short time frames for content removals create unhelpful pressure for platforms to use highly problematic automated tools.
  • Meaningful human involvement in content moderation decisions is key to effective content moderation.
  • Democratise platforms’ Terms of Service: jointly developed and clearly communicated Terms of Service, regardless of their content, are more likely to be adhered to and seen as legitimate by community members.
  • Platforms need to clearly communicate to those affected by their decisions what kind of moderation the platform is empowered to exercise and what tools are being used.
  • Community-led and community-driven content moderation seems to be highly effective, because it offers alternative models to top-down moderation, in contrast to commercial content moderation on platforms, which mainly relies on assessing and deleting a single piece of content.
  • Platforms need to build on best practices for content moderation and create mechanisms for sharing them.
  • Systematically auditing content moderation and content governance policies (including code reviews or examination of training data) could enable an ecosystem that improves policies, rather than the current race to the bottom.
  • Policy makers should support community-oriented platforms in building public spaces and contributing to diversity in spaces for public debate.
  • Meaningful transparency measures have to be an integral part of any content moderation system.
  • Effective and easily accessible accountability measures need to be in place. This includes a notification, sent before any action is taken against flagged or notified content, that contains an adequate explanation of which rule was breached, how, and what next steps will be taken with regard to the piece of content – a safeguard of procedural fairness (see the sketch after this list). Users should be provided with meaningful explanations of how breaches of the rules are identified and enforced.
  • It is key to build a sustainable research ecosystem on content moderation and content governance in Europe. At present, there is insufficient empirical research on content moderation and content governance in Europe. What little research exists in this area takes place within United States-based private companies and is rarely publicly available.
  • Online platforms need to enable free expression so that all members of society can safely engage in them, regardless of skin colour, gender or sexual orientation.
  • There is a need to improve support, protection and training of content moderation staff. Currently, there are no common standards on minimum levels of support, protection and training for human beings in online content moderation. Even for the worst types of content, we are familiar with cases (outside the scope of this study) where insufficient support and protection is provided to those staff. Platforms should be required to have a basic standard of care for these staff, as well as for any volunteers engaged in content moderation. To this end, a clear category of volunteer content moderators should be created, which defines the rights and responsibilities of this group.
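
As a purely illustrative aside, the kind of notification described in the accountability point above could be modelled as a small record like the one sketched below. The `ModerationNotice` structure, its field names and the example values are our own invention, not something proposed in the study; the sketch only makes explicit what such a notice would need to carry.

```python
from dataclasses import dataclass


@dataclass
class ModerationNotice:
    """Hypothetical record of what a user should receive *before* action is taken."""
    content_id: str           # which piece of content is affected
    rule_breached: str        # which rule of the Terms of Service was allegedly breached
    explanation: str          # how the content breached that rule
    planned_action: str       # e.g. "removal", "demotion", "account warning"
    appeal_instructions: str  # how the user can contest the decision


# Invented example values, for illustration only.
notice = ModerationNotice(
    content_id="post-12345",
    rule_breached="No targeted harassment",
    explanation="The post repeatedly addresses another user with slurs.",
    planned_action="removal in 48 hours unless appealed",
    appeal_instructions="Reply to this notice or use the in-app appeal form.",
)
print(notice)
```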

***

Responsible MEPs

This study was commissioned by Alexandra Geese, Marcel Kolaja, Patrick Breyer, Kim Van Sparrentak and Rasmus Andresen on behalf of the Greens/EFA Green and social economy cluster of MEPs, within the framework of the “My Content, My Rights” campaign. The study reflects the views of its authors and does not represent the views of the Greens/EFA group.

More information:
Narmine Abou Bakari
– Digital Rights Campaigner
narmine.aboubakari@ep.europa.eu