Access Now, an international non-profit group that advocates for human rights online, has
formulated 26 human rights-centered recommendations for content governance, based on ideals of freedom, openness and democratic values. The following is a summary of the problems Access Now identifies in relation to content curation; the full guide on content governance can be found here.
When the internet had its breakthrough, many foresaw that it would bring about greater democratization by connecting people and giving them access to information. It did, but it also created undesired opportunities, such as copyright violations and the spread of pornography. To stop this, the first content filters for detecting illegal and undesired material were developed. But these filters paid less attention to issues such as defamation, hate speech and terrorist content, and they were not designed with a focus on respecting human rights online. The consequences of this initial narrow approach to content regulation can still be felt today.
“Today, we are still dealing with the early influence of governance undertaken with an intellectual property-driven mindset, versus a more holistic approach that considers the full spectrum of issues raised by online speech.”
Three Types of Regulations
Today we see three types of content governance: state regulation, self-regulation and co-regulation. State regulation is any binding legislation that defines illegal content. Self-regulation is determined by private companies and is often called “terms of service” or “community guidelines”. These rules define what is acceptable behavior and content, and can include banning legal content if the company considers it undesirable. Co-regulation is when public authorities encourage, support and sometimes monitor private companies’ self-regulation. But even though several types of regulation are in place, multiple problems with content regulation remain.
Challenges of Content Regulation
One of them is the “attention economy” we now live in. Public authorities were unprepared for this scenario, and users are now subjected to non-transparent, non-consensual recommendation algorithms that determine the content presented to them. By using tools such as “timelines” and “news feeds”, social media platforms organize user-generated content to nudge people into spending more time on their platforms, and thus into sharing more data and being exposed to more ads. These recommendation algorithms are designed to influence and control users’ behavior, not to respect users’ right to form and express opinions online.
“Actors in our increasingly complex online communications ecosystem have the duty to consider human rights. Governments are obligated to protect these rights, while companies are responsible for respecting them.”
Transparency is needed for independent auditing, but today it is unclear how online platforms decide and enforce which content to take down. Most self-regulated curation mechanisms are not transparent or open to scrutiny by users, the general public, researchers or regulators. It is largely up to the platforms which content is taken down, and users often have little or no possibility to object to content takedowns.
“Transparency is essential to enable independent auditing and avoid undesired outcomes in automated curation and ad-targeting, which in the context of platforms, can have discriminatory effects.”
Despite this lack of transparency, private companies are often pressured by regulators and users to monitor and control abusive content more extensively. This pressure has led private companies to tighten their rules restricting harmful content. But these rules often lack a focus on free expression, access to information and privacy, and can thus lead to rash decisions such as disproportionate removal of content or even outright censorship.
Another problem with content regulation is the lack of research establishing the relation between online and offline behavior, or explaining the consequences of online content for vulnerable groups. Without evidence-based policy making, innocent users such as journalists and vulnerable communities are at risk of having content removed swiftly or in large quantities. To protect the rights of users online, regulators need to enable debate and consult relevant stakeholders before rushing to enact legislation.
Recommendations from Access Now
Regulators should not pressure private companies to evaluate and remove user-generated content on their platforms, since it cannot be guaranteed that the public interest is put at the center when private platforms make decisions. Instead, content should be removed by an independent adjudicator whose aim is to comply with national legislation and human rights law and to respect the values of a democratic state, such as the right to express opinions and the right to appeal and redress.
Content regulation needs to be based on transparent mechanisms with independent auditing. Laws need to be based on research and must be formed in democratic ways that also enable redress mechanisms. Regulators and the decision-makers behind self-regulation must consider context and human rights when deciding which content to ban. Users must be informed about, and allowed to influence, the recommendation mechanisms that determine which content is shown to them, to ensure their online agency.
“If the governance solutions developed by governments and platforms (through regulation and self-regulation respectively) are ill-advised, rushed, or do not incorporate international human rights principles and safeguards, they can increase, rather than decrease, the risks for users.”
Signe Agerskov is a master’s student at the IT University of Copenhagen, where she studies Digital Innovation and Management with a specialisation in Blockchain Economics.