The Forum on Information and Democracy has published a report on how to fix infodemics. Based on more than 100 contributions from international experts, it offers 250 recommendations on how to rein in a phenomenon that threatens democracies and human rights, including the right to health.
Legally enforced transparency is one of the main recommendations. For a long time, ‘transparency’ has been widely abused as a self-regulatory mechanism, and the worst abuse has come from some big tech companies releasing ‘transparency reports’ on how often ‘bad’ governments – with a warrant – have asked for access to data in order to fight crime. But they forgot to be open about their own commercial (ab)use of personal data.
“Legally enforced transparency is not a silver bullet that will fix all the issues, but is a necessary condition to develop a more balanced equilibrium of power between the private platforms and democratic societies,” according to the report.
To prevent misuse of data, the report recommends using ‘differential privacy’ when sharing data with regulators or researchers, as personal data is then anonymized and sharing it does not affect the individual user’s privacy.
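The report does not prescribe a particular implementation, but as a minimal illustration of the idea, here is a sketch of the Laplace mechanism, one common way to achieve differential privacy, in Python; the dataset, threshold and epsilon value are invented for the example:

```python
import numpy as np

def dp_count(values, threshold, epsilon=0.1):
    """Differentially private count of users above a threshold.

    A count query has sensitivity 1 (one user changes it by at most 1),
    so Laplace noise with scale 1/epsilon masks any single user's contribution.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical data: weekly hours spent on a platform by five users
usage_hours = [2, 14, 7, 30, 1]
print(dp_count(usage_hours, threshold=10))  # noisy count, varies per run
```

A regulator or researcher who only receives such noisy aggregates can still study overall patterns, while the added noise makes it hard to infer anything about a single user.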
Explainability, Openness and Revenues are also Transparency
Part of transparency is also explainability: platforms should be required to explain to their users why their content has been restricted. And the platforms should be fully transparent and publish the numbers of content takedowns, of content flagged, labeled, downranked, delayed or masked with a warning, and of accounts disabled.
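To make that disclosure concrete, here is a purely illustrative sketch of what such a per-period moderation report could look like as a structured record; the field names and figures are my own, not the report’s:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModerationReport:
    """One reporting period of moderation statistics (illustrative fields only)."""
    period: str
    takedowns: int
    flagged: int
    labeled: int
    downranked: int
    delayed: int
    masked_with_warning: int
    accounts_disabled: int

# Made-up numbers, only to show the shape of the disclosure
report = ModerationReport("2020-Q4", 12_000, 45_000, 30_000, 8_000, 1_500, 5_200, 900)
print(json.dumps(asdict(report), indent=2))
```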
Transparency also covers openness about how the platforms use algorithms for e.g. personalization, and the report has a long list of detailed information that platforms must disclose and explain. Advertising revenues must also be transparent, according to the report, e.g. what the dominant platforms earn from targeted advertising. And it demands that personal data collection is disclosed and that GDPR-style ‘data portability’ becomes a possibility all over the world, not just in Europe.
Kitchen Appliances are more Regulated Than Tech
One of the core recommendations is the creation of a “statutory building code”, which describes mandatory safety and quality requirements for digital platforms.
“If I were to produce a kitchen appliance, I have to do more safety testing and go through more compliance procedures to create a toaster than to create Facebook,” Christopher Wylie told the BBC. Wylie, who revealed how Cambridge Analytica used millions of people’s Facebook data for targeted campaigns, is one of the driving forces behind the report.
Independent Audits Necessary
For a long time, regulators, civil society and users have had to trust that online platforms are doing what they say. The report goes further:
Online service providers must be open to audit by the appropriate regulator and/or by an independent auditor, it says, and “accredited outside researchers should have access to the data necessary to implement research of general interest.”
According to the report, ‘vetted researchers’ also includes civil society. The demand on e.g. Facebook is thus that vetted researchers and regulators get access to a whole range of information, such as content reach and the objectives of algorithms.
Heavy Sanctions Necessary
The report also recommends that the sanctions for non-compliance should be financial, up to three to four percent of the network’s global turnover.
“It is important to note that when the US Federal Trade Commission fined Facebook the unprecedented amount of US$ 5 billion in July 2019 for privacy violations, this didn’t make much of a dent on the company. Fines need to be potentially extremely significant, especially in case of recurring non-compliance, in order to be efficient.”
Promote Reliable Content
The report has a really interesting recommendation for journalistic communities on promoting reliable content:
Participate in establishing unified benchmarks based on internationally accepted best practices and ethical norms.
This is something I was working on in 2013 and 2014, when I was a fellow at University of Southern Denmark, before I left the journalism business. In the flood of fake news, press releases and paid influencers, I suggested a Trustmark for News (read the report there). It might have been too early, but it is still a better idea than what Facebook is doing today together with fact-checkers all over the world: flagging the bad content. As with conventional vs. organic food and conventional vs. sustainable forestry, it is the good choice that should be flagged. It is easier to set common standards for doing it right, and looking at the content on Facebook today, there is more and more fake content out there.
In 2019 the Journalism Trust Initiative started a collaborative process of standardization designed to encourage respect for journalistic ethics and methods and to reinforce the right to information by promoting online content produced in accordance with these principles.
The report urges states to force online service platforms to implement mechanisms aimed at highlighting information sources that comply with standardized professional and ethical self-regulation standards.
In general, the report is so full of good recommendations that some of them drown, e.g. the recommendation to limit micro-targeting. It would have been better to publish the four chapters separately, such as the last chapter on closed messaging services (e.g. WhatsApp, WeChat and Telegram), where large groups spread disinformation, as we risk that only a fraction of the many good ideas are implemented.
The 12 Main Recommendations from the Report
- Transparency requirements should relate to all platforms’ core functions in the public information ecosystem: content moderation, content ranking, content targeting, and social influence building.
- Regulators in charge of enforcing transparency requirements should have strong democratic oversight and audit processes.
- Sanctions for non-compliance could include large fines, mandatory publicity in the form of banners, liability of the CEO, and administrative sanctions such as closing access to a country’s market.
- Platforms should follow a set of Human Rights Principles for Content Moderation based on international human rights law: legality, necessity and proportionality, legitimacy, equality and non-discrimination.
- Platforms should assume the same kinds of obligation in terms of pluralism that broadcasters have in the different jurisdictions where they operate. An example would be the voluntary fairness doctrine.
- Platforms should expand the number of moderators and spend a minimal percentage of their income on improving the quality of content review, particularly in at-risk countries.
- Safety and quality standards of digital architecture and software engineering should be enforced by a Digital Standards Enforcement Agency. The Forum on Information and Democracy could launch a feasibility study on how such an agency would operate.
- Conflicts of interests of platforms should be prohibited, in order to avoid the information and communication space being governed or influenced by commercial, political or any other interests.
- A co-regulatory framework for the promotion of public interest journalistic contents should be defined, based on self-regulatory standards such as the Journalism Trust Initiative; friction to slow down the spread of potentially harmful viral content should be added.
- Measures that limit the virality of misleading content should be implemented through limitations of some functionalities; opt-in features to receive group messages, and measures to combat bulk messaging and automated behavior.
- Online service providers should be required to better inform users regarding the origin of the messages they receive, especially by labelling those which have been forwarded.
- Notification mechanisms for illegal content reported by users, and appeal mechanisms for users who were banned from services, should be reinforced.