Tech giants should be forced to give researchers unrestricted access to raw data so they can independently investigate what their algorithms are doing to our society
One of the world’s most powerful companies, Meta, has been accused of using its algorithms to polarise populations and manipulate political opinions. The company recently wrote in a blog post that there is no evidence for this, pointing to four new academic studies published in the renowned scientific journals Science and Nature.
“Groundbreaking Studies Could Help Answer the Thorniest Questions About Social Media and Democracy,” read the headline from Meta’s Head of Global Affairs, Nick Clegg, who concluded: “there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization or have meaningful effects on these outcomes. They also challenge the now commonplace assertion that the ability to reshare content on social media drives polarization.”
Meta was clearly pleased with the conclusions of the studies, which we’ll come back to. What’s most important here is how these studies came to be and whether we can trust them at all. It’s all about data control.
In 2020, after Meta’s services were labelled as subversive to democracy, Meta invited 17 American university researchers to investigate whether its algorithms really were so destructive. The researchers were not paid, they chose what to study, and they had the final say over the results. But they were not allowed to handle the raw data themselves – Meta did that, ostensibly ‘to protect user privacy’ (as if Meta does that, by the way).
“This is not how research into the potential dangers of social media should be conducted”, Joe Bak-Coleman, a social scientist at the Columbia School of Journalism, tells Science.org, calling the partnership between researchers and Meta too restrictive. He questions “completely trusting the company to treat the raw data in a way that is suitable for analysis by researchers.”
The four studies show that:
- conservative Americans were exposed to far more fake news than liberals;
- removing the much-criticised reshare button did not affect political attitudes or polarisation; and
- switching the news feed from curated ranking (which Facebook uses to increase time spent) to reverse-chronological order made little difference to political opinions either (though it did significantly reduce time spent!)
There are 12 other studies in the pipeline, in which Meta’s own researchers likewise handle the raw data and co-author the scientific articles.
Critics also point out that the studies do not analyse the broader societal impact of social media algorithms, and, according to CJR, several of the participating researchers disagree with Clegg’s interpretation. One critic argues that the researchers have lent their credibility to big tech, as has happened in similar climate-research collaborations in the past.
The independent observer of the research projects, Michael W. Wagner, Professor of Journalism at the University of Wisconsin, is also critical. The research is, on the whole, a net good, he tells Science, but he emphasises that it should not be a model for the future: Meta holds too much power, and the research is therefore not independent.
As part of the Danish government’s expert group on tech giants, we have proposed that publishers should have access to their own social media data, and that researchers should be granted access beyond what the Digital Services Act gives them: “It is thus proposed to extend this so that researchers can gain wider access to the tech giants’ data, and not only to those aspects that the tech giants themselves consider may pose systemic risks.”
Clear requirements for access to raw data can’t come fast enough. We simply must have independent research into Meta’s profit-optimising algorithms.
PHOTO: Barefoot Communication, Unsplash.com