
Follow the Money: How Big Tech buys Academia

Photo by Pepi Stojanovski on Unsplash

“Big Tech manipulates Academia in order to avoid regulation,” claims Rodrigo Ochigame in a recent article in The Intercept. But perhaps the headline ought to be: “Big Tech buys Academia in order to avoid regulation”?

It started back in 2016, when a Silicon Valley lobbying effort consolidated academic interest in “ethical AI” and “fair algorithms”. One of the initial steps was the $27 million Ethics and Governance of AI Fund created by MIT and Harvard. Its initial director was the former “global public policy lead” for AI at Google, and behind the Fund was the spider in the web, Joichi Ito, the former director of the MIT Media Lab. After the Fund was established, many other universities and new institutes received money from the tech industry to work on AI ethics. Most such organizations are also headed by current or former executives of tech firms.

Examples include: “the Data & Society Research Institute is directed by a Microsoft researcher and initially funded by a Microsoft grant; New York University’s AI Now Institute was co-founded by another Microsoft researcher and partially funded by Microsoft, Google, and DeepMind; the Stanford Institute for Human-Centered AI is co-directed by a former vice president of Google; University of California, Berkeley’s Division of Data Sciences is headed by a Microsoft veteran; the MIT Schwarzman College of Computing is headed by a board member of Amazon”.

By proactively lobbying for moderate legal regulation that encourages or requires technical adjustments, Big Tech brands itself as “ethical” without conflicting significantly with its ultimate aim of making a profit. While civil society has been – and still is – pleading for bans on facial recognition software, corporations try to shift the discussion to focus on voluntary “ethical principles”:

  • In January 2018, Microsoft published its “ethical principles” for AI, starting with “fairness.”
  • In May, Facebook announced its “commitment to the ethical development and deployment of AI” and a tool to “search for bias” called “Fairness Flow.”
  • In June, Google published its “responsible practices” for AI research and development.

There are many more examples. And where did the money for this come from? Well, writes Ochigame, all these corporate initiatives frequently cited academic research that Ito had supported, at least partially, through the MIT-Harvard fund.
