An Analysis of Facebook’s Data Ethics

By Lara Friedel, Bella Hintergard, Javier Lede


Algorithms are shaping more and more aspects of our lives. This development makes an urgent debate on the ethical use of data necessary. We are at a critical juncture where we must decide whether the limits of artificial intelligence are defined democratically, or whether artificial intelligence will determine the limits of democracy.

Especially for companies that process large volumes of personal data, ethical questions must play a major role. In this regard, Facebook has been surrounded over the last few years by a series of scandals that have revealed the sensitivity of these matters.

The following is a brief summary of the analysis results for Facebook’s data ethics, based on the five criteria suggested by the DataEthics think-do tank:

  • The human being at the centre
  • Individual data control
  • Transparency
  • Accountability
  • Equality


With 2.5 billion monthly active users at the end of 2019, Facebook, the leading social media platform, defines its mission as giving “people the power to build community and bring the world closer together”. Yet in that same year, 99% of its income, USD 69.655 billion, came from advertising.

The human being at the centre

Facebook claims to collect data in order to provide a personalized and consistent user experience across its products. With this argument the company gathers large volumes of data, asserting that it is used in favour of the users.

To segment the market into targeted groups, the platform collects and analyzes users’ data to identify the profiles most likely to match advertisers’ wishes.

This asymmetrical deal, in which people provide their most private information in return for a set of vaguely described features, makes it clear that the purpose of data processing is still far from primarily benefiting the users.

Individual data control

The social network collects data not only through its own platform, but also from thousands of websites and apps that use Facebook services and automatically send information back. Users do not even need to be logged in, or to have a Facebook account at all, for their data to be collected. Under these circumstances, “shadow profiles” are created.

Registered users are able to download certain records of their collected personal data. When they delete their profiles, Facebook claims that all the information will be completely erased, but this does not include content uploaded by third parties. Owners of shadow profiles do not even have this option.

In light of these facts, it can be argued that Facebook is not granting users self-determination over the collection and processing of their own personal data.


Transparency

Facebook publishes regular reports on how it enforces policies, responds to data requests and protects intellectual property. The company also notifies its users of any policy changes, so they have the opportunity to review the new terms and conditions before continuing to use its services.

On the other hand, Facebook’s AI algorithms are widely considered inherently black boxes; even most of the company’s own developers do not know exactly how they work.

Given these facts, it can be said that there is a certain degree of negligence regarding the clarity of how and for what purposes users’ data is used, as well as the risks this poses to them as individuals and as a collective.


Accountability

When Facebook describes its handling of user data, it does not give a proper explanation, but merely declares compliance with the General Data Protection Regulation (GDPR). For interested users it is almost impossible to find out which of their data is collected from where, how it is processed and how long it is stored by the corporation.

Facebook shares and exchanges user data with third parties in the form of cookies, tracking tools or advertisements. Although the company provides guidelines on how third parties should handle the data, it is not clear whether and how they monitor compliance with these guidelines.

The company does not yet truly recognise that this deep knowledge about its users entails a high responsibility towards them: it merely complies with the legislation and does not take responsibility for what later happens to the data it collects, whether within Facebook or at third parties.


Equality

Through profiling and usage behavior, algorithms segment users into groups that receive targeted content and advertising matching the posts they have liked, shared or viewed. This promotes self-validation loops that reinforce users’ opinions, whether based on true or false facts. In this context, there is a risk of further alienating these groups from one another.

It might be said that, under the pretext of providing a tailored experience, the equal treatment Facebook claims to give its users is hard to verify. Inequalities are easy to hide behind customized walls.


Past events, such as the Cambridge Analytica scandal, seem to show that Facebook is not engaged in individual surveillance, but in collective manipulation, where our societies are most vulnerable.

Profiling and targeted communication make it possible to identify the user groups most likely to react in the ways advertisers expect. Users have barely any control over their own data. Algorithms are black boxes. Responsibility is diffuse. Equality cannot be verified. Even though Facebook complies with the regulations, these shortcomings point to a deficient policy regarding the ethical use of data.

As a result of this research, it can be said that constant monitoring of data collection and processing by tech companies is a fundamental task for states, enabling them to take the measures needed to guarantee the future of social welfare.

The writers are master’s students at Neu-Ulm University of Applied Sciences, Information Management Department.