German Commission: Safeguarding Digital Sovereignty of Europe is Ethical Responsibility

Germany is ahead of other countries when it comes to data ethics and the responsible use of data and AI. The government established a Data Ethics Commission in 2018, which has published an opinion on data ethics covering both data and algorithmic systems. The opinion is far-sighted and takes into account many aspects – especially regarding AI – that are not regulated today. Get the gist of it below.

The commission holds the view that regulation is necessary and cannot be replaced by ethical principles – but it also states that not everything that is relevant from an ethical perspective can and should be enshrined in legislation. Regulation must not unduly inhibit technological and social innovation and dynamic market growth.

The Data Ethics Commission believes that the state has a particular responsibility to develop and enforce ethical benchmarks for the digital sphere that reflect its value system. It also warns that excessive dependence on others – for example on private corporations that are exempt from democratic legitimacy and oversight – turns a nation into a rule taker rather than a rule maker.

“Embarking on efforts to safeguard the digital sovereignty of Germany and Europe in the long term is therefore not only a politically far-sighted necessity, but also an expression of ethical responsibility,” according to the opinion.

The Data Perspective

In the opinion of the Data Ethics Commission, responsible data governance must be guided by the following data ethics principles:

  • Foresighted responsibility: Possible future cumulative effects, network effects and effects of scale, technological developments and changing actor constellations must be taken into account when gauging the potential impact of collecting, processing and forwarding data on individuals or the general public.
  • Respect for the rights of the parties involved: Parties who have been involved in the generation of data – whether as data subjects or in a different role – may have rights in relation to such data, and these rights must be respected. Such rights could include, for example, an economic share in profits derived with the help of the data.
  • Data use and data sharing for the public good: As a non-rivalrous resource, data can be duplicated and used in parallel by many different individuals for many different purposes, thereby furthering the public good.
  • Fit-for-purpose data quality: Responsible use of data includes ensuring a high level of data quality that is fit for the relevant purpose.
  • Risk-adequate level of information security: Data are vulnerable to external attacks, and it is difficult to recover them once they have gone astray. The standard of information security applied must therefore be commensurate with the potential for risk inherent to the situation in question.
  • Interest-oriented transparency: Controllers must be prepared and in a position to account for their data-related activities. This requires appropriate documentation and transparency and, if necessary, a corresponding liability regime.

What is Ethically Indefensible?

Ethically indefensible uses of data include total surveillance, profiling that poses a threat to personal integrity, the targeted exploitation of vulnerabilities, addictive designs and dark patterns, methods of influencing political elections that are incompatible with the principle of democracy, vendor lock-in and systematic consumer detriment, and many practices that involve trading in personal data.

Recommendations from the Commission

  • take measures against the unethical use of data
  • enforce existing laws much more rigorously, especially against the big market players and when it comes to protecting children
  • flesh out existing law, for example by blacklisting data-specific unfair contract terms or unfair commercial practices
  • make data protection authorities work more closely together
  • don’t adopt anything like ‘data ownership’
  • don’t accept that data is provided as payment for a service; consumers must be offered reasonable alternatives to releasing their data for commercial use, e.g. paid options
  • limit personalised risk assessments in e.g. insurance
  • protect employee data
  • expand health care services, so individuals can empower themselves with data
  • introduce binding requirements – and possible certifications – to ensure privacy-friendly design
  • implement incentives for manufacturers to work with privacy-friendly design, including considering building such requirements into tender specifications, procurement guidelines for public bodies and conditions for funding programmes
  • clarify and harmonize laws concerning academic research and data use and promote innovative consent models
  • make sure that standards for data anonymisation and pseudonymisation are accompanied by rules that prohibit the de-anonymisation of anonymised data (see the sketch after this list)
  • promote research and development in innovative data management and data trust schemes where individuals can control their data and empower themselves with data
  • introduce standards for data portability and make interoperability and interconnectivity in e.g. messaging services mandatory
  • promote open government data (OGD) concepts and standards
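
To illustrate the anonymisation and pseudonymisation recommendation above: pseudonymised data is not anonymous – whoever holds the pseudonymisation key (or enough auxiliary data) can re-identify individuals, which is why the Commission wants technical standards to be paired with legal rules against de-anonymisation. Here is a minimal sketch in Python, assuming a keyed HMAC scheme; the key, function name and record fields are invented for illustration.

```python
import hmac
import hashlib

# Hypothetical illustration of keyed pseudonymisation. The key is a
# placeholder; in practice it would be generated securely and held by
# the data controller alone.
SECRET_KEY = b"controller-held-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym.

    The same input always maps to the same pseudonym, so records can
    still be linked for analysis – but without the key, the mapping
    cannot be reproduced by guessing plausible identifiers. The key
    holder, however, CAN re-identify people, which is exactly why
    legal limits on de-anonymisation matter.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "diagnosis": "J45"}
pseudonymised_record = {"subject": pseudonymise(record["name"]),
                        "diagnosis": record["diagnosis"]}
print(pseudonymised_record)
```
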
The Algorithmic Perspective

The algorithmic perspective differs from the data perspective in that the data processed by the system might have no connection whatsoever with the persons affected by it.

The German Data Ethics Commission recommends the following:

  • adopt a risk-adapted regulatory approach to algorithmic systems
  • develop a criteria-based assessment scheme as a tool for determining the criticality of algorithmic systems, including a mechanism for determining the potential for harm, so we know when to regulate and when not to, when to ban a system partially or completely, and when to demand extra oversight (see the sketch after this list)
  • include corrective and oversight mechanisms, specifications of transparency, explainability and comprehensibility of the systems’ results, and rules on the allocation of responsibility and liability for using the system
  • ensure that an individual affected by a decision can exercise his or her right to “meaningful information about the logic involved, as well as the scope and intended consequences” of an algorithmic system
  • introduce a mandatory labelling scheme for algorithmic systems of enhanced criticality (Level 2 upwards)
  • enforce a legal requirement for the operators of algorithmic systems with at least some potential for harm (Level 2 upwards) to produce and publish a proper risk assessment
  • introduce licensing procedures or preliminary checks carried out by a supervisory institution in the case of algorithmic systems with regular or significant potential for harm (Level 3) or even serious potential for harm (Level 4)
  • demand that operators document and log the data sets and models and give access to them
  • ensure that the responsible authorities are competent and set up a national centre of competence for algorithmic systems
  • develop technical and statistical quality standards for test procedures and audits and examine various models of co-regulation and self-regulation as a potentially useful solution in certain situations.
  • consider developing an Algorithmic Accountability Code – not only as government but together with civil society – and a quality seal aimed at consumers
  • stop pursuing the idea that the systems themselves could be held liable for damages (the “electronic person”). Insofar as some protagonists base this concept on a purported equivalence between human and machine, it is ethically indefensible; the operator of an AI system remains liable
  • enforce a general rule that all AI systems should be designed in such a way that a human can override technical enforcement in a specific case.
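
The risk-adapted approach above rests on criticality levels, with the regulatory burden growing as a system's potential for harm increases. Below is a minimal sketch of how such a level-based scheme could be expressed; the level names and the exact mapping of measures to levels are paraphrased from the recommendations above rather than taken verbatim from the opinion.

```python
from enum import IntEnum

class Criticality(IntEnum):
    """Criticality levels ordered by potential for harm (names assumed)."""
    NEGLIGIBLE = 1   # no special measures
    SOME = 2         # "at least some potential for harm"
    SIGNIFICANT = 3  # "regular or significant potential for harm"
    SERIOUS = 4      # "serious potential for harm"

# Measures keyed to the levels, following the recommendations above:
# labelling and a published risk assessment from Level 2 upwards,
# licensing or preliminary checks at Levels 3 and 4. The exact
# mapping is an illustrative assumption.
MEASURES = {
    Criticality.NEGLIGIBLE: [],
    Criticality.SOME: ["mandatory labelling", "publish risk assessment"],
    Criticality.SIGNIFICANT: ["licensing or preliminary checks"],
    Criticality.SERIOUS: ["licensing or preliminary checks",
                          "enhanced ongoing oversight"],
}

def required_measures(level: Criticality) -> list:
    """Obligations accumulate: 'Level 2 upwards' means a system at a
    higher level also carries every lower level's obligations."""
    measures = []
    for lower in Criticality:
        if lower <= level:
            measures.extend(MEASURES[lower])
    # Deduplicate while preserving order.
    return list(dict.fromkeys(measures))

print(required_measures(Criticality.SERIOUS))
# ['mandatory labelling', 'publish risk assessment',
#  'licensing or preliminary checks', 'enhanced ongoing oversight']
```

Systems whose potential for harm is untenable would, per the recommendation above, be banned partially or completely rather than merely supervised.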
 
Humans Must Always Have The Last Say. Photo: Tim Marshall, Unsplash.com

The Data Ethics Commission believes that the measures it has proposed should be implemented in a new EU Regulation on algorithmic systems enshrining general horizontal requirements (Regulation on Algorithmic Systems, EU-ASR).

Get the full opinion here

In the commission you find Marit Hansen, who spoke at DataEthics Forum 2018, and Paul Nemitz, who spoke at DataEthics Forum 2019.