A recent change in the Danish legislation on annual reporting for large companies has come into force. The change requires covered companies to supplement their annual report with a statement on their policies on data ethics. The change takes effect for fiscal years commencing on 1 January 2021 or later.
In this article, we take a closer look at the reporting requirement, data ethics and what companies should consider.
In March 2018, the Ministry of Industry, Business and Financial Affairs appointed an expert group on data ethics. One of the purposes of the group was to look at how responsible data use and processing can be leveraged as a competitive advantage.
The expert group consisted of 12 members, predominantly from Danish tech businesses but also with representatives from academia.
At the end of 2018, the expert group published its report “Data i menneskets tjeneste” (“Data in the service of man”). The report contained nine recommendations on data ethics. The fourth recommendation was to introduce data-ethical reporting as part of companies’ annual reports. The recommendation links closely to the existing requirement for companies to report on corporate social responsibility.
What is data ethics?
The law contains no clear definition of data ethics. The preparatory works to the act, however, refer to the mission statement of the Data Ethics Council, a Danish advisory body. It states: ”Generally, data ethics is understood as the ethical dimension to the relationship between technology and the fundamental rights of individuals, rule of law and fundamental societal values that the technological development gives rise to. The term comprises ethical consideration in the use of data.”
While it is natural to equate data ethics with the legislation on data protection, it is arguably both broader and more demanding. Complying with an ethical code or set of values must be something more than merely complying with legal requirements. Furthermore, data also encompasses information that does not relate to identified or identifiable individuals.
The expert group lists six data-ethical values to consider. The values concern the design of data-technical systems, subsequent policies and possible legislation, and the daily use of data and systems:
- Individuals must retain as much control over their own data as possible (Self-determination)
- Technology must not discriminate (Equality and justice)
- The individual’s inherent dignity must be weighed above profits (Dignity)
- Societal progress through the use of data can be achieved with data-ethical solutions (Progressiveness)
- All links in the chain must take responsibility for the consequences of their technical solutions (Responsibility), and
- As many different professional groups as possible, of different sexes, ages, ethnicities, etc., must be involved in the development of new technological solutions (Diversity)
The above values clearly overlap with elements found in the data protection legislation, for example autonomy and accountability. The values also touch on issues where the legislation is opaque or requires significant interpretation, e.g. in relation to fairness and transparency. Automated decision-making, AI and machine learning are good examples of where data ethics could come into play. Here, algorithms designed by data scientists and programmers make assumptions and decisions based on data, and their decision-making reflects the knowledge and biases of the programmers, the data scientists and the underlying data. Missing professional insights, or bias stemming from a lack of diversity, may lead to faulty assumptions. Those assumptions produce bad algorithms and poor machine learning, which in turn may have unintended – or even illegal – negative repercussions for individuals.
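To make the point about bias concrete, here is a minimal Python sketch using a purely hypothetical dataset (the groups, scores and decisions are invented for illustration). It shows how a model that simply learns from historical human decisions will reproduce any bias those decisions contain: the two groups have identical score distributions, yet the learned approval rates differ sharply.

```python
from collections import defaultdict

# Hypothetical historical decisions carrying a human bias:
# group "B" was approved less often than group "A" at the same scores.
history = [
    ("A", 0.8, True), ("A", 0.6, True), ("A", 0.6, True), ("A", 0.4, False),
    ("B", 0.8, True), ("B", 0.6, False), ("B", 0.6, False), ("B", 0.4, False),
]

# A "model" that learns the historical approval rate per group
# faithfully reproduces the bias in its training data.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, _score, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
print(rates)  # {'A': 0.75, 'B': 0.25} despite identical score distributions
```

The sketch is deliberately trivial, but the mechanism is the same in real machine-learning pipelines: without a deliberate, data-ethical review of the training data, the resulting system inherits its flaws.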
In addition to the expert group’s own values, I will also point to the European Commission’s work on AI. In 2019, the Commission published the “Ethics Guidelines for Trustworthy Artificial Intelligence”. The guidelines describe a framework for trustworthy AI built on four ethical principles: respect for human autonomy, prevention of harm, fairness and explicability.
Aside from the above-mentioned values and principles, any decisions or uses of data borne out of a genuine interest in ethical thinking about data processing are worth including. I can present an example from one of my former workplaces, LEGO. LEGO has done enormous work in securing digital child safety, which in practice has meant limiting the processing of children’s data. LEGO has placed restrictions on its platforms for the sake of creating a safe, trusted and fun environment for children. Another example is the conscious choice not to discriminate between customers based on geographical location, instead extending the same data protection rights irrespective of nationality and geography.
Do I have to make a statement on data ethics and what should I consider?
If you are part of a large Danish enterprise, then you must apply the well-known principle of “comply-or-explain” in your annual report. Either you make a statement on your policies on data ethics or you explain why you do not have any such policies. The purpose of the requirement is not that you should make a policy. Rather, the point is to elevate data ethics to one of the (several) issues that top management has to consider and be transparent about.
In fact, having a policy on data ethics will not be relevant for all companies. This is likely the case if your organisation has little to no development or use of technological products and services based on data about individuals.
If, on the other hand, you develop or use such data-driven solutions, e.g. automated or semi-automated decision-making, profiling, or machine learning/AI, a data-ethical position is relevant to consider. This applies even if you are not covered by the act. Similarly, if you already have a particular position on data processing that promotes ethical thinking and responsibility, as in my examples above, it may be worth identifying, formalizing and reporting on it.
The preparatory works mention that the statement in the annual report could encompass a description of how and to what ends the company uses new technology, e.g. machine learning and AI, in the development and supply of products and services, including whether the company uses AI for pricing, decisions etc. A true data-ethical statement, however, covers more than merely new technology. Other genuine choices that introduce, promote, support or otherwise reinforce the fundamental rights of individuals in data processing can be the result of a particular ethical stance on data collection, consumption and disclosure.
The introduction of data-ethical reporting is very much intended as a competitive advantage. With the increasing attention to data privacy and to the effects that the application of new technology and globalization have on the individual, it very well may be so. Organisations are exploring new ways of using data and technology, and choices, including ethical ones, must be considered.
A small final takeaway on the debate. In my own view, the national Danish debate on data ethics has taken a rather unfortunate turn. Ethics is very intangible for most of us, and easily confused with mere compliance with the GDPR. The two are very different and must be kept apart. Ethics is more than being compliant with laws; ethics is about creating a fair starting point for your actions. So to get to the core of the matter, one must consider how to implement fairness in one’s data processing. Not for the sake of the authorities, but for the sake of the individuals whose data is processed and for societal reasons.
At White Label Consultancy we can help facilitate, develop or review your data ethics policy. The policy is rooted in your organisation’s data processing activities and your organisation’s values. We can help identify areas where ethical thinking is a competitive advantage – or a necessity.