In the healthcare context, the right to information is pivotal: it allows the patient to make informed decisions about their health. This right has a twofold meaning. First, it applies to the patient, who must be made aware of how their condition has been diagnosed, their treatment options, and the corresponding benefits and side effects. Second, the right to information also concerns healthcare professionals, who must provide the necessary information to their patients, which in turn depends on their own understanding of the medical assessments. As a result, these two aspects of the right to information are intertwined: the patient relies on the comprehension of the healthcare provider to exercise an effective right to information. This contribution focuses on the latter aspect of the right to information in the context of the use of automated decision-making (ADM) systems – a specific AI tool that predicts an outcome – in healthcare diagnosis and treatment in the Scandinavian countries.
One illustration of such an ADM system comes from the Swedish company 2cureX, which has developed the algorithm-based laboratory test IndiTreat, revealing patients’ drug sensitivity by testing their tumour cells. Nonetheless, it could all go very wrong. At the Copenhagen Rigshospital, physicians fed data from 31 patients into an ADM system, namely IBM’s AI Watson. These patients had already been treated for breast, lung or colon cancer. In one-third of the cases, AI Watson proposed life-threatening treatment plans. Crucial to these risks is the lack of transparency in ADM systems caused by the unexplainability of AI, also known as the ‘black box’ of AI. This opacity is predominantly generated by the self-learning ability of AI built with, for example, machine learning techniques. An additional hurdle is created by intellectual property rights – most likely trade secret protection – used by private parties to protect their ADM systems. In theory, AI transparency achieves freedom and autonomy for citizens by providing them with knowledge. In health care, the patient’s freedom and autonomy are ensured by their right to information. However, the ADM systems used for healthcare diagnosis and treatment may inherently lack transparency, which may render the right to information ineffective.
The aim of this contribution is to explore whether the right to information, seen from the perspective of a medical specialist, enables a level of transparency adequate to allow the patient to make decisions about their health when ADM systems play a role in their diagnosis or treatment. I will show that the Scandinavian legal framework of the right to information forms a fitting baseline, which, however, does not necessarily tackle the lack of transparency in ADM systems in health care. Nevertheless, I am not convinced that – at this time – measures can be adopted to attain transparency surrounding the ‘black box’ of such ADM systems. Rather, I argue for a holistic approach to ensure reliable and trustworthy ADM systems. First, research should be conducted across various fields of expertise, meaning that legal studies form part of the solution but not the whole of it. Second, research should consider the whole procedure, including both the input phase and the output phase.
The right to information
General
The right to information in a medical context is a vital patient right, as it empowers the patient to make decisions about their health. For example, should a patient diagnosed with breast cancer opt for an aggressive approach, including several rounds of chemotherapy and radiotherapy, or rather choose treatment aimed at pain and symptom management? Patients can only meaningfully consent to or refuse treatment if they understand their condition, the alternatives, and the consequences. ADM systems in health care may enhance medical diagnosis and treatment in fields such as oncology through speedier diagnosis and more efficient treatment.
Looking at our example, an ADM system may suggest a less intrusive treatment involving the removal of the malignant tissue and immunotherapy, as opposed to amputation and hormone therapy. However, unleashing this potential largely depends on AI transparency. The need for AI transparency in a medical context is apparent and forms a pivotal point in various contributions. AI transparency can be reached by various measures in the domains of, for example, interpretability, communication, and explainability. I hold that these three components of AI transparency can be traced back to the patient’s right to information. Unfortunately, precisely these three facets of AI transparency – ‘interpretability’, ‘communication’, and ‘explainability’ – may be curbed by the ‘black box’ and the self-learning ability of ADM systems used in healthcare diagnosis and treatment, and thus hamper the right to information.
First, the notion of ‘interpretability’ refers to the capacity to comprehend the medical results that translate into a certain diagnosis or treatment. In our case, this means that the medical practitioner needs to understand the outcome suggested by the ADM system: why did the ADM system propose one treatment over another? Second, the concept of ‘communication’ relates to merely transferring the results and their consequences to the patient. Turning to our example, this would require the medical professional to relay the suggested treatment of the ADM system and its consequences to their patient. Third, the notion of ‘explainability’ adds to ‘communication’ as it points to the ability to make the medical results and the corresponding diagnosis and procedures clear to the patient. Put differently, ‘explainability’ entails communicating in a comprehensible way. Moving to our case, the explanations of the physician ought to be delivered in an intelligible manner. Concisely, the right to information may be hampered by ADM systems’ ‘black box’ and self-learning capacity, since the medical specialist may struggle to understand the outcome proposed by the ADM system, which may in turn prevent them from explaining this information to their patients in a comprehensible manner.
National legislation
At the national level, the Norwegian pasient- og brukerrettighetslov (Patient and User Rights Act) embeds the right to information in its § 3-2. Specifically, patients are to be made aware of their health status, the available health care, and the corresponding side effects. According to § 3-5 of the Patient and User Rights Act, the information must be adapted to the patient, which means that healthcare providers should, amongst others, consider the patient’s age and background. The Swedish Patientlag (Patient Act) requires in 3 kap. 1 § that the patient be informed about, amongst others, their health status, methods for treatment, and possible risks. In accordance with 3 kap. 6 § of the Patient Act, the information relayed by healthcare specialists needs to be tailored to the patient, meaning that regard should be given to the patient’s background, including their age and language skills. The Danish Sundhedslov (Health Act) demands in kapitel 5, § 16 that the patient has the right to receive information about their health status and treatment plans, including the accompanying side effects. Under stk. 3 of the same provision, this information must provide an understandable overview of the patient’s health condition and treatment, which means that healthcare professionals should deliver the information adapted to, amongst others, the patient’s age and maturity. In sum, all three Scandinavian countries have enshrined the patient’s right to information in their national legislation. Further, they all require that the patient be made aware of their health status, treatment options and related side effects in an intelligible manner.
European legislation
Not only national law deals with information rights; European legislation does too. Specifically, the General Data Protection Regulation (GDPR)[1] is applicable, as these ADM systems in healthcare diagnostics and treatment also process health data, a special category of personal data. The requirements stemming from the GDPR intend to enable data subjects to take ownership of their personal data by providing them with information, whereas Scandinavian health law aims to help patients make decisions about their health. Consequently, the various rights provided to data subjects – in this case, patients – under the GDPR are intended to enforce their right to data protection.
Nevertheless, I believe there is a link between the right to information in health law and the infamous Article 22 GDPR, which encompasses a distinct right aimed at ADM systems that issue a ‘decision based solely on automated processing’, and thus without any human intervention. In the case of such a decision, Article 22 demands that data subjects – in this case, patients – be given meaningful information about the logic of the ADM system used, also known as ‘the right to meaningful explanations’. This has consequences for the right to information: in the context of health care, the right to meaningful explanations also requires medical specialists to understand how the results stem from the ADM system (interpretability), which needs to be relayed (communication) in an understandable manner (explainability).
However, two remarks should be made about the right to meaningful explanations. First, the notion of a ‘decision based solely on automated processing’ is quite ambiguous, which means it remains unclear whether minimal human involvement renders this right inapplicable. If so, this right has no bearing in the context of healthcare diagnosis and treatment, since physicians will need to interpret the outcome proposed by the ADM system, which thus amounts to human involvement. Second, the same holds true for the notion of ‘meaningful explanations’, as its exact extent is unclear. Consequently, the definite content of this right remains inconclusive. Moreover, in accordance with Article 12 GDPR, these meaningful explanations should be relayed in, amongst others, a transparent and intelligible manner. This is also the common denominator between the national health legislation and the GDPR: under both legal frameworks, the patient is entitled to be provided with comprehensible information.
Currently, a preliminary ruling is pending before the European Court of Justice that may shed some light on the scope of Article 22 GDPR, including which degree of ‘human involvement’ would still demand meaningful explanations. If the European Court of Justice were to give a broader reading of the notion of ‘human intervention’, this may very well entail that medical practitioners are to provide meaningful information about the logic of ADM systems – whatever that content may be. This may pose an additional hurdle to the physician’s ability to comply with the three features of AI transparency, as they may not understand the logic by which the ADM system reaches a certain diagnosis or suggests a specific treatment (interpretability), and therefore cannot relay it to their patients (communication) in an intelligible manner (explainability).
Beacon of hope or a right of little value?
The patient’s right to information across Scandinavia is a good starting point for answering questions fundamental to achieving interpretability, communication, and explainability, all of which are paramount to accomplishing AI transparency. The same holds true for the right to meaningful explanations embedded in Article 22 GDPR, provided the European Court ties a broader meaning to the notion of ‘human involvement’. Thus, the solution does not lie in adopting novel requirements surrounding the right to information, nor is the answer rooted in re-interpreting the existing norms required under this right. Healthcare providers are bound to explain their patients’ diagnoses and treatment options in a transparent manner, which should – in theory – reduce the knowledge gap between the physician and the patient. However, in practice, relying solely on this right does not attain the all-encompassing notions of ‘interpretability’, ‘explainability’, and ‘communication’, because the ‘black box’ phenomenon and the self-learning ability of ADM systems may hinder medical specialists from facilitating the patient’s right to information. Consequently, other measures need to be explored. Nevertheless, even though the legal framework of the right to information is of limited help here, this does not exclude the adoption or revision of other legal provisions.
The way forward
Undeniably, it remains fundamental to safeguard the patient’s right to information, especially since these new ADM systems in healthcare diagnosis and treatment have the potential to detrimentally affect a patient’s health. The question arises: how can this be realised so that the right to information adequately allows the patient to grasp their diagnosis and their treatment options? The problem is situated in the novel technology used and its shortcomings related to explainability, which are primarily caused by the black-box mechanism and the self-learning capacities of ADM systems. Consequently, the notion of ‘interpretability’ is cumbersome to safeguard, which in turn hinders the notions of ‘communication’ and ‘explainability’.
I am not convinced that the lack of transparency can be overcome with current technological knowledge. For example, all eyes are currently on explainable Artificial Intelligence (XAI), which centres on explaining the outcomes of ADM systems. In particular, local explanation techniques aimed at end-users – in this case, healthcare professionals – seem relevant to uncloak how ADM systems work. The outcome would then facilitate the right to information, and thus the three facets of AI transparency: interpretability, communication, and explainability. While XAI is a promising field aimed at demystifying the ‘black box’ of AI, none of the local explanation methods appears to be infallible. So, from the XAI side, there is no proven solution yet at the output phase of ADM systems.
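To make the idea of a local explanation technique more concrete, the sketch below approximates a black-box model around a single prediction with a weighted linear surrogate, in the spirit of LIME. Everything here is illustrative: the synthetic data, the hypothetical ‘treatment classifier’, and the parameter choices are assumptions, not a description of any deployed medical ADM system.

```python
# A minimal, illustrative sketch of a LIME-style local surrogate explanation.
# The "black box" stands in for a hypothetical ADM system; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque model on synthetic "patient" features (assumption: tabular data).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(model, instance, n_samples=1000, kernel_width=1.0):
    """Fit a weighted linear surrogate around one prediction.

    The surrogate's coefficients approximate how much each feature pushed
    this particular prediction up or down - the kind of local, per-patient
    explanation the text refers to."""
    rng = np.random.default_rng(0)
    # Probe the model in the neighbourhood of the instance.
    perturbed = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    preds = model.predict_proba(perturbed)[:, 1]
    # Weight perturbed points by proximity to the original instance.
    distances = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_  # per-feature local contributions

patient = X[0]
print(explain_locally(black_box, patient))
```

The catch, as noted above, is that such surrogates are themselves approximations: different perturbation or weighting choices can yield different explanations, which is one reason none of these methods is infallible.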
Nevertheless, further research is needed, which also ties into my claim that a more holistic approach should be taken to the transparency concerns surrounding ADM systems used in medical diagnosis and treatment. This would include not only research in the legal field but also in technical studies. Another way to realise a more holistic approach is to assess the complete procedure of ADM systems, targeting the input phase as well, as opposed to focusing solely on the output phase. Consequently, research should scrutinise the training, validation and testing process of ADM systems, which I believe is necessary to create more accurate and credible ADM systems. ADM systems must be representative and avoid biases to minimise the harmful effects of the ‘black box’ and the self-learning ability. One important aspect is the absence of discriminatory grounds during the training, validation and testing stage. Another vital component is using more representative datasets during the input phase of ADM systems.
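As a small illustration of what input-phase scrutiny can mean in practice, the sketch below checks that a demographic attribute remains equally distributed across the training and testing splits. The dataset, the ‘age_band’ column and the proportions are all hypothetical; the point is only that representativeness during the input phase can be made measurable.

```python
# Illustrative input-phase check: keep splits demographically representative.
# The records and the "age_band" attribute are synthetic assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic patient records; in practice this would be the curated clinical dataset.
records = pd.DataFrame({
    "feature": range(999),
    "age_band": ["<40", "40-65", "65+"] * 333,
    "outcome": [0, 1, 1] * 333,
})

# Stratify on the demographic attribute so the training and testing splits
# keep the same age distribution as the source data.
train, test = train_test_split(
    records, test_size=0.2, stratify=records["age_band"], random_state=0
)

# Compare distributions to verify that the split stayed representative.
print(records["age_band"].value_counts(normalize=True))
print(test["age_band"].value_counts(normalize=True))
```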
While these recommendations are technical in origin, the law has a significant role to play. First, the law may prohibit the use of unjustified discriminatory data[2] to further the creation of bias-free ADM systems. Second, legislation may require developers to maintain a metadata sheet of the datasets used during the input phase, which may promote transparency surrounding the creation of the ADM system and prevent the use of unrepresentative datasets. In this respect, the proposal for the AI Act brings some hope. Since these ADM systems in health care are deemed ‘high-risk AI systems’, they are to comply with certain transparency requirements, including using relevant, representative, and complete data. Third, the legislator may demand that proxy data be inserted during the training, validation and testing stage to achieve more representative datasets.
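To give the second recommendation some shape: a metadata sheet could be as simple as a machine-readable record accompanying each dataset. The sketch below is one possible, entirely hypothetical schema; the field names are assumptions, not the AI Act’s (or any regulator’s) required format.

```python
# A hypothetical, minimal "metadata sheet" for a training dataset.
# The schema is illustrative; no legislation currently prescribes these fields.
from dataclasses import dataclass, field

@dataclass
class DatasetSheet:
    name: str
    version: str
    collection_period: str      # e.g. "2015-2022"
    source_institutions: list   # where the records were collected
    n_records: int
    demographic_coverage: dict  # e.g. share of records per age band
    known_gaps: list = field(default_factory=list)

sheet = DatasetSheet(
    name="oncology-treatment-outcomes",  # hypothetical dataset name
    version="2023.1",
    collection_period="2015-2022",
    source_institutions=["Hospital A", "Hospital B"],
    n_records=48_000,
    demographic_coverage={"age<40": 0.12, "40-65": 0.51, "65+": 0.37},
    known_gaps=["few records from rural regions"],
)
print(sheet)
```

Such a sheet would let auditors, and indirectly healthcare providers, trace whether the input phase relied on representative data, which is precisely the transparency this recommendation aims at.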
Even though these proposals do not envisage tackling the transparency issues related to interpretability, communication, and explainability per se, they may nevertheless ensure the creation – and thus the use – of reliable and trustworthy ADM systems. These measures are not to be construed as the ultimate goal but rather as supplementary measures to ensure equal and safe health care, which will in turn bolster trust in the ADM systems used. At the same time, more research should be conducted into uncloaking the ‘black box’ to accomplish full transparency.
[1] Note that the GDPR is also applicable in the European Economic Area, which means that Norway is also bound by its text.
[2] In the context of health care, it may be necessary to know, for example, the patient’s gender or age to reach an accurate diagnosis or to propose a suitable treatment.

Sarah de Heer
Sarah de Heer is a PhD candidate at the Faculty of Law of Lund University, where she researches the right to good administration in AI systems used in health care. Her PhD project is part of the research project ‘AICARE – AI and Automated systems and the Right to Health: Revisiting law accounting for the exploitation of users’ preferences and values’.