
The search returned 3 results.

Are You AI’s Favourite? EU Legal Implications of Biased AI Systems in Clinical Genetics and Genomics Journal Article

Anastasiya Kiseleva, Paul Quinn

European Pharmaceutical Law Review, Volume 5 (2021), Issue 4, pp. 155-174

The article provides a legal overview of biased AI systems in clinical genetics and genomics. For this overview, bias is considered from two perspectives: societal and statistical. The paper explores how bias can be defined from these two perspectives and how it can generally be classified. Based on these two perspectives, the paper examines three negative consequences of bias in AI systems: discrimination and stigmatization (the more societal concepts) and inaccuracy of AI’s decisions (more closely related to the statistical perception of bias). Each of these consequences is analyzed within the framework it corresponds to. Recognizing inaccuracy as a harm caused by biased AI systems is one of the most important contributions of the article. It is argued that, once identified, bias in an AI system indicates possible inaccuracy in its outcomes. The article demonstrates this through an analysis of the medical devices framework: whether it applies to AI applications used in genomics and genetics, how it defines bias, and what requirements it sets to prevent it. The paper also looks at how this framework can work together with anti-discrimination and anti-stigmatization rules, especially in light of the upcoming general legal framework on AI. The authors conclude that all these frameworks should be considered in the fight against bias in AI systems, because they reflect different approaches to the nature of bias and thus provide a broader range of mechanisms to prevent or minimize it.


Creating a European Health Data Space: Obstacles in Four Key Legal Areas Journal Article

Anastasiya Kiseleva, Paul de Hert

European Pharmaceutical Law Review, Volume 5 (2021), Issue 1, pp. 21-36

The creation of the European health data space is one of the core actions promoted by the European Commission in its EU Data Strategy. This task is challenging due to technical, organisational, and economic issues that require different measures. This paper focuses on the issues in the legal field and identifies four key legal areas where the creation of a European health data space may face obstacles. These areas are: 1) rules on the provision of healthcare in the Member States; 2) protection of personal data in healthcare provision and medical research; 3) control and use of non-personal data; and 4) the regulatory framework on AI. The article analyses and compares these areas and provides a systematised view of causes and consequences. The article concludes with an outlook on further legislative developments related to the European health data space.


AI as a Medical Device: Is it Enough to Ensure Performance Transparency and Accountability? Journal Article

Anastasiya Kiseleva

European Pharmaceutical Law Review, Volume 4 (2020), Issue 1, pp. 5-16

The ‘black-box’ nature of AI algorithms creates challenges in areas where the decision-making process should be transparent and accountable. One of these areas is the healthcare industry. Because transparency and accountability are relatively unexplored in the healthcare domain, this article first examines these concepts and differentiates their types in healthcare generally and in relation to the use of AI. Following that, the paper provides an analysis of the current EU and US regulations on medical devices and their applicability to AI-based applications. The article concludes that the medical devices regulations can be considered the initial legal framework for the use of AI in healthcare in terms of safety and performance, but must be extended and further developed in terms of performance transparency and accountability.
