

EU Ethics Guidelines for Trustworthy AI


This is not exactly a timely topic, but here is the web page for the EU's Ethics Guidelines for Trustworthy AI.

European Commission > Futurium

Ethics Guidelines for Trustworthy AI

・[PDF] Guidelines

 I. Chapter I: Foundations of Trustworthy AI
 II. Chapter II: Realising Trustworthy AI
  1. Requirements of Trustworthy AI
  2. Technical and non-technical methods to realise Trustworthy AI
 III. Chapter III: Assessing Trustworthy AI



The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system's entire life cycle:
(1) it should be lawful, complying with all applicable laws and regulations;
(2) it should be ethical, ensuring adherence to ethical principles and values; and
(3) it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.
Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.

These Guidelines set out a framework for achieving Trustworthy AI. The framework does not explicitly deal with Trustworthy AI’s first component (lawful AI). Instead, it aims to offer guidance on the second and third components: fostering and securing ethical and robust AI. Addressed to all stakeholders, these Guidelines seek to go beyond a list of ethical principles, by providing guidance on how such principles can be operationalised in sociotechnical systems. Guidance is provided in three layers of abstraction, from the most abstract in Chapter I to the most concrete in Chapter III, closing with examples of opportunities and critical concerns raised by AI systems.

I. Based on an approach founded on fundamental rights, Chapter I identifies the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems.

Key guidance derived from Chapter I:

  • Develop, deploy and use AI systems in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability. Acknowledge and address the potential tensions between these principles.
  • Pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities and others that have historically been disadvantaged or are at risk of exclusion, and to situations which are characterised by asymmetries of power or information, such as between employers and workers, or between businesses and consumers.
  • Acknowledge that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure (e.g. on democracy, the rule of law and distributive justice, or on the human mind itself). Adopt adequate measures to mitigate these risks when appropriate, and proportionately to the magnitude of the risk.

II. Drawing upon Chapter I, Chapter II provides guidance on how Trustworthy AI can be realised, by listing seven requirements that AI systems should meet. Both technical and non-technical methods can be used for their implementation.

Key guidance derived from Chapter II:

  • Ensure that the development, deployment and use of AI systems meets the seven key requirements for Trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability.
  • Consider technical and non-technical methods to ensure the implementation of those requirements.
  • Foster research and innovation to help assess AI systems and to further the achievement of the requirements; disseminate results and open questions to the wider public, and systematically train a new generation of experts in AI ethics.
  • Communicate, in a clear and proactive manner, information to stakeholders about the AI system’s capabilities and limitations, enabling realistic expectation setting, and about the manner in which the requirements are implemented. Be transparent about the fact that they are dealing with an AI system.
  • Facilitate the traceability and auditability of AI systems, particularly in critical contexts or situations.
  • Involve stakeholders throughout the AI system’s life cycle. Foster training and education so that all stakeholders are aware of and trained in Trustworthy AI.
  • Be mindful that there might be fundamental tensions between different principles and requirements.
  • Continuously identify, evaluate, document and communicate these trade-offs and their solutions.

III. Chapter III provides a concrete and non-exhaustive Trustworthy AI assessment list aimed at operationalising the key requirements set out in Chapter II. This assessment list will need to be tailored to the specific use case of the AI system.

Key guidance derived from Chapter III:

  • Adopt a Trustworthy AI assessment list when developing, deploying or using AI systems, and adapt it to the specific use case in which the system is being applied.
  • Keep in mind that such an assessment list will never be exhaustive. Ensuring Trustworthy AI is not about ticking boxes, but about continuously identifying and implementing requirements, evaluating solutions, ensuring improved outcomes throughout the AI system’s lifecycle, and involving stakeholders in this.

A final section of the document aims to concretise some of the issues touched upon throughout the framework, by offering examples of beneficial opportunities that should be pursued, and critical concerns raised by AI systems that should be carefully considered.

While these Guidelines aim to offer guidance for AI applications in general by building a horizontal foundation to achieve Trustworthy AI, different situations raise different challenges. It should therefore be explored whether, in addition to this horizontal framework, a sectorial approach is needed, given the context-specificity of AI systems.

These Guidelines do not intend to substitute any form of current or future policymaking or regulation, nor do they aim to deter the introduction thereof. They should be seen as a living document to be reviewed and updated over time to ensure their continuous relevance as the technology, our social environments, and our knowledge evolve.

This document is a starting point for the discussion about “Trustworthy AI for Europe”.

Beyond Europe, the Guidelines also aim to foster research, reflection and discussion on an ethical framework for AI systems at a global level.



