

U.S. OMB Issues "Guidance for Regulation of Artificial Intelligence Applications"



● White House OMB

・2020.11.17 [PDF] M-21-06 MEMORANDUM FOR THE HEADS OF EXECUTIVE DEPARTMENTS AND AGENCIES "Guidance for Regulation of Artificial Intelligence Applications"


● Federal Register

・2020.01.13 Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications”





■ References



Comments on the U.S. OMB (Office of Management and Budget) draft memorandum “Guidance for Regulation of Artificial Intelligence Applications”


● American Action Forum

・2020.03.09 Comments regarding “Guidance for Regulation of Artificial Intelligence Applications” by Jennifer Huddleston

● Business Roundtable

・2020.03.13 Business Roundtable Comments on Draft OMB Memorandum to the Heads of Executive Departments and Agencies on “Guidance for Regulation of Artificial Intelligence Applications”





・2020.04.15 How The White House “Guidance For Regulation Of Artificial Intelligence” Invites Overregulation



● Davis Wright Tremaine LLP

・2020.01.22 White House Issues Guidance for AI Regulation and “Non-Regulation” by K.C. Halm and Katori Copeland


・2020.01.14 AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation






Encouraging Innovation and Growth in AI

Principles for the Stewardship of AI Applications

  1. Public Trust in AI
  2. Public Participation
  3. Scientific Integrity and Information Quality
  4. Risk Assessment and Management
  5. Benefits and Costs
  6. Flexibility
  7. Fairness and Non-Discrimination
  8. Disclosure and Transparency
  9. Safety and Security
  10. Interagency Coordination

Non-Regulatory Approaches to AI

Reducing Barriers to the Deployment and Use of AI

  • Access to Federal Data and Models for AI R&D
  • Communication to the Public
  • International Regulatory Cooperation

Agency Plans to Achieve Consistency with this Memorandum

Appendix A: Technical Guidance on Rulemaking

  • Regulatory Impact Analysis
  • Public Consultation
  • Assessing Risk
  • Managing Risk

Appendix B: Template for Agency Plans


Principles for the Stewardship of AI Applications

Consistent with law, agencies should take into consideration the following principles when formulating regulatory and non-regulatory approaches to the design, development, deployment, and operation of AI applications, both general and sector-specific. These principles, many of which are interrelated, reflect the goals and principles in Executive Order 13859. Agencies should calibrate approaches concerning these principles and consider case-specific factors to optimize net benefits.

Given that many AI applications do not necessarily raise novel issues, the following principles also reflect longstanding Federal regulatory principles and practices that are relevant to promoting the innovative use of AI. Promoting innovation and growth of AI is a high priority of the U.S. government. Fostering AI innovation and growth through forbearing from new regulation may be appropriate in some cases. Agencies should consider new regulation only after they have decided, in light of the foregoing section and other considerations, that Federal regulation is necessary.

1. Public Trust in AI

AI is expected to have a positive impact across many sectors of social and economic life, including employment, transportation, education, finance, healthcare, personal security, and manufacturing. At the same time, AI applications could pose risks to privacy, individual rights, personal choice, civil liberties, public health, safety, and security that must be carefully assessed and appropriately addressed. Since the continued adoption and acceptance of AI will depend significantly on public trust and validation, the government's regulatory and non-regulatory approaches to AI should contribute to public trust in AI by promoting reliable, robust, and trustworthy AI applications. For example, an appropriate regulatory approach that reduces accidents can increase public trust and thereby support the development of industries powered by AI. Regulatory approaches may also be needed to protect reasonable expectations of privacy on the part of individuals who interact with AI and to ensure that AI does not compromise the ability of individuals to make their own informed decisions. The appropriate regulatory or non-regulatory response to privacy and other risks must necessarily depend on the nature of the risk presented and the tools available to mitigate those risks.

2. Public Participation

In accordance with Executive Order 13563, "Improving Regulation and Regulatory Review," regulations "shall be adopted through a process that involves public participation." Public participation, especially in those instances where AI uses information about individuals, will improve agency accountability and regulatory outcomes, as well as increase public trust and confidence. Agencies must provide ample opportunities for the public to provide information and participate in all stages of the rulemaking process, to the extent feasible and consistent with legal requirements (including legal constraints on participation to, for example, protect national security and address imminent threats or respond to emergencies). Agencies are also encouraged, to the extent practicable, to inform the public and promote awareness and widespread availability of voluntary frameworks or standards and the creation of other informative documents.

3. Scientific Integrity and Information Quality

The government's regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality and transparency. Consistent with the principles of scientific integrity in the rulemaking and guidance processes, agencies should articulate a clear public policy need when proposing a regulatory approach to AI and develop such approaches in a manner that both informs policy decisions and fosters public trust in AI. When an agency regulates AI applications, it should, as relevant, transparently articulate the strengths and weaknesses of the applications; intended optimizations or outcomes; bias and risk mitigations; potential impacts on competition, privacy and personal decision-making; any national security implications; and appropriate uses of the AI application's results.

4. Risk Assessment and Management

Regulatory and non-regulatory approaches to AI should be based on a consistent application of risk assessment and risk management across various agencies and various technologies. It is not necessary to mitigate every foreseeable risk; in fact, a foundational principle of regulatory policy is that all activities involve tradeoffs. Instead, a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits. Agencies should be transparent about their evaluations of risk and re-evaluate their assumptions and conclusions at appropriate intervals so as to foster accountability. Correspondingly, the magnitude and nature of the consequences should an AI tool fail, or for that matter succeed, can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks. Specifically, agencies should follow the direction in Executive Order 12866, "Regulatory Planning and Review," to consider the degree and nature of the risks posed by various activities within their jurisdiction. Such an approach will, where appropriate, avoid hazard-based and unnecessarily precautionary approaches to regulation that could unjustifiably create anticompetitive effects or inhibit innovation. Whenever practical and consistent with applicable law, agencies should seek to apply consistent risk assessment and risk management frameworks and approaches to similar AI functionalities across sectors. Any assessment of risk should compare that risk to the risk presented by the situation that would obtain absent the AI application at issue; if an AI application lessens risk that would otherwise obtain, any relevant regulations presumably should permit that application.

5. Benefits and Costs

When developing regulatory and non-regulatory approaches, agencies will often consider the application and deployment of AI into already-regulated industries. Presumably, such significant investments would not occur unless they offered significant economic potential. As in all technological transitions of this nature, the introduction of AI may also create unique challenges. For example, while the broader legal environment already applies to AI applications, the application of existing law to questions of responsibility and liability for decisions made by AI could be unclear in some instances, leading to the need for agencies, consistent with their authorities, to evaluate the benefits, costs, and distributional effects associated with any identified or expected method for accountability. Executive Order 12866 calls on agencies to "select those approaches that maximize net benefits (including potential economic, environmental, public health and safety, and other advantages; distributive impacts; and equity)." Agencies should, when consistent with law, carefully consider the full societal costs, benefits, and distributional effects when considering regulations related to the development and deployment of AI applications. Such consideration will include the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace; whether implementing AI will change the type of errors created by the system; and comparison to the degree of risk tolerated in other existing systems. In cases where a comparison to a current system or process is not available, the risks and costs of not implementing the system should be evaluated as well.

6. Flexibility

When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that are technology neutral and that do not impose mandates on companies that would harm innovation. Rigid, design-based regulations that attempt to prescribe the technical specifications of AI applications will in most cases be impractical and ineffective, given the anticipated pace with which AI will evolve and the resulting need for agencies to react to new information and evidence. Targeted agency conformity assessment schemes, to protect health and safety, privacy, and other values, will be essential to a successful, and flexible, performance-based approach. To advance American innovation, agencies should keep in mind international uses of AI, ensuring that American companies are not disadvantaged by the United States' regulatory regime.

7. Fairness and Non-Discrimination

Agencies should consider in a transparent manner the impacts that AI applications may have on discrimination. AI applications have the potential of reducing present-day discrimination caused by human subjectivity. At the same time, applications can, in some instances, introduce real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI or be used in other ways that violate anti-discrimination statutes. When considering regulations or non-regulatory approaches related to AI applications, agencies should consider, in accordance with law, issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes.

8. Disclosure and Transparency

In addition to improving the rulemaking process, transparency and disclosure can increase public trust and confidence in AI applications by allowing (a) non-experts to understand how an AI application works and (b) technical experts to understand the process by which AI made a given decision. Such disclosures, when required, should be written in a format that is easy for the public to understand and may include identifying when AI is in use, for instance, if appropriate for addressing questions about how the application impacts human end users. Disclosures may be required to preserve the ability of human end users and other members of the public to make informed decisions, although agencies should be aware that some applications of AI could improve or assist human decision-making. Agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency. What constitutes appropriate disclosure and transparency is context-specific, depending on assessments of potential harms (including those resulting from the exploitation of disclosed information), the magnitude of those harms, the technical state of the art, and the potential benefits of the AI application.

9. Safety and Security

Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Agencies should also consider methods for providing systemic resilience, and for preventing bad actors from exploiting AI systems, including cybersecurity risks posed by AI operation, and adversarial use of AI against a regulated entity. When evaluating or developing regulatory and non-regulatory approaches to AI applications, agencies should be mindful of any potential safety and security risks and vulnerabilities, as well as the risk of possible malicious deployment and use of AI applications. Moreover, agencies should consider, where relevant, any national security implications raised by the unique characteristics of AI and AI applications and take actions to protect national security as appropriate for their authorities.

10. Interagency Coordination

A coherent and whole-of-government approach to AI oversight requires interagency coordination. Interagency coordination will be achieved under the auspices of Executive Order 12866, which governs the Office of Information and Regulatory Affairs (OIRA) in its oversight of Federal regulation and establishes principles of regulation that are relevant to this Memorandum. Consistent with Executive Order 12866, agencies should coordinate with each other to share experiences to ensure consistency and predictability of AI-related policies that advance American innovation and adoption of AI, while appropriately protecting privacy, civil liberties, national security, and American values and allowing sector- and application-specific approaches. When OIRA designates an AI-related draft regulatory action as "significant" for purposes of interagency review under Executive Order 12866, OIRA will ensure that all agencies potentially affected by or interested in a particular action will have an opportunity to provide input.




