Europol, UNICRI, Trend Micro: Criminals Are Leveraging AI Too! (And It's Not Just Deepfakes)
Hello, this is Mitsuhiko Maruyama (丸山満彦).
Europol and the United Nations Interregional Crime and Justice Research Institute (UNICRI) [wikipedia], the UN's crime research institute, have published, with the cooperation of Trend Micro, a report titled Malicious Uses and Abuses of Artificial Intelligence...
● Europol
・2020.11.19 NEW REPORT FINDS THAT CRIMINALS LEVERAGE AI FOR MALICIOUS USE – AND IT’S NOT JUST DEEP FAKES
Europol, UNICRI and Trend Micro uncover current and future threats of AI and how to combat them
・・[PDF] Malicious Uses and Abuses of Artificial Intelligence
The recommendations are, roughly:
- Harness AI technology as a tool to fight crime
- Continue research to stimulate the development of defensive technologies
- Promote and develop secure AI design frameworks
- De-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes
- Leverage public-private partnerships and establish multidisciplinary expert groups
That is the general idea...
Writing out the contents at the heading level...
[1] Introduction
[2] The Present State of Malicious Uses and Abuses of AI
[2-1] AI Malware
[2-2] AI Malware at Large
[2-3] Abusing AI Cloud Services
[2-4] Abusing Smart Assistants
[2-5] AI-Supported Password Guessing
[2-6] AI-Supported CAPTCHA Breaking
[2-7] AI-Aided Encryption
[2-8] Trends Found on Underground Forums
[2-8-1] Human Impersonation on Social Networking Platforms
[2-8-2] Online Game Cheats
[2-8-3] AI-Supported Hacking
[2-8-4] AI-Supported Cryptocurrency Trading
[2-8-5] Social Engineering
[3] Future Scenarios of Malicious Uses and Abuses of AI
[3-1] Social Engineering at Scale
[3-2] Content Generation
[3-3] Content Parsing
[3-4] Improved Social Profile Aging for Forums and Botnets
[3-5] Robocalling v2.0
[3-6] Criminal Business Intelligence
[3-7] Abusing Image Recognition Systems
[3-7-1] Autonomous Cars
[3-7-2] Drones, Connected Skies, and the Internet of Flying Things
[3-8] Escaping an Image Recognition System
[3-9] Remote Machine Learning Sets Pollution
[3-9-1] Security Algorithms
[3-9-2] AI-Enabled Stock Market Manipulation
[3-10] Business Process Compromise and Injection of Safelisted Telemetry
[3-11] Insider Attacks: Banking and Trading Floor AI
[3-12] Local Library Poisoning by Resident Malware
[3-13] AI-Supported Ransomware
[3-14] Escaping AI Detection Systems
[3-14-1] Fraud and Voice Recognition in Banks
[4] Recommendations
[5] Case Study: A Deep Dive Into Deepfakes
[5-1] Deepfakes
[5-2] The Technology Behind Deepfakes
[5-2-1] Deepfake Creation: Apps and Tools
[5-3] The Current State of the Abuse of Deepfakes
[5-4] Potential Reasons for the Low Rate of Adoption of Deepfakes
[5-5] Some Possible Future Threats of Deepfakes
[5-6] Countering Deepfakes
[5-6-1] Deepfake Detection
[5-6-2] Deepfake Policies
[5-6-3] Recommendations and Considerations for Further Research
[6] Conclusion
[7] Appendix
[7-1] YARA Rules for AI-Powered Malware Detection
[8] References
Introduction
In the dynamic world of technology and computer science, AI continues to offer a wide range of possible applications for enterprises and individuals. Unfortunately, the promise of more efficient automation and autonomy is inseparable from the different schemes that malicious actors are capable of.
For instance, criminals can use AI to facilitate and improve their attacks by maximizing opportunities for profit in a shorter time, exploiting new victims, and creating more innovative criminal business models while reducing the chances of being caught. Additionally, as AI-as-a-Service becomes more widespread, it will lower the barrier to entry by reducing the skills and technical expertise needed to employ AI.
Criminals and organized crime groups (OCGs) have been swiftly integrating new technologies into their modi operandi, creating not only constant shifts in the criminal landscape worldwide, but also significant challenges for law enforcement and cybersecurity in general. The Crime-as-a-Service (CaaS) business model, which allows non-technologically savvy criminals to procure technical tools and services in the digital underground that allow them to extend their attack capacity and sophistication, further increases the potential for new technologies such as AI to be abused by criminals and become a driver of crime.
Building knowledge about the potential use of AI by criminals will improve the ability of the cybersecurity industry in general and law enforcement agencies in particular to anticipate possible malicious and criminal activities, as well as to prevent, respond to, or mitigate the effects of such attacks in a proactive manner. An understanding of the capabilities, scenarios, and attack vectors is key to enhancing preparedness and increasing resilience.
In line with the goal to contribute to the body of knowledge on AI, this report, a joint effort among Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Europol, seeks to provide a thorough and in-depth look into the present and possible future malicious uses and abuses of AI and related technologies.
The intended audience of this report includes cybersecurity experts and practitioners, law enforcement, innovation hubs, policy-makers, and researchers.
The report can also be leveraged by members of this audience as a thinking space for ideas and a call for greater attention to the possible malicious uses or abuses of AI.
The findings contained in the report are based on contributions from the three entities. They have been combined with input collected during a focused workshop in March 2020 that was organized by Europol, Trend Micro, and UNICRI. Workshop participants included members from the Joint Cybercrime Action Taskforce (J-CAT), the International Criminal Court, and several members of Europol’s European Cybercrime Centre (EC3) Advisory Groups.
This report uses an important distinction that should be noted from the outset: namely, the distinction between malicious uses and abuses of AI. The “uses” in “malicious AI uses” refers to instances whereby criminals might employ AI systems to further their attack objectives — for example, by using ML to automate cyberattacks and improve malware. On the other hand, the “abuses” in “malicious AI abuses” refers to instances where criminals might try to attack and exploit existing AI systems to break or circumvent them — for example, by hacking smart home assistants.
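To make the "uses" side concrete, consider AI-supported password guessing, one of the present-day use cases the report lists (section 2-5): a generative model trained on leaked password corpora produces likely new candidates. The sketch below is a toy illustration of that idea, not code from the report; the character-level Markov model, the tiny corpus, and all names are hypothetical.

```python
import random
from collections import defaultdict

def train_markov(passwords, order=2):
    """Count character transitions over a corpus of leaked passwords."""
    model = defaultdict(list)
    for pw in passwords:
        padded = "^" * order + pw + "$"  # ^ = start padding, $ = end marker
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=16, seed=0):
    """Sample one candidate password by walking the transition table."""
    rng = random.Random(seed)
    state, out = "^" * order, []
    while len(out) < max_len:
        nxt = rng.choice(model[state])
        if nxt == "$":           # end-of-password marker reached
            break
        out.append(nxt)
        state = state[1:] + nxt  # slide the context window forward
    return "".join(out)

# Hypothetical mini-corpus; a real attack would train on large breach dumps.
corpus = ["password1", "password123", "pass1234", "letmein1"]
model = train_markov(corpus)
candidates = {generate(model, seed=s) for s in range(20)}
```

Defenders can apply the same model in reverse: a password-strength checker that scores a candidate highly under such a model is flagging it as easily guessable.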
With this distinction in mind, we framed the report around two main components: present AI malicious use or abuse, for which there are documented cases and research, and future malicious use or abuse for which there is no evidence or literature yet. Nonetheless, we believe that on the basis of technological trends and developments, future uses or abuses could become present realities in the not-too-distant future. To prepare for these uses or abuses, speculative scenarios that are likely to happen must be conceptualized. To ground the report with regard to this, we examine current trends in underground forums, as these can provide insights on what the malicious abuse of the AI threat landscape might look like in the near future.
With respect to present malicious uses or abuses, possible countermeasures to combat the malicious uses of AI are also identified here. It should be emphasized, however, that these countermeasures are not exhaustive and are only suggestive of one or more possible avenues for combatting the specific use or abuse identified.
Finally, through a case study at the end of this report, we take a closer look at deepfakes, a specific use of AI that has received widespread media attention and presents considerable potential for a range of malicious and criminal purposes. We also include an overview of the technological aspects of some specific use cases that are identified in the report, alongside further technical content, in this case study.