
2020.08.10

Reports on deepfakes from NATO, CSET, Partnership on AI, and GAO (from a little while ago, but...)

Hello, this is Mitsuhiko Maruyama (丸山満彦).

There are several publications on deepfakes, so here is a memo for my own reference...

The cat-and-mouse game between the technology for creating deepfakes and the technology for detecting them will probably continue...

It is an interesting technology for films and the like, but since deepfakes could also be circulated on social media with deliberate intent to mislead, some form of regulation may become necessary in the future...

 

I have lined up a few of them below; the NATO and CSET reports have the same author, Tim Hwang...

NATO Strategic Communications Centre of Excellence, Riga, Latvia

・2020.06.05 New study offers insight into future of deepfake technology

・[PDF] Deepfakes - Primer and Forecast by Tim Hwang

 

Center for Security and Emerging Technology (CSET)

・2020.07 (Analysis) Deepfakes: A Grounded Threat Assessment by Tim Hwang

・[PDF] Deepfakes: A Grounded Threat Assessment

The rise of deepfakes could enhance the effectiveness of disinformation efforts by states, political parties and adversarial actors. How rapidly is this technology advancing, and who in reality might adopt it for malicious ends? This report offers a comprehensive deepfake threat assessment grounded in the latest machine learning research on generative models.

 

Partnership on AI

・2020.03.12 (NEWS) A Report on the Deepfake Detection Challenge by Claire Leibowicz

・[PDF] The Deepfake Detection Challenge: Insights and Recommendations for AI and Media Integrity

GAO

・2020.02.20 SCIENCE & TECH SPOTLIGHT: Deepfakes GAO-20-379SP

・[PDF] Full report

 

Others

ResearchGate.net

・2019.09 Deep Learning for Deepfakes Creation and Detection: A Survey by Thanh Thi Nguyen and Cuong M. Nguyen


Abstract
Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. Deep learning advances, however, have also been employed to create software that can cause threats to privacy, democracy and national security. One such deep learning-powered application that has recently emerged is the "deepfake". Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. The proposal of technologies that can automatically detect and assess the integrity of digital visual media is therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, methods proposed to detect deepfakes in the literature to date. We present extensive discussions on challenges, research trends and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with increasingly challenging deepfakes.
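For reference, the creation side that surveys like this cover typically includes the shared-encoder autoencoder pipeline popularized by the original "deepfake" tools: a single encoder is trained on face crops of two people, each person gets their own decoder, and a swap is produced by encoding a face of person A and decoding it with person B's decoder. The sketch below is a minimal illustration of that idea in PyTorch; the layer sizes, names, and training step are my own assumptions, not code from the survey.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder used for
# face swapping. All sizes and names are illustrative assumptions, not the survey's code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 3x64x64 face crop to a latent vector shared by both identities."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# One training step, with random tensors standing in for face crops of persons A and B.
faces_a, faces_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
        + loss_fn(decoder_b(encoder(faces_b)), faces_b))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# After training, the swap is: encode a face of A, decode it with B's decoder.
swapped = decoder_b(encoder(faces_a))
```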

Springer Link

・2020.08.06 The Epistemic Threat of Deepfakes by Don Fallis

・[PDF]

Abstract

Deepfakes are realistic videos created using new machine learning techniques rather than traditional photographic means. They tend to depict people saying and doing things that they did not actually say or do. In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an “infopocalypse” where we cannot tell what is real from what is not. Several philosophers (e.g., Deborah Johnson, Luciano Floridi, Regina Rini) have now issued similar warnings. In this paper, I offer an analysis of why deepfakes are such a serious threat to knowledge. Utilizing the account of information carrying recently developed by Brian Skyrms (2010), I argue that deepfakes reduce the amount of information that videos carry to viewers. I conclude by drawing some implications of this analysis for addressing the epistemic threat of deepfakes.
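Fallis's point can be made concrete with a small Bayesian calculation: the more plausible it is that a convincing video exists even though the depicted event never happened, the less seeing such a video should raise our confidence that the event is real. The toy sketch below uses made-up numbers and a log posterior-to-prior measure loosely in the spirit of Skyrms's account of information carrying; it is only my own illustration, not anything taken from the paper.

```python
# Toy illustration (my own numbers, not Fallis's) of why a higher prevalence of
# convincing deepfakes lowers the evidential value of a video, using a
# Skyrms-style "information carried" measure: log2(P(event | video) / P(event)).
import math

def information_carried(p_event: float, p_video_given_event: float,
                        p_video_given_no_event: float) -> float:
    """Bits of information a video carries about the depicted event having occurred."""
    p_video = (p_video_given_event * p_event
               + p_video_given_no_event * (1.0 - p_event))
    p_event_given_video = p_video_given_event * p_event / p_video
    return math.log2(p_event_given_video / p_event)

p_event = 0.01              # prior probability that the depicted event actually occurred
p_video_given_event = 0.9   # chance such a video surfaces if the event is real

# As deepfakes become cheap, the chance of an equally convincing video of an event
# that never happened (p_video_given_no_event) rises, and the information drops.
for p_fake in (0.0001, 0.001, 0.01, 0.1):
    bits = information_carried(p_event, p_video_given_event, p_fake)
    print(f"P(video | no event) = {p_fake:<6} -> information carried ≈ {bits:.2f} bits")
```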


 

・2020.06.23 DeepFake Video Detection: A Time-Distributed Approach by Amritpal Singh, Amanpreet Singh Saimbhi, Navjot Singh & Mamta Mittal

・[PDF]

Abstract

Recent developments in machine learning algorithms have led to the generation of forged videos having remarkable quality, which are indistinguishable from real videos. This can fatally affect the way in which one perceives the information available digitally. Thus, this paper aims to efficiently and holistically detect manipulated videos generated using DeepFake, which is the most effective deep learning-powered technique developed so far by the researchers. Arduous efforts have been put into detecting forgery in still images, but the authors leveraged the spatio-temporal features of the videos by taking sequences of frames as input to the model. Furthermore, the authors have proposed an architecture which took advantage of lower-level features around regions of interest as well as discrepancies across multiple frames. Experiments have been performed on the Deep Fake Detection Challenge dataset, 470 GB in size, and it has been observed that the proposed approach yielded a test accuracy score of 97.6%.
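Judging from the abstract, the overall shape of such a model is a shared convolutional feature extractor applied to every frame (the "time-distributed" part) followed by a recurrent layer that aggregates discrepancies across frames into a real-vs-fake score. The sketch below shows that general pattern in PyTorch; the layer sizes and names are my own assumptions and do not reproduce the authors' architecture.

```python
# Minimal sketch of a time-distributed detector: a shared per-frame CNN followed by
# an LSTM over the frame sequence. Sizes and names are assumptions, not the paper's model.
import torch
import torch.nn as nn

class TimeDistributedDetector(nn.Module):
    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # Shared CNN applied independently to every frame (time-distributed).
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feature_dim),
        )
        # LSTM aggregates per-frame features to capture cross-frame discrepancies.
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # single logit: real vs. fake

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.frame_cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (hidden, _) = self.lstm(feats)
        return torch.sigmoid(self.head(hidden[-1]))  # probability that the clip is fake

model = TimeDistributedDetector()
clip = torch.rand(2, 10, 3, 64, 64)  # 2 clips of 10 frames, 64x64 RGB face crops
print(model(clip).shape)             # torch.Size([2, 1])
```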

 


 

 
