{"id":3469,"date":"2021-04-13T16:50:58","date_gmt":"2021-04-13T20:50:58","guid":{"rendered":"https:\/\/wpmanstage.com\/crim\/?p=3469"},"modified":"2021-08-26T15:25:35","modified_gmt":"2021-08-26T19:25:35","slug":"the-growing-threat-of-deepfake","status":"publish","type":"post","link":"https:\/\/www.crim.ca\/en\/the-growing-threat-of-deepfake\/","title":{"rendered":"The growing threat of deepfake"},"content":{"rendered":"<p>Powered by the latest technological advancement in artificial intelligence and machine learning, deepfakes offer automated procedures to create fake content that is harder and harder for human observers to detect\u00b9.<\/p>\n<p>The technique was developed in 2016 in the research community and gained public attention when fake X-rated videos putting in display public figures appeared on Reddit. Any digital content disseminated online can be deepfaked. You guessed it, deepfake is often done with malicious intent to cause harm to a public figure. The danger with deepfake goes beyond X-rated videos.<\/p>\n<p>The video shown below, illustrating two toddlers that are visibly friends, became viral on social media, disseminating a positive message of tolerance and friendship. However, over the course of the last US presidential campaign, it was tampered in a way that showed one toddler running\u00a0\u00a0away from the other. The new video was titled <em>Terrified toddler runs from racist baby<\/em> and shared by President Trump, sending a message of fear and intolerance.<\/p>\n<h6><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter size-full wp-image-3474\" src=\"https:\/\/www.crim.ca\/\/wp-content\/uploads\/2021\/04\/toddlersembracing.jpeg\" alt=\"Stillshot of video showing toddlers embracing\" width=\"318\" height=\"159\" srcset=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2021\/04\/toddlersembracing.jpeg 318w, https:\/\/www.crim.ca\/wp-content\/uploads\/2021\/04\/toddlersembracing-300x150.jpeg 300w\" sizes=\"(max-width: 318px) 100vw, 318px\" \/><\/h6>\n<h6 style=\"text-align: center;\">Original video on CNN showing two toddlers that are visibly friends.*<\/h6>\n<p><img decoding=\"async\" class=\"aligncenter size-full wp-image-3476\" style=\"color: #333333; font-style: normal; font-weight: 300;\" src=\"https:\/\/www.crim.ca\/\/wp-content\/uploads\/2021\/04\/toddlerfleeing.jpeg\" alt=\"Stillshot of video showing toddlers fleeing one another\" width=\"282\" height=\"178\" \/><\/p>\n<h6 style=\"text-align: center;\">Tampered video showing one toddler running away from the other.*<\/h6>\n<p>In 2019, there were 15,000\u00b2 deepfake videos in circulation online, an 83% increase in less than a year! This trend is still going strong as deepfake software hit mainstream.<\/p>\n<p>Deepfake can include adding, editing or removing objects from an original video or image. Deepfake works especially well with human faces as they all share the same features (eyes, nose, mouth, etc.). The algorithm \u201clearns\u201d the original face and from there, one of two things can happen: the face from a source photo is swapped onto a face appearing in a target photo, often that of a public figure; or the face of a public figure is rigged in a way that the algorithm makes it say things that are out of character. As this technology\u00a0grows in sophistication,\u00a0experts fear that\u00a0it may become increasingly difficult to tell the difference between real and fake\u00a0images.<\/p>\n<blockquote><p>These counterfeit images and videos may be damageable to society and individuals. 
> These counterfeit images and videos can be damaging to society and to individuals. They can be used for defamation and extortion.
>
> – Mohamed Dahmane, researcher in computer vision at the Computer Research Institute of Montréal (CRIM)

In 2017, Canada's **Department of National Defence (DND)** launched the [Innovation for Defence Excellence and Security (IDEaS) program](https://www.canada.ca/en/department-national-defence/programs/defence-ideas.html), a $1.6 billion investment over 20 years. To foster collaboration between innovators and provide opportunities to work with government, the program issues technological challenges to research organizations and businesses.

CRIM's experts chose the *Verification of full motion video integrity* challenge. The desired outcome is a suite of AI-based tools and methods for detecting tampered videos in circulation and for telling fake images from real ones.

### The Fake Detector Prototype

The prototype and its plugins were built using a novel dataset of 10,765 generated tampered images**. The semantic splicing plugin, developed by researcher Mohamed Dahmane, classifies the pixels in each image and detects which ones have been tampered with (a simplified sketch of this kind of per-pixel classification follows Figure 1).

![CRIM detection flow chart](https://www.crim.ca/wp-content/uploads/2021/04/diagramme-deepfake.jpeg)

Figure 1. CRIM Detection Flow Chart
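The article does not detail the inner workings of the semantic splicing plugin, so the sketch below is a hypothetical illustration of per-pixel tamper classification in general: a small fully convolutional network that outputs, for every pixel, the probability that it was spliced in from another source. All names and layer choices are assumptions, not CRIM's implementation.

```python
# Hypothetical per-pixel tampering detector, for illustration only: maps an
# RGB image to a probability map of the same height and width, where each
# value is the estimated probability that the pixel was tampered with.
import torch
import torch.nn as nn

class TamperSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),             # one logit per pixel
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))    # tampering probability map

model = TamperSegmenter()
image = torch.rand(1, 3, 256, 256)           # stand-in for an image under analysis
prob_map = model(image)
tamper_mask = prob_map > 0.5                 # pixels flagged as tampered
print(tamper_mask.shape)                     # torch.Size([1, 1, 256, 256])
```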
In the example below, the picture shows a large aircraft to which a fake plane was added. The goal was to determine whether the semantic splicing plugin could detect the pixels of the foreign object and deduce that the image was forged.

![Original image of the aircraft with a fake plane added](https://www.crim.ca/wp-content/uploads/2021/04/imagesavion-2-Copie-e1618340691946.png)

Figure 2. The original image, to which a fake plane was added

As shown below, the semantic splicing plugin detected the pixel contour of the small plane with a high Intersection over Union (IoU)*** of 77%, meaning that the overlap between the region it flagged and the region manually annotated by experts covered 77% of their combined (union) area.

![Detection result highlighting the pixels of the fake plane](https://www.crim.ca/wp-content/uploads/2021/04/imagesavion-2-1024x358.png)

Figure 3. The CRIM semantic splicing plugin detects the pixels of the fake plane.
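For reference, Intersection over Union is simply the area of overlap between the predicted tamper mask and the expert-annotated mask, divided by the area of their union. The snippet below computes it on made-up toy masks; it is the generic definition of the metric, not code from the CRIM prototype.

```python
# IoU between a predicted binary tamper mask and the ground-truth mask:
# area of intersection divided by area of union. Masks here are toy data.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True             # 16 pixels flagged as tampered by the experts
pred = np.zeros((8, 8), dtype=bool)
pred[3:6, 2:6] = True              # the detector misses the top row of the region
print(round(iou(pred, truth), 2))  # 0.75 -- in the same range as the 77% above
```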
In short, the CRIM deep learning solution has significantly helped detect rigged or tampered pixels in images.

Nevertheless, the fight against deepfakes is far from over. Every detection solution is met with greater sophistication from deepfake software makers. "It's a game of cat and mouse," concludes Mohamed Dahmane.

Time and effort must be invested continuously to improve the detection model. With the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), CRIM will pursue research on deepfake detection, not only in the context of fake news but also in the judicial sphere. In fact, Mohamed Dahmane hopes that one day, algorithms developed by CRIM can be used to certify the authenticity of digital content used in a court of law. Stay tuned!

---

¹ https://www.sciencedirect.com/science/article/abs/pii/S0007681319301600?via%3Dihub
² https://www.forbes.com/sites/johnbbrandon/2019/10/08/there-are-now-15000-deepfake-videos-on-social-media-yes-you-should-worry/?sh=55bff1023750
\* https://www.cnn.com/2020/06/18/business/trump-video-twitter-manipulated-media/index.html (screenshot)
\** Presented at the 27th European Signal Processing Conference (2019): https://ieeexplore.ieee.org/document/8903181
\*** An evaluation metric used to measure the accuracy of an object detector on a particular dataset.