Taking on Deep Fakes with Machine Learning

Have you seen the Mark Zuckerberg video where he talks about Facebook using stolen user data? Or the one of Barack Obama insulting Donald Trump? Or, perhaps, the more recent video of Richard Nixon announcing a fatal disaster during NASA’s Apollo 11 mission?

All these videos have one thing in common. They are fake.

Each video described above was created using deep fake technology, which lowers the technical barriers to creating realistic, albeit fake, footage designed to deceive viewers.

Data Machines Corp. is working to help put an end to this spread of disinformation. As a leader in the machine learning and artificial intelligence space, DMC is the system integrator for the Defense Advanced Research Projects Agency (DARPA) Media Forensics program. Known as MediFor, the program uses machine learning and artificial intelligence to identify images and videos that have been manipulated from their original form. The program relies on world-class researchers, who have developed technologies that can assess the integrity of images and videos in a media forensics database.

Today, the majority of media consumed contains visual components, making this technology more important than ever as the U.S. Government and its allies work to curb manipulated media.

“Developing technical solutions to combat the rapid spread of global disinformation is an absolutely critical piece to maintaining stable democracies,” said Nick Strocchia, DMC Project Manager for MediFor. “While MediFor and many systems like it are first generation, the importance of generating awareness for decision-makers and the public of these technologies is vital.”

The MediFor system can identify several types of image manipulations, from photoshopped images to forged checks. The system uses more than 90 different detection methods that have been trained against a database of 50 million images and videos to assess the validity of an image. Once the image is run through the MediFor system, the results from all of the detection methods are combined into a “fused confidence score,” which helps the user quickly determine whether manipulation has occurred. Anything with a score under 99 may have been manipulated.
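
To make the fusion idea concrete, here is a minimal sketch of combining per-detector scores into a single fused confidence and applying the under-99 flag described above. The detector names, weights, and weighted-average scheme are illustrative assumptions, not MediFor’s actual fusion algorithm:

```python
# Illustrative score fusion. The detector names, weights, and averaging
# scheme below are assumptions for demonstration, not MediFor's method.

def fuse_confidence_scores(detector_scores, weights=None):
    """Combine per-detector integrity scores (0-100, higher = more likely
    authentic) into one fused confidence score."""
    if not detector_scores:
        raise ValueError("need at least one detector score")
    if weights is None:
        # Default: weight every detector equally.
        weights = {name: 1.0 for name in detector_scores}
    total_weight = sum(weights[name] for name in detector_scores)
    return sum(score * weights[name]
               for name, score in detector_scores.items()) / total_weight

def may_be_manipulated(fused_score, threshold=99):
    """Per the article, anything scoring under 99 may have been manipulated."""
    return fused_score < threshold

# Hypothetical detector outputs for one image.
scores = {"splice_detector": 99.6,
          "compression_detector": 97.2,
          "metadata_check": 100.0}
fused = fuse_confidence_scores(scores)
print(f"fused confidence: {fused:.1f}, flag for review: {may_be_manipulated(fused)}")
```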

Left: A potentially manipulated image of Kim Jong Un watching a missile launch. Right: Heat map version highlighting the parts of the image that may have been edited.

“The fused confidence score is the crux of the MediFor platform and creates a world of possibilities for end users. For instance, the simple sort function in our console enables users to quickly triage a huge batch of data by score to sort out potentially manipulated media with a low score from unmanipulated media with a high score. From there, users are able to dive into individual images, videos, and photos to examine each analytic and see where manipulations have occurred,” said Strocchia.
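
The triage workflow Strocchia describes amounts to sorting a batch by fused score so the most suspect items surface first. The sketch below shows that idea; the media records and field names are hypothetical and do not reflect the MediFor console’s actual interface:

```python
# Hypothetical batch triage by fused confidence score; file names and
# fields are illustrative, not the MediFor console's API.

media_batch = [
    {"file": "launch_photo.jpg", "fused_score": 62.4},
    {"file": "press_briefing.mp4", "fused_score": 99.7},
    {"file": "crowd_shot.png", "fused_score": 88.1},
]

# Sort ascending so the lowest-scoring (most suspect) items come first.
triaged = sorted(media_batch, key=lambda m: m["fused_score"])

for item in triaged:
    status = "review" if item["fused_score"] < 99 else "likely unmanipulated"
    print(f"{item['file']:>20}  {item['fused_score']:5.1f}  {status}")
```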

DMC began working on this program with DARPA in 2016, and MediFor’s capabilities are now ready to be released. Later this summer, DARPA will host a virtual demonstration to showcase the progress on this initiative. Stay tuned for more information on MediFor and DMC’s involvement in the program, and learn more about our capabilities here.
