‘Is it real or not?’: Jigsaw introduces platform that detects manipulated images


Jigsaw — a unit within Google that anticipates and addresses emerging cyber threats — launched Assembler, a platform that can detect image manipulation, according to a blog post on Medium.

The development of the experimental platform began in 2016, when Jigsaw joined with researchers and academics to create a technology-based approach to detecting certain aspects of disinformation campaigns and to streamline the image-debunking process for fact-checkers and journalists, according to Jigsaw’s blog post.

Assembler combines a number of image manipulation detectors, developed by multiple academic teams, into a single tool, according to the blog post. The post also states that experts at UC Berkeley, among others, contributed to the detection models built into the platform.

Before the platform was developed, fact-checkers and journalists relied on a time-consuming, error-prone process that depended on a number of separate tools and methods to verify the authenticity of images, according to the blog post. It states that the platform will help streamline that process.

“The idea that we can identify pictures that are not real or altered is absolutely important,” said Ken Light, professor of photojournalism at UC Berkeley Graduate School of Journalism.

Light experienced the consequences of image manipulation firsthand during the 2004 presidential election, when an altered version of one of his own photographs circulated widely.

A photograph Light had taken of John Kerry was manipulated by someone else to make it appear that the American politician was standing next to Jane Fonda as she spoke at an anti-Vietnam War protest, Light said.

Even after the photograph was exposed as a forgery days later, Light said, some individuals still believed the image was real.

“There were people who still believed it was not a fake, it was real,” Light said. “It was my word against those people who wanted to believe that the altered picture was true. And it was pretty scary.”

The platform will test two new detectors simultaneously, according to the blog post. The StyleGAN detector will distinguish real images from “deepfake” images produced by the StyleGAN architecture, while the ensemble model will analyze how many times an image has been manipulated, according to the blog post.

Edward Wasserman, professor of journalism and dean of the UC Berkeley Graduate School of Journalism, hopes that emerging technology will not only identify altered images but also find a way to prevent their circulation in the first place.

“There is a larger question of policy as much as technology,” Wasserman said. “Once we have images that are being distributed across a number of platforms and later they are labeled as false or counterfeit, the damage is done.”

Light added that he will likely mention the tool to his photojournalism graduate students.

According to Light, news staffs and working journalists who vet pictures taken by citizen journalists will find the tool more useful than he or his students will.

“Is it real or not?” Light said. “That is what’s important, the cornerstone of journalism, to seek the truth.”

Contact Olivia González Britt at [email protected] and follow her on Twitter at @Oliviagbritt.