The internet is awash in fake news, and manipulated imagery created in Photoshop is one of its primary sources.
Photoshop already includes image-recognition tools that prevent scans or photos of certain banknotes from being opened at all. A new detection tool has been designed with the hyper-specific task of spotting faces that have been subtly warped and manipulated with Photoshop’s own Face Aware Liquify tool.
To make the tool reliable and accurate, Adobe’s researchers collaborated with colleagues at UC Berkeley to train a neural network on pairs of before-and-after face images that had been automatically warped using the Face Aware Liquify tool, along with a set of headshots that had been edited by an actual artist using the same tool.
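The core idea of that training setup is simple: pair each original image with its warped counterpart, label them 0 and 1, and fit a binary classifier. The sketch below illustrates that framing only; it is a toy stand-in, not Adobe’s method. The 4-number feature vectors, the simulated "warp" shift, and the plain logistic-regression detector are all assumptions made so the example runs with nothing but the standard library, where the real work uses a convolutional network over image data.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for image features: each "image" is a 4-number
# vector, and a "warp" nudges those numbers by a small random amount.
def make_pair():
    original = [random.gauss(0.0, 1.0) for _ in range(4)]
    warped = [x + random.gauss(0.6, 0.2) for x in original]  # simulated warp
    return original, warped

# Labeled dataset of before (label 0) and after (label 1) examples.
data = []
for _ in range(500):
    orig, warp = make_pair()
    data.append((orig, 0))
    data.append((warp, 1))

# Toy detector: logistic regression trained with stochastic gradient
# descent, standing in for the neural network in the actual research.
w = [0.0] * 4
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of the log loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    """Return 1 if the detector thinks the example was warped, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The paired construction matters: because every warped example has an unwarped twin, the classifier is pushed to learn the signature of the edit itself rather than incidental differences between photos.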
The Adobe researchers first tested how adept humans were at spotting manipulated headshots by showing test subjects two images at a time and asking them to identify which of the pair had been edited. Much to their surprise, the human subjects could pick out the edited headshot only 53% of the time, barely better than chance. The neural network, by comparison, found the manipulated image 99% of the time. Furthermore, the detection tool could tell where and how a face had been warped, and even undo those changes and revert the photo to its original state. These are promising results, but the neural network has a long way to go before it could be unleashed on the billions of photos floating around the internet right now.
Given how far the capabilities of neural networks have improved, even over just the last year, detecting and verifying manipulated images and setting the record straight once and for all no longer seems out of reach.