Researchers from Rutgers University have developed a cutting-edge process for removing precipitation from photographs. A paper uploaded to arXiv, Cornell's preprint server, on January 21 explores the use of Generative Adversarial Networks (GANs) for rain removal. GANs are a form of adversarial machine learning in which two networks learn by competing against each other, and they’re used for the “generation of images that humans find visually realistic.”
Rain poses a challenge for machine vision: While computers can now pick out faces and other details from images, it’s much harder when a photo is obscured by precipitation.
“We work in computer vision science, and one of the problems we often come across is having really bad quality images,” said Dr. Vishal M. Patel, a professor at Rutgers and an author of the paper.
One of the hardest issues the researchers had to solve was ensuring that the de-rained image is indistinguishable from an unmodified clear image, so it can be used in detection algorithms. “We require the de-rained image to be good in the sense that it should be able to do computer vision,” said Patel. “It should be able to identify objects, it should be able to detect objects, which is essentially not there in the traditional algorithms. We require our algorithm to have this constraint while de-raining.”
Before the use of GANs, scientists de-rained images through other methods. A 2015 paper from South China University of Technology and National University of Singapore discusses using discriminative sparse coding to remove rain, and a 2012 paper from Taiwan considers a learning-based framework for single image de-raining. But the GAN method goes beyond previous de-raining attempts. “Traditionally this problem is viewed as a problem of image restoration,” said Patel. “The idea right now, you’re given this degraded image and you want to get the clean image out. This problem is very typical in image processing. We are putting some computer vision restraints on photo restoration. The results we get are pretty much state-of-the-art.”
The use of GANs is what distinguishes Patel’s de-raining algorithm from traditional restorative attempts. “These networks are playing a game,” he said. “One network is trying to get a good result. The other is trying to spoof the results, and it’s trying to determine whether it’s correct or not. So it’s basically game theory going on here.”
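The game Patel describes can be sketched in a few lines of code. The toy below is not the paper's de-raining architecture (which uses deep convolutional networks on images); it is a minimal illustrative GAN on 1-D numbers, with all names and parameters invented for the example. The "real" data are samples from a Gaussian centered at 4; a one-parameter generator tries to shift random noise until it matches that distribution, while a logistic-regression discriminator tries to tell real samples from generated ones.

```python
import numpy as np

# Minimal illustrative GAN (hypothetical toy, not the paper's method).
# Real data: samples from N(4, 1). Generator: G(z) = z + b, learning the
# shift b. Discriminator: D(x) = sigmoid(w*x + c), a logistic classifier.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 4.0
w, c = 0.1, 0.0   # discriminator parameters
b = 0.0           # generator parameter
lr = 0.05

for step in range(4000):
    real = rng.normal(REAL_MEAN, 1.0, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    fake = z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (hand-derived gradients of the standard cross-entropy GAN loss).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: try to "spoof" the discriminator, i.e. push
    # D(fake) toward 1 (the non-saturating generator loss).
    d_fake = sigmoid(w * (z + b) + c)
    grad_b = np.mean(-(1 - d_fake) * w)
    b -= lr * grad_b

print(f"learned shift b = {b:.2f} (real data mean = {REAL_MEAN})")
```

As training proceeds, the learned shift drifts toward the real data's mean: once generated samples look like real ones, the discriminator can do no better than chance, which is the equilibrium of the game.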
Photographs, as John Berger said, are there to remind us of what we forget. The use of photography has changed, though, and a photograph can be used as a means to an end. The new Rutgers paper says that “severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing.” An image is “rendered useless” by the presence of rain, says the abstract, and the rain must be cleared up for accuracy. An unmodified picture of a rainy scene is less useful than a picture with the rain artificially removed.
Dystopian implications aside, learning how to adequately remove rain from images has practical applications: “It has been widely acknowledged that unpredictable impairments such as illumination, noise, and severe weather conditions (i.e. rain, snow, and fog) adversely affect the performance of many computer vision algorithms such as detection, classification, and tracking. This is primarily due to the fact that these algorithms are trained using images that are captured under well-controlled conditions.” In other words, if you’re planning to commit a crime, wait until it’s raining. Patel also said the technique could improve the accuracy of aerial imaging from planes and drones.
Visual realism can’t hold the same weight as the truly real. In an age where lies become “alt-facts” used to deceive and oppress, and politicians lie about easily observed events, maybe a system of modifying images to make them more truthful is fitting.