A Novel Conditional Wasserstein Deep Convolutional Generative Adversarial Network

Abstract

Generative Adversarial Networks (GANs) and their many variants have been used not only for adversarial purposes but also to extend the learning coverage of various AI/ML models. Most of these variants are unconditional and offer little control over their outputs. Conditional GANs (CGANs) can control their outputs by conditioning the generator and discriminator on an auxiliary variable (such as class labels or text descriptions). However, like unconditional basic GANs (where the discriminator is a classifier), CGANs suffer from several drawbacks, including unstable training, non-convergence, and mode collapse. DCGANs, WGANs, and MMDGANs significantly stabilize GAN training but offer no control over their outputs. We developed a novel conditional Wasserstein GAN model, called CWGAN (a.k.a. RD-GAN, named after the initials of the authors' surnames), that stabilizes GAN training by replacing the relatively unstable JS divergence with the Wasserstein-1 distance while maintaining better control over its outputs. We have shown that the CWGAN can produce optimal generators and discriminators irrespective of the original and input noise data distributions. We present a detailed formulation of the CWGAN and highlight its salient features with proper justification. We show that the CWGAN has a wide variety of adversarial applications, including preparing fake images through a CWGAN-based deep generative hashing function and generating highly accurate user mouse trajectories for fooling underlying mouse dynamics authentication (MDA) systems. We conducted detailed experiments on well-known benchmark datasets in support of our claims.
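The core mechanism described above can be illustrated with a minimal sketch: the critic scores a sample jointly with its condition (here via simple label concatenation, one common CGAN conditioning scheme), and is trained on the Wasserstein-style objective E[f(real | y)] − E[f(fake | y)] rather than a JS-based classification loss. This is a toy numpy illustration with a linear critic, not the authors' CWGAN implementation; a real critic is a deep network constrained to be (approximately) 1-Lipschitz, e.g. by weight clipping or a gradient penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(labels, num_classes):
    """Encode integer class labels as one-hot conditioning vectors."""
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

def critic(x, y, w):
    # Toy linear critic f(x | y) = w . [x; y]: the sample and its
    # condition are scored jointly, which is what makes it conditional.
    return np.concatenate([x, y], axis=1) @ w

def critic_loss(real_x, fake_x, y, w):
    # The critic maximizes E[f(real|y)] - E[f(fake|y)] (an estimate of the
    # Wasserstein-1 distance under a Lipschitz constraint), so we return
    # the negative as a loss to minimize.
    return -(critic(real_x, y, w).mean() - critic(fake_x, y, w).mean())

num_classes, dim, batch = 3, 4, 8
labels = rng.integers(0, num_classes, size=batch)
y = one_hot(labels, num_classes)
real = rng.normal(size=(batch, dim))   # stand-in for real data
fake = rng.normal(size=(batch, dim))   # stand-in for generator output G(z|y)
w = rng.normal(size=(dim + num_classes,))

loss = critic_loss(real, fake, y, w)
print(loss)
```

When the fake batch matches the real batch exactly, the loss is zero, reflecting that the estimated distance between identical distributions vanishes; training the generator amounts to minimizing −E[f(G(z|y) | y)] against this critic.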

Publication Title

IEEE Transactions on Artificial Intelligence
