What are Generative Adversarial Networks?

Generative Adversarial Networks (GANs) are a class of neural networks designed to generate new data that follows a given data distribution; in other words, they mimic a population distribution. They were first proposed in 2014 by deep learning researcher Ian Goodfellow and his colleagues in the following research paper (https://lnkd.in/gAErdSg).

How do GANs work?

A generative adversarial network consists of two neural networks: a generator trained to produce data, and a discriminator trained to distinguish fake data from real data. The adversarial relationship is established by training the generator to produce samples that the discriminator cannot tell apart from real data.

The discriminator is trained to reject fake data, which in turn pushes the generator to make its output look as realistic as possible. This training scheme is borrowed from game theory: at a certain point, a Nash equilibrium is reached when neither adversary has a better strategy left to win more points in the game.
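As a hedged sketch, the two-player objective from the original GAN formulation, V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))], can be written down directly. The tiny NumPy helpers below are illustrative only and assume the discriminator's scores are already probabilities in (0, 1):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy the discriminator minimizes:
    it wants D(real) -> 1 and D(fake) -> 0."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: the generator wants the
    discriminator to score its fakes as real (D(fake) -> 1)."""
    return -np.mean(np.log(d_fake))

# An undecided discriminator (0.5 everywhere) corresponds to the Nash
# equilibrium of the minimax game: its loss settles at 2 * ln 2.
d_real = np.full(4, 0.5)
d_fake = np.full(4, 0.5)
print(discriminator_loss(d_real, d_fake))  # 2 * ln 2 ≈ 1.386
print(generator_loss(d_fake))              # ln 2 ≈ 0.693
```

At equilibrium, neither player can improve its loss by changing strategy alone, which is exactly the "run out of better strategies" point described above.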

What are the common business uses of it?

GANs are used to generate test data for applications such as search and commerce engines. Realistic-looking human images and videos can be generated without any real humans taking part in them.

What are the famous Generative Adversarial Network (GAN) architectures?

The so-called 'deep fake' apps have made several GAN-based architectures famous, including CycleGAN, StyleGAN, pixelRNN/CNN, and IsGAN.

NVLabs' StyleGAN generates impressive, realistic-looking pictures because its redesigned generator architecture improves on traditional GAN networks. The PixelCNN++ model from OpenAI belongs to a related class of autoregressive generative models (rather than adversarial ones) that improves likelihood calculation and offers more stable training.
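Unlike a GAN, a PixelCNN-style model assigns an explicit likelihood by factorizing the joint probability of an image over pixels with the chain rule, p(x) = ∏ p(x_i | x_<i). A minimal sketch, using made-up conditional probabilities standing in for a trained network's outputs:

```python
import math

# Hypothetical per-pixel conditional probabilities p(x_i | x_<i) that a
# trained PixelCNN-style network would output for one tiny "image".
conditionals = [0.9, 0.8, 0.95, 0.7]

# Log-likelihood of the whole image is the sum of per-pixel log-probs,
# which is numerically safer than multiplying many small probabilities.
log_likelihood = sum(math.log(p) for p in conditionals)

# By the chain rule this equals the product of the conditionals.
joint_probability = math.exp(log_likelihood)
print(joint_probability)  # same value as 0.9 * 0.8 * 0.95 * 0.7
```

This explicit likelihood is what makes training such models more stable: they optimize log-likelihood directly instead of balancing two adversaries.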

CycleGAN translates images from one domain into another, so that source images of zebras can be mapped to horses (and vice versa), or winter images of Yosemite Valley can be generated from a set of summer images.
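CycleGAN's distinctive extra term is the cycle-consistency loss: translating an image to the other domain and back should reproduce the original, L_cyc = E[|F(G(x)) − x|] + E[|G(F(y)) − y|]. A toy NumPy sketch, with hypothetical stand-ins for the two learned translators:

```python
import numpy as np

# Hypothetical stand-ins for the learned translators:
# G maps domain X -> Y (e.g. horse -> zebra), F maps Y -> X.
def G(x):
    return x + 0.1  # toy "translation"

def F(y):
    return y - 0.1  # toy inverse translation

def cycle_consistency_loss(x_batch, y_batch):
    """L1 penalty for failing to reconstruct inputs after a round trip."""
    forward = np.mean(np.abs(F(G(x_batch)) - x_batch))
    backward = np.mean(np.abs(G(F(y_batch)) - y_batch))
    return forward + backward

x = np.array([0.2, 0.4, 0.6])
y = np.array([0.5, 0.7, 0.9])
print(cycle_consistency_loss(x, y))  # ≈ 0: these toy maps invert each other
```

The real networks are convolutional image-to-image models, but the loss has exactly this shape; it is what lets CycleGAN learn the mapping without paired zebra/horse examples.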

What About ChatGPT and Super Human AI models?

Radford et al. introduced the Generative Pre-Training (GPT) model in 2018. Built on the Transformer architecture, it could perform downstream tasks after fine-tuning; from a traditional GAN perspective, those downstream tasks can loosely be viewed as playing the discriminator's role. GPT-2 followed with 1.5 billion trainable parameters, trained on about 8 million documents drawn from roughly 45 million web pages linked in upvoted Reddit posts.

In 2020, OpenAI released the 175-billion-parameter GPT-3 model, trained on roughly 570 GB of filtered plain text, including English Wikipedia and hundreds of thousands of books. In 2022, OpenAI launched the InstructGPT model, which combines supervised learning with reinforcement learning from human feedback (RLHF) to create a model that can follow instructions from a human user.

In the same year, they added conversational training data to InstructGPT and launched it as ChatGPT. All of these models are generative and can produce songs, poetry, code, and much more.


The High Plains Computing (HPC) team has deep Kubernetes implementation and observability setup experience. Please reach out for any further information.