Abstract: Text generation techniques underpin language modeling, machine translation, summarization, and captioning. A mainstream approach models text as a sequence with a recurrent neural network. This paper surveys recent text generation techniques based on GANs, SeqGAN, and RNNs, focusing on a model that uses a Generative Adversarial Net (GAN) to produce realistic text. Rather than a standard GAN, the model combines a Variational Auto-Encoder (VAE) with a generative adversarial net. The use of high-level latent random variables helps the model learn the data distribution and addresses the problem that a generative adversarial net tends to emit very similar data. In the resulting VGAN model, the generative model is composed of a recurrent neural network and a VAE, the discriminative model is a convolutional neural network, and the generator is trained via the policy gradient method. VGAN is applied to the task of text generation and compared with other recent neural network based models, such as a recurrent neural network language model and SeqGAN, with performance evaluated by negative log-likelihood and the BLEU score. Experiments are conducted on three benchmark datasets, and the results demonstrate that the model outperforms the previous models.
Keywords: Generative Adversarial Net, Variational Auto-Encoder, VGAN, Text Generation
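The abstract notes that the generator is trained via the policy gradient method, since discrete token sampling is not differentiable. The toy sketch below illustrates that idea only, under simplifying assumptions not taken from the paper: a one-token "generator" with a categorical policy, a hypothetical stand-in reward function in place of the CNN discriminator, and a plain REINFORCE update.

```python
import math
import random

random.seed(0)

# Toy vocabulary; a real model would generate token sequences.
VOCAB = ["real", "fake", "noise"]

def discriminator_reward(token):
    # Hypothetical stand-in for a CNN discriminator's score:
    # probability that the generated text looks real.
    return 1.0 if token == "real" else 0.1

# Generator policy: softmax over per-token logits.
logits = [0.0, 0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample(probs):
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

LR = 0.5
for _ in range(200):
    probs = softmax(logits)
    i = sample(probs)                       # sample a token (non-differentiable step)
    reward = discriminator_reward(VOCAB[i]) # discriminator score as reward
    # REINFORCE: grad of log pi(i) w.r.t. logits is one_hot(i) - probs,
    # scaled by the reward from the discriminator.
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += LR * reward * grad

final = softmax(logits)
print(VOCAB[final.index(max(final))])  # the policy should converge toward "real"
```

Because the higher-reward token receives larger updates whenever it is sampled, the policy shifts its probability mass toward text the (stand-in) discriminator scores as real, which is the core of the adversarial training loop the surveyed model builds on.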