Abstract: Generating coherent and fluent text passages remains difficult, even with neural networks. A story generation system using recurrent neural networks can be a solution to this. A large dataset of 300K human-written stories paired with writing prompts is used for hierarchical story generation. The system first generates a prompt and then transforms it into a passage that ideally captures the intent of the prompt. The prompt, generated with a convolutional language model, gives a general sketch of what the story should be. A seq2seq model is then used to generate a story that follows the prompt. A novel form of model fusion improves the relevance of the story to the prompt, and a new gated multi-scale self-attention mechanism models long-range context, improving the coherence and structure of the generated passage. Conditioning on the prompt also makes it easier for the story to remain consistent and to have structure at a level beyond the single phrase. A gated convolutional neural network (GCNN) is used as the language model, and a seq2seq model using an LSTM with an attention-based decoder generates the story. Pre-training techniques are used to improve accuracy. With these techniques, the generation of coherent and fluent passages is improved efficiently.

Keywords: Seq2seq Model, Hierarchical Story Generation, Gated Convolutional Language Model, Gated Multi-Scale Self-Attention Mechanism
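
The following is a minimal sketch of the two-stage hierarchical pipeline outlined in the abstract: a language model first samples a writing prompt, and a seq2seq model then generates a story conditioned on that prompt. The module names (PromptLM, StorySeq2Seq), the use of LSTMs for both stages, and all hyperparameters are illustrative assumptions rather than the paper's implementation; the gated convolutional language model, attention, fusion, and gated multi-scale self-attention are omitted for brevity.

```python
import torch
import torch.nn as nn


class PromptLM(nn.Module):
    """Toy autoregressive language model standing in for the convolutional prompt model."""

    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def sample(self, bos_id, max_len=16):
        # Sample a prompt one token at a time from the model's distribution.
        tokens, state = [bos_id], None
        for _ in range(max_len):
            x = self.embed(torch.tensor([[tokens[-1]]]))
            h, state = self.rnn(x, state)
            probs = torch.softmax(self.out(h[:, -1]), dim=-1)
            tokens.append(torch.multinomial(probs, 1).item())
        return tokens


class StorySeq2Seq(nn.Module):
    """Toy encoder-decoder standing in for the LSTM seq2seq story model."""

    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def generate(self, prompt_ids, bos_id, max_len=64):
        # Encode the prompt, then decode a story conditioned on the encoder state.
        _, state = self.encoder(self.embed(torch.tensor([prompt_ids])))
        tokens = [bos_id]
        for _ in range(max_len):
            x = self.embed(torch.tensor([[tokens[-1]]]))
            h, state = self.decoder(x, state)
            probs = torch.softmax(self.out(h[:, -1]), dim=-1)
            tokens.append(torch.multinomial(probs, 1).item())
        return tokens


# Usage: first sample a prompt, then expand it into a longer story.
vocab_size, bos_id = 1000, 0
prompt = PromptLM(vocab_size).sample(bos_id)
story = StorySeq2Seq(vocab_size).generate(prompt, bos_id)
```

In the hierarchical setup, decomposing generation this way lets the prompt act as a high-level plan, so the second stage only has to realize a passage consistent with that plan rather than invent structure from scratch.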
