Volume 9, Issue 5, May 2020
This work is licensed under a Creative Commons Attribution 4.0 International License.
Everybody Dance Now
Abstract: This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, the model synthesizes a video of a novice target subject performing the same movements. Video-to-video translation is used, with pose as an intermediate representation. Poses are extracted from the source subject to convey the motion, and these extracted poses are then used to train an appearance mapping that renders the target subject. To obtain temporally coherent video results, the model predicts each frame conditioned on the previous one, so that two sequential frames remain consistent, and a separate pipeline for realistic face synthesis is presented. Although the method is quite simple, it produces surprisingly convincing results. This motivates us to also provide a forensics tool for fake detection, which can distinguish videos synthesized by our system from real data.
Keywords: Computer vision, pose detection, pose extraction, deep learning, cGAN
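The pipeline summarized in the abstract (pose extraction, pose normalization between the source and target bodies, then a learned pose-to-appearance mapping applied frame by frame) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the keypoint arrays, reference coordinates, and the stub generator are hypothetical, and in practice the generator would be a trained conditional GAN.

```python
import numpy as np

def normalize_pose(src_kp, src_ankle, src_head, tgt_ankle, tgt_head):
    """Globally rescale and translate source keypoints so the source
    skeleton's height and ground position match the target subject's.
    src_kp is an (N, 2) array of (x, y) image coordinates; the ankle
    and head arguments are reference y-coordinates for each subject."""
    scale = (tgt_ankle - tgt_head) / (src_ankle - src_head)
    translate = tgt_ankle - scale * src_ankle
    out = src_kp.astype(float).copy()
    out[:, 1] = scale * out[:, 1] + translate  # remap the vertical axis
    return out

def transfer_motion(source_frames_kp, generator, norm_params):
    """Render the target subject frame by frame: normalize each source
    pose, then pass it through a pose-to-image generator (a stub here),
    conditioning on the previous output frame for temporal coherence."""
    rendered = []
    prev = None
    for kp in source_frames_kp:
        pose = normalize_pose(kp, *norm_params)
        frame = generator(pose, prev)  # prev frame enforces smoothness
        rendered.append(frame)
        prev = frame
    return rendered
```

In this sketch the normalization is a single scale-and-translate on the y-axis computed from ankle and head positions, which keeps a tall source skeleton from being rendered at the wrong height on a shorter target subject.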
How to Cite:
[1] C Mounica Reddy, Pooja S R, Deepthi N V, Chandana P, Mr. Suhas S, "Everybody Dance Now," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2020.9540
