Abstract: Deep learning has been applied effectively to a wide range of challenging problems, from computer vision to big data analytics. Advances in deep learning, however, have also enabled software that poses risks to national security, privacy, and democracy. Deepfakes are one such recently developed deep learning-based application. Although manipulation of visual and auditory media is as old as media itself, the emergence of deepfakes marks a turning point in the creation of fake content: thanks to the latest advances in artificial intelligence and machine learning, deepfakes provide automated methods for producing fakes that are increasingly difficult for human observers to detect. The development of systems that can promptly identify and assess the integrity of digital visual media is therefore essential. Computer vision, a branch of computer science, has developed methods for both generating and detecting deepfakes, while the humanities and social sciences have examined their social and ethical ramifications. This study surveys the algorithms used to produce deepfakes.
Keywords: Deepfakes, Deepfake Generation and Detection, Generative Adversarial Network, Autoencoders.
DOI: 10.17148/IJARCCE.2023.124107