Abstract: Recent improvements in AI-driven content synthesis have made it possible to fabricate highly realistic human imagery and video sequences. These artificially produced visuals now resemble genuine recordings so closely that traditional forensic inspection struggles to recognize what has been digitally altered. This situation has amplified concerns about online authenticity, public trust, and the misuse of sensitive digital content. Because manual and traditional forensic checks no longer keep pace with advanced manipulation tools, dependable automated detection systems have become essential.
Models built on convolution-based learning are frequently employed for this purpose because of their ability to uncover the fine-grained visual abnormalities introduced during fabrication. This article reviews studies that apply such models to identifying altered still images, detecting AI-produced visuals, and analyzing videos using both spatial patterns and motion information. While these approaches show promising performance on curated datasets, they often struggle under real-world conditions. The review highlights current limitations, such as performance degradation under compression, dataset dependency, and weak generalization, and identifies the need for stronger multimodal and adversarially robust solutions.
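To illustrate the kind of fine-grained abnormality these convolutional detectors pick up on, here is a minimal, self-contained sketch. It is not the reviewed authors' method: it replaces a learned convolutional layer with one hand-crafted Laplacian-style high-pass kernel, a common starting point in multimedia forensics for suppressing image content and exposing local statistical discontinuities (for example, around a spliced region).

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution on a 2D list of floats (pure Python)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

# Laplacian-style high-pass kernel: near-zero response on smooth
# regions, large response at abrupt local changes.
HIGH_PASS = [
    [ 0.0, -1.0,  0.0],
    [-1.0,  4.0, -1.0],
    [ 0.0, -1.0,  0.0],
]

def residual_energy(image):
    """Mean absolute high-pass response: a crude anomaly score."""
    res = conv2d(image, HIGH_PASS)
    vals = [abs(v) for row in res for v in row]
    return sum(vals) / len(vals)

# A constant patch gives zero residual energy; the same patch with a
# pasted-in block of different intensity gives a clearly higher score.
smooth = [[5.0] * 6 for _ in range(6)]
spliced = [row[:] for row in smooth]
for i in range(3, 6):
    for j in range(3, 6):
        spliced[i][j] = 9.0

print(residual_energy(smooth))   # 0.0
print(residual_energy(spliced))  # positive: splice boundary detected
```

In a real CNN detector, many such filters are learned from data rather than fixed, and the resulting residual maps feed further convolutional layers and a classifier; this sketch only shows why convolution is a natural fit for the task.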
Keywords: Deepfake Detection, Convolutional Neural Networks, AI-Generated Images, Video Forgery Analysis, Multimedia Forensics, Digital Authenticity
DOI: 10.17148/IJARCCE.2026.15107
[1] Vijay Chakole, Akshita Lanjewar, Astha Jadhao, Pallavi Chikate, Mayuri Sawalakhe, "Deepfake Detection Using Convolutional Neural Networks (CNN)," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2026.15107