Abstract: The rapid growth of artificial intelligence (AI) and generative adversarial networks (GANs) has significantly contributed to the emergence of highly realistic deepfake media. This advancement poses serious challenges to digital trust, cybersecurity, and the authenticity of information. Deepfakes are increasingly misused for spreading misinformation, identity theft, online fraud, and cyberbullying, making traditional manual detection methods inefficient. To overcome these challenges, this project introduces DeepGuard AI, an advanced deepfake detection system designed to identify manipulated images and videos in real time. The system uses a convolutional neural network (CNN) combined with transfer learning, specifically leveraging the XceptionNet architecture, to achieve high detection accuracy. User-uploaded media passes through several preprocessing steps, including face detection, frame extraction, normalization, and alignment using MTCNN and OpenCV techniques. The trained model then performs binary classification to determine whether the input media is real or fake, providing a confidence score for improved transparency. For video analysis, frame-level predictions are aggregated using efficient algorithms to ensure consistent and reliable results. Additionally, the system features a web-based interface developed using Flask, enabling smooth user interaction, while REST APIs support seamless integration with other platforms. Overall, DeepGuard AI provides a scalable, efficient, and user-friendly solution for detecting deepfakes and strengthening the authenticity of digital media.
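The abstract states that frame-level predictions are aggregated into a video-level verdict but does not specify the algorithm. A minimal sketch of one common approach, averaging per-frame fake probabilities against a threshold, is shown below; the function name, the 0.5 threshold, and the confidence definition are illustrative assumptions, not details taken from the paper.

```python
from statistics import mean

def aggregate_frame_scores(frame_probs, threshold=0.5):
    """Aggregate per-frame fake probabilities into a video-level verdict.

    frame_probs: list of floats in [0, 1], each the model's predicted
    probability that a frame is fake (threshold 0.5 is an assumption).
    Returns (label, confidence), where confidence is the mean probability
    assigned to the chosen class.
    """
    if not frame_probs:
        raise ValueError("no frames to aggregate")
    avg_fake = mean(frame_probs)
    if avg_fake >= threshold:
        return "FAKE", avg_fake
    return "REAL", 1.0 - avg_fake

# A couple of noisy high-score frames do not flip a mostly-real video:
label, confidence = aggregate_frame_scores([0.1, 0.2, 0.15, 0.9, 0.05])
# label == "REAL"
```

Averaging smooths out isolated misclassified frames; a stricter alternative would be a majority vote over per-frame binary labels, trading robustness to noise for sensitivity to short manipulated segments.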
Published: 07-4-2026 · Issue: Vol. 26 No. 4 (2026) · Page Nos: 1597-1603 · Section: Articles · License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.