ISSN No:2250-3676 ----- Crossref DOI Prefix: 10.64771 ----- Impact Factor: 9.625
   Email: ijesatj@gmail.com

(Peer Reviewed, Referred & Indexed Journal)


    DETECTING AI-GENERATED IMAGES WITH CNN AND INTERPRETATION USING EXPLAINABLE AI

    MEKALA AKHIL SURIBABA, Y SRINIVAS RAJU

    Author

    ID: 2593

    DOI:

    Abstract:

    The rapid advancement of generative models has significantly increased the production of synthetic images, raising concerns about authenticity, misinformation, and digital fraud. AI-generated images, particularly realistic human faces, are increasingly difficult to distinguish from genuine ones using human perception alone. This study proposes a robust deep learning-based framework to detect and classify real versus AI-generated images using convolutional neural networks (CNNs), combined with explainable artificial intelligence (XAI) techniques for model interpretability. The proposed system utilizes the pre-trained DenseNet121 architecture for feature extraction and classification due to its efficient feature reuse and strong performance in image-related tasks. The model is trained on a large-scale dataset comprising real and fake facial images, ensuring diversity and generalization. Experimental results demonstrate that DenseNet121 achieves an accuracy of 94%, while further enhancement using the NASNet architecture improves performance to 98%, outperforming other evaluated models such as VGG16, VGG19, and Xception. To enhance transparency and trust in model predictions, the system integrates two widely used XAI techniques: Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME). Grad-CAM highlights the most influential regions in an image contributing to classification decisions, while LIME identifies the critical feature segments responsible for predictions. The combined use of these methods provides consistent and interpretable visual explanations, validating the model's decision-making process. Overall, the proposed framework not only achieves high accuracy in detecting AI-generated images but also ensures interpretability, making it suitable for real-world applications in digital forensics, media authentication, and cybersecurity.
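    The Grad-CAM step described above can be sketched in a few lines: per-channel importance weights are obtained by global-average-pooling the gradients of the target class score with respect to the last convolutional layer's feature maps, and the weighted, ReLU-rectified sum of those maps gives the heatmap. The sketch below is a minimal NumPy illustration of the published Grad-CAM formulation, not the authors' code; the array shapes and toy inputs are assumptions for demonstration only.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap.

    feature_maps: (H, W, K) activations of the last conv layer.
    gradients:    (H, W, K) gradients of the target class score
                  with respect to those activations.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # alpha_k: global-average-pool the gradients per channel
    # (channel importance weights)
    alphas = gradients.mean(axis=(0, 1))                    # shape (K,)
    # Weighted combination of the feature maps, then ReLU
    cam = np.maximum((feature_maps * alphas).sum(axis=-1), 0.0)
    # Normalize for visualization (guard against an all-zero map)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4x4 feature maps with 3 channels
rng = np.random.default_rng(0)
maps = rng.random((4, 4, 3))
grads = rng.random((4, 4, 3))
heatmap = grad_cam(maps, grads)
```

    In practice the feature maps and gradients would come from the trained DenseNet121 or NASNet model, and the heatmap would be upsampled to the input image size and overlaid on the face image to show which regions drove the real-versus-fake decision.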

    Published:

    08-04-2026

    Issue:

    Vol. 26 No. 4 (2026)


    Page Nos:

    1954-1963


    Section:

    Articles

    License:

    This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

    How to Cite

    MEKALA AKHIL SURIBABA, Y SRINIVAS RAJU, DETECTING AI-GENERATED IMAGES WITH CNN AND INTERPRETATION USING EXPLAINABLE AI, International Journal of Engineering Sciences and Advanced Technology, 26(4), 2026, pp. 1954-1963, ISSN No: 2250-3676.
