EXPLAINABLE AI FOR INTRUSION DETECTION SYSTEMS: LIME AND SHAP APPLICABILITY ON MULTI-LAYER PERCEPTRON
ID: 2598
Abstract: Intrusion Detection Systems (IDS) play a critical role in identifying malicious activity in network environments. While machine learning and deep learning models such as the Multi-Layer Perceptron (MLP) provide high prediction accuracy, they often act as black-box models, making it difficult to understand how predictions are made. This project proposes an Explainable Artificial Intelligence (XAI)-based IDS framework that integrates LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to interpret model decisions. The system is trained on intrusion detection datasets such as CICIDS, and multiple algorithms, including MLP, LSTM, TCN, XGBoost, and a Voting Classifier, are implemented to improve prediction accuracy; among these, the Voting Classifier achieves the highest accuracy. LIME provides local explanations for individual predictions, while SHAP provides both global and local feature-importance insights. Experimental results demonstrate that the proposed system not only achieves high accuracy but also enhances transparency and interpretability. The approach helps identify the features that contribute most to intrusion detection, improving trust and reliability in cybersecurity systems.
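The pipeline the abstract describes, training a soft-voting ensemble and then explaining its predictions with LIME and SHAP, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: synthetic data from make_classification stands in for the preprocessed CICIDS feature matrix, the placeholder feature names and all hyperparameters are assumptions, and the ensemble uses MLP, XGBoost, and logistic regression base learners (the paper's LSTM and TCN branches are omitted for brevity). SHAP's model-agnostic KernelExplainer is used here because the voting ensemble is not a tree model and only exposes predict_proba.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

# Synthetic stand-in for a preprocessed CICIDS flow-feature matrix
# (binary labels: 0 = benign, 1 = attack). Feature names are placeholders.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=10, random_state=0)
feature_names = [f"flow_feat_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Soft-voting ensemble; voting="soft" is required so the ensemble
# exposes predict_proba, which both LIME and KernelExplainer need.
clf = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32),
                              max_iter=300, random_state=0)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# LIME: local explanation for a single flow's prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["benign", "attack"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], clf.predict_proba, num_features=5)
print(lime_exp.as_list())  # top (feature, weight) pairs for this flow

# SHAP: model-agnostic KernelExplainer on the attack-class probability.
background = shap.sample(X_train, 50)  # small background set for speed
shap_explainer = shap.KernelExplainer(
    lambda data: clf.predict_proba(data)[:, 1], background)
shap_values = shap_explainer.shap_values(X_test[:10], nsamples=100)

# Global importance: mean |SHAP value| per feature over the explained set.
global_importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(global_importance)[::-1][:5]:
    print(feature_names[i], round(float(global_importance[i]), 4))
```

In this sketch the LIME output explains one prediction at a time (local), while averaging absolute SHAP values across many explained instances yields the global feature-importance ranking the abstract refers to.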
Published: 08-4-2026 | Issue: Vol. 26 No. 4 (2026) | Pages: 2000-2006 | Section: Articles
License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.