Abstract: Ensuring patient safety is a critical concern in healthcare, where errors or adverse events can have severe consequences. Traditional predictive models often provide limited insight into their decision-making processes, making it challenging for healthcare professionals to trust and act on their recommendations. Explainable Artificial Intelligence (XAI) offers a solution by providing transparent and interpretable models that highlight the reasoning behind predictions. This paper presents an XAI-based approach to enhance patient safety by analyzing clinical data to predict potential risks such as medication errors, adverse drug reactions, or hospital-acquired infections. By combining machine learning algorithms with interpretability techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), the system not only predicts safety risks but also explains the contributing factors behind each prediction. Experimental results on healthcare datasets demonstrate that the proposed system achieves high predictive accuracy while providing clear, actionable explanations for clinicians. This transparency improves trust, facilitates timely intervention, and supports informed decision-making, ultimately enhancing patient safety and care quality.
Keywords: Explainable AI, Patient Safety, Machine Learning, Deep Learning, Clinical Decision Support, Model Interpretability, Risk Prediction, Medical Error Prevention, Feature Importance, Trustworthy AI, Healthcare Analytics.
Published: 28-10-2025 | Issue: Vol. 25 No. 10 (2025) | Pages: 164-168 | Section: Articles
License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
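To illustrate the kind of explanation pipeline the abstract describes (a risk classifier paired with SHAP attributions), the following is a minimal sketch, not the authors' implementation. The feature names, synthetic data, and choice of a logistic regression model are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only): train a clinical-risk classifier on
# synthetic data and explain one prediction with SHAP feature attributions.
# Feature names, data, and model choice are assumptions, not the paper's setup.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "num_medications", "renal_function", "length_of_stay"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label: risk loosely driven by medication count and renal function.
y = (0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# LinearExplainer attributes each prediction to the input features,
# using the training data as the background distribution.
explainer = shap.LinearExplainer(model, X_train)
shap_values = explainer.shap_values(X_test)

# Rank the contributing factors for the first test patient by magnitude.
contributions = sorted(
    zip(feature_names, shap_values[0]), key=lambda kv: abs(kv[1]), reverse=True
)
for name, value in contributions:
    print(f"{name}: {value:+.3f}")
```

The printed per-feature values are the signed contributions for a single patient; a clinician-facing system would present these alongside the predicted risk so the reasoning behind each alert is visible. LIME could be substituted for SHAP here in the same role, producing local feature weights per prediction.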