
Journal of Frontiers in Multidisciplinary Research

ISSN: 3050-9718 (Print) | 3050-9726 (Online) | Impact Factor: 8.10 | Open Access

Explainable AI for Cybersecurity: Interpretable Intrusion Detection in Encrypted Traffic


Abstract

As cyber threats grow in complexity, the need for advanced and transparent detection mechanisms has become critical in modern cybersecurity. Intrusion Detection Systems (IDS), particularly those leveraging artificial intelligence (AI), play a pivotal role in identifying malicious behavior across network environments. However, the increasing use of encrypted traffic such as HTTPS, TLS, and VPN protocols poses a major challenge to traditional IDS, which rely heavily on packet content for analysis. At the same time, most AI-based IDS operate as "black boxes," offering little visibility into their decision-making processes. This lack of interpretability hinders trust, limits regulatory compliance, and makes it difficult for cybersecurity analysts to validate and act upon alerts. To address these issues, Explainable AI (XAI) is emerging as a vital framework for enhancing transparency, accountability, and trust in AI-driven cybersecurity, particularly in the context of interpretable intrusion detection in encrypted traffic. This paper explores the integration of explainable AI methodologies with machine learning-based intrusion detection systems tailored for encrypted traffic analysis. We investigate how models can utilize metadata features such as packet sizes, flow duration, inter-arrival times, and statistical flow characteristics while employing interpretable techniques like decision trees, SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms. These methods provide human-understandable insights into how threats are detected without accessing payload content, thereby preserving user privacy. We present case studies from enterprise and IoT network environments, evaluate model performance across multiple encrypted traffic datasets, and analyze trade-offs between accuracy, explainability, and computational efficiency.
The findings demonstrate that XAI can significantly improve the operational utility of AI-based IDS by increasing trust and facilitating informed responses. This work highlights the importance of designing security solutions that are not only effective and privacy-preserving but also transparent and interpretable, thus promoting broader adoption of AI in secure and responsible cybersecurity frameworks.
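To illustrate the idea of payload-free, interpretable detection described above, the sketch below scores a flow using only metadata features and attributes the score to individual features. It exploits a known property of linear models: their exact Shapley values have the closed form φᵢ = wᵢ(xᵢ − E[xᵢ]). All feature names, weights, and means here are hypothetical placeholders, not values from the paper; this is a minimal sketch of the attribution principle, not the authors' actual system.

```python
# Hypothetical sketch: explaining an encrypted-traffic alert from flow
# metadata alone (no payload inspection). For a linear scoring model the
# exact Shapley attribution for feature i is w_i * (x_i - mean_i).

FEATURES = ["mean_packet_size", "flow_duration_s", "mean_inter_arrival_ms"]

# Illustrative (made-up) model weights and training-set feature means.
WEIGHTS = {"mean_packet_size": 0.004,
           "flow_duration_s": -0.02,
           "mean_inter_arrival_ms": 0.05}
MEANS = {"mean_packet_size": 800.0,
         "flow_duration_s": 30.0,
         "mean_inter_arrival_ms": 12.0}

def explain(flow):
    """Per-feature contribution to the anomaly score (linear-model SHAP)."""
    return {f: WEIGHTS[f] * (flow[f] - MEANS[f]) for f in FEATURES}

def score(flow, bias=0.0):
    """Anomaly score = bias plus the sum of all feature contributions."""
    return bias + sum(explain(flow).values())

# A suspicious flow: large packets, very short duration, bursty timing.
suspicious = {"mean_packet_size": 1400.0,
              "flow_duration_s": 2.0,
              "mean_inter_arrival_ms": 40.0}

contrib = explain(suspicious)
# Rank features by how strongly each pushed the score upward, so an
# analyst can see *why* the flow was flagged.
ranked = sorted(FEATURES, key=lambda f: contrib[f], reverse=True)
```

For non-linear models such as gradient-boosted trees, the same per-feature ranking would come from a library such as `shap` (e.g. its tree explainer) or LIME rather than this closed form, but the analyst-facing output, a ranked list of metadata features driving the alert, is the same.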

How to Cite This Article

Jamiu Olamilekan Akande, Olaitan Miriam Olufisayo Raji, Olufunbi Babalola, Abdullahi Olalekan Abdulkareem, Adeladan Samson, Steve Folorunso (2023). Explainable AI for Cybersecurity: Interpretable Intrusion Detection in Encrypted Traffic. Journal of Frontiers in Multidisciplinary Research (JFMR), 4(2), 213-222. DOI: https://doi.org/10.54660/.JFMR.2023.4.2.213-222
