Journal of Frontiers in Multidisciplinary Research  |  ISSN: 3050-9718  |  Double-Blind Peer Review  |  Open Access  |  CC BY 4.0



Explainable Artificial Intelligence for Financial Crime Prevention: Translating Machine Learning Outputs into Regulatory and Compliance Decision-Making


Abstract

Financial institutions increasingly rely on machine learning systems to detect fraudulent activities and prevent financial crime across complex transactional environments. Such models are distinguished by high predictive accuracy, but their lack of interpretability creates considerable problems for regulatory compliance, auditing, and operational trust. For high-risk financial transactions, regulators and compliance professionals require not only accurate risk predictions but also transparent, understandable reasoning behind the decisions. This research introduces a regulatory-compliant explainable artificial intelligence (XAI) framework that connects machine learning outputs with financial crime decision-making processes. Rather than proposing new predictive models, the framework focuses on converting risk scores and explainability outputs of existing models into interpretable decision artifacts that can be readily reviewed for compliance, supervisory oversight, and human-in-the-loop validation. The proposed methodology combines explainability mechanisms with governance-oriented design principles, enabling consistent justification of flagged transactions, enhanced audit trails, and increased accountability in automated systems for financial crime detection and prevention. Case study illustrations demonstrate how explainable AI can support escalation, investigation, and reporting decisions in fraud and anti-money laundering contexts. The findings highlight the role of explainable AI as a critical enabler for aligning machine learning innovation with regulatory expectations, contributing to more transparent, trustworthy, and responsible financial crime prevention systems.
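To make the abstract's central idea concrete, the sketch below shows one way a model's risk score and per-feature attributions (e.g. from a SHAP-style explainer) could be converted into a structured, reviewable "decision artifact" with an audit trail and a human-in-the-loop flag. This is a minimal illustration, not the paper's implementation: every field name, threshold, and value here is an invented assumption.

```python
import json
from datetime import datetime, timezone

# Assumed escalation policy threshold for analyst review (hypothetical value).
ESCALATION_THRESHOLD = 0.80

def build_decision_artifact(txn_id, risk_score, attributions, top_k=3):
    """Convert a model output into a compliance-reviewable artifact.

    attributions: dict mapping feature name -> signed contribution
    to the risk score (e.g. SHAP-style values).
    """
    # Rank features by absolute contribution so reviewers see the
    # main drivers of the flag first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "transaction_id": txn_id,
        "risk_score": round(risk_score, 4),
        "decision": "escalate" if risk_score >= ESCALATION_THRESHOLD else "clear",
        "top_risk_drivers": [
            {"feature": name, "contribution": round(val, 4)}
            for name, val in ranked[:top_k]
        ],
        "requires_human_review": risk_score >= ESCALATION_THRESHOLD,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: a hypothetical flagged transaction with invented attributions.
artifact = build_decision_artifact(
    txn_id="TXN-001",
    risk_score=0.91,
    attributions={
        "amount_vs_history": 0.42,
        "destination_country_risk": 0.31,
        "time_of_day": -0.05,
        "account_age_days": 0.12,
    },
)
print(json.dumps(artifact, indent=2))
```

An artifact of this shape can be stored alongside the transaction record, giving auditors and supervisors a justification for each escalation without requiring access to the underlying model.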

How to Cite This Article

Okolie Awele, Daniel Oghenekome Erebi, Bright Kofi Ladzro, Oluwatosin Lawal, Didunoluwa Olukoya, Samson Onaopemipo Amoran (2024). Explainable Artificial Intelligence for Financial Crime Prevention: Translating Machine Learning Outputs into Regulatory and Compliance Decision-Making. Journal of Frontiers in Multidisciplinary Research (JFMR), 5(2), 148-153. DOI: https://doi.org/10.54660/.IJFMR.2024.5.2.148-153
