Busting the AI Distrust Myth: How AI is Earning Regulatory Approval

April 14, 2025

About the Author
David Shapiro

Regulatory Affairs Manager


In our ongoing series, Busting AI Myths: Why Risk Officers and MLROs Hesitate to Adopt AI (And Why They Shouldn’t), we confront another widespread myth: Regulators don’t trust AI-driven compliance decisions. While it’s understandable why some compliance officers, AML managers, and risk officers might still believe this, the reality is different. Regulatory bodies globally are acknowledging the potential of AI and are actively encouraging its adoption—under the right conditions.

Let’s unpack why this myth persists, and more importantly, why regulators are championing AI to strengthen the fight against financial crime.

Dispelling the Myth of Regulatory Distrust

One of the main reasons the myth of regulators’ distrust in AI persists is concern over transparency and accountability. It’s easy to see where this comes from—AI can still seem like a “black box” to many.

The facts? 

Regulators across the world, including the FATF, FCA, and FinCEN, have been clear about their support for AI, provided it’s transparent and auditable. In fact, they’re taking proactive steps to integrate AI into their frameworks for anti-money laundering (AML) and counter-financing of terrorism (CFT).

The Financial Action Task Force (FATF) has recognized the transformative power of AI, saying it can “make compliance faster, cheaper, and more effective.” In fact, the FATF’s recent guidance encourages AI adoption, as long as financial institutions maintain transparency and ensure their AI systems meet regulatory standards.

The FCA in the UK, for instance, has been vocal about how AI can help modernize compliance. They note that AI has the potential to improve both the efficiency and accuracy of risk assessments—two areas that can make a huge difference in AML efforts.

This shift is happening globally, and it’s not just about reducing costs—it’s about enabling institutions to tackle financial crime more effectively and efficiently.

How AI is Transforming Financial Crime Compliance

So, why are regulators on board with AI? 

Because AI isn’t just helping financial institutions—it’s helping law enforcement, too.

INTERPOL Secretary General Jürgen Stock put it best:

“AI is undeniably a game changer for criminals and law enforcement alike. However, it is imperative that we make the shift to the new technological era in a trustworthy, lawful and responsible manner, providing a clear, pragmatic, and most of all useful way.”

This view is echoed by Antonia De Meo, Director of UNICRI, who explains how AI can be used in a human rights-compliant way to support law enforcement:

“Together we have produced an invaluable blueprint to guide the global law enforcement community to leverage the promise of AI in a human rights compliant and ethical manner.”

Law enforcement agencies are already using AI to track illegal activities and criminal networks. The AI Toolkit developed by INTERPOL and UNICRI helps guide law enforcement on how to use AI responsibly, with a focus on accountability and ethics. It’s a good example of how regulators and law enforcement can ensure that AI is applied transparently and ethically.

The Regulatory Shift: From Skepticism to Support

One of the main reasons for AI’s slow uptake in compliance is the “black box” concern—AI decisions are often seen as difficult to explain. But this is a myth in itself. Advanced AI solutions, particularly in compliance, are designed to be transparent and auditable.
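To make the “transparent and auditable” point concrete, here is a minimal sketch of what an explainable compliance decision can look like. The feature names, weights, and threshold are purely illustrative, not taken from any real compliance system: the idea is simply that every alert carries a per-factor breakdown that an auditor or regulator can inspect.

```python
# Minimal sketch: an auditable, explainable risk score for a transaction alert.
# All feature names, weights, and the threshold below are illustrative only.

RISK_WEIGHTS = {
    "amount_vs_profile": 0.5,   # transaction size relative to the customer's norm
    "high_risk_corridor": 0.3,  # payment to/from a higher-risk jurisdiction
    "velocity_spike": 0.2,      # sudden burst of transaction activity
}
ALERT_THRESHOLD = 0.6

def score_transaction(features: dict) -> dict:
    """Return the score plus a per-feature breakdown for the audit trail."""
    contributions = {
        name: RISK_WEIGHTS[name] * features.get(name, 0.0)
        for name in RISK_WEIGHTS
    }
    score = sum(contributions.values())
    return {
        "score": round(score, 3),
        "alert": score >= ALERT_THRESHOLD,
        "explanation": contributions,  # each factor's exact contribution
    }

result = score_transaction(
    {"amount_vs_profile": 0.9, "high_risk_corridor": 1.0, "velocity_spike": 0.0}
)
print(result)
```

Because every score decomposes into named contributions, a challenged decision can be explained line by line—exactly the kind of documentation the regulatory frameworks discussed here call for.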

Regulatory bodies are prioritizing transparency. The EU AI Act, for example, aims to establish a legal framework for AI across various sectors, including specific provisions ensuring that AI systems used in financial crime prevention are explainable and transparent. This includes requirements for financial institutions to document how their AI systems work, making it easier to provide explanations when compliance decisions are challenged. This is a significant step forward for financial institutions looking to adopt AI while maintaining the trust of regulators and their customers.

Why Regulators Are Championing AI and Breaking Barriers

Countries around the world are taking steps to ensure that AI is deployed responsibly in financial crime compliance, setting a global precedent for best practice. The UK is a prime example, with initiatives like the AI Public Policy Forum (AIPPF) helping shape ethical guidelines for the use of AI in financial services. The UK is leading the way in balancing innovation and ethics, ensuring that AI doesn’t just improve efficiency, but does so in a way that upholds values like transparency and fairness.

Similarly, Singapore has emerged as a leader in the AI regulatory space, particularly in the financial sector. The Monetary Authority of Singapore (MAS) has published the information paper Artificial Intelligence Model Risk Management, Observations from a Thematic Review, which provides guidelines for AI-powered AML measures and recognizes the importance of AI in identifying and managing risk. The MAS encourages financial institutions to adopt AI technologies but insists on strict governance and transparency measures to ensure that AI tools do not undermine ethical standards. 

Unveiling the Truth Behind Regulatory Perceptions

It’s clear: AI is not only accepted but actively encouraged by regulators worldwide. The global regulatory landscape is evolving to accommodate AI’s transformative potential in compliance. So, it’s time to put the AI distrust myth to rest. 

Regulators are already collaborating with financial institutions to support the ethical and transparent use of AI for compliance. For institutions looking to integrate AI into their compliance processes, the path forward is simple: prioritize transparency in selecting a technology partner. Choose financial crime compliance AI solutions that are explainable and auditable, and stay aligned with regulatory guidelines. This approach will not only enhance compliance efforts by improving the detection of true positive alerts, but it will also help build trust with regulators and customers alike.
