The Compliance Crisis Most People Don’t Know About

Here's what many organizations may not realize: the regulatory landscape for AI shifted significantly in 2024, and most AI implementations fail to meet the new requirements. Not because companies aren't trying to comply, but because their understanding of compliance hasn't kept up with what it now means.


Basel III requires financial institutions to document and demonstrate a clear understanding of algorithmic risk factors. State insurance regulators are demanding interpretable models for underwriting and claims decisions. Bar associations are establishing requirements for explainable AI in legal decision support. The EU AI Act extends GDPR's "right to explanation" to AI-generated decisions in high-risk applications. The FDA's 2024-2025 AI/ML Action Plan now requires medical AI systems to include model transparency, version-control tracking, and explainability documentation.


These aren't proposed regulations. They're frameworks in effect and being enforced right now.


Realistically, a gap exists between current regulations, the bodies that enforce them, and the companies expected to follow them. How wide that gap is depends on the industry, the company, and the situation, but organizations in highly regulated sectors such as healthcare, financial services, legal, insurance, and energy need to become fluent in the evolving rules. From our conversations, it's clear that documentation practices vary widely: some companies simply don't know how to document correctly, and the level of detail expected shifts with the industry, use case, and system architecture.
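
To ground what "documenting correctly" might even look like, here is a minimal, model-card-style sketch in Python. Every field name below is an assumption made for illustration, not a regulatory template; actual requirements differ by jurisdiction, sector, and system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelDocumentation:
    """A hypothetical, minimal model-card-style record, for illustration only."""
    model_name: str
    version: str                    # supports version-control tracking
    intended_use: str               # the decision the model supports
    training_data_summary: str      # provenance and known limitations
    risk_factors: list[str] = field(default_factory=list)  # documented algorithmic risks
    explanation_method: str = ""    # how individual decisions are explained
    last_reviewed: date = field(default_factory=date.today)

doc = ModelDocumentation(
    model_name="underwriting-score",
    version="2.3.1",
    intended_use="insurance underwriting decision support",
    training_data_summary="2019-2023 policy data; geographic proxy features excluded",
    risk_factors=["proxy discrimination via correlated features"],
    explanation_method="per-decision reason codes from model coefficients",
)
print(f"{doc.model_name} v{doc.version}, reviewed {doc.last_reviewed}")
```

The record itself is trivial; the point is that regulators increasingly expect this information to exist per model version, not per vendor brochure.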


Unfortunately, even organizations that consider themselves 'compliant' may not be addressing the full scope of regulatory requirements across these industries. Many SaaS tools on the market meet baseline compliance standards but lack critical interpretability features. For example, a credit scoring AI tool may satisfy fair lending non-discrimination requirements through statistical parity testing, yet still operate as a black box, unable to explain individual decisions to regulators or consumers. Similarly, AI tools used for insurance underwriting or legal document review may meet data protection standards while failing to provide the interpretability necessary for regulatory defense.
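
To make that gap concrete, here is a minimal Python sketch, using synthetic data and a hypothetical 5% disparity threshold, of how a statistical parity check can pass at the aggregate level while the model remains unable to answer why any individual applicant was denied:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data: decisions from a simulated black-box model.
n = 10_000
group = rng.integers(0, 2, size=n)        # protected-class flag (0 or 1)
approved = rng.random(n) < 0.42           # the black box's approve/deny output

# Statistical parity: compare approval rates across groups.
rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
disparity = abs(rate_0 - rate_1)

threshold = 0.05  # hypothetical fair-lending screening threshold
print(f"approval rates {rate_0:.3f} vs {rate_1:.3f}; disparity {disparity:.3f}")
print("aggregate parity check:", "PASS" if disparity < threshold else "FAIL")

# Even when this aggregate test passes, the model has no answer to the
# question a regulator or a denied consumer actually asks: "Why was *this*
# application rejected?" That requires per-decision interpretability,
# which no group-level statistic can supply.
```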


Compliance ≠ Interpretability.


This distinction will only become more important as regulations evolve. The EU AI Act already requires different levels of explainability based on risk category, and similar requirements are emerging globally across regulated sectors. Yet most AI companies take a 'performance first' approach: they build black box models and wrap them in compliance layers rather than developing interpretable solutions (the sketch below shows what the interpretable route can look like). In regulated industries, compliance isn't optional, and neither is interpretability.
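
As a sketch of that alternative, consider an inherently interpretable model whose coefficients double as per-decision reason codes, the pattern behind adverse-action notices in consumer credit. The data and feature names below are illustrative assumptions, not a production design:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["debt_to_income", "payment_history", "credit_utilization"]

# Synthetic training data with a known linear structure.
X = rng.normal(size=(500, 3))
true_weights = np.array([-1.5, 2.0, -1.0])
y = (X @ true_weights + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(x, top_k=2):
    """Name the features that pushed this applicant's score down the most."""
    contributions = model.coef_[0] * x   # per-feature contribution to the logit
    order = np.argsort(contributions)    # most negative (most adverse) first
    return [features[i] for i in order[:top_k]]

applicant = X[0]
decision = "approve" if model.predict(applicant.reshape(1, -1))[0] == 1 else "deny"
print("decision:", decision)
print("top adverse factors:", reason_codes(applicant))
```

The point isn't that logistic regression fits every use case; it's that when interpretability is designed in, each decision carries its own defensible explanation instead of relying on a post-hoc compliance wrapper.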


In the future, AI fluency will be currency. Experience alone won't be enough.