AI in risk modelling: From compliance burden to competitive advantage
Supervisors have made it clear: if you use AI in insurance decisioning, you own the outcomes. The IAIS has now published an Application Paper on the supervision of artificial intelligence (July 2025), signaling that AI governance, fairness, explainability, and controls are no longer innovation topics. They are core supervisory expectations.
Two signals are converging for specialty carriers, commercial insurers, and global brokerages:
Margin pressure is structural. The IMF projects global growth slowing from 3.3% (2024) to 3.2% (2025) and 3.1% (2026). When macro conditions are not lifting all boats, underwriting accuracy and capital efficiency become the real profit engine.
AI is moving into the supervisory perimeter. The IAIS AI Application Paper reinforces that existing Insurance Core Principles still apply, and it highlights practical themes supervisors will care about, including data management, fairness by design, documenting inferred relationships, monitoring outcomes, and redress mechanisms.
Add the financial crime compliance angle and the direction is consistent. FATF has also recognized that AI and machine learning can strengthen customer risk assessment and monitoring, but that they must be implemented with controls appropriate to financial crime risk.
From a leadership and risk management standpoint, the wrong framing is: “AI is a model upgrade.” The right framing is: “AI is a new operating model for underwriting, pricing, claims, and compliance.”
In 2026, AI will not separate insurers by who has the best algorithm. It will separate insurers by who can answer four board-level questions with evidence:
What data did we use, and why is it fit for purpose?
How do we detect bias, drift, and unintended outcomes?
Can we explain decisions to customers, regulators, and courts?
Who is accountable when the model is wrong?
That is why many teams experience AI as a compliance burden first: they try to deploy before they can govern.
Strategic Insight
AI shifts the industry in three ways that matter to CEOs, CFOs, CROs, CCOs, and brokerage MDs:
Risk modelling becomes continuous, not periodic. Traditional pricing cycles assume stable relationships and slow-changing exposure. AI models can ingest new signals faster, but they also require continuous monitoring for drift, feedback loops, and changing behaviors. The competitive advantage is not speed alone; it is controlled speed.
Compliance moves from checking to engineering. The IAIS direction is effectively telling the market: governance, conduct, and risk management must be built into the lifecycle, not appended at the end. That changes budgets and org charts. Compliance becomes a product requirement, not a gatekeeper function.
Growth opportunities expand where you can measure better. AI is most valuable where uncertainty is high and data is rich: specialty lines, complex commercial risks, supply chain exposures, fraud-heavy claims environments, dynamic catastrophe and secondary peril patterns, and portfolio steering across geographies and industries. If your models are auditable and resilient, you can write risks competitors will avoid or misprice.
Playbook (practical actions for executives)
Stand up “model governance that can ship.” Create a single, enforceable standard for AI model documentation, data lineage, validation, monitoring, and accountability. Align it to supervisory expectations on governance, fairness, and outcomes monitoring. Make it fast enough that business teams use it, and strict enough that audit trusts it.
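To make "a single, enforceable standard" concrete, the inventory entry for each model can be a structured record rather than a document. The sketch below is illustrative only; the field names (`owner`, `data_sources`, `monitoring_metrics`) are assumptions, not a prescribed schema, and a real standard would be far richer.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a model inventory; fields mirror common supervisory
    themes: accountability, data lineage, validation, and monitoring."""
    model_id: str
    use_case: str                 # e.g. "pricing", "claims triage"
    owner: str                    # a named accountable individual
    data_sources: list[str]       # lineage: where the training data came from
    validation_date: str          # ISO date of the last independent validation
    monitoring_metrics: list[str] = field(default_factory=list)

    def is_governed(self) -> bool:
        # A minimal shipping gate: no named owner or no monitoring
        # plan means the model does not go to production.
        return bool(self.owner) and bool(self.monitoring_metrics)
```

The point of encoding the gate in code is that "strict enough that audit trusts it" becomes checkable: a deployment pipeline can refuse any model whose record fails `is_governed()`.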
Treat explainability as a product feature. Decide, by use case, what level of explainability is required (pricing, claims triage, fraud detection, distribution, customer decisioning). Build templates for rationale, limitations, and escalation paths. Your future disputes will be won or lost here.
Operationalize monitoring, not just validation. Require ongoing drift tests, bias checks, and performance dashboards tied to business KPIs (loss ratio stability, leakage reduction, claims cycle time, complaint rates). Put thresholds in place that trigger review or rollback.
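One common drift test is the Population Stability Index (PSI), which compares the distribution of live model inputs or scores against a baseline. A minimal sketch, assuming uniform binning and the conventional (not regulatory) reading that PSI above 0.1 warrants review and above 0.25 warrants rollback:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of scores.
    Conventionally, < 0.1 reads as stable and > 0.25 as material drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    psi = 0.0
    for i in range(bins):
        left = lo + i * width
        right = left + width if i < bins - 1 else hi + 1e-9
        e = sum(left <= x < right for x in expected) / len(expected)
        a = sum(left <= x < right for x in actual) / len(actual)
        e, a = max(e, 1e-4), max(a, 1e-4)   # floor to avoid log(0)
        psi += (a - e) * math.log(a / e)
    return psi

def drift_action(psi, review_at=0.1, rollback_at=0.25):
    """Map a PSI value onto the review/rollback thresholds described above."""
    if psi >= rollback_at:
        return "rollback"
    if psi >= review_at:
        return "review"
    return "ok"
```

The specific thresholds are a business decision; what matters operationally is that they exist, are wired to an action, and are tested before an incident forces the question.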
Use AI to strengthen compliance workflows, not bypass them. Apply AI to customer risk scoring, transaction monitoring, and anomaly detection with clear controls and human oversight, consistent with FATF’s view that new technologies can improve monitoring and risk assessment when implemented responsibly.
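"Clear controls and human oversight" can be as simple as routing statistical outliers to an analyst rather than auto-blocking them. A deliberately minimal sketch using a z-score on transaction amounts (real transaction monitoring uses far richer features; this only illustrates the flag-and-escalate pattern):

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Return indices of transactions whose amount is far from the norm.
    Flagged items are escalated to a human analyst, not blocked outright."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts) or 1.0   # guard against zero spread
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]
```

The design choice worth noting: the model narrows the haystack, but the decision and the audit trail stay with a person, which is the control posture FATF's guidance points toward.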
AI in risk modelling is no longer optional because the market is demanding precision and supervisors are demanding control. The winners in 2026 will be those who turn AI governance into a repeatable capability that improves underwriting selection, strengthens claims outcomes, and protects capital efficiency.