How AI Regulation Is Taking Shape

Two key questions for insurers: Do guardrails exist around your AI tools, and can you defend those guardrails through testing and documentation?


Two developments in 2023, in particular, will inform the approaches regulators take in the years ahead on how (and how much) to regulate artificial intelligence's use in the business of insurance.

First, insurance regulators via the National Association of Insurance Commissioners (NAIC) are drafting a model bulletin on artificial intelligence use with the aim of guiding companies in establishing governance systems and regulatory expectations for such systems. As explained during the 2023 spring national meeting in Louisville, the NAIC’s Committee on Innovation, Cybersecurity and Technology is taking the lead on producing a commissioner-driven deliverable that likely will be exposed for public comment this year. Much like the NAIC’s AI Principles adopted in 2021, the bulletin likely will provide high-level, principles-driven guidance that will serve as a good guide for companies seeking to understand, at a minimum, what kinds of questions and information regulators will ask when seeking more information about artificial intelligence and machine learning products.

Second, Colorado continues to move forward with its rulemaking under Senate Bill 21-169, with the first round of draft rules for life insurers exposed for public comment in February. Although the draft is focused on life insurers, the Colorado Division of Insurance has stated in public meetings that property and casualty insurers should expect the version of the rule applicable to them to be similar. Colorado's rules are more prescriptive than anything coming out of the NAIC thus far in detailing the information insurers will need to have available when using AI, as well as how to report that information to the division.

The difference in these approaches is a preview of the regulatory divergence insurers will face across jurisdictions in the near future. Some will adopt the NAIC model bulletin, while others will modify it. Still others may follow Colorado's lead in seeking legislation or adopting rules specific to AI usage. At a minimum, carriers using AI as part of their insurance offerings in multiple jurisdictions, irrespective of line, likely will face a somewhat disjointed regulatory regime in the near term, even as regulators work to find consensus wherever possible.

So what should savvy insurers do now? At a minimum, any insurer that is using or considering the use of AI should consider implementing a well-documented governance system for its AI and machine learning tools. In other words: how does the enterprise show its work? Whether a jurisdiction elects a more front-loaded approach to regulation (like Colorado, with significant reporting requirements) or a back-loaded one (guidance followed by market conduct reviews, if necessary), much of the regulatory risk surrounding AI boils down to two questions: Do guardrails exist around a company's AI and machine learning tools, and can the company defend those guardrails as appropriate and adequate through testing and documentation?

See also: Regulatory Interest in Big Data

In addition, companies with robust governance integrated into their AI and machine learning portfolios are in a much stronger position to shape regulatory requirements as they come into sharper focus. As regulators and policymakers focus more on how and to what extent companies should be prepared to explain AI guardrails, such carriers will not only be more prepared when regulation comes, they will also be in a much stronger position to speak up and be taken seriously when regulatory proposals become unnecessarily burdensome. As counterintuitive as it may seem to some, patient engagement with insurance regulators will make for a more navigable long-term regulatory framework.

Near-term regulatory uncertainty notwithstanding, establishing robust governance and testing regimes for AI and machine learning are smart investments for insurers in anticipating whatever regulatory requirements emerge. It will be much easier to tweak such systems as needed, once established, than scramble to implement wholesale systems in response to new regulatory requirements. Keep in mind: Governance is distinct from AI and machine learning tools themselves. If AI and machine learning are the economic engines of the future for insurance carriers, effective governance is the oil that will keep the engine running smoothly — and compliantly.

As first published in Digital Insurance.


Evan Daniels


Evan Daniels serves on the advisory board of Monitaur, an AI/ML governance software company committed to working with the insurance industry and regulators toward the responsible and effective integration of AI/ML.

Formerly director of the Arizona Department of Insurance and Financial Institutions, he served as the 2022 co-vice chair of the NAIC committee on innovation, cybersecurity and technology, which oversees the NAIC's big data and artificial intelligence workstreams.

Daniels is also counsel at Mitchell Sandler, a boutique financial services law firm, where he advises insurance companies, insurtechs, fintechs and financial institutions on regulatory matters.
