In the Big Top Spotlight: NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers
The Innovation, Cybersecurity, and Technology (H) Committee of the National Association of Insurance Commissioners has been in the big top spotlight the past year as it developed its model bulletin on the use of artificial intelligence systems by insurers. Moving as swiftly as trapeze artists, the committee drafted and exposed for comment three versions of the model bulletin before the NAIC's 2023 Fall National Meeting. Drafting of the model bulletin brought together performers from 15 states, who collaboratively sought to set forth regulatory expectations for the responsible use of AI by insurance companies.
During the H Committee's portion of the meeting, comments on the third version of the model bulletin produced only a minor sideshow, as the use of the term "bias" in the model bulletin was juggled about:
- North Dakota Commissioner Jon Godfread suggested the references to “bias” be replaced with the phrase “unfair discrimination.”
- Iowa Commissioner Doug Ommen expressed concerns about replacing "bias" and questioned whether the term "unfair discrimination" would be uniformly understood among regulators or across the industry.
- Colorado Commissioner Michael Conway took a stab at a compromise, proposing "statistical bias" as an alternative.
- Rhode Island Superintendent Elizabeth Dwyer pointed out the varying uses of the term “bias” throughout the model bulletin.
In the end, the term "bias" was left unchanged, and the only proposed change adopted was a clarification that audits of third parties would be performed only to the extent the insurer had contractual rights to do so. In the big ring, on December 4, the NAIC Executive Committee and Plenary adopted the model bulletin without commotion.
The model bulletin, now a traveling act, is set to tour each state for possible adoption and use. It serves as a guiding document, with the intent of fostering uniformity among state insurance regulators regarding expectations for insurance carriers deploying AI. Indeed, H Committee Chair Kathleen Birrane reminded stakeholders that the model bulletin is an interpretive bulletin, not a regulation or model law, and individual states would need to consider it for adoption.
Now that the circus has left town, insurers would be wise to review their own use of AI and consider how that use aligns with the regulatory expectations set forth in the model bulletin. In particular, insurers should use AI in a manner that mitigates the risk of "adverse consumer outcomes," defined as impacting consumers adversely in a manner that violates insurance regulatory standards. To that end, the model bulletin recognizes that robust governance, risk management controls, and internal audit functions play a core role in mitigating that risk. The model bulletin sets forth:
- General guidelines for an insurer’s written program for the responsible use of AI.
- Considerations for an insurer as it develops its governance framework.
- Items that should be addressed in an insurer’s risk management and internal controls for each stage of the AI life cycle.
- Considerations for acquiring, using, or relying on third parties in connection with the insurer's use of AI.
- The inquiries and document requests that an insurer should expect to receive from regulators.
Insurance companies should consider doing a dress rehearsal to align their practices with regulators’ evolving expectations ... before the circus comes to town again.