
Topsy-Turvy World of Accelerated Underwriting and Artificial Intelligence

As accelerated underwriting (AU) and artificial intelligence (AI) begin to turn life underwriting upside down, several NAIC working groups are seeking to bring order to the disruption: the Big Data (EX) Working Group (“Big Data WG”), the Innovation and Technology (EX) Task Force (“Innovation TF”), the Accelerated Underwriting (A) Working Group (“AU WG”), and the Artificial Intelligence (EX) Working Group (“AI WG”). Discussed below are some of the key questions they have been considering, which carry potentially major implications for consumers and the insurance industry.

Who Is Subject to Regulation?

With the flood of newly available consumer data, third-party vendors have entered the fray of life insurance underwriting. By reorganizing that data and developing new models, these vendors offer to reduce the time it takes to underwrite a policy. Consumer groups complain that unregulated third-party vendors are not accountable if they provide an insurer with data points or models that contain inaccurate information or prohibited factors that lead to unfair discrimination. At the August 13 NAIC special session on race, Birny Birnbaum of the Center for Economic Justice urged regulators to establish oversight of unregulated vendors of data and models.

Acknowledging these concerns, the AI WG incorporated into its AI Principles a definition of “AI actors” that includes “third parties such as rating, data providers and advisory organizations” who play an active role in the AI system life cycle. By so doing, regulators have made clear their expectation that third-party vendors “promote, consider, monitor and uphold” fair, ethical, accountable, compliant, transparent, secure, safe, and robust AI principles even if they are outside the regulatory reach of the state insurance departments. The AI Principles were adopted at the August 14 Joint Meeting of the NAIC’s Executive Committee and Plenary.

What Data Should Be Used?

  • Is the Data Accurate?

    Because the new sources of non-traditional data are often not consumer reporting agencies, and are therefore not subject to the Fair Credit Reporting Act (FCRA), regulators and consumer groups at the August 7 Innovation TF meeting questioned the accuracy of the disjointed array of data used in AU. To help ensure the accuracy of non-traditional data, the AU WG considered at its July 31 meeting:

    • Reinforcing to insurers that they retain the sole responsibility for the collection, scrutiny, and analysis of data to ensure it is reliable, even if it is provided by a third-party vendor.
    • Banning the use of non-FCRA data or requiring FCRA-type protections on non-FCRA data, including consumer rights to access and correct such data.
  • Do the Data Points Used Reflect Causation or Merely Correlation?

    To the extent that behavioral data points, such as a person’s gym membership, shopping habits, wearable device data, magazine subscriptions, voting history, and web browsing history, are used within AU models, regulators and consumer groups have expressed concerns that such data points:

    • Not be arbitrary, but bear a rational and understandable relation to risk.
    • Reflect the consumer’s reality. For example, the fact that a lower-income individual cannot afford a monthly gym membership does not automatically mean that person lives an unhealthy lifestyle warranting a higher risk class.
    • Reflect only the individual’s own behavior, not unrelated information about others. For example, a person could purchase unhealthy products at a grocery store for someone else’s consumption.

    Presenters at the August 4 Big Data WG meeting urged regulators to “dig deeper” into what an insurer’s model is trying to achieve, why each variable is important, and “what aspect of the real world makes the correlation come about.”
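
    As an illustration of that kind of digging, a reviewer might check whether a behavioral variable still carries predictive signal once a plausible real-world driver is controlled for. The short Python sketch below is a minimal, hypothetical example (the variables gym_member, income, and claim_risk are invented for illustration and reflect no actual insurer model): it compares a raw correlation against a partial correlation that holds income fixed.

        # Minimal sketch: does a behavioral variable's link to risk survive
        # controlling for a plausible causal driver? All data are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 5_000

        # Hypothetical world: income drives both gym membership and health
        # risk, so gym membership correlates with risk without causing it.
        income = rng.normal(0, 1, n)
        gym_member = (income + rng.normal(0, 1, n) > 0).astype(float)
        claim_risk = -0.5 * income + rng.normal(0, 1, n)

        def corr(x, y):
            return np.corrcoef(x, y)[0, 1]

        def residualize(y, x):
            """Remove the linear effect of x from y via ordinary least squares."""
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        raw = corr(gym_member, claim_risk)
        partial = corr(residualize(gym_member, income),
                       residualize(claim_risk, income))

        print(f"raw correlation (gym vs. risk):    {raw:+.3f}")
        print(f"partial correlation, income held:  {partial:+.3f}")
        # If the partial correlation collapses toward zero, the variable's
        # predictive power came from income, not from the behavior itself.

    If the partial correlation collapses while the raw correlation does not, the variable is trading on whatever aspect of the real world it proxies for, exactly the question the presenters urged regulators to ask.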

  • Should Credit Scores Be Allowed?

    Credit scores are an increasingly contentious underwriting factor “as the distributions of credit scores vary significantly among ethnic groups.” At the NAIC special session on race, regulators discussed the historical bias embedded in credit scores and the potential discriminatory impact of factors linked to economics. During its July 31 meeting, the AU WG warned that credit scores should not be used in isolation; instead, checks and balances must be employed to protect against discrimination.

Are Consumers Adequately Protected?

  • What Do Consumers Know and Did They Consent?

    Regulators fear that consumers are unaware of, or confused about, how much of their data is being collected and how it is being used. Regulators and consumer representatives are considering requiring insurers to:

    • Obtain consumers’ consent.
    • Disclose the information used in underwriting.
    • Test input data for accuracy and inherent bias.

    Additionally, the AU WG’s work product will seek to address whether:

    • Consumers understand what information can be collected on them and how it can be used.
    • The results are transparent to consumers.
  • Do the Data Points or Models Used Discriminate?

    To confront the issue of whether data points or models result in discrimination:

    • After its June 30 meeting, the AI WG included within its AI Principles “avoiding proxy discrimination” due to regulatory concern that some data points such as credit score, education, occupation, and criminal history used in a model may result in unfair discrimination.
    • During its July 31 meeting, the AU WG discussed the need for insurers to test their models and ensure the results are not skewed but are reliable and unbiased (one illustrative form of such testing is sketched after this list). This testing should occur during development, periodically thereafter, and on all future generations of an AU program. The AU WG also posited that insurers should document their AU program testing and monitoring, and warned that AU programs will be challenged in upcoming market conduct exams.
    • Also at its July 31 meeting, the AU WG stressed the importance of multiple departments, including IT, internal audit, actuarial, and legal, being able to explain the data points used and how the model works, not just those who run the model.
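
    One common way to operationalize the testing the AU WG described, offered here purely as an illustration rather than anything the working group prescribed, is an adverse impact ratio: compare favorable-outcome rates across demographic groups and flag any ratio falling below a chosen threshold (the “four-fifths rule” of 0.8, drawn from employment law, is a frequently cited benchmark). The groups, decisions, and threshold in the Python sketch below are all hypothetical.

        # Illustrative adverse impact ratio check; the groups, data, and the
        # 0.8 threshold ("four-fifths rule") are assumptions, not NAIC guidance.
        from collections import defaultdict

        def adverse_impact_ratios(outcomes, threshold=0.8):
            """outcomes: iterable of (group, favorable: bool). Returns each
            group's favorable rate relative to the most-favored group, with
            a flag for any ratio below the threshold."""
            totals = defaultdict(int)
            favorable = defaultdict(int)
            for group, ok in outcomes:
                totals[group] += 1
                favorable[group] += ok
            rates = {g: favorable[g] / totals[g] for g in totals}
            best = max(rates.values())
            return {g: (r / best, r / best < threshold) for g, r in rates.items()}

        # Hypothetical AU decisions: (group, offered the best risk class?)
        decisions = ([("A", True)] * 80 + [("A", False)] * 20
                     + [("B", True)] * 55 + [("B", False)] * 45)

        for group, (ratio, flagged) in adverse_impact_ratios(decisions).items():
            print(f"group {group}: ratio {ratio:.2f}"
                  + ("  <-- review" if flagged else ""))

    A flagged ratio is a starting point for investigation, not proof of unfair discrimination; the documentation the AU WG called for would record how such flags were investigated and resolved.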

Do Regulators Have the Tools to Review the Models?

Regulators acknowledge that their review of complex models becomes more difficult if:

  • There is a lack of transparency, particularly if the models are a “black box” because it cannot be clearly explained how a given rating or score resulted from the data used by the model. This issue is exacerbated if the models evolve over time through machine learning. (A model-agnostic probe for such models is sketched after this list.)
  • There is a lack of regulatory expertise and resources to review complex models properly. Regulators have discussed the development of an NAIC resource to assist their review of complex models, particularly for property and casualty rate review.
  • Companies rely on third-party vendors, who are not subject to regulation, to provide data or develop models, and such vendors restrict what information insurers may share with regulators.
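
One model-agnostic probe available even when a model cannot be inspected, sketched here as an assumption rather than a technique any working group has endorsed, is permutation importance: shuffle one input at a time and measure how much predictive accuracy degrades. The “black box” below is a hypothetical stand-in; a real review would call the insurer’s or vendor’s model instead.

    # Sketch: permutation importance against an opaque scoring function.
    # The model and data here are hypothetical placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4_000
    X = rng.normal(size=(n, 3))                     # three hypothetical inputs
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0  # true outcome

    def black_box(X):
        """Opaque model we can only call, not inspect."""
        return 2.0 * X[:, 0] + 0.5 * X[:, 1] > 0

    def accuracy(X):
        return float(np.mean(black_box(X) == y))

    baseline = accuracy(X)
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])        # sever input j's link to y
        print(f"input_{j}: importance = {baseline - accuracy(Xp):+.3f}")
    # Inputs whose shuffling causes a large accuracy drop are the ones the
    # model actually relies on, even when its internals are unavailable.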

At the August 8 Big Data WG meeting, presenters from the Casualty Actuarial and Statistical Task Force said that regulatory review of complex models should:

  • Ensure compliance with rating laws, i.e., that rates are not excessive, inadequate, or unfairly discriminatory.
  • Review all aspects of the model: data, assumptions, adjustments, variables, input, and resulting output.
  • Evaluate how the model interacts with and improves the rating plan.
  • Enable competition and innovation.

Additionally, presenters at the August 7 Innovation TF meeting suggested that regulatory review of models should occur before the models are put into use, especially if the models come from a third-party vendor.

*With assistance from Facundo Scialpi, a student at the University of Miami School of Law.
