NY DFS Delivers an Icy Blast to Insurers Using External Data Sources and Algorithmic Underwriting

Financial Services Regulatory   |   Insurance   |   Life, Annuity, and Retirement Solutions   |   Securities & Investment Companies   |   January 29, 2019

On January 18, 2019, a wintry wind blew when the New York Department of Financial Services (NY DFS) released Circular Letter No. 1 (2019) ("Letter No. 1") "to advise insurers authorized to write life insurance in New York of their statutory obligations regarding the use of external consumer data and information sources in underwriting for life insurance." Letter No. 1 follows the Section 308 letter the NY DFS released on June 29, 2017, which sought information on life insurers' use of "external consumer data or information sources" in connection with either an "accelerated or algorithmic underwriting program" or "to supplement traditional medical underwriting." Letter No. 1 expresses two areas of NY DFS concern:

  • The use of external data sources, algorithms, and predictive models has a significant potential negative impact on the availability and affordability of life insurance for protected classes of consumers.
  • The use of external data sources is often accompanied by a lack of transparency for consumers.

This alert reviews the NY DFS’ chilling views expressed in Letter No. 1 against existing New York law and general insurance law principles.

Unfair Discrimination and Disparate Impact

Letter No. 1 asserts that "Insurance Law Article 26 prohibits the use of race, color, creed, national origin, status as a victim of domestic violence, or past lawful travel in any manner, among other things, in underwriting." (emphasis added) However, Article 26 prohibits the use of race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, etc., only for the following actions:

  • Rejecting any application for a policy of insurance issued and/or sold by it.
  • Refusing to issue, canceling, or declining to renew or sell such a policy after appropriate application therefor.
  • Fixing any lower rate or discriminating in the fees or commissions of agents or brokers for writing or renewing such a policy.
  • Inserting any condition or making any stipulation whereby the insured binds themselves to accept any sum or service less than the full value or amount of such policy.
  • Making or requiring any rebate, discrimination, or discount upon the amount to be paid or the service to be rendered.
  • Demanding or requiring a greater premium or payment from any person.
  • Making any distinction or discrimination between persons as to the premiums or rates charged for insurance policies or in any other manner whatever.

Article 26 deals with actions regarding insurability and cost. Importantly, none of these prohibitions is specifically directed at the underwriting process; thus, Article 26 does not directly address whether "an expedited, accelerated, or algorithmic underwriting process in lieu of traditional medical underwriting" may be utilized. It appears the NY DFS is seizing on the phrase "or in any other manner whatever" to assert that a distinction in underwriting should be prohibited. However, under the canon of statutory construction ejusdem generis ("of the same kind"), the general phrase "or in any other manner whatever" should be interpreted to mean other distinctions or discriminations between persons of the same kind as those enumerated before the phrase, i.e., those based on premiums or rates charged.

In Letter No. 1, the NY DFS howls that "geographical data (including community-level mortality, addiction or smoking data), homeownership data, credit information, educational attainment, licensures, civil judgments and court records . . . all have the potential to reflect disguised and illegal race-based underwriting that violates Articles 26 and 42." Letter No. 1 also raises concern with "[o]ther models and algorithms [that] purport to make predictions about a consumer’s health status based on the consumer’s retail purchase history; social media, internet or mobile activity; geographic location tracking; the condition or type of an applicant’s electronic devices (and any systems or applications operating thereon); or based on how the consumer appears in a photograph." The NY DFS warns that "[a]t the very least, the use of these models may either lack a sufficient rationale or actuarial basis and may also have a strong potential to have a disparate impact on the protected classes identified in New York and federal law." (emphasis added).

Letter No. 1 reflects distrust of the use of new data points and algorithms by adopting an unseasonable disparate impact standard that has no basis in insurance principles. All insurance discriminates; the question is whether that discrimination is unfair. As explained in "Disparate Impact and Unfairly Discriminatory Insurance Rates"[i]:

  • The concept of unfairly discriminatory rates has traditionally been cost-based, meaning that rates reflect the underlying risk and hazard. The standard of disparate impact has its origins in federal civil rights laws. [It] has no relationship to the underlying insurance costs and refers solely to the adverse, significant disproportionate impact of one or more rate factors on a protected minority class.
  • If [disparate impact is] applied to insurance, a risk/rate factor will potentially be said to have a disparate impact if it more adversely impacts a protected minority class than it does the majority class, regardless of its relationship to underlying costs. The standards of unfair discrimination and disparate impact will potentially be in conflict because of the likelihood that protected minority classes will not be proportionately distributed throughout the various risk classifications. This assumption implies that all risk factors used to measure and assess risk are potentially in violation of a disparate impact rate standard, even though each risk factor accurately reflects expected losses and expenses.
  • If the standard of disparate impact prevails over the standard of unfairly discriminatory rates, important risk factors will likely be banned from insurance rating plans. The elimination of even one proven risk factor will result in a rate structure that is unfairly discriminatory. Accurate risk assessment will be destroyed; adverse selection will be rampant; and coverage availability problems will likely arise.

Moreover, Letter No. 1 does not explain how disparate impact will be determined. It seems incongruous for the NY DFS to determine that there has been a disparate impact based on correlations – i.e., based on the proportion of protected class members impacted. If it did so, would the NY DFS be doing exactly what it prohibits insurers from doing? Rather, as discussed in "Disparate Impact and Unfairly Discriminatory Insurance Rates," the appropriate standard in insurance for the use of a data point is actuarial justification.

Letter No. 1 further imposes a dreary burden by requiring insurers to: 1) "determin[e] that the external tools or data sources do not collect or utilize prohibited criteria" and 2) "establish that the underwriting or rating guidelines are not unfairly discriminatory." (emphasis added). Letter No. 1 goes on to warn that "[a]n insurer may not simply rely on a vendor’s claim of non-discrimination or the proprietary nature of a third-party process as a justification for a failure to independently determine compliance with anti-discrimination laws. The burden remains with the insurer at all times." (emphasis added).

This burden on insurers is unprecedented and raises a storm of questions, including:

  • What constitutes an adequate determination that external data sources do not “collect or utilize prohibited criteria”?
  • Why must the insurer ensure a data source does not collect prohibited criteria if it is not used?
  • Has information on “community-level mortality” risen to the level of a protected class?
  • Would receipt of an actuarial opinion satisfy an insurer's obligations to independently determine compliance with anti-discrimination laws?

While Letter No. 1 acknowledges that the use of technology can "improve access to financial services" and "benefit insurers and consumers alike by simplifying and expediting life insurance sales and underwriting processes," imposing this blizzard of burdens on insurers will weigh down the development of the very processes that can yield these benefits.

Consumer Disclosure and Transparency

Letter No. 1 asserts that "[p]ursuant to Insurance Law § 4224(a)(2), insurers must notify the insured or potential insured of the right to receive the specific reason or reasons for a declination, limitation, rate differential or other adverse underwriting decision." (emphasis added). The NY DFS expands its icy grip by asserting that an adverse underwriting decision "include[s] the inability of an applicant to utilize an expedited, accelerated or algorithmic underwriting process in lieu of a traditional medical underwriting." Section 4224(a)(2), however, states only that an insurer "shall notify the insured or potential insured of the right to receive, or designate a medical professional to receive, the specific reason or reasons for such refusal [to insure], limitation [on insurance] or rate differential," and does not include any reference to "other adverse underwriting decisions."

Thus, consumer disclosure is required under Section 4224(a)(2) only with respect to insurability and cost and is not required with respect to the underwriting process. The language does not address whether "an expedited, accelerated, or algorithmic underwriting process in lieu of traditional medical underwriting" may be utilized.

The NY DFS ends Letter No. 1 by "reserv[ing] the right to audit and examine an insurer's underwriting criteria, programs, algorithms, and models." Only time will tell how much frostbite this blustery proclamation will inflict on insurers using external data sources and algorithmic underwriting as they tread through the storm.

We will continue to monitor the activities and guidance of the NY DFS related to insurers' use of external data sources and algorithmic underwriting.


[i] Michael J. Miller, Disparate Impact and Unfairly Discriminatory Insurance Rates, Casualty Actuarial Society E-Forum, Winter 2009, at 277, 287.

©2023 Carlton Fields, P.A. Carlton Fields practices law in California through Carlton Fields, LLP. Carlton Fields publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information and educational purposes only, and should not be relied on as if it were advice about a particular fact situation. The distribution of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship with Carlton Fields. This publication may not be quoted or referred to in any other publication or proceeding without the prior written consent of the firm, to be given or withheld at our discretion. To request reprint permission for any of our publications, please use our Contact Us form via the link below. The views set forth herein are the personal views of the author and do not necessarily reflect those of the firm. This site may contain hypertext links to information created and maintained by other entities. Carlton Fields does not control or guarantee the accuracy or completeness of this outside information, nor is the inclusion of a link to be intended as an endorsement of those outside sites.
