Scrutiny of Algorithms and Consumer Data

With the growing use of algorithms and external consumer data, several national and international bodies have recently issued draft work product or proposed regulations, as follows:

  • The NAIC Accelerated Underwriting Working Group (AU WG) – which released a November 11, 2021, draft of its educational report for regulators to facilitate “understand[ing] the current state of the [insurance] industry and its use of accelerated underwriting.”
  • The NAIC Special (EX) Committee on Race and Insurance (Special Committee) – whose 2021/2022 charges include considering “the impact of traditional life insurance underwriting on traditionally underserved populations, considering the relationship between mortality risk and disparate impact.”
  • Colorado Division of Insurance (CO DOI) – which is developing regulations to implement new section 10-3-1104.9, effective September 7, 2021, which prohibits the use of external consumer data and information sources (external data), as well as algorithms and predictive models using external data (technology), in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression (protected status).
  • The White House Office of Science and Technology Policy (White House OSTP) – which is assessing “exhibited and potential harms of a particular biometric technology” as part of its October 8, 2021, information request.
  • The European Parliament and the Council of the European Union (EU Parliament) – whose April 21, 2021, proposal “laying down harmonised rules on artificial intelligence” (EU AI Regulation) recognizes “the right to dignity and non-discrimination and the values of equality and justice.”
  • The Cyberspace Administration of China (China Cyber Admin) – which on August 27, 2021, issued a 30-point proposal regarding “algorithm recommendation management regulations.”

These bodies’ work includes the following themes: (i) prohibiting unfair discrimination; (ii) promoting fairness and transparency; and (iii) requiring governance programs.

Unfair Discrimination

The various bodies are addressing the potential for unfair discrimination in the use of algorithms and external consumer data, as follows:

What May Be Unfair Discrimination

  • Colorado section 10-3-1104.9 imposes a three-prong test to determine whether unfair discrimination exists:

1. The use of external data or technology has a correlation to a protected status;

2. The correlation results in a disproportionately negative outcome for such protected status; and

3. The negative outcome exceeds the reasonable correlation to the underlying insurance practice, including losses or costs for underwriting.

The Colorado commissioner is required to make rules implementing section 10-3-1104.9 and to hold stakeholder meetings, which are expected to begin in January 2022. In addition, the required rules, which may provide further guidance on unfair discrimination, must (i) provide a reasonable time for insurers to remedy any unfairly discriminatory impact of any employed technology and (ii) allow for the use of external data and technology that has been found not to be unfairly discriminatory.
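To make the three-prong test concrete, the following is a minimal Python sketch of how an insurer’s analytics team might encode it as a screening check. All names and inputs are hypothetical illustrations; section 10-3-1104.9 does not prescribe any particular metrics or implementation, and establishing each prong (for example, what counts as a disproportionately negative outcome) requires statistical and legal judgment that this sketch does not capture.

    from dataclasses import dataclass

    @dataclass
    class ProngFindings:
        # Hypothetical stand-ins for analyses an insurer might run;
        # the statute does not prescribe how each prong is established.
        correlates_with_protected_status: bool       # prong 1
        disproportionately_negative_outcome: bool    # prong 2
        justified_by_insurance_practice: bool        # prong 3 (losses/costs)

    def is_unfairly_discriminatory(f: ProngFindings) -> bool:
        """All three prongs must be met: a correlation to a protected
        status that produces a disproportionately negative outcome
        exceeding any reasonable relationship to the underlying
        insurance practice."""
        return (
            f.correlates_with_protected_status
            and f.disproportionately_negative_outcome
            and not f.justified_by_insurance_practice
        )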

  • The AU WG’s draft educational report (i) warns that due “to the fact accelerated underwriting relies on predictive models or machine learning algorithms, it may lead to unexpected or unfairly discriminatory outcomes even though the input data may not be overtly discriminatory” and (ii) expresses concern with the use of a consumer’s behavioral data, including “gym membership, one’s profession, marital status, family size, grocery shopping habits, wearable technology, and credit attributes” because “[a]lthough medical data has a scientific linkage with mortality, behavioral data may lead to questionable conclusions as correlation may be confused with causation.”
  • The EU AI Regulation specifically notes that AI systems “used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems” because they “may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts.”

The EU AI Regulation also includes “specific requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to the design and the quality of data sets used for the development of AI systems complemented with obligations for testing, risk management, documentation and human oversight throughout the AI systems’ lifecycle.”

Additional Study

Workstream 4 of the Special Committee will address unfair discrimination, disparate treatment, proxy discrimination, and disparate impact in insurance underwriting in a proposed white paper.

The White House OSTP is seeking information to assess “exhibited and potential harms of a particular biometric technology,” including “harms due to disparities in effectiveness of the system for different demographic groups.”

Fairness and Transparency

The AU WG, the EU AI Regulation, and the China Cyber Admin seek to ensure the use of algorithms and consumer data is fair and transparent.

Additional Guidance

  • AU WG’s Educational Report offers the following measures that can be taken: (i) ensure that data inputs are transparent, accurate, and reliable, and that the data itself does not have any unfair bias; (ii) ensure that the external data sources, algorithms, or predictive models are based on sound actuarial principles with a valid explanation or rationale for any claimed correlation or causal connection; (iii) be able to provide the reason(s) for an adverse underwriting decision to the consumer, along with all information upon which the insurer based its adverse underwriting decision (a hypothetical sketch of how such reasons might be surfaced follows this list); and (iv) be able to produce information upon request as part of regular rate and policy reviews or market conduct examinations.
  • EU AI Regulation notes that “[h]igh-risk AI systems should ... be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.”
  • China Cyber Admin seeks to require that “[c]ompanies must disclose the basic principles of any algorithm recommendation service, explaining the purpose and mechanisms for recommendations in a ‘conspicuous’ manner.”
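As referenced in measure (iii) above, an insurer needs a way to translate a model-driven adverse decision into consumer-facing reasons. The following is a minimal Python sketch under the assumption that the underwriting model can produce per-feature contributions to its risk score (e.g., coefficients of a linear model or attribution values). The feature names, values, and cutoff are invented for illustration, and the AU WG report does not prescribe any particular method.

    def adverse_decision_reasons(contributions: dict, top_n: int = 3) -> list:
        """Given a mapping of feature -> contribution to an adverse risk
        score, return the top_n features that pushed the decision toward
        denial (i.e., the largest positive contributions)."""
        adverse = sorted(
            ((name, value) for name, value in contributions.items() if value > 0),
            key=lambda item: item[1],
            reverse=True,
        )
        return [name for name, _ in adverse[:top_n]]

    # Hypothetical attribution values for a declined applicant:
    print(adverse_decision_reasons(
        {"bmi_estimate": 0.42, "rx_history_flag": 0.31,
         "credit_attribute": 0.07, "age": -0.12}
    ))  # -> ['bmi_estimate', 'rx_history_flag', 'credit_attribute']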

Governance Program

The various bodies believe those using algorithms and consumer data must design and implement governance programs to properly monitor and evaluate such use.

  • AU WG’s Educational Report recommends that a governance program should (i) ensure that the predictive models or machine learning algorithms within accelerated underwriting have an intended outcome and that the outcome is being achieved; (ii) ensure that the predictive models or machine learning algorithms achieve an outcome that is not unfairly discriminatory; and (iii) have a mechanism to correct mistakes if found (one hypothetical monitoring check is sketched after this list).
  • Colorado section 10-3-1104.9 requires insurers to (i) establish and maintain a risk management framework reasonably designed to determine, to the extent practicable, whether the insurer’s use of external data and technology unfairly discriminates against a protected status; (ii) assess the risk management framework; and (iii) obtain officer attestations as to the implementation of the risk management framework. At the NAIC Fall National Meeting, Commissioner Conway explained that Colorado intentionally places the burden of monitoring and testing on insurers because Colorado does not have the resources or expertise to do so.
  • EU AI Regulation requires “appropriate human oversight measures” and specifies that “such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.”
  • China Cyber Admin’s proposal will require providers to “regularly assess and test their algorithms and data to avoid models that will induce users’ obsessive behaviors, excessive spending or other behaviors that violate public order and morality.”
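To illustrate item (ii) of the AU WG’s recommendations above, the following is a minimal Python sketch of one outcome-monitoring check a governance program might run: the adverse impact ratio, i.e., each group’s approval rate divided by the most-favored group’s approval rate. The 0.8 review threshold is borrowed from the EEOC’s “four-fifths” heuristic purely as an assumption; none of the bodies discussed here mandates this metric or threshold, and the group labels and counts are invented.

    def adverse_impact_ratios(outcomes: dict) -> dict:
        """outcomes maps group -> (approved, total). Returns each group's
        approval rate divided by the highest group's approval rate."""
        rates = {g: approved / total for g, (approved, total) in outcomes.items()}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Invented application counts for two demographic groups:
    ratios = adverse_impact_ratios({"group_a": (80, 100), "group_b": (56, 100)})
    flagged = {g: r for g, r in ratios.items() if r < 0.8}  # assumed review threshold
    # group_a's ratio is 1.0; group_b's ratio (about 0.70) falls below the
    # assumed 0.8 threshold and would warrant review and possible remediation.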

Insurers need to consider what consumer data and algorithms are being used throughout all areas of the company, including marketing, product design, underwriting, administrative services, claims, and fraud units; what measures are in place to address unfair discrimination and to promote fairness and transparency; and what governance is in place or may need to be enhanced.
