California Throws Shade at Insurance Industry’s Use of Big Data and Algorithms

Life, Annuity, and Retirement Solutions   |   Financial Services Regulatory   |   Technology   |   July 5, 2022

On June 30, 2022, the California Department of Insurance (CA DOI) released a bulletin titled “Allegations of Racial Bias and Unfair Discrimination in Marketing, Rating, Underwriting, and Claims Practices by the Insurance Industry” to remind insurers “of their obligation to market and issue insurance, charge premiums, investigate suspected fraud, and pay insurance claims in a manner that treats all similarly-situated persons alike.” The CA DOI posited that “conscious and unconscious bias or discrimination ... can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data’ (i.e., extremely large data sets analyzed to reveal patterns and trends).”

The CA DOI issued the bulletin casting doubt on the industry’s use of big data as a result of its investigation of “several recent examples of potential bias and alleged unfair discrimination in many lines of insurance resulting from the use of technology and data.” The CA DOI offered the following examples of alleged unfair discrimination:

  • Flagging claims from inner-city ZIP codes, referring them to the Special Investigative Unit, and denying or offering unreasonably low settlements.
  • Using facial recognition to influence whether to pay or deny claims.
  • Using biometric and other personal information unrelated to risk in marketing and underwriting insurance policies.

The bulletin makes clear that insurers are responsible for avoiding both conscious and unconscious bias or discrimination and specifically calls out "the use of purportedly neutral individual characteristics as a proxy for prohibited characteristics that results in racial bias, unfair discrimination, or disparate impact." The bulletin notes the CA DOI finds the following data points suspect:

  • Geographical data, homeownership data, credit information, education level, civil judgments, and court records because of "the strong potential to disguise bias and discrimination."
  • Retail purchase history, social media, internet use, geographic location tracking, the condition or type of an applicant's electronic devices, or how the consumer appears in a photograph because these factors are "arbitrary."

The bulletin reminds insurers of their obligation to:

  • Train their staff and conduct their own due diligence to ensure full compliance with all applicable laws, including laws prohibiting discrimination in:
    • Insurance ratemaking
    • Claims handling practices
    • Accepting insurance applications
    • Canceling or nonrenewing insurance policies
  • Provide consumers with the specific reason or reasons when a declination, limitation, premium increase, or other adverse action occurs.

The bulletin also warned that algorithms and models must have a sufficient actuarial nexus to the risk of loss. It further noted that even when the “models and data may suggest an actuarial nexus to risk of loss, unless a specific law expressly states otherwise, discrimination against protected classes of individuals is categorically and unconditionally prohibited.” The CA DOI reminded insurers that California’s Unruh Civil Rights Act expressly identifies protected classes of persons and makes clear:

All persons within the jurisdiction of this state are free and equal, and no matter what their sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, sexual orientation, citizenship, primary language, or immigration status are entitled to the full and equal accommodations, advantages, facilities, privileges, or services in all business establishments of every kind whatsoever.

The CA DOI reserves the right to audit and examine any insurer’s business practices, including marketing, rating, claims, and underwriting criteria, programs, algorithms, and models, and to take disciplinary action to ensure compliance.


©2022 Carlton Fields, P.A. Carlton Fields practices law in California through Carlton Fields, LLP. Carlton Fields publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information and educational purposes only, and should not be relied on as if they were advice about a particular fact situation. The distribution of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship with Carlton Fields. This publication may not be quoted or referred to in any other publication or proceeding without the prior written consent of the firm, to be given or withheld at our discretion. To request reprint permission for any of our publications, please use our Contact Us form via the link below. The views set forth herein are the personal views of the author and do not necessarily reflect those of the firm. This site may contain hypertext links to information created and maintained by other entities. Carlton Fields does not control or guarantee the accuracy or completeness of this outside information, nor is the inclusion of a link intended as an endorsement of those outside sites.
