Artificial Intelligence in Insurance: Risks, Rewards, & Regulations

While AI has the potential to transform the industry, managing its risks through effective regulation and oversight is imperative.

The insurance industry, like many others, has embraced the rapid growth and development of artificial intelligence (AI) and machine learning (ML) technologies. AI promises benefits such as increased efficiency, enhanced customer satisfaction, and reduced costs. Alongside these advantages, however, come significant regulatory challenges that must be navigated to ensure that insurers' use of AI does not harm consumers and complies with existing and forthcoming laws and regulations. Understanding how AI can be used, and the risks inherent in its use, is necessary for regulators to develop AI governance programs that balance oversight and regulation with innovation in this developing field of technology.

The successful implementation of AI in the insurance industry is due in part to the central role that data plays in insurance. "Big Data" is a term used to describe very large sets of data and the trends and patterns within those sets.[1] ML systems, predictive models, and generative AI require big data to effectively identify patterns, trends, and associations, and they have thrived in insurance because the industry is so reliant on Big Data.[2] How and what types of data insurance companies analyze is an area of concern for regulators, but the abundance of extensive data sets is a major contributor to AI's continued effectiveness in this industry.

This article discusses the integration of AI into the insurance industry, focusing on the balance between its transformative potential and its associated risks. It examines the regulatory landscape, including existing and emerging guidelines and legislation designed to mitigate risks such as bias, discrimination, and lack of transparency. It also addresses the legal challenges and class action lawsuits arising from AI usage, emphasizing the importance of ethical AI use and compliance with regulatory standards. Ultimately, this article aims to explain how insurers can ensure fair and equitable outcomes for consumers while leveraging the full potential of AI.

I. Uses and Identified Risks of AI

The incorporation of AI into insurance operations spans numerous facets of the industry, including underwriting, claims processing, fraud detection and prevention, and customer service. The National Association of Insurance Commissioners (NAIC) Big Data and Artificial Intelligence (H) Working Group surveyed 194 home insurers from 10 states in August of 2023. Of the reporting companies, 70% use, plan to use, or plan to explore using AI/ML models. This figure falls between the 88% of reporting private passenger auto (PPA) insurers and the 58% of reporting life insurers surveyed in December of 2022 and November of 2023, respectively.[3] Across insurance functions, home insurers' use of AI ranged from 54% down to just 14%. In descending order, the percentage of home insurance companies using AI/ML models by insurance function was: 54% in claims, 47% in both underwriting and marketing, 42% in fraud detection, 35% in rating, and 14% in loss prevention.
The more specific uses of AI/ML models reported by home, life, and PPA insurers included claims triage, image evaluation to determine loss, referring claims for further investigation, risk class assignment, and targeted online advertising.[4]

While the use of AI across different areas of the insurance industry offers numerous benefits, it also introduces several challenges. As insurers continue to integrate AI/ML models into their operations, it is crucial to recognize and understand the potential risks that accompany these technologies. AI models and algorithms are trained and developed on large amounts of historical data. That data can contain bias which, if not addressed, leads to discriminatory practices and adverse outcomes in underwriting, pricing, and claims processing. The possibility of discrimination, combined with the complexity of AI systems and the programs built on them, can also make it difficult to understand how insurance decisions are being made.[5] The resulting lack of transparency has become another focal point for regulators. In the NAIC survey of home insurers, only 10% of companies said they provide their customers with information about the types and purpose of data being used beyond what is required by law.[6] Although a major benefit of AI is automation and increased speed and efficiency, insurers must remain accountable for, and aware of, how AI-generated decisions are made should a consumer or regulator raise questions or concerns about those decisions.

II. NAIC Model Guidelines Regarding AI

As AI plays a greater role in insurance, regulatory bodies and organizations have begun to issue guidance on its appropriate use throughout the industry. In December of 2023, the NAIC adopted a model bulletin titled "Use of Artificial Intelligence Systems by Insurers." The bulletin was issued by the NAIC's Innovation, Cybersecurity, and Technology (H) Committee, comprised of representatives of insurance regulatory bodies from fourteen states, the District of Columbia, and Puerto Rico. The bulletin reminds insurers that AI-driven decisions and actions impacting consumers must comply with all applicable laws and regulations, chiefly a state's prohibitions against unfair trade and unfair claims settlement practices. A state's unfair trade practices act defines what constitutes unfair or deceptive acts, practices, or methods of competition and prohibits them as such, while a state's unfair claims settlement practices act establishes standards for the investigation and resolution of a policyholder's insurance claim.[7] The model bulletin, if adopted, largely does not modify or impose new requirements in relation to these acts, as an insurer's actions should never violate these laws regardless of the method used to make or support a decision. Rather, the model bulletin expects insurers to adopt governance frameworks and risk management protocols designed to ensure the responsible use of AI. The first state to adopt the model bulletin was Alaska. Arkansas, Connecticut, Delaware, the District of Columbia, Illinois, Iowa, Kentucky, Maryland, Massachusetts, Michigan, Nebraska, Nevada, New Hampshire, New Jersey, North Carolina, Oklahoma, Pennsylvania, Rhode Island, Vermont, Virginia, Washington, and West Virginia have followed suit.[8] The guidance and expectations of the model bulletin can be generalized into a few key principles (illustrated in the sketch following this list):

- AI-driven decisions affecting consumers must comply with all applicable laws, including unfair trade practices and unfair claims settlement practices acts;
- insurers should develop, implement, and maintain a written program for the responsible use of AI systems;
- that program should include governance structures, risk management controls, and internal audit functions commensurate with the insurer's use of AI and the risk of adverse consumer outcomes;
- insurers remain responsible for AI systems and data acquired from third parties and should conduct appropriate due diligence and oversight of those vendors; and
- insurers should be prepared to document their AI programs and demonstrate compliance in response to regulatory inquiries, investigations, and market conduct examinations.[9]
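To make these expectations concrete, the following is a minimal, hypothetical sketch in Python of the kind of AI-system inventory record a written governance program might maintain. The field names and review rule are illustrative assumptions drawn from the bulletin's themes (governance, third-party oversight, periodic testing), not a schema prescribed by the NAIC.

```python
# Hypothetical AI-system inventory record; fields are illustrative
# assumptions, not a prescribed NAIC schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str                       # internal model identifier
    insurance_function: str         # e.g., "claims triage", "underwriting"
    third_party_vendor: str | None  # vendor name, or None if built in-house
    data_sources: list[str] = field(default_factory=list)
    last_bias_test: date | None = None
    accountable_owner: str = ""     # person answerable to regulators

    def needs_review(self, max_age_days: int = 365) -> bool:
        """Flag systems whose last bias/outcome test is stale or missing."""
        if self.last_bias_test is None:
            return True
        return (date.today() - self.last_bias_test).days > max_age_days

# Illustrative, fabricated inventory entries.
inventory = [
    AISystemRecord("claims-triage-v2", "claims triage", "AcmeML Inc.",
                   ["FNOL notes", "claim photos"], date(2024, 11, 1), "J. Doe"),
    AISystemRecord("uw-score-v1", "underwriting", None,
                   ["credit-based insurance score"], None, "A. Smith"),
]
for rec in inventory:
    print(rec.name, "-> review needed:", rec.needs_review())
```

A record like this gives an insurer a ready answer to the bulletin's documentation expectation: for any consumer-facing AI system, who owns it, what data feeds it, which vendor supplied it, and when it was last tested.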
III. State Regulations

Since the NAIC's release of the model bulletin, states that have issued guidance on the use and regulation of AI in the insurance industry have mainly done so by adopting the model bulletin, most virtually word-for-word. Before the model bulletin's release, however, several states had passed laws, regulations, and guidance addressing how insurers may use AI. Colorado was one of the first states to enact legislation in this area. Effective as of 2021, C.R.S. § 10-3-1104.9 focuses on protecting consumers from unfair discrimination based on a protected class. The statute provides in part:

Rules adopted pursuant to this section must require each insurer to:
(I) Provide information to the commissioner concerning the external consumer data and information sources used by the insurer in the development and implementation of algorithms and predictive models for a particular type of insurance and insurance practice;
(II) Provide an explanation of the manner in which the insurer uses external consumer data and information sources, as well as algorithms and predictive models using external consumer data and information sources, for the particular type of insurance and insurance practice;
(III) Establish and maintain a risk management framework or similar processes or procedures that are reasonably designed to determine, to the extent practicable, whether the insurer's use of external consumer data and information sources, as well as algorithms and predictive models using external consumer data and information sources, unfairly discriminates based on [protected classes].[10]

In 2023, pursuant to C.R.S. § 10-3-1104.9, Colorado issued 3 CCR 702-10, which established requirements for governance and risk management frameworks related to the use of external consumer data and information sources (ECDIS). The regulation specifically targets life insurers and their use of ECDIS that could result in unfair discrimination on the basis of race. Generally, the framework must be designed to determine whether unfair discrimination could result and, if so, remediate it; it must also include documentation and oversight requirements. More specifically, a governance and risk management framework should document the policies and procedures for ongoing oversight, addressing consumer complaints, detecting unfair discrimination, and selecting third-party vendors, to name just a few. Reports summarizing an insurer's compliance with the regulation are now required annually.

Colorado has also recently enacted another piece of AI legislation. The Consumer Protections for Artificial Intelligence Act, C.R.S. §§ 6-1-1701 to 6-1-1707, was passed on May 17, 2024, and takes effect on February 1, 2026. The act governs the developers and deployers of AI systems, rather than users of AI, and requires them to use reasonable care to protect consumers from "algorithmic discrimination." It targets developers and deployers of AI systems that are used in making "consequential decisions," that is, decisions that significantly affect consumers, either legally or otherwise, in relation to the denial, cost, or terms of insurance, as well as in other industries.
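Before turning to other states, it is worth considering what a test "reasonably designed to determine . . . whether . . . unfairly discriminates" might look like in practice. The following is a minimal, hypothetical Python sketch of one such quantitative check: an adverse impact ratio comparing a model's approval rates across groups. The four-fifths (80%) threshold is borrowed from federal employment-discrimination guidance and is used here only as an illustrative benchmark, not as a legal standard for insurance; the data and group labels are fabricated.

```python
# Illustrative adverse-impact check; the 80% threshold and all data
# are assumptions for demonstration, not a regulatory standard.
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    reference = max(rates.values())  # most-favored group's approval rate
    return {g: rate / reference for g, rate in rates.items()}, rates

# Fabricated sample: group A approved 80% of the time, group B 55%.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)

ratios, rates = adverse_impact_ratios(sample)
for group, ratio in sorted(ratios.items()):
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: approval {rates[group]:.0%}, ratio {ratio:.2f} [{flag}]")
```

Run on the fabricated sample, group B's ratio of roughly 0.69 falls below the illustrative 0.8 line and would be flagged for the remediation and documentation steps the Colorado framework contemplates.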
In 2022, California issued Bulletin 2022-5, stating that the Department of Insurance has been made aware of, and continues to investigate, allegations of racial bias and discrimination across the insurance industry resulting from insurance companies' use of artificial intelligence and other forms of Big Data. It provides, in part:

Although the responsible use of data by the insurance industry can improve customer service and increase efficiency, technology and algorithmic data are susceptible to misuse that results in bias, unfair discrimination, or other unconscionable impacts among similarly-situated consumers. A growing concern is the use of purportedly neutral individual characteristics as a proxy for prohibited characteristics that results in racial bias, unfair discrimination, or disparate impact. The greater use by the insurance industry of artificial intelligence, algorithms, and other data collection models have resulted in an increase in consumer complaints relating to unfair discrimination in California and elsewhere.[11]

The allegations include unfairly flagging claims in certain zip codes and then denying those claims or offering unreasonably low settlements, as well as using biometric data from facial recognition technology to influence the payment or denial of claims. The Bulletin accordingly urges insurers to conduct their own due diligence, before and while using artificial intelligence and similar technology, to ensure full compliance with all applicable laws, particularly those prohibiting discriminatory practices.[12]

More recently, California Senate Bill 1120, the Physicians Make Decisions Act, took effect on January 1, 2025. The Act amended California's existing law governing the use of utilization review and utilization management functions by healthcare service plans and disability insurers. It seeks to regulate the increased use of AI in healthcare, particularly in reviewing, approving, modifying, delaying, or denying requests for healthcare services based on medical necessity. The Act ensures human oversight by requiring that any denial, delay, or modification of healthcare services based on medical necessity be made only by a licensed physician or a healthcare professional competent to evaluate the relevant clinical issues (a simple sketch of this routing rule follows the list below). Any determination made by AI, algorithms, or other software tools may not rely solely on a group dataset but must consider the following, as applicable:

- the enrollee's medical or other clinical history;
- the individual clinical circumstances as presented by the requesting provider; and
- other relevant clinical information contained in the enrollee's medical record.
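As a minimal sketch of the human-in-the-loop rule the Act appears to contemplate, the Python snippet below routes any adverse algorithmic recommendation to clinical review while permitting automated approvals. The function and category names are hypothetical illustrations, not statutory terms or any insurer's actual workflow.

```python
# Hypothetical routing rule: an algorithm may approve, but any denial,
# delay, or modification on medical-necessity grounds is routed to a
# licensed physician or qualified clinical reviewer. Names are illustrative.
from enum import Enum

class AIRecommendation(Enum):
    APPROVE = "approve"
    DENY = "deny"
    DELAY = "delay"
    MODIFY = "modify"

def route_utilization_decision(ai_rec: AIRecommendation) -> str:
    """Return who may finalize the utilization determination."""
    if ai_rec is AIRecommendation.APPROVE:
        return "automated approval permitted"
    # Adverse medical-necessity determinations require clinical review.
    return "route to licensed physician / qualified clinical reviewer"

for rec in AIRecommendation:
    print(f"{rec.value:>8}: {route_utilization_decision(rec)}")
```

The design point is asymmetry: automation handles the favorable path, while every adverse path terminates with an accountable human decision-maker, which is what the Act's physician-review requirement demands.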
In addition, the Act imposes disclosure requirements and subjects the systems in use to possible audit or compliance review.

In 2019, New York released Insurance Circular Letter No. 1 to address the use of ECDIS in underwriting for life insurance. It reminds insurers that they should not utilize ECDIS until they have determined that the data does not collect or rely on prohibited criteria, such as information pertaining to protected classes, and does not yield unfairly discriminatory results. On July 11, 2024, the New York Department of Financial Services published another, more in-depth circular letter, Insurance Circular Letter No. 7, titled "Use of Artificial Intelligence and External Consumer Data and Information Sources in Insurance Underwriting and Pricing." It advises insurers to ensure that the ECDIS and AI systems they use do not rely on or incorporate protected classes in any way and do not unfairly discriminate. Insurers are expected to be able to demonstrate that the ECDIS they use are supported by "generally accepted actuarial standards," are based on "actual or reasonably anticipated experience," and "demonstrate a statistically significant, rational and not unfairly discriminatory relationship between the variables used and the relevant risk." The guidance lays out a three-step process for assessing whether the ECDIS used are unfairly discriminatory, requires insurers to document their policies and procedures relating to the use and analysis of AI systems and ECDIS, and requires insurers to consider risks both from individual systems and in the aggregate. It also emphasizes the need for transparency and disclosure when insurers make adverse underwriting decisions. Specifically, where an insurer uses predictive models or ECDIS to make an adverse underwriting decision, it must provide the consumer with a specific reason for the decision, including details about the information used to make the decision and the source of that information.
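What a "statistically significant . . . relationship between the variables used and the relevant risk" might look like in practice can be sketched with a standard significance test. The Python example below fits a logistic regression of loss outcome on a single candidate rating variable and inspects the coefficient's p-value. The variable names, synthetic data, and 0.01 threshold are illustrative assumptions, not the Department's prescribed methodology, and significance alone would not satisfy the circular letter's separate fairness analysis.

```python
# Hypothetical actuarial-support check: is a candidate variable
# significantly associated with loss experience? Data is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Synthetic portfolio: one candidate external variable and simulated losses
# generated from an assumed true relationship.
candidate_var = rng.normal(size=n)
true_logit = -2.0 + 0.4 * candidate_var
had_loss = rng.random(n) < 1 / (1 + np.exp(-true_logit))

X = sm.add_constant(candidate_var)               # intercept + variable
model = sm.Logit(had_loss.astype(float), X).fit(disp=False)

coef, pval = model.params[1], model.pvalues[1]
print(f"coefficient: {coef:.3f}, p-value: {pval:.4g}")
if pval < 0.01:
    print("Statistically significant association with loss experience;")
    print("a separate unfair-discrimination analysis is still required.")
else:
    print("No significant association; the variable lacks actuarial support.")
```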
The responses of some states, like Colorado, have been more in-depth than others. The bulletin issued by the Texas Department of Insurance in 2020, Commissioner's Bulletin # B-0036-20, is less than 200 words. While succinct, it is just as effective as any other in reminding insurers that they are responsible for the accuracy of data used in rating, underwriting, and claims processing, regardless of where the data comes from.

IV. Future of Regulatory Frameworks Addressing AI

Some states may follow in Colorado's footsteps and enact comprehensive legislative frameworks governing the ethical use of AI. It is likely, however, that many states that have yet to adopt a regulatory framework addressing the use of AI in insurance will continue to adopt the NAIC model bulletin. In February of 2024, the American Property Casualty Insurance Association (APCIA) commented on Connecticut Senate Bill No. 2, entitled "An Act Concerning Artificial Intelligence," raising concerns about the proposed legislation. The Act is not aimed solely at insurers; it mandates that all developers and deployers of high-risk AI systems take reasonable measures to safeguard Connecticut residents from any known or likely risks of algorithmic discrimination. The APCIA warned that because the Connecticut Insurance Department has already issued regulations addressing AI, passing legislation on the same topic could result in duplicative and conflicting regulations, complicating compliance.[13]

While states continue to explore and implement new regulations, the insurance industry is already grappling with significant legal challenges under existing laws. The increasing scope of regulation in this area will only intensify the scrutiny and legal risks for insurers that are not careful with their use of AI. Class action lawsuits over the insurance industry's use of data and artificial intelligence have already begun. Three class actions have been filed separately against health insurers Cigna, Humana, and UnitedHealthcare, all by Clarkson, a public interest law firm in California.[14] The suits allege that the insurers' reliance on AI algorithms has resulted in the wrongful denial or premature termination of coverage for healthcare services. More specifically, the complaint filed against Cigna states that the basis for the suit is "Cigna's illegal scheme to systematically, wrongfully, and automatically deny its insureds the thorough, individualized physician review of claims guaranteed to them by law and, ultimately, the payments for necessary medical procedures owed to them under Cigna's health insurance policies."[15]

Litigation concerning the insurance industry's use of AI is not a new development. There are over ninety published court decisions, dating back to the late 1990s, that mention or name Colossus, an AI-powered software program created by Computer Sciences Corporation to evaluate injuries and calculate the value of insurance claims.[16] Allstate, which has been named in numerous lawsuits over its use of Colossus, used the program to make settlement offers for bodily injury claims.[17] In 2010, Allstate agreed to pay $10 million to forty-five states in a regulatory settlement following an examination by the NAIC.[18] The examination found that Allstate failed to "modify or 'tune' the software in a uniform and consistent manner across its claims handling regions."[19] The software has also been criticized for allegedly underestimating and underpaying insurance claims, leading to accusations that Allstate engaged in unfair trade practices.[20]

It is crucial for insurers in all sectors of the industry to monitor the outcomes of past and pending cases closely to understand the fine line between leveraging technology for efficiency and violating legal and ethical standards. As regulatory frameworks become more stringent and public scrutiny increases, we can expect a rise in lawsuits targeting insurers that fail to implement AI responsibly. Automation and AI can undoubtedly enhance an insurance company's capabilities, but when these tools are used in ways that undermine consumers' rights and legal protections, they invite significant legal scrutiny. Insurers must verify that their use of AI complies with regulations while providing the human oversight necessary to maintain fairness and accuracy. Adapting to emerging standards is critical to maintaining trust and integrity within the insurance industry, and insurers should proactively review forthcoming decisions and adjust their AI practices accordingly to avoid similar legal challenges.
As AI has become an increasingly prominent subject of litigation, the market has responded with products that provide insurance coverage for AI usage. Because the integration of AI has created new risks and potential legal exposures, insurers and businesses alike have begun mitigating those risks through specialized insurance policies and products that address the unique liabilities associated with AI. This evolving sector of insurance requires a deeper understanding of how AI-related risks can be managed and insured, ensuring that as the technology advances, adequate protections are in place to safeguard both designers and users of AI.

V. Conclusion

The use of AI and related technologies is rapidly becoming integral to the insurance industry, and that growth will only continue as the technology and its benefits are refined. With increased use, however, comes increased risk. Regulatory bodies must be vigilant yet careful: overregulation could stifle the development and growth of existing and new technologies, but if left unchecked, AI can result in, and has resulted in, adverse and unfair outcomes for consumers. Insurers and technology developers must do their part as well. As AI use progresses, prioritizing transparency, accountability, and fairness is essential. Insurers must adopt robust governance and risk management frameworks to monitor and control AI use effectively, which in turn will maintain compliance with legal standards, protect consumer rights, and mitigate potential legal challenges. While AI has the potential to transform the industry, managing its risks through effective regulation and oversight is imperative. If regulatory compliance can be achieved without completely sacrificing automation, the industry can harness AI's benefits while ensuring fair and equitable outcomes and continued growth.

This article was originally published in the May 2025 issue of DRI's For The Defense magazine, which is available at https://digitaleditions.walsworth.com/publication/?i=847119&p=26&view=issueViewer.

Author Biographies:

R. Brandon McCullough is a Director and Shareholder at Houston Harbaugh, P.C. in Pittsburgh, PA, where he focuses his practice on insurance coverage and bad faith litigation. Mr. McCullough represents insurers in a wide array of coverage and bad faith disputes involving numerous types of commercial and personal lines policies, including disputes over coverage for construction defects, environmental contamination, fires, equipment failures, latent and progressive injuries, sexual abuse claims, healthcare liability, and trucking accidents. Mr. McCullough is currently the Vice Chair of the DRI Insurance Law Committee and previously served as Chair of the 2023 DRI Bad Faith and Extra-Contractual Liability Seminar.

Taylor Hinds is a Law Clerk at Houston Harbaugh, P.C. in Pittsburgh, Pennsylvania. Taylor is a J.D. candidate completing her final year at the University of Pittsburgh School of Law. Set to graduate in May of 2025, she is serving as a Teaching Assistant for Legal Writing and has served as President of the Sports and Entertainment Law Society and Community Liaison for the Pitt Law Women's Association. Taylor earned her Bachelor of Arts in Legal Studies from the University of Pittsburgh in 2022, graduating Magna Cum Laude with a minor in Political Science and a Sport Studies Certificate.
[1] Daniel A. Cotter, Esq., Artificial Intelligence Impacts on the Insurance Industry, 34 FORC Journal of Ins. L. and Regul. 10, 11 (2023).
[2] Id.
[3] Nat'l Ass'n of Ins. Comm'rs, 2022-23 Home Artificial Intelligence/Machine Learning Survey Analysis, 2 (2023), available at https://content.naic.org/sites/default/files/committee_related_documents/Home%20Survey%20Memo%20to%20BDAIWG.pdf; Nat'l Ass'n of Ins. Comm'rs, 2023 Life Artificial Intelligence (AI)/Machine Learning (ML) Survey Analysis, 2 (2023), available at https://content.naic.org/sites/default/files/committee_related_documents/Life%20Survey%20Memo%20to%20BDAIWG_Posted121423.pdf.
[4] Nat'l Ass'n of Ins. Comm'rs, 2022-23 Home AI/ML Survey Analysis, supra, at 2.
[5] Fred E. Karlinsky, Esq. et al., Balancing Innovation and Regulation in an Insurance Market Driven by Artificial Intelligence, 34 FORC Journal of Ins. L. and Regul. 2, 2 (2024).
[6] Nat'l Ass'n of Ins. Comm'rs, 2022-23 Home AI/ML Survey Analysis, supra, at 5.
[7] Nat'l Ass'n of Ins. Comm'rs, NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (Dec. 4, 2023), available at https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-ai-model-bulletin.pdf; Nat'l Ass'n of Ins. Comm'rs, 2024 Membership: Innovation, Cybersecurity, and Technology (H) Committee (2024), available at https://content.naic.org/sites/default/files/inline-files/2024%20H-Innovation%2C%20Cybersecurity%20and%20Technology%20Cmte_5.pdf.
[8] Nat'l Ass'n of Ins. Comm'rs, Implementation of NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (Mar. 3, 2025), available at https://content.naic.org/sites/default/files/cmteh-big-data-artificial-intelligence-wg-aimodel-bulletin.pdf.pdf.
[9] Nat'l Ass'n of Ins. Comm'rs, NAIC Model Bulletin, supra.
[10] C.R.S. § 10-3-1104.9(b).
[11] Cal. Dep't of Ins., Bulletin 2022-5 (June 30, 2022).
[12] Id.
[13] Kristina Baldwin, Vice President, APCIA, Testimony Regarding 2024 SB-00002, Conn. Gen. Assemb. (Feb. 15, 2024), available at https://www.cga.ct.gov/2024/gldata/TMY/2024SB-00002-R000229-Baldwin,%20Kristina,%20Vice%20President-APCIA--TMY.PDF.
[14] Kisting-Leung, et al. v. Cigna Corp., et al., No. 2:23-cv-01477 (E.D. Cal. July 24, 2023); Barrows, et al. v. Humana, Inc., No. 3:23-cv-00654 (W.D. Ky. Dec. 12, 2023); Estate of Gene B. Lokken, et al. v. UnitedHealth Group, Inc., et al., No. 0:23-cv-3514 (D. Minn. Nov. 14, 2023).
[15] See Class Action Compl. at 1, Kisting-Leung, No. 2:23-cv-01477.
[16] Melissa M. D'Alelio & Taylore Karpa Schollard, Colossus and Xactimate: A Tale of Two AI Insurance Software Programs, American Bar Association (Feb. 7, 2020), available at https://www.americanbar.org/groups/tort_trial_insurance_practice/resources/brief/archive/colossus-xactimate-tale-two-ai-insurance-software-programs/.
[17] Id.
[18] Allstate to Pay $10 Million to Settle "Colossus" Controversy, 24 Westlaw Journal Software Law 5 (2010).
[19] Id.
[20] Quynh Truong v. Allstate Ins. Co., 227 P.3d 73, 76 (N.M. 2010).