DFS Superintendent Harris has proposed guidance on the use of artificial intelligence (AI) aimed at counteracting discrimination.
This circular letter applies to all insurers authorized to underwrite insurance in New York State.
In a preview of what’s to come from U.S. regulators, the New York Department of Financial Services recently released proposed rules on how insurance companies should use artificial intelligence and alternative data in underwriting and pricing.
The NYDFS circular expects insurers to establish governance protocols for AI systems and for so-called external consumer data and information sources (ECDIS), and to conduct fairness tests before putting predictive models and variables into use. Currently, insurers are not subject to any testing requirements when using AI for underwriting or pricing.
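The circular does not prescribe a specific fairness metric, but one widely used check is the adverse impact ratio, where a ratio of approval rates below roughly 0.8 (the "four-fifths rule") is treated as a warning sign of disparate impact. The sketch below is illustrative only; the group labels and decision data are invented, and real fairness testing would cover many metrics and protected classes.

```python
def adverse_impact_ratio(decisions, groups, favorable="approve",
                         protected="B", reference="A"):
    """Ratio of the protected group's approval rate to the reference
    group's. A common rule of thumb flags ratios below 0.8 as a
    potential sign of disparate impact. Group labels are hypothetical;
    the NYDFS circular does not mandate this particular metric."""
    def rate(group):
        total = sum(1 for g in groups if g == group)
        fav = sum(1 for d, g in zip(decisions, groups)
                  if g == group and d == favorable)
        return fav / total if total else 0.0

    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else float("inf")

# Toy underwriting decisions tagged with an invented group label.
decisions = ["approve", "approve", "deny", "approve",
             "deny", "deny", "approve", "deny"]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = adverse_impact_ratio(decisions, groups)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33
```

On this toy data the protected group is approved a third as often as the reference group, which would fail a four-fifths-rule screen and trigger further review.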
The proposed rules, announced late last month, apply to all insurance sectors, including property and casualty insurance, health insurance, life insurance, auto insurance, and home insurance.
“The Department expects insurers to use new technologies, such as artificial intelligence, in a manner that complies with all applicable federal and state laws, rules, and regulations,” the letter reads. The Department acknowledged that new technology could benefit both insurers and consumers by simplifying and speeding up procedures and potentially leading to more accurate underwriting and pricing. However, the letter also warns that the data and models underlying such technology can reflect systemic biases, and that their use can reinforce and exacerbate inequality.
“This raises serious concerns about the potential for unfair adverse effects and discriminatory decision-making,” the Department’s notice said. “ECDIS vary in accuracy and reliability, and some are provided by companies that are not subject to regulatory oversight or consumer protections.”
The letter states that the self-learning behavior of AI systems (AIS) increases the risk of “inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes” that could disproportionately affect vulnerable communities and individuals, or otherwise undermine New York’s insurance market.
New York follows the example of the EU
New York state is following in the footsteps of the European Union with rules on the use of AI, and Colorado is proposing similar regulations for insurance companies.
The rules track some of the industry’s key controversies over the use of AI. Last summer, health insurance giant Cigna Inc. was accused in a lawsuit by plaintiffs of using computer algorithms to automatically deny hundreds of thousands of patient claims without the individual review required by California law.
The lawsuit, which seeks class-action status, alleges that Cigna Health & Life Insurance Company denied more than 300,000 claims in just two months last year.
According to the complaint, the company used an algorithm called Procedure-to-Diagnosis (PXDX) to determine whether a claim met certain requirements, with each review taking an average of just 1.2 seconds. “As a result of the PXDX system, Cigna physicians instantly deny claims without ever opening patient records, effectively leaving thousands of patients without coverage and facing unexpected charges,” the complaint states.
A similar lawsuit filed in December alleges that Humana used an AI model called nHPredict to wrongfully deny medically necessary care to elderly and disabled patients covered by Medicare Advantage. In another lawsuit, United Healthcare is alleged to have likewise used nHPredict to deny claims, relying on a coverage-denial tool even after it was found to be wrong roughly 90% of the time, and overriding treating physicians’ determinations that the care was medically necessary.
Experts said the biggest concern around AI in the insurance sector is the use of third-party data and tools supplied by unregulated providers.
“New York is essentially saying that companies must be held accountable for the information and datasets they purchase from third parties,” said Philip Dawson, head of AI policy at Armira, which developed an AI verification platform. “These systems must be audited and tested to ensure they are regulatorily and actuarially compliant, and that they are effective.”
Just the beginning of the proposed AI rules
Dawson believes the New York circular is just the beginning of a wave of AI insurance regulations being proposed in many states.
“What stands out most about this circular is the requirement to evaluate AI models and third-party data sets (ECDIS), and the granularity of insurers’ obligations in this regard, from implementing AI governance frameworks to detailed quantitative testing,” Dawson said. “The circular sets out some very clear expectations for insurers’ use of AI, and those expectations also extend to third-party AI tools. This is consistent with Colorado’s regulatory approach to AI, and with comments from the Federal Trade Commission that companies cannot offload their AI risk-assessment obligations onto the vendors they buy from.”
There is little doubt that AI and similar technologies can revolutionize the insurance industry, and may already be doing so, with significant speed and cost-saving gains in claims processing, underwriting, fraud detection, and customer service. Many insurance companies use virtual assistants such as chatbots to improve the customer experience; chatbots can provide basic advice, verify billing information, and handle common queries and transactions. Claims management can also be enhanced with machine learning at various stages of the claims process, according to the National Association of Insurance Commissioners. Machine learning models use historical data, sensor readings, and imagery to quickly assess damage severity and predict repair costs.
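The NAIC-described idea of predicting repair costs from historical claims can be sketched, in its simplest form, as fitting a model on past damage-to-cost pairs. The single feature (damaged area) and all figures below are invented for illustration; real insurers draw on far richer inputs such as sensor and image data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (one feature, closed form).
    A deliberately minimal stand-in for the ML models the NAIC describes."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var               # cost per extra unit of damage
    a = mean_y - b * mean_x     # baseline cost
    return a, b

# Hypothetical historical claims: (damaged area in m^2, repair cost).
areas = [5.0, 10.0, 20.0, 40.0]
costs = [2_000.0, 3_500.0, 7_000.0, 13_000.0]

a, b = fit_line(areas, costs)
estimate = a + b * 15.0  # predicted repair cost for a new 15 m^2 claim
print(f"estimated repair cost: {estimate:,.0f}")
```

The design point is not the model itself but the workflow: a fitted severity model lets an adjuster triage a new claim in seconds, with the estimate later reconciled against the actual repair bill.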
However, things can go wrong, and the NAIC said it will continue to monitor the use of AI in the insurance industry and consider developing further regulatory guidance if necessary.
“AI is not inherently good or bad, right or wrong. It is how humans interact with, interpret, and use AI that makes it one or the other,” said Rose Hall, senior vice president and head of innovation at Axa XL, a leading global provider of commercial property and casualty insurance based in Connecticut.
Claims and underwriting are the targets of insurance AI
A new report from Reuters Events and Clearwater Analytics finds that insurers’ AI investments are aimed primarily at the claims and underwriting processes, where the technology can make related tasks more efficient. Claims management is currently the most commonly cited function or department for implementing generative AI, followed by customer service.
“AI is the second most commonly used technology in underwriting after digital portals and exceeds the average application share of 26% in our technology pool,” the report said. “We can therefore conclude that companies looking to improve the efficiency of their underwriting processes are more likely to invest in AI than any other technology on the list.”
“AI in insurance is not a concept of the future, but a necessity today,” said Subik Das, chief technology officer at Clearwater Analytics. “As we stand on the precipice of a new era of technology, one thing is certain: AI is coming.”
However, regulators and others are concerned about AI’s potential risks, including data breaches, security vulnerabilities, and algorithmic bias. Bloomberg Research predicts that the generative AI market will grow to $1.3 trillion over the next decade.
“The insurance market’s understanding of the risks associated with generative AI is still in its infancy,” says a recent report on AI in insurance published by Aon PLC, an Anglo-American professional services and management consulting firm. “Depending on the use case, this evolving form of AI will impact many insurance areas, including but not limited to technology errors and omissions/cyber, professional liability, media liability, and employment practices liability.”
Recommended governance framework
Aon recommends that companies stay ahead of regulations and work with technology experts, attorneys, and advisors to set policies and establish governance frameworks that meet regulatory requirements and industry standards. Some components of this framework may include:
Regularly audit your AI models to ensure your algorithms and datasets aren’t introducing unwanted bias.
Ensure you understand the copyright of AI-generated materials. Insert human checkpoints to verify that governance models used in AI development comply with legal and regulatory frameworks.
These risks can be mitigated by conducting legal, claims, and insurance coverage reviews, and by considering alternative risk-transfer mechanisms where the available insurance falls short.
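The "human checkpoint" component above can be made concrete with a simple routing rule: the model is never allowed to issue an adverse decision on its own, and anything below a confidence threshold goes to a person. The threshold and labels below are assumptions for illustration, not values from any regulation.

```python
# Assumed policy value for illustration; not taken from the NYDFS circular.
AUTO_APPROVE_THRESHOLD = 0.90

def route_claim(model_score):
    """Return a disposition for a claim given a model approval score
    in [0, 1]. High-confidence approvals pass straight through;
    everything else, including every potential denial, is queued for
    human review, so a person makes all adverse decisions."""
    if model_score >= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"
    return "human_review"

dispositions = {score: route_claim(score) for score in (0.97, 0.85, 0.40)}
print(dispositions)
```

This asymmetry, where automation can only ever say yes, is one straightforward way to avoid the auto-denial pattern alleged in the Cigna and Humana lawsuits described earlier.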