Last week the New York Department of Financial Services (NYDFS) issued a proposed circular to govern how insurance carriers use AI and alternative data in the Empire State.
Here’s what insurance carriers need to know about the NYDFS circular:
Governance and Fairness Testing: NYDFS expects that carriers will establish governance protocols for AI systems and alternative data, and also engage in fairness testing of predictive models and variables before they are put into use and at a regular cadence thereafter.
Scope: The Circular applies to all insurance lines—life, health, property and casualty, etc.
Scope Limitation: The Circular is intended to address only AI systems and alternative data used for insurance underwriting and pricing. It does not contemplate the use of AI in other stages of the customer journey, such as marketing and claims administration, raising questions about the governance of predictive models and big data in these areas.
Exemptions for Certain Data: The Circular does not apply to certain “traditional underwriting” factors like medical data, motor vehicle reports, and criminal history searches. Presumably these variables are exempt because they are thought to be “actuarially valid”: that is, they are inherent measures of the risks insurance companies seek to manage and therefore, some argue, are per se legitimate to use, even if they cause disparities for protected groups. Indeed….
Actuarial Validity Requirement: DFS expects that the alternative data carriers use for insurance underwriting and pricing will be “actuarially valid,” which is to say: to be fit for use, variables should have a reasonable and relatively intuitive connection to risk. But…and this is a biggie….
Search for Less Discriminatory Alternatives: NYDFS expects users of AI systems and alternative data to evaluate whether there are models or alternative input variables that would achieve their business objectives while being fairer—or less discriminatory, in legal parlance—to protected groups. If viable fairer models exist, DFS expects carriers to use them.
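To make the idea concrete, here is a minimal sketch of what such a search might look like in Python. Everything in it is an assumption for illustration: the candidate feature sets, the hypothetical loss_flag target and is_protected indicator, the toy approval rule, and the performance tolerance. Real searches are more elaborate, varying hyperparameters, regularization, and debiasing techniques as well as input variables.

```python
# Illustrative less-discriminatory-alternatives (LDA) search.
# Column names, the approval rule, and the tolerance are all hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def search_less_discriminatory(df, feature_sets, target="loss_flag",
                               protected="is_protected", tolerance=0.01):
    """Keep models within `tolerance` of the best AUC, then pick the fairest."""
    results = []
    for features in feature_sets:
        model = LogisticRegression(max_iter=1000).fit(df[features], df[target])
        scores = model.predict_proba(df[features])[:, 1]
        approved = scores < scores.mean()  # toy rule: approve below-average predicted risk
        mask = df[protected].values == 1
        air = approved[mask].mean() / approved[~mask].mean()  # adverse impact ratio
        results.append({"features": features,
                        "auc": roc_auc_score(df[target], scores),
                        "air": air})

    best_auc = max(r["auc"] for r in results)
    viable = [r for r in results if r["auc"] >= best_auc - tolerance]
    return max(viable, key=lambda r: r["air"])  # fairest viable alternative

# Hypothetical usage:
# best = search_less_discriminatory(applicants,
#                                   [["credit_score", "zip_income_index"],
#                                    ["credit_score"]])
```

The design choice worth noting: the search fixes a business-performance floor first and optimizes fairness second, which mirrors how the obligation is framed (achieve your objectives, but with the least discriminatory viable model).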
Proxy Detection Mandate: DFS expects carriers using alternative data to establish and document that variables do not act as proxies for protected status. Typically this is done by examining either a variable’s correlation with protected class or the extent to which the variable is predictive of protected class.
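As a sketch, screening a single candidate variable can be as simple as the check below. The DataFrame and column names are hypothetical, and the thresholds are placeholders; in practice carriers often rely on inferred protected-class labels and richer multivariate tests.

```python
# Minimal proxy-detection sketch; column names and thresholds are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def proxy_check(df: pd.DataFrame, variable: str, protected: str,
                corr_threshold: float = 0.3, auc_threshold: float = 0.6) -> dict:
    """Flag a variable that correlates with, or predicts, protected status (0/1)."""
    # 1) Correlation with the protected-class indicator.
    corr = df[variable].corr(df[protected])

    # 2) Predictiveness of protected status from the variable alone (AUC 0.5 = no signal).
    model = LogisticRegression().fit(df[[variable]], df[protected])
    auc = roc_auc_score(df[protected], model.predict_proba(df[[variable]])[:, 1])

    return {"correlation": corr, "auc": auc,
            "potential_proxy": abs(corr) > corr_threshold or auc > auc_threshold}

# Hypothetical usage:
# proxy_check(applicants, variable="zip_income_index", protected="is_protected")
```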
Fairness Measures: The Circular urges quantitative model assessments using many of the measures used to conduct fair lending analysis in consumer finance. Fairness measures proposed by DFS include: Adverse Impact Ratios (which compare the rates at which protected groups are approved for insurance relative to control groups) and Standardized Mean Differences (which can be used to gauge average differences in price paid by protected groups).
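For illustration, both measures are straightforward to compute. The sketch below assumes hypothetical approved, premium, and is_protected columns; the four-fifths (0.8) AIR benchmark mentioned in the comment is a common fair-lending screen, not a threshold the Circular prescribes.

```python
# Illustrative fairness metrics; column names are hypothetical.
import numpy as np
import pandas as pd

def adverse_impact_ratio(approved: pd.Series, group: pd.Series) -> float:
    """Approval rate of the protected group divided by that of the control group."""
    return approved[group == 1].mean() / approved[group == 0].mean()

def standardized_mean_difference(price: pd.Series, group: pd.Series) -> float:
    """Difference in mean price (protected minus control), scaled by pooled std dev."""
    diff = price[group == 1].mean() - price[group == 0].mean()
    pooled_sd = np.sqrt((price[group == 1].var() + price[group == 0].var()) / 2)
    return diff / pooled_sd

# Hypothetical usage:
# air = adverse_impact_ratio(df["approved"], df["is_protected"])  # AIR < 0.8 often flags disparity
# smd = standardized_mean_difference(df["premium"], df["is_protected"])
```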
Responsibility for Third-Party Vendors: DFS says carriers bear responsibility for disparities driven by their reliance on AI and alternative data vendors. This means firms that build and sell predictive models and alternative data will ultimately be subject to the Circular’s fairness and other governance prescriptions, too, even if they are not directly regulated by NYDFS.
Proportionality Principle: DFS concedes that the terms “AI” and “alternative data” are in some sense poorly defined, so the Circular also emphasizes the principle of proportionality: the scope of a carrier’s AI and alternative data program should be commensurate with the carrier’s business complexity and the risks arising from its use of AI or alternative data.
Board Oversight: The Circular mandates board-level oversight of AI and alternative data use by carriers.
Adverse Action: Consistent with current DFS policy, carriers must provide adverse action notices if AI or alternative data lead to negative decisions like policy declinations or rate differentials.
If you’re an insurance carrier, there’s good news: you can learn from the lessons of banks, which have labored under a set of virtually identical rules for years. On the banking side, there is a growing base of technology infrastructure and responsible AI practitioners who can help you navigate the transition to profitable and fair AI.
It’s now possible to have real-time visibility into the fairness of your decisions, along with opportunities to be fairer that will boost your profits. The pathway to this visibility is called fairness infrastructure, or Fairness-as-a-Service. Now that it’s here, insurance underwriting and pricing will never be the same.