Eliminating AI Bias in Insurance

Insurers face a conundrum: Insurance requires bias (in terms of how risks are priced) but must be fair.

Insurance in the U.S. goes back to the mid-1700s and Benjamin Franklin. It has become one of the most essential parts of our lives and one of our most important economic industries. We depend on insurance companies and policies to protect us and our assets in times of loss and catastrophe. Because insurance is such a critical piece of our social and economic fabric, it is also one of the most regulated and scrutinized industries; we fundamentally want and need to trust it.

For most of those centuries, consumers and businesses who purchase insurance have perceived a relatively transparent and obvious relationship between risk and price: if you live in a flood zone or have a history of speeding tickets, insurance costs more. However, as carriers tout proprietary advances in big data and artificial intelligence (AI), insurance becomes more complex, and questions arise.

As society at large challenges a lack of equity and fairness across races, genders and social statuses, insurance, too, is under scrutiny. Exactly what “big data” is being used, and how do those factors influence model-based decisions about prices or coverage? There is an expectation to prove fairness and sometimes to “eliminate bias,” but delivering on this expectation is not so simple. In actuality, it is impossible to eliminate bias from insurance. Insurance fundamentally needs to be biased; it needs to bias away from unreasonable risks to be financially feasible. Insurance can, however, put processes in place to mitigate disparate impact and unfair treatment.
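
To make this concrete, here is a minimal sketch of one such process: computing a disparate impact ratio over a model’s outcomes and flagging it against the common “four-fifths” rule of thumb. The column names and toy data are hypothetical, not drawn from any particular carrier’s system.

```python
# A minimal sketch of a disparate impact check, assuming a binary favorable
# outcome (e.g., policy approved) and a recorded group attribute. Column
# names and toy data are hypothetical; 0.8 is the common "four-fifths" rule.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome_col: str,
                           group_col: str, protected_value: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. everyone else."""
    protected_rate = df.loc[df[group_col] == protected_value, outcome_col].mean()
    reference_rate = df.loc[df[group_col] != protected_value, outcome_col].mean()
    return protected_rate / reference_rate

df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 1, 1],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})
ratio = disparate_impact_ratio(df, "approved", "group", protected_value="A")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Potential disparate impact: ratio={ratio:.2f}")
else:
    print(f"No disparate impact flagged: ratio={ratio:.2f}")
```

A check like this does not eliminate bias; it makes disparate impact measurable so it can be monitored and mitigated.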

So how does insurance move forward in a world that expects not simply proof of fairness but also the unrealistic elimination of bias? The solution has to come from, and live within, a corporate prioritization framework and a cross-functional lifecycle approach to model governance.

Prioritize Fairness as a Pillar of Corporate Governance

Data and model governance (AI governance) needs to be a C-level priority. Committing to fairness and transparency is a corporate responsibility. Managing AI risks like bias is a business problem, not just a technical problem.

Mitigation of unfair bias needs to be incorporated into the board’s compliance and risk oversight and enabled through strategy and budget by the C-suite. The best strategies fit within a broader vision or plan, and, in this case, mitigating bias aligns well with environmental, social and governance (ESG) or corporate social responsibility (CSR) efforts. As the SEC, regulators and investors demand more attention to these areas, executives have a unique opportunity to take advantage of the momentum and incorporate data and model fairness as central tenets of corporate governance. Leadership can ensure that AI governance is properly funded to deliver results and avoid the challenges of distributed ownership and budgets across the company.

Finally, it’s important to promote and celebrate these efforts externally. Show consumers and regulators evidence of your awareness of, and investments in, building greater oversight and accountability into your organization’s use of data and modeling systems. Sharing these efforts builds trust and confidence: important and lasting competitive advantages.

Establish Stakeholder Alignment and Shared Lifecycle Transparency

When it comes to AI and other consequential decision systems, the technical nature of the work tends to silo the essential stakeholders from one another. A line-of-business owner greenlights the project. A team of data scientists and engineers develops on its own. Risk and compliance teams come in at the end to evaluate a system they’ve never seen before. Such a pattern is a recipe for bias to enter the equation unknowingly.

To combat this, companies need to invest time and effort in creating transparency across teams, not just in the decisions their models make once deployed but also in the human processes and decisions that surround a model’s conception, development and deployment. Every person involved with a project should have access to the core documentation that helps them understand the goals, the expected outcomes and the reasons a model is the best way to solve the business problem at hand.

Once a model is in production, non-technical team members should have user-friendly ways to access, monitor and understand the decisions made by their AI and ML projects. Technologists can do much in designing their models for governance to provide more visibility into, and understandability of, their decisions; hiding behind the veil of the “black box” only creates more work in the end, when they have to retroactively explain odd or unexpected behavior from their models. Business owners should be able to evaluate system performance, know when problems of bias arise and understand the steps taken to identify and correct course.
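
One practical enabler of that visibility is logging every model decision with its inputs, output and model version so it can be reviewed later. The sketch below assumes a scikit-learn-style model object; the function, field names and file-based storage are illustrative assumptions, not any specific product’s API.

```python
# A minimal sketch of per-decision audit logging, assuming a scikit-learn-style
# model object with predict(). The record fields, file-based store and
# function name are illustrative assumptions.
import json
import time
import uuid

def log_decision(model, model_version: str, features: dict, log_path: str):
    """Score one applicant and append an auditable record of the decision."""
    prediction = model.predict([list(features.values())])[0]
    record = {
        "decision_id": str(uuid.uuid4()),  # stable handle for later review
        "timestamp": time.time(),
        "model_version": model_version,    # ties the decision to a reviewable build
        "inputs": features,                # exactly what the model saw
        "output": int(prediction),
    }
    with open(log_path, "a") as f:         # append-only, newline-delimited JSON
        f.write(json.dumps(record) + "\n")
    return record
```

With records like these, a business owner or risk partner can replay and question any individual decision without reading model code.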

Require Objective Oversight

Objective oversight and risk controls are not a new concept for your business, so continue this best practice when it comes to data and models. There needs to be a separation of duties and responsibilities between the teams who build the models and modeling systems and the functions responsible for managing risk and governance. The incentives are different, so the governance functions, whose objective is mitigating risk, need to be empowered and expected to oversee modeling systems. While technical tools are being developed for data science teams to monitor, version-control and explain AI/ML systems, those tools are not oriented toward the non-technical, objective risk partners. The modeling team cannot, and should not, be expected to self-govern.

Thanks to the thorough approach to corporate and model governance covered above, second and third lines of defense will have intuitive, context-rich records and interfaces to discover, understand and interrogate models and their decisions for themselves. Because all of this evidence is mapped to a previously established model governance methodology, the objective governance teams can readily pass or fail adherence to policy.
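
As an illustration, a governance methodology of this kind can be expressed as named controls, each checked against the evidence recorded for a model. The control names, thresholds and evidence schema below are hypothetical assumptions, sketched to show the pattern rather than any established standard.

```python
# A hypothetical sketch of mapping evidence to a governance methodology:
# each named control is a boolean check over the evidence recorded for a
# model. Control names, thresholds and the evidence schema are assumptions.
POLICY_CONTROLS = {
    "bias_test_run":          lambda ev: ev.get("disparate_impact_ratio") is not None,
    "bias_within_threshold":  lambda ev: ev.get("disparate_impact_ratio", 0) >= 0.8,
    "model_version_recorded": lambda ev: bool(ev.get("model_version")),
    "business_owner_signoff": lambda ev: bool(ev.get("signoff_by")),
}

def audit(evidence: dict) -> dict:
    """Pass/fail each control so a reviewer sees adherence at a glance."""
    return {name: check(evidence) for name, check in POLICY_CONTROLS.items()}

# Example: evidence captured for a deployed pricing model (hypothetical values).
evidence = {
    "disparate_impact_ratio": 0.92,
    "model_version": "pricing-model-v3.1",
    "signoff_by": "risk.team@example.com",
}
print(audit(evidence))
```

Because each control returns a simple pass or fail, a non-technical reviewer can judge adherence without interpreting the underlying model.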

Of course, this sort of objective governance and control will require upfront work and a focus on collaboration, but it is an obvious and necessary approach to enhancing the fairness of systems. There’s a secondary benefit of that effort: understanding the boundaries gives your R&D teams a much clearer path to develop systems that operate within them, thereby unlocking innovation rather than stifling it.

Perfection Is Not the Goal – Effort and Intent Are

Despite all best practices and efforts, we depend on humans to build and oversee these systems, and humans make mistakes. We will continue to have incidents and challenges managing fairness and bias with technology, but insurers can implement risk governance, transparency and objectivity with clear intent. These efforts will yield positive results and continue to cultivate trust and confidence from customers and regulators.


Anthony Habayeb

Anthony Habayeb is founding CEO of Monitaur, an AI governance software company that serves highly regulated enterprises such as flagship customer Progressive Insurance.
