Beware the Algorithm!

Lawyers and consumer advocates are demonizing algorithms any time a decision goes against a policyholder. Let's talk about algorithms less.


When I write the Great American Novel about today's political dysfunction, I'm going to call it "They." Because some mysterious, evil "they" sure are causing lots of problems — pick just about any complaint about anything you don't like in the U.S. these days, and "they" are causing the problem. "They" will make a great villain for my book. 

When I write the sequel, about the insurance industry, I'll call it "The Algorithm." Because, to look at how consumer advocates are using the term to villainize any decision they don't like, we may have another "they" on our hands.

In the meantime, I suggest that, even amid all the excitement about generative AI, we talk far less about the great algorithms we're creating and put a human face on everything we do. 

The thought about how algorithms are being invoked to demonize insurers crystallized for me when I saw this headline on a recent press release:

"Insurance Commissioner Finalizes Plan Allowing Secret Algorithms to Raise Home Insurance Rates; Lies About Making Insurers Sell More Coverage in Return, Says Consumer Watchdog."

Now, I'm not sure that the California insurance commissioner isn't using some spin when he describes how insurers will have to commit to offering coverage for a certain number of vulnerable homes, in return for the right to use predictive modeling and not just historical data in their underwriting.

But the notion of 'secret algorithms" is a straight-up scare tactic. We're basically supposed to imagine that the Terminator has traveled back through time and just landed among us. 

The use of predictive models is thoroughly standard these days, across industries, but I suspect that consumer advocates, lawyers, and anyone else with a beef with an insurance company will lean into the specter of evil algorithms for some time. 

AI advocates talk a lot these days about going beyond generative AI to what's known as artificial general intelligence, or AGI — AI that isn't limited to a specific task but has human-level intelligence. The advocates talk glowingly about the prospect, but some experts think humanity could be innovating its way out of existence — and we all remember how HAL turned out in "2001: A Space Odyssey."

Insurers are doing all the responsible things: thinking about the biases that can creep into AI and providing as much insight as possible into how machine learning makes decisions, so those decisions don't just come out of a black box.

Still, I'd suggest not talking about algorithms, AI, or machine learning except in a situation where there is a clear benefit to the consumer. 

"We're using AI so we can automatically approve your policy submission." 

"We're paying your claim super-fast because of our algorithms." 

"We're using machine learning so our chatbots are so smart they can answer all your questions at any hour of the day or night." 

But I'd make clear that any decision to turn down a policy submission or claim or to raise a rate was made by a human, based on all sorts of traditional, explainable criteria. And I'd make sure customers know that any confusion with a chatbot means a query is immediately kicked to a living, breathing person. 

Because if we were to link adverse outcomes to algorithms, "they" could have a field day. 

Cheers,

Paul

P.S. My favorite Dad joke goes like this:

Q. How do we know that Al Gore invented the internet?

A. Because it relies on Al-Gore-ithms.

Ba-dum-bum.