Two Warnings About AI

Customers are making clear that they hold AI to higher standards than they do humans — and hate when AI makes decisions for or about them.

If you've watched "The Good Place" — and you should, if you haven't already — you saw an enactment of a deep philosophical question known as "the trolley problem." The notion is that you're on a trolley heading down a hill, and the brakes fail. If you keep going straight, you're going to kill five people. You can throw a switch and head off onto a siding, but then you're going to kill one person. 

Do you save the five people and accept the responsibility of killing someone? Does your thinking change if that one person is a friend of yours?

The trolley problem may seem like an odd one to include in a comedy, but the handling is extremely funny, and, of course, no character stays dead. 

And the problem neatly exemplifies one of the issues that companies will face as they roll out AI that touches customers directly. If, like Chidi, the moral philosopher in "The Good Place," you make a spur-of-the-moment decision, you are given some grace because you're only human and can only process so fast. But AI doesn't get that grace. Someone sat down ahead of time and programmed, or at least developed, the AI, so whatever decision it makes is treated as well-thought-out and has to be defended.

AI is held to a much higher standard than we humans are. You can't just decide your AI is good to go once it outperforms your current approach to dealing with customers. You also have to account for what people expect out of AI.

The higher standards for AI have shown up recently in a spate of articles complaining about drones that use computer vision to inspect roofs. The technology sometimes says there is moss or some other problem that warrants denial of coverage when there is, in fact, no problem. 

The systems already do a better job than could be accomplished by having a host of inspectors climb ladders and tromp around on roofs, but homeowners aren't using the current system as their benchmark. They've been led to believe that computers are nearly infallible and that AI is close to magic, so they don't tolerate errors — and often complain to reporters, who share many of those attitudes and are happy to ding AI when it messes up. 

Phil Koopman, a professor at Carnegie Mellon who has a popular newsletter on driverless cars, writes: "It’s simple: people over-trust too soon, and backlash too hard just as quickly after adverse news."

While nearly 41,000 people died in accidents on U.S. roads last year, I'd bet that none got as much attention as the non-fatal accident involving a Cruise autonomous vehicle. The accident was gruesome: The AV was initially blameless, hitting a jaywalking pedestrian only because another car struck her and tossed her into the AV's path, but the AV then pulled off to the side of the road, unaware that the pedestrian was underneath it, and dragged her 20 feet. The involvement of the AI greatly heightened the scrutiny and the willingness to assign blame. Among other repercussions, the CEO of Cruise lost his job, and Cruise lost its license to operate autonomous robotaxis in San Francisco.

My second caution about AI is one that an article in The Byte expresses well:

"So-called 'automated decision-making' is being heralded as the next big thing  — but it turns out that many consumers are disgusted by the idea of AI making choices for them."

The article cites a survey that isn't specifically about insurance; it's about job hiring, banking, renting, medical diagnoses, and surveillance. But it's pretty easy to see how the survey results apply to insurance.

Just as customers want a human making decisions on their loan or job applications, I'd bet that customers don't want to be told that they were denied coverage by AI or had a claim lowered or denied by AI.

AI will increasingly be used to make decisions that touch clients — as it should be — but, at least for now, I'd suggest having humans make the final call and communicate those decisions.

You'll still be required to defend the AI's role in decisions, and people won't give you the benefit of the doubt that they might give to a human acting under time and other pressures.

But at least you won't have to deal with the theatrics that the Ted Danson character summoned for poor Chidi.

Cheers,

Paul

P.S. Here is the trolley scene from "The Good Place." Watching it will be the best three minutes of your day.