Listen to the pundits and self-described experts, and you get the impression that artificial intelligence is taking over the world. Our cars will drive themselves, our buildings will optimize their energy, accidents will be avoided and we will be afforded every manner of convenience. Everything around us will be smart – looking out for our best interests, automating everything, providing new services and generally making life wonderful.
Many emerging technologies will have to come together to realize this vision, but perhaps artificial intelligence (in all its various forms) is the linchpin.
This all sounds fantastic – and creates some great opportunities for the insurance industry to help policyholders reduce risks and improve their health and well-being. But dig a little deeper and you’ll discover a paradox – we will be expected to turn over the controls to AI-based machines while recognizing that the AI in use, at least today, can be unreliable.
Everyone has their own favorite examples of tech gone awry, and these examples are not meant to tar all AI-based systems with the same brush. But let me offer a few examples from everyday life to illustrate that, while many AI systems actually work quite well, enough of them stumble to warrant more careful consideration of how AI should be used for critical applications.
See also: Underwriting Lessons From the PGA
AI-Based Meeting Schedulers
Interacting with bots via email to quickly schedule meetings can lead to frustration. What seems like a simple request that any person would understand is often misinterpreted by the AI-based schedulers. I’ve seen many instances where emails were sent back and forth multiple times, each with increasing layers of confusion, just to get the right people on the right call at the right time.
Voice Assistants
Voice communications are poised to become the major way that humans interact with computing devices. Tremendous progress has been made in the accuracy of speech recognition and natural language processing. While the progress has been terrific and the accuracy rates are now approaching that of human understanding of speech, there are still enough errors that we should be cautious. To illustrate this point, I provide the following humorous examples. As humorist Dave Barry is fond of saying, “I am not making this up.”
What I said in a post about autonomous vehicles:
“… as more personal vehicles are embedded with AI…”
How Siri translated it:
“…as more personal vehicles are in bed with a guy…”
Another prime example is the word “insurtech.” I would not expect this to be translated correctly at first, but after correcting Siri hundreds of times, I still get “and shirt tech,” or “ensure text” or my personal favorite: “I’m sure Texas.”
I do find that my Amazon Echo device, Alexa, is generally good at interpreting a request, although one of the most common responses I get is: “Hmmm, I don’t know that.”
With my car navigation system, I have given up trying to voice dial my wife, Deanna, and certain other individuals because the names are never recognized, and I sometimes trigger a call to someone in Asia!
These are light examples of how a small error in interpretation can significantly alter the original meaning. In the examples I’ve provided, the errors were harmless and easily fixed. But what does this mean for the vision of the connected, intelligent, autonomous world? And what does it mean for insurers?
See also: Seriously? Artificial Intelligence?
I believe the power and potential of AI are tremendous and will yield astounding benefits for the world. We will be able to dramatically reduce vehicle accidents and reduce or avoid machine breakdowns and property damage. We will be able to help the elderly and disabled live independently, improve personal health and extend lifespans. All of this is possible and will be enabled by AI, working in concert with other emerging technologies like the IoT, robotics and wearables.
But we do need to be circumspect regarding this AI-fueled future. Asking the status of a flight or requesting a song is one thing … but controlling high-value machinery, healthcare devices or moving vehicles is quite another. Insurers should monitor the progress of AI, pilot and experiment with various types of AI technologies and consider the possible positive and negative implications of a broader usage of AI. The real questions are about timing and whether businesses and individuals will have the restraint and governance to wisely use AI-based solutions for the mission-critical and life-critical uses that are being contemplated.