AI developers are making amazing advances. Witness the excitement around AI’s progress in search, cancer diagnosis, genomic medicine, autonomous vehicles, Go, smart homes, machine translation, and even lip reading.
Progress on such complex problems raises hopes for the development of general-purpose AI that can be deployed in a wide range of intelligent, open-ended interactions with people, in roles such as computer interfaces, customer service, planning and advice.
It is easy to imagine an enhanced Apple Siri, Amazon Alexa or IBM Watson that engages in conversations with people to answer questions, fulfill commands and even anticipate needs. In fact, unless you watch marketing videos with a very critical eye (like the latest one for Alexa), you might even believe that AI has already reached this point.
Unfortunately, AI is far from this level of intelligence. AI lacks the capability to understand, much less answer, many kinds of easy questions that we might pose to human assistants, agents, advisors and friends.
Imagine asking this question of some AI-enhanced tool in the foreseeable future:
I am thinking about driving to New York from my home in Vermont next week. What do you think?
Most such tools will easily offer a wealth of data, like possible routes, including distances, travel times, attractions, rest stops, and restaurants. Some might incorporate historical traffic patterns for different times of day and even weather forecasts to recommend particular routes.
But, as the noted AI researcher Roger Schank astutely lays out in a recent article, there are many aspects of this question that AI tools will not address adequately any time soon, even though any person could easily do so now.
Understanding such limitations is key to understanding the near-term potential of AI and what it really means to be “intelligent.”
Schank points out that a person who knows you would know much about what you are really asking. For example, is your old car up to the task? Are you up to making the drive? Would you enjoy it? How might Broadway show schedules affect your decision about whether or when to go?
“Real conversation involves people who make assessments of each other and know what to say to whom based on their previous relationship and what they know about each other,” Schank writes. “Sorry, but no ‘AI’ is anywhere near being able to have such a conversation because modern AI is not building complex models of what we know about each other.”
In addition to the above question, Schank offers nine other questions that illustrate what people can easily answer but AI cannot:
1. What would be the first question you would ask Bob Dylan if you were to meet him?
2. Your friend told you, after you invited him for dinner, that he had just ordered pizza. What will he eat? Will he use a knife and fork? Why won’t he change his plans?
3. Who do you love more, your parents, your spouse, or your dog?
4. My friend’s son wants to drop out of high school and learn car repair. I told her to send him over. What advice do you think I gave him?
5. I just saw an ad for IBM’s Watson. It says it can help me make smarter decisions. Can it?
6. Suppose you wanted to write a novel and you met Stephen King. What would you ask him?
7. Is there anything else I need to know?
8. I can’t figure out how to grow my business. Got any ideas?
9. Does what I am writing make sense?
Answering these kinds of questions, Schank points out, requires robust models of the world. How do mechanical, social and economic systems work? How do people relate to one another? What are our expectations about what is reasonable and what is not?
Answering Question 2, for example, requires an understanding of how people function in daily life. It requires knowing that people intend to eat food that they order and that pizza is typically eaten with one’s hands.
Answering Question 5 requires analyzing lots of data, which AI can do and which can thus help people make better decisions. But actually making better decisions also requires prioritizing goals and anticipating the consequences of complex actions.
Answering open-ended questions like Question 7 requires knowing the context of the question and to whom you are talking.
Answering advice-seeking questions like Question 8 requires the use of prior experiences to predict future scenarios. Quite often, such advice is illustrated with personal stories.
Many AI researchers (like Schank) have explored such capabilities but none have mastered them. That does not mean that they never will. It does mean that applications that depend on such capabilities will be much more brittle and far less intelligent than is required.
One way of thinking about AI is that it consists of the leading edges of computer science. Mind-bending computational capabilities are being developed in numerous application domains and deserve your attention. Generalizing those capabilities to human level intelligence, and therefore assuming their widespread applicability, is premature.
Having a clear-eyed view of what AI can and cannot do is key to making good decisions about this disruptive technology—and leaving the irrational exuberance to others.