Is Insurance Ready for AI Agents?

In this Future of Risk interview, Gallagher Bassett's chief digital officer, Joe Powell, details how far AI has come in insurance and where it goes next (carefully).

Joe Powell joined Gallagher Bassett in 2014. He serves as chief digital officer, overseeing data, analytics, and product innovation functions.

His team manages GB's innovation road map, including Luminos RMIS, Waypoint decision support, and GB's suite of AI technologies. The team also provides a wide range of analysis, reporting, and insight services, from basic loss runs to state-of-the-art machine learning-based benchmarking.

Previously, Powell served as a management consultant at Bain, focusing on initiatives involving growth strategy, corporate investment planning, IT strategy, post-merger integration, and cost reduction across numerous industries.

He holds a bachelor of science degree and a master of science degree in management information systems, both from the Kelley School of Business at Indiana University.
 


Insurance Thought Leadership

How would you describe the transition from basic generative AI to agentic AI?

Joe Powell

I'll start with AI as it existed when ChatGPT was launched in 2022. It was this raw foundation model that you could ask questions of, and it would give you a cogent -- maybe not completely accurate, but at least coherent -- response. It sounded like a human. It was magical.

From there, the progression has been a relatively natural evolution toward better and better accuracy, to the point where eventually that accuracy becomes so good that you can begin to use it to take action. That's the nature of agentic AI -- you're going from something that is error-prone to something that hopefully is highly accurate and is driving action for your organization.

You could imagine the progression: When we first got into AI, having a chatbot like ChatGPT was impressive in itself. Eventually, it evolved to allow handling documents and asking questions about them. The next step was allowing the AI not just to get you the answer from that document but to show you where in that document it found the answer, thus getting a little bit more action-oriented. Eventually, the AI gets to a place where you've implemented this seamlessly in an organization's process, and through that process, it's handling a step or helping with a step and presenting information to a person to make a decision. Finally, you get to a place where the AI is taking that action out of a person's hands, and operating autonomously. That’s when you achieve the true goal of agentic AI.

Insurance Thought Leadership

What are some examples of AI taking actions traditionally performed by humans, either at Gallagher Bassett or in the industry at large?

Joe Powell

In the industry, we're generally not at the point where AI can make major decisions yet. There are two important dimensions to consider when thinking about agentic AI and what actions it should take. First, how critical is the decision? Another way to think about this is: What are the consequences if the decision is wrong? And second, how autonomous do you want the AI to be?

As decisions become more critical, you're less likely to want AI making them. For instance, you're not going to have AI deciding the settlement on a $10 million claim. Instead, you'll start with AI handling relatively routine, day-to-day decisions. As it becomes more sophisticated and accurate, you'll trust it with increasingly critical decisions.

The same principle applies to autonomy. The right approach is to begin with AI providing information to help you make decisions. Then, as it progresses, AI will help make recommendations. Eventually, you'll reach a stage where AI is making decisions with human oversight, and finally, it will make decisions with full autonomy.

Insurance Thought Leadership

What are some current examples of AI applications that meet appropriate risk and reliability standards?

Joe Powell

Around the industry, we're seeing organizations take a cautious approach by starting with low-risk decisions. Much of what we're seeing involves having AI review work that humans have already completed -- for example, reviewing completed policy work or double-checking the coding on a claim.

At Gallagher Bassett, we're developing a tool to help flag urgent emails. This addresses a micro-decision that people make every day when reviewing their inboxes -- determining which messages are critical. Having AI serve as a second set of eyes to identify important demand letters and determine what needs immediate escalation is highly valuable.

These are the types of low-risk applications where we're starting to see AI step in -- relatively small day-to-day decisions where having an automated helper or second set of eyes can provide significant value.

Insurance Thought Leadership

Could AI be a good verification tool? There was a recent story about AI quickly detecting a mathematical error in research about black plastic spatulas -- an error the human researchers had overlooked.

Joe Powell

Absolutely. There are analogs for our industry, as well. You could imagine somebody entering a plan of action on a claim and having AI double check to make sure it aligns with what was prescribed in the prior plan of action -- what was going to be done, what was going to be followed up on, and whether those action items were addressed. 

AI can also be a huge help in ensuring adherence to best practices and consistent product delivery. At Gallagher Bassett, we use both generative AI and machine learning models to help us make more consistent decisions in various ways.

We tend to leave the power in the hands of our claims experts but still have AI offering recommendations or acting as a check. A simple example is determining the right reserve at a given time. We ultimately equip our adjusters to make a decision on the most likely ultimate financial outcome on that claim, but we have AI models running in the background that are constantly checking to say, "Is this what the AI would come up with?" If not, we have a conversation about why, and whether a reserve change is needed.

Insurance Thought Leadership

How do you ensure that AI implementation genuinely helps adjusters and customers rather than becoming technology in search of a problem?

Joe Powell

A few ways. One is getting those people involved early in the decision-making process. We have a team of former adjusters, whom we call AI specialists, who are embedded in our AI design team. These individuals advocate from the very beginning of an AI project for making the AI effective in the adjuster's workflow.

Second, as the product progresses to something usable, you want to begin testing it in multiple ways. There's automated testing for accuracy, which includes head-to-head tests against actual people to see which is more accurate. You run automated tests to see how consistent the AI is in its decision-making and whether that reveals areas where the model needs improvement. Finally, you gather feedback early in the process from a pilot group.

As an example, we launched a tool that summarizes claim files. You can imagine, if you're an adjuster handling an insurance claim that's been open for a few years and is highly complex, it can have hundreds of pages of documentation. It can be extremely helpful to have the AI scan through those hundreds of pages and give you a tight summary.

The summary covers what happened with the claim, the medical situation, how legal has progressed, whether we're nearing settlement, and what next steps have been documented. It also provides the ability to drill into any one of those areas to find out more. The user can then dive deeper into specific pieces of information, like all the medical visits that have happened.

We had the AI specialists involved upfront, but we also piloted it and got phenomenal feedback from adjusters in the field. We also asked what more we could provide. That input was key to the tool's success when we launched it across our entire North America operation.

Insurance Thought Leadership

How do you envision teams of AI agents working together, particularly in claims processing?

Joe Powell

This is an interesting problem because as an organization and industry we're launching more and more AI tools, and they all tend to report back to the human user -- typically the adjuster, in our world. 

I like to use the analogy of a basketball team. If you're playing basketball and you can only talk to the coach, you as a team are not going to communicate well. The players need to be able to communicate with each other, not just with the coach.

I think that's the next stage, especially as we see more agentic AI that's actually taking action. Those actions, just like human decisions, need to be based on information from other agents and what they're seeing. For example, if you have an AI that handles claim intake and asks various questions, being able to distill that information down and perhaps having a back-and-forth between that AI agent and one that's concerned with detecting fraud could equip the fraud agent to do an even better job.

The interplay between these AI agents is something we're just beginning to experiment with, but it's really powerful. In the same way that a human would be much less productive if they could only work by themselves and never ask anybody questions, AI agents will be much more productive when they can begin to interact and share knowledge.

Insurance Thought Leadership

How will you approach integrating AI agents that span different organizations, ones that maybe start with Gallagher Bassett and eventually expand to carriers and brokerages?

Joe Powell

For the foreseeable future, it's much more realistic to do this within a controlled environment. But we're laying the groundwork for broader implementation.

For example, we have that AI that engages with email -- flagging important messages and gleaning critical information from them. We've got an AI that covers claim documentation and another that listens to phone calls, creates transcript summaries, checks whether we're following best practices, and analyzes the sentiment of the person on the other end of the call.

When these various sources of information are available to AI, we can begin to pull relevant pieces from each to make tools like litigation predictors or reserve predictors even more powerful. Whatever you could dream up becomes possible when you can access and integrate these various sources of information.

Insurance Thought Leadership

What are your thoughts on the trend of moving from large language models to small language models that are specific to industries and business functions?

Joe Powell

It'll be interesting to see how this develops. There's a lot of experimentation happening around whether to use large language models (like GPT-4o), small language models (lighter, more efficient models), or large language models that have been fine-tuned for specific industries.

Which approach fits best with which use case is something we'll likely gain better insight into in the coming months and years. Right now, there's a bias toward large language models because they tend to be more accurate and thus lower risk, but there could theoretically be a place for small language models in very low-risk use cases.

Insurance Thought Leadership

The compute used to train the largest AI models has reportedly been doubling every 3.4 months, so there's certainly a lot of runway in front of us.

Joe Powell

There are some interesting rumblings about whether AI is hitting a plateau. While I don't have a strong opinion on that, I think this discussion misses a crucial point: There's a tremendous amount of value we can derive from applications of AI even if the underlying technology itself doesn't get dramatically better.

It's similar to what happened with the internet. At a certain point, even if bandwidth doesn't massively increase, there's still so much you can do with it simply by connecting people -- you just have to figure out creative ways of using the technology. That was true 30 years ago, and it's still relevant today.

We've got the foundational AI technology now.

Insurance Thought Leadership

I'm sure some amusing situations will come up as AI agents talk to each other. I remember moderating a panel in the late 1990s, when a senior partner at a major VC firm talked about using early versions of speech recognition in his car and in his phone. At one point, he said, his car said something unprompted. The phone responded, "I don't understand you." The car started talking back, the phone responded, and so on. His car and phone had this long conversation that he couldn't figure out how to stop.

Joe Powell

You do have to tightly prescribe those interactions. I'm probably making it seem like you just put the two together and say, "Have a conversation," but there's going to have to be a lot of forethought in terms of the architecture. You need to consider what information you want them to share and how it will be shared so you don't have these surprises, but instead have the right information flowing from the right agents to the right agents.

Insurance Thought Leadership

What advice would you give readers looking to begin their AI journey?

Joe Powell

There are several ways companies can get started. One is to build your own, which is what big, well-resourced claims organizations are going to want to do. You build a team of AI experts and invest in the infrastructure -- specifically a private and secure in-house AI environment. It's quite an investment, which is why this path is typically for larger organizations.

Smaller organizations can take different approaches. One option is to work with AI startups and organizations that are selling their services on an ad hoc basis. This is a fast way to get to value. However, longer term, this option presents some risks in terms of what I described -- if you eventually want your AIs taking action and working together, you need them to be cohesive. If you've gone with a multi-vendor approach where you're taking very specific skill sets from each of them, the question becomes whether they'll be able to work well together in the future.

The other option, and probably the better option for most organizations that have claims or claim handling needs, is to partner with an organization that offers an end-to-end solution -- one that has a vision that you share. The key then becomes a matter of making sure you do indeed share a vision -- in terms of their adherence to privacy and security, accuracy, where you think the technology will go, and the organization’s capability to improve your outcomes in terms of better claims results, better communication, and better employee experience.

Insurance Thought Leadership

How do you determine which functions to outsource? The case of Borders Bookstores outsourcing their online book sales to Amazon in the 1990s seems like a cautionary tale.

Joe Powell

Yeah, 100%. That's a fantastic point. At Gallagher Bassett, when we think about what we should build, we look at our core competencies.

The claims summarizer is a great example. We're experts at handling claims -- it's a huge part of what we do as an organization. So naturally, when we want an AI to help us digest claims into tight summaries, that's something we feel better positioned than anybody else to take on. But when you look at something like building the foundation model, that's not something we're going to try to do. There are other cases where we might say OpenAI or Google would be better because that's their game.

Insurance Thought Leadership

This is great. Thanks, Joe. 


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.
