Insurers stake their businesses on their ability to accurately price risk when writing policies. For some, faith in their pricing is a point of pride. Take Progressive. The auto insurer is so confident in the accuracy of its pricing that it facilitates comparison shopping for potential customers—making the bet it can afford to lose a policy that another insurer has underpriced, effectively passing off riskier customers to someone else’s business.
A number of data points go into calculating the premium of a typical home or auto insurance policy: the claim history or driving record of the insured; whether a security system like a smoke or burglar alarm is installed; the make, model and year of the car, or the construction of the home. Another contributing factor, of course, is location, whether because of an area’s vehicle density, its crime statistics or a home’s distance from the coastline. Insurers pay close attention to location for these reasons, but the current industry-standard methods for determining a location—whether by zip code or street-segment data—often substitute an estimated location for the actual one. In many cases, the gap between the estimated and actual location is small enough to be insignificant, but where it’s not, there’s room for error—and that error can be costly.
Studies conducted by Perr&Knight for Pitney Bowes examined the gap between the generally used estimated locations and a more accurate method, to find out what impact the difference had on policy premium pricing. The studies found that around 5% of homeowner policies and a portion of auto policies—as many as 10% when looking at zip-code-level data—could be priced incorrectly because of imprecise location data. Crucially, the research found that the range of mispricing—in both under- and overpriced premiums—could vary significantly. That opens insurers up to adverse selection, in which they lose less-risky business to better-priced competitors and attract riskier policies with their own underpricing.
Essentially, this report discusses why a “close enough is good enough” approach to location in premium pricing overlooks the importance of accuracy—and opens insurers to underpricing risk and adverse selection.
The first part of this paper discusses the business case for hyper-accurate location data in insurance, before going into more detail on the Perr&Knight research and the implications of its findings, as well as considerations when improving location data. It concludes with a few key takeaways for insurers going forward. We hope you find it constructive and a good starting point for your own discussions.
The Business Case for Better Location Data
Precise location data helps insurers increase profits by reducing risk in underwriting and, in turn, underpricing in policies. Together, these improvements strengthen the overall health of the insurer’s portfolio.
“The basic, common sense principle is that it’s really hard to determine the risk on a property you’re insuring if you don’t know where that is,” says Mike Hofert, managing director of insurance solutions at Pitney Bowes. “Really, the key question is, how precisely do you need to know where it is? If you’re within a few miles, is that close enough?”
Most of the time, Hofert says, the answer might be yes—for homes well inside major hurricane, landslide or wildfire zones, for example, because those homes all share a similar location-based risk profile—but not always. Where it’s not, imprecise location data can have costly consequences. “There are instances where being off by a little bit geographically turns into a big dollar impact,” he says.
See also: Competing in an Age of Data Symmetry
Currently, industry-standard location data for homeowner policies typically relies on interpolated street data. That means streets are split into segments of varying length, and homes within a given segment are priced at the same risk. However, explains Jay Gentry, insurance practice director at Pitney Bowes, the more precise method is to use latitude and longitude measured at the center of the parcel, where the house is. That can be a difference of a few feet from the segment, or it can be a difference of 500 feet, a mile or more. “It just depends on how good the [segment] data is,” Gentry says.
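To put that gap in concrete terms, consider a minimal sketch, with hypothetical coordinates rather than data from any study, that computes the distance between an interpolated street-segment geocode and a parcel-centroid geocode for the same address:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in feet."""
    earth_radius_ft = 20_902_231  # mean Earth radius (~6,371 km) in feet
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_ft * asin(sqrt(a))

# Hypothetical geocodes for the same address:
street_segment = (35.7146, -83.5118)   # interpolated along the street segment
parcel_centroid = (35.7171, -83.5090)  # measured at the center of the parcel

gap = haversine_feet(*street_segment, *parcel_centroid)
print(f"Gap between estimated and actual location: {gap:,.0f} feet")
```

With these made-up coordinates the two geocodes for the "same" home sit more than a thousand feet apart, which can be enough to move a property across a flood-zone or wildfire-zone boundary.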
And that flows into pricing, because when underwriters can more accurately assess the risk of a location—whether it’s where a home is located or where a car is garaged—policies can be priced according to the risk that location actually represents.
It’s tempting to look at the portion of underpriced policies and assume that they’re zeroed out by the overpriced policies an insurer is carrying, but Gentry says that’s the wrong way to look at it—it’s not a “zero sum” game. “If you really start peeling back the layers on that, the issue is that—over a period of time—it rots out the validity of the business,” he says. “If you have an over- and underpriced scenario, the chances are that you’re going to write a lot more underpriced business.”
A key point here is reducing underpricing. When the underlying data leads to policies priced at a lower rate than they should be, it not only opens an insurer up to paying out on a policy for which it hasn’t received adequate premiums; underpriced policies may also come to constitute a larger and larger portion of the overall book. This is essentially adverse selection.
Michael Reilly, managing director at Accenture, explains that if the underlying pricing assumptions are off, then a certain percentage of new policies will be mispriced, whether at too high or too low a rate. “The ones that are overpriced, I’m not going to get,” he says, explaining that the overpriced submissions will find an insurer that more accurately prices at a lower rate. “The ones that are underpriced, I’m going to continue to get and so, over time, I am continuing to make my book worse,” he says. “Because I’m against competitors who know how that [policy] should be priced correctly, my book will start to erode.”
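To illustrate the dynamic Reilly describes, here is a minimal simulation sketch; the premiums, error rate and error size are made-up assumptions, not figures from the research:

```python
import random

random.seed(42)

def simulate_book(n_quotes=10_000, error_rate=0.06, error_size=0.30):
    """Quote a book where our price is noisy but a competitor's is accurate."""
    written = []
    for _ in range(n_quotes):
        fair = random.uniform(500, 3_000)          # true risk-based premium
        if random.random() < error_rate:            # mispriced due to bad location
            ours = fair * (1 + random.choice([-1, 1]) * error_size)
        else:
            ours = fair
        if ours <= fair:                            # customer takes the cheaper quote
            written.append((ours, fair))
    shortfall = sum(fair - ours for ours, fair in written)
    return len(written), shortfall

n_written, shortfall = simulate_book()
print(f"Policies written: {n_written}")
print(f"Premium shortfall vs. true risk: ${shortfall:,.0f}")
```

The overpriced quotes are lost to the accurately priced competitor, the underpriced quotes are won and kept, and the shortfall accumulates on the book—the erosion Reilly describes.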
And, if that policy is seriously underpriced, losses could easily outweigh all else. Gentry recalls the example of an insurer covering a restaurant destroyed in the Tennessee wildfires in 2016, which it had underpriced due to an inaccurate understanding of that location’s susceptibility to wildfire. “The entire block was wiped out by the wildfire, and [the insurer] had a $9 million claim that they will never recoup the loss on, based upon the premiums.”
The Value of Precision
Perr&Knight is an actuarial consulting and insurance operations solutions firm, assisting insurers with a range of activities including systems and data reporting, product development and regulatory compliance. It also commonly carries out research in the insurance space, and Pitney Bowes contracted it to conduct a comparison of home and auto policy pricing with industry-standard location data and its Master Location Data set. We spoke with principal and consulting actuary Dee Dee Mays to understand how the research was conducted and what it found. The following conversation has been edited for clarity and length:
How was each study carried out and what kinds of things were you looking to find?
On the homeowners' side, we looked at the geo-coding application versus the master location data application. And on the personal auto side, we looked at three older versions that are called INT, Zip4 and Zip5, and we compared those results with the master location data result.
In both cases, we selected one insurance company in one state—a large writer—and had Pitney Bowes provide us with all of the locations in the state. For homeowners, they provided us with a database of single-family, detached home addresses and which territory each geo-coding application would put the address in. They provided us with that database and then we calculated what the premiums would be based on those results and how different they would be, given the different territory that was defined.
For both cases, we picked a typical policy, and we used that one policy to say, “Okay, if that policy was written for all these different houses, or for a vehicle with all these different addresses, how much would the premium differ for that one policy under the various systems?”
And what did you find?
What we found [for homeowners] was that 5.7% had a change in territory. So, more than 94% had no change under the two systems. It comes down to the nearly 6% that do change.
I think that what is more telling is the range of changes. The premium could, under the master location data, either go up 87%, or it could go down 46%. You can see that there's a big possibility for a big change in premiums, and I would say that the key is, if your premium is not priced correctly, if your price is too high compared with what an accurate location would give you, you are probably not going to write that risk. [If] someone else was able to write it with a more accurate location and charge a lower premium, the policyholder would say, “Well, I want to go with this lower premium.”
See also: Location, Location, Location – It Matters in Insurance, Too
So, you're not going to get the premium that's too high, but if you're using inaccurate location and you come up with a lower premium than an insurer that was using accurate location, you are more likely to write that policyholder.
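As an illustration of how a change in territory assignment translates into a change in premium, consider the following sketch. The base premium, territory factors and address assignments are hypothetical, chosen only to echo the scale of the extremes Mays cites:

```python
# Hypothetical territory rating factors and territory assignments for the same
# addresses under two geocoding methods (not Perr&Knight's actual data).
territory_factor = {"T1": 0.81, "T2": 1.00, "T3": 1.50, "T4": 1.87}
base_premium = 1_200  # the one "typical policy" priced across all addresses

addresses = [
    # (address_id, territory under street-segment geocoding, under parcel centroid)
    ("A", "T2", "T2"),   # no change: same premium either way
    ("B", "T2", "T4"),   # re-mapped to a riskier territory: premium rises
    ("C", "T3", "T1"),   # re-mapped to a safer territory: premium falls
]

for addr, old_terr, new_terr in addresses:
    old_premium = base_premium * territory_factor[old_terr]
    new_premium = base_premium * territory_factor[new_terr]
    change = (new_premium - old_premium) / old_premium
    print(f"{addr}: ${old_premium:,.0f} -> ${new_premium:,.0f} ({change:+.0%})")
```

Most addresses land in the same territory either way, but the re-mapped minority can see premiums swing sharply in both directions, which is where the adverse-selection risk lives.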
The studies were conducted based on policies for homeowners in Florida and vehicle owners in Ohio; so what kind of conclusions can we draw about policies in other states?
I think it really depends on what the individual insurance company is using to price its policies. One [example] is that it’s now more common in states like California, Arizona, even Nevada, for companies to have wildfire surcharges—and they determine that based on the location of the property. So it’s definitely applicable in another state like that, because any time you’re using location to determine where the property is and you have rating factors based on the location, you have the potential that more-accurate data will give you a better price for the risk that you’re taking.
Putting a Plan in Place
Michael Reilly works with the insurance industry at Accenture, advising underwriters on pricing efficiency; he also works with Pitney Bowes to educate insurers about location data and its potential to affect the accuracy of premium pricing. We talked to him about Perr&Knight’s findings and the impact that more precise location data can have on pricing. The following conversation has been edited for clarity and length:
Given the finding that more than 5% of policies can be priced incorrectly due to location, what's the potential business impact for insurers?
It’s a very powerful element in the industry when your pricing is more accurate, when you know that you’ve priced appropriately for the risk that you have. And when there’s this leakage in here, you’ve got to recognize that the leakage isn’t just affecting the 5% to 6% of policies. That leakage, where they’re underpriced, has to be made up as a matter of actuarial discipline. So that underwriting leakage is actually spread as a few more dollars on every other policy that’s in the account. That jacks up all their pricing just a little bit, and it makes them a little bit less competitive. If their pricing is more accurate, that improves the overall quality of their book and improves their ability to offer better pricing throughout their book.
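A back-of-the-envelope sketch of that spreading effect, with all figures hypothetical rather than from Accenture or Perr&Knight:

```python
# Back-of-the-envelope illustration of leakage being spread across the book.
book_size = 100_000           # policies in force
avg_premium = 1_200           # average annual premium, dollars
underpriced_share = 0.05      # ~5% of policies mispriced low
avg_underpricing = 0.20       # underpriced by 20% on average

shortfall = book_size * underpriced_share * avg_premium * avg_underpricing
per_policy_load = shortfall / book_size

print(f"Annual premium shortfall: ${shortfall:,.0f}")
print(f"Load spread across every policy: ${per_policy_load:,.2f}")
```

Under these assumptions, a mispriced 5% of the book adds a load of roughly $12 to every policy—the "few more dollars on every other policy" that quietly erodes competitiveness.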
What are some of the reasons insurers have been slow to act on improving location data?
I think it’s coming from multiple elements. With anything like this, it’s not always a simple thing. One thing is, there are carriers that don’t realize it, don’t realize there is an opportunity for better location [data] and how much that better location [could] actually contribute to their pricing. The second—and part of the reason there’s a lack-of-awareness issue—is that the lack of awareness is twofold, because it’s also a lack of awareness by the business. Typically, data purchases are handled either by procurement or by IT, and the business doesn’t think about the imprecisions in its data. They just trust that the data they get from their geolocation vendor is good, and they move on with life.
The other piece is that replacing a geospatial location source is not [a matter of] taking one vendor [out] and plugging in a new one, right? We have all these policies on the books, and I’ve got to figure out how to handle that pricing disruption so I don’t lose customers who are underpriced. I want to manage them through it. I need to look at how I’m pricing and actually go back to my filing. Do I have to refile because I have a change in rate structure, or does my filing cover the fact that I replaced it with a more accurate system? So, I need to look at a couple of different things in order to get to the right price.
And then, quite frankly, once they open the covers on this, it also starts to raise other questions of, “Oh, wait a second.” If this data element is wrong or can be better, which other data elements can be improved, or what new data elements can be considered? Could the fire protection score be changed, or average drive speed be used? That’s why we’re starting to talk to carriers and say we might as well look for the other areas of opportunity as well, because we probably have more leakage than just this. This is the tip. It’s very easily identifiable, very easily measurable, but it’s probably not the only source of leakage within your current pricing.
See also: 10 Trends on Big Data, Advanced Analytics
What we’re trying to help [insurers] do is say, look, if you’re going to purchase this new data, let’s make sure that we have a plan on how we’re going to get in and start to achieve the value relatively quickly. In most cases, if it’s a decent-sized carrier, we know they’re issuing X number of wrong quotes per day because of not having the right location information. So how do we fix this as fast as possible, so we’re not continuing to make the problem worse?
And when you say realizing value quickly, what would be a typical timeline?
There are a couple of elements that will come into play. If someone has to do a refiling, the refiling itself will take a period of time. Assuming they don’t have to do a refiling—and not in all cases will they need to—and depending upon their technology, if they can immediately switch geolocations for new business post-renewals, then you can do that in a very, very short window. At least start to make sure that all new quotes are priced correctly.
Then the question becomes how you want to handle renewals—whether you want to spread the pricing increase over one year or two years, or along those lines. That usually takes a little bit more time to implement within a system, but probably not a significantly long period of time—only a couple of months, and then a year to run through your entire book to fully realize the value. Now, if you have to do a filing, all that could be delayed by X number of months.
Key Considerations
Given that location has a material impact on premium pricing, the onus is on insurers to have the most accurate location data available. Those that do will have a competitive advantage over those that don't. Keep in mind the following considerations:
- "Close enough" is not always good enough. Even though location is close enough most of the time, imprecision can have big costs when it masks proximity to hazards.
- The portion of policies affected may be small, but it can have big cost impacts. The range of under- and overpricing varied widely, with some premium pricing off by more than $2,000. And, as Michael Reilly points out, the impact of underwriting leakage is actuarially spread across the entire portfolio, making premiums incrementally less competitive.
- Underpricing is not "zeroed out" by overpricing. In fact, underpricing opens insurers to adverse selection, in which overpriced policies are lost to more accurately priced competitors, and underpriced policies make up a greater proportion of the business.
- Time to value can be quick – and new ratings filings are not always needed.
You can download the full report here.