One of the amazing things about where we are in the arc of data changing our lives is that analytic models are pervasive. They are changing our professional lives, for sure, but I was also reminded recently that models can be used in all areas of our lives. Why? Because, golf! As I watched the professional golf Tour Championship, I thought about how analytic models recently helped me to cash in on predictive golf data.
For the British Open golf tournament in July, the golf club where I play ran a Pick 5 pool. The club divided the field into the Top 5 players and A, B, C and D groups of players. You picked one player from each group, and the handful of people whose five players performed best won some credits in the pro shop.
I could have simply made my picks based on research, gut feel for the players and a little knowledge of the game. Instead, in a surprise to nobody, I opted to pick using a big data approach. CBS Sports created a simulation of all the golfers in the field playing the course for the event 10,000 times. They used the current statistics for each player, mapped how those statistics would help or hurt the player on the specific course for the event and then ranked the projected scores for the golfers. I made my picks based on their results.
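For readers who want to see the mechanics, here is a minimal sketch of that kind of Monte Carlo approach in Python. It is not the CBS Sports model; the player names, per-round statistics and the simple normal scoring model are all hypothetical stand-ins, purely to illustrate the idea of simulating the field many times and picking the lowest projected scorer in each group.

```python
import random

# Hypothetical "course fit" for each player: (mean strokes vs. par per round,
# round-to-round volatility), grouped the way the Pick 5 pool grouped them.
FIELD = {
    "A": {"Player A1": (-1.8, 2.1), "Player A2": (-1.2, 2.4)},
    "B": {"Player B1": (-0.9, 2.3), "Player B2": (-0.4, 2.6)},
}

N_SIMULATIONS = 10_000
ROUNDS = 4

def simulate_tournament(field, n_sims=N_SIMULATIONS):
    """Return each player's average simulated 72-hole score vs. par."""
    totals = {name: 0.0 for group in field.values() for name in group}
    for _ in range(n_sims):
        for group in field.values():
            for name, (mean, sd) in group.items():
                # One simulated tournament: four rounds drawn from a normal model.
                totals[name] += sum(random.gauss(mean, sd) for _ in range(ROUNDS))
    return {name: total / n_sims for name, total in totals.items()}

def pick_five(field):
    """Pick the lowest projected scorer from each group."""
    projections = simulate_tournament(field)
    return {group: min(players, key=projections.get) for group, players in field.items()}

if __name__ == "__main__":
    print(pick_five(FIELD))
```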
I won the pool for the British Open using this approach. The entry built from the golfers the CBS Sports model projected as the lowest scorers in each group beat the roughly 150 other picks from my club mates.
Where is the win in insurance data?
My experience has a corollary in insurance. There is money to be made (and saved) in insurance data modeling by understanding where underwriting is heading with the power of analytics. As we look at what is changing in underwriting, we’ll also consider its impact on insurance profitability, examining three areas in particular:
- Improving the pool of risk
- Deeper analysis and new data sources that will drive product innovation
- Artificial Intelligence (AI) and predictive analytics
Improving the pool of risk
Let’s start with the basics and define the pool. Our pool contains insureds (the breadth of the pool) and their data (the depth of the pool). It would be nice, as underwriters, to pick only pools of winners, but criteria that strict would give us pools that are too small to generate premiums, and underwriters would frequently “lose” because their best picks would disappoint them.
This is the first lesson from the golf simulation’s success: I didn’t use it to pick the winner of the tournament. I used the model to pick a portfolio of golfers who should have performed better than the others in their group. I actually didn’t have the winner of the tournament in my group.
As with putting together a baseball team, picking stocks for a mutual fund or filling any role where the performance of a group matters, building a healthy pool of risk is a “no-brainer.” Actually doing it, however, is more difficult than simply looking at a few key factors. It requires expert data analysis (some of it automated). It requires excellent visibility into the pool of risk. And it requires continual monitoring and tweaking (possibly with some assistance from AI and cognitive computing).
See also: The Next Step in Underwriting
The basic idea, in summary, is that we need complete knowledge of the full pool and better visibility into the life of each individual applicant. Underwriters are trying to create a balanced portfolio. They don’t need to pick a perfect risk, but they need to know who is positioned to outperform their peers. By identifying those above-expectation performers, they can skew their portfolio risk lower and outperform the odds and the market.
Deeper analysis, new data sources and “smarter” pools will prepare insurers for product innovation.
The second lesson from the golf simulation was this: Every piece of data that is available should be made available in the decision process. In Majesco’s recent report, Winning in a New Age of Insurance: Insurance Moneyball, we look at how outdated analytic techniques can hide strategic opportunities. The risk to insurers is that up-and-comers will evaluate and price risk with more sources of data and more relevant data.
Traditional underwriting characteristics will give you “A”, “B” and “C” risks (as well as those you’ll reject), but they won’t let you see within a peer group to find where there’s value in writing business. Traditional underwriting also assumes that an applicant’s factors don’t change once the applicant has entered the pool. And it treats everyone in the pool equally (same premiums, same terms), with the same expected outcomes.
But what if pools were built with the ability to tap into more granular data and to adapt forecasts based on current conditions and possible trends? Like looking at a golfer’s ability to play on a wet course, what if we could see how a number of new factors, including both personal and global data, will affect outcomes? For example, what if commercial insurers could see how small changes in investor sentiment early in a cycle drive expensive, D&O-covered class action lawsuits three years (two renewals) later?
Look at life insurance. When your company initially accepted Ron as an applicant, it placed him into the A pool. At the time, you only collected MIB data, credit data and some personal data. Since then, you’ve started giving small discounts to the same pool when given access to wearable data and social media data, and you have started collecting Rx reports. In running some simulations, you realize that combining factors from the new data sources, such as Amazon purchase data or wearable data, can give you a much better picture of possible outcomes.
What if you set out to improve predictive analytics within the pool by re-analyzing the pool under newer criteria? Perhaps you offer to give wearables at a discount to insureds or free health check-ups to at-risk members of the pool. It could be any kind of data, but the key is continuous pool analysis.
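As a concrete illustration of that kind of continuous pool analysis, the sketch below re-scores every member of an existing pool when a new data source (here, hypothetical wearable data) becomes available. The feature names, weights and bands are invented assumptions for the example, not an actuarial model.

```python
# Illustrative weights: higher score means riskier. These are assumptions,
# not real underwriting factors.
BASE_WEIGHTS = {"mib_flags": 0.6, "credit_score_band": -0.2}
WEARABLE_WEIGHTS = {"avg_daily_steps_band": -0.3, "resting_hr_band": 0.25}

def risk_score(insured, weights):
    """Simple linear score over whatever features the insured has."""
    return sum(w * insured.get(feature, 0.0) for feature, w in weights.items())

def reanalyze_pool(pool, extra_weights=None):
    """Re-score the whole pool, optionally blending in a new data source."""
    weights = {**BASE_WEIGHTS, **(extra_weights or {})}
    return sorted((risk_score(member, weights), member["id"]) for member in pool)

pool = [
    {"id": "Ron", "mib_flags": 1, "credit_score_band": 4, "avg_daily_steps_band": 5},
    {"id": "Ann", "mib_flags": 0, "credit_score_band": 3, "resting_hr_band": 2},
]

# Ranking before and after the wearable data source is switched on.
print(reanalyze_pool(pool))
print(reanalyze_pool(pool, WEARABLE_WEIGHTS))
```

The point is not the particular weights but the loop: each time a new source comes online, the whole pool is re-analyzed rather than leaving the original classification frozen.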
Preparation’s bonus: Product agility and on-demand underwriting
Every bit of work that goes into analyzing new data sources carries a second, equally valuable payoff: preparation for next-generation product development.
Once we have our data sources in place and our analytics models prepared, we can grasp the real value in each source, building some redundancy and fluidity into the process. So, if a data source goes away, becomes temporarily unavailable or becomes tainted (imagine more Experian breaches), it can be removed without consequence.
This new thinking will help insurers prepare for on-demand products that will need not just on-demand underwriting but on-demand rating and pricing. As we noted in our thought leadership report, Future Trends 2017: The Shift Gains Momentum, the sharing economy is giving rise to new product needs and new business models that use real-time, on-demand data to create innovative products that don’t fit under the constraints of current underwriting practices. P&C insurers, for example, are experimenting with products that can be turned on and off for different coverages … like auto insurance for shared drivers like Uber or Lyft. And this is just the start of the on-demand world, where insurance is available when and where it is needed and priced based on location, duration and circumstances of need.
If an insurer has removed the rigidity of its data collection and added real depth to data alternatives, it will be able to approach these markets with greater ease. At Majesco, we help insurers employ data and analytic strategies that will provide agility in the use of data streams.
Real-time underwriting will become instant/continuous underwriting. Analytics will be used more to prevent claims than to predict them.
Which brings us to the role of artificial intelligence in underwriting.
See also: Data Opportunities in Underwriting
AI and predictive analytics
Simulations have been in use for decades, but, with artificial intelligence and cognitive computing, simulations and learning systems will become underwriting’s greatest asset. Underwriters who have seen hundreds and thousands of applications can pick out outlying factors that have an impact on claims experience. This is good, and certainly it should continue, but perhaps a better way to pick the winners would be for applications to run through simulations first. Let cognitive computing have the opportunity to pick out the outlying factors, and allow predictive analytics to weigh applications and opportunities for protection. (For more information on how AI will affect insurance, be sure to read Majesco’s Future Trends 2017: The Shift Gains Momentum.)
Machine learning will improve actuarial models, bringing even more consistency to underwriting and greater automation potential to higher and higher policy values. It will also allow for “creativity” and rapid testing of new products. Can we adapt a factor and re-run the simulation? Can we dial up or dial down the importance of a factor? Majesco is currently working with IBM to integrate AI/cognitive into the next generation of underwriting and data analysis.
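To make "dialing a factor up or down" concrete, here is a toy re-simulation in the same hedged spirit as the golf sketch above. The base frequencies, risk factor and severity figures are invented for illustration; the idea is simply that changing one factor's weight and re-running the simulation shows how expected losses move.

```python
import random

def simulate_expected_loss(policies, factor_weight, n_sims=10_000):
    """Average simulated annual loss per policy for a given factor weight."""
    total = 0.0
    for _ in range(n_sims):
        for p in policies:
            # Claim frequency rises with the (scaled) risk factor.
            freq = p["base_frequency"] * (1 + factor_weight * p["risk_factor"])
            if random.random() < freq:
                # Claim severity drawn from an exponential with the stated mean.
                total += random.expovariate(1 / p["avg_severity"])
    return total / (n_sims * len(policies))

policies = [
    {"base_frequency": 0.05, "risk_factor": 0.4, "avg_severity": 8_000},
    {"base_frequency": 0.03, "risk_factor": 0.9, "avg_severity": 12_000},
]

# Re-run the same simulation with the factor dialed down, neutral and up.
for weight in (0.5, 1.0, 1.5):
    print(weight, round(simulate_expected_loss(policies, weight), 2))
```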
Perfection is unattainable. But if we aim for the best process we can produce, we can certainly use new sources of data and new methods of analysis to improve our game and take home a higher share of the winnings.
How do I know this? Well, the golf club ran a pool for the PGA Championship the month after the British Open. I didn’t win that pool. Out of more than 200 entries, I came in second. Cha-ching!