The Earth is a living, breathing planet, rife with hazards that often strike without warning. Tropical cyclones, extra-tropical cyclones, earthquakes, tsunamis, tornadoes and ice storms: severe events are part of the planet’s natural processes. Fortunately, the vast majority of these events are not what we would categorize as “catastrophic.” When nature does call, however, these events can be incredibly destructive.
To help put things into perspective: nearly 70% (and growing) of the world’s population currently lives within 100 miles of a coastline. When a tropical cyclone makes landfall, it is likely to affect millions of people at once and cause billions of dollars in damage. Though the physical impact of a windstorm or earthquake is regional, the risk associated with such events, including the economic aftermath, is not. The economic repercussions are often felt globally, in both the public and private sectors. We need only look back to Hurricane Katrina, Superstorm Sandy and the recent tsunamis in Japan and Indonesia to see the toll a single catastrophe can take on populations, economies and politics.
However, because actual catastrophes are so rare, property insurers are left severely under-informed when underwriting coverage and are vulnerable to catastrophic loss.
With so little historical data, insurers’ standard actuarial practices are unhelpful and often dangerous: the likelihood of underpricing increases dramatically. If underwriting teams lack the tools to know where large events will occur, how often they will occur or how severe they will be when they do, then risk management teams must cap their exposure blindly. Insurers without the proper tools cannot fully grasp the implications of thousands of claims arriving from a single event, so they place arbitrary capacity limits on geographic exposures, resulting in an unavoidable misallocation of capital.
However, insurers’ perceived success with these arbitrary risk management practices, combined with a fortunate, decades-long pause in catastrophes, created a perfect storm of profit that lulled them into a false sense of security. It allowed them to grow to the point where they felt invulnerable to any large event that might come their way. They had been “successful” for decades. They were obviously doing something right, they thought. What could possibly go wrong?
Fast forward to late August 1992. The first of two pivotal events that forced insurers to change their attitude toward catastrophes was brewing in the Atlantic. Hurricane Andrew, a Category 5 storm with top wind speeds of 175 mph, would slam into southern Florida and cause by far the largest loss in the insurance industry’s history to that point: $15 billion in insured losses. As a result, 11 long-stable insurers became insolvent. Those still standing either quickly left the state or began drastically reducing their exposures.
The second influential event was the 1994 earthquake in Northridge, CA. It occurred on a previously unknown fault system and, though it measured only magnitude 6.7, it generated incredibly powerful ground motion, collapsing highways and leveling buildings.
Northridge, like Andrew, caused approximately $15 billion in insured losses and prompted insurers, fearing additional losses, to flee the California market altogether.
Andrew and Northridge were game changers. Across the country, capacity for both wind and earthquake perils was severely reduced as a result of those events. Where capacity was in particularly short supply, insurers sought substantial rate increases. They rethought their strategies and looked to reduce their catastrophic exposure in every respect. In both California and Florida, quasi-state entities were formed to replace the capacity from which the private market was withdrawing. To this day, Citizens Property Insurance in Florida and the California Earthquake Authority, the so-called insurers of last resort, control substantial market shares in their respective states. For many property owners exposed to severe winds or earthquakes, adequate coverage simply isn’t within financial reach, even 20 years removed from those two seminal events.
How was it possible that insurers could be so exposed? Didn’t they see the obvious possibility that southern Florida could have a large hurricane or that the Los Angeles area was prone to earthquakes?
What seems so obvious now was not so obvious then, because of a lack of data and understanding of the risks. Insurers were writing coverage for wind and earthquake hazards before they even understood the physics of those types of events. In hindsight, we recognize that the strategy was as imprudent as picking numbers from a hat.
What insurers need is data: where catastrophic events are likely to occur, how often they are likely to occur and how severe they will be when they do. The industry at that time simply lacked the ability to leverage the data and experience it so desperately needed to reasonably quantify its exposures and manage catastrophic risk.
Ironically, well before Andrew and Northridge, right under property insurers’ noses, two innovative people on opposite sides of the U.S. had come to the same conclusion and had already begun answering the following questions:
- Could we use computers to simulate millions of scientifically plausible catastrophic events against a portfolio of properties?
- Would the output of that kind of simulation be adequate for property insurers to manage their businesses more accurately?
- Could this data be incorporated into all their key insurance operations – underwriting, claims, marketing, finance and actuarial – to make better decisions?
What emerged from that series of questions would come to revolutionize the insurance industry.
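To make the idea behind those questions concrete, the sketch below shows, in highly simplified form, what simulating a catalog of plausible events against a portfolio of properties might look like. Everything here is illustrative and assumed for the example: the event names, annual rates, intensities, the vulnerability function and the portfolio values are invented, and are not drawn from any actual catastrophe model or the approaches described in this article.

```python
import random

# --- Hypothetical inputs (purely illustrative; not from any real model) ---

# A toy stochastic event catalog: each event has an assumed annual rate of
# occurrence, a region it affects and a crude intensity factor.
EVENT_CATALOG = [
    {"name": "FL_cat5_hurricane", "annual_rate": 0.02, "region": "FL", "intensity": 0.9},
    {"name": "FL_cat3_hurricane", "annual_rate": 0.10, "region": "FL", "intensity": 0.5},
    {"name": "CA_m6.7_earthquake", "annual_rate": 0.03, "region": "CA", "intensity": 0.7},
    {"name": "CA_m5.5_earthquake", "annual_rate": 0.15, "region": "CA", "intensity": 0.3},
]

# A toy portfolio of insured properties: region and total insured value (USD).
PORTFOLIO = [
    {"id": "prop-1", "region": "FL", "insured_value": 40_000_000},
    {"id": "prop-2", "region": "FL", "insured_value": 25_000_000},
    {"id": "prop-3", "region": "CA", "insured_value": 60_000_000},
]


def damage_ratio(intensity: float) -> float:
    """Very crude vulnerability function: fraction of value lost at a given intensity."""
    return min(1.0, intensity ** 2)


def simulate_year(rng: random.Random) -> float:
    """Simulate one year of events and return the aggregate portfolio loss."""
    total_loss = 0.0
    for event in EVENT_CATALOG:
        # Approximate each event's occurrence with a single Bernoulli draw,
        # which is reasonable here because the toy annual rates are small.
        if rng.random() < event["annual_rate"]:
            for prop in PORTFOLIO:
                if prop["region"] == event["region"]:
                    total_loss += prop["insured_value"] * damage_ratio(event["intensity"])
    return total_loss


def main() -> None:
    rng = random.Random(42)
    n_years = 100_000
    annual_losses = sorted((simulate_year(rng) for _ in range(n_years)), reverse=True)

    average_annual_loss = sum(annual_losses) / n_years
    # Aggregate loss exceeded in roughly 1% of simulated years
    # (the "1-in-100-year" loss for this toy portfolio).
    loss_1_in_100 = annual_losses[n_years // 100]

    print(f"Average annual loss: ${average_annual_loss:,.0f}")
    print(f"1-in-100-year aggregate loss: ${loss_1_in_100:,.0f}")


if __name__ == "__main__":
    main()
```

Even in this toy form, the output hints at why such simulations matter: an exceedance-type figure like the 1-in-100-year loss gives underwriting and risk management teams an evidence-based way to set capacity and price coverage, rather than relying on arbitrary geographic limits.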