Catastrophe models from third-party vendors have established themselves as essential tools in the armory of risk managers and other practitioners wanting to understand insurance risk relating to natural catastrophes. This is a welcome trend. Catastrophe models are perhaps the best way of understanding the risks posed by natural perils—they use a huge amount of information to link extreme or systemic external events to an economic loss and, in turn, to an insured (or reinsured) loss. But no model is perfect, and overreliance on the output of catastrophe models can have serious consequences.
This article provides a brief overview of the traps and pitfalls associated with catastrophe modeling. We expect that the list is already familiar to most catastrophe modelers, and it is by no means intended to be exhaustive. The pitfalls could be categorized in many different ways, but this list may trigger internal lines of inquiry that lead to improved risk processes. In the brave new world of enterprise risk management and ever-increasing scrutiny from stakeholders, that can only be a good thing.
1. Understand what the model is modeling…and what it is not modeling!
This is probably not a surprising "No. 1" issue. In recent years, the number and variety of loss-generating natural catastrophes around the world have reminded companies and their risk committees that catastrophe models do not, and probably never will, capture the entire universe of natural perils; far from it. This is no criticism of modeling companies, simply a statement of fact that needs to remain at the front of every risk-taker’s mind.
The usual suspects—such as U.S. wind, European wind and Japanese earthquake—are "bread and butter" peril/territory combinations. However, other combinations are either modeled to a far more limited extent, or not at all. European flood models, for example, remain limited in territorial scope (although certain imminent releases from third-party vendors may well rectify this). Tsunami risk, too, may not be modeled even though it tends to go hand-in-hand with earthquake risk (as evidenced by the devastating 2011 Tohoku earthquake and tsunami in Japan).
Underwriters often refer to natural peril "hot" and "cold" spots, where a hot spot means a type of natural catastrophe that is particularly severe in terms of insurance loss and (relatively) frequent. The focus of modeling companies on the hot spots is right and proper, but it means that cold spots can be somewhat overlooked. Indeed, the worldwide experience in 2011 and 2012 (including, among other events, the Thailand floods, the Australian floods and the New Zealand earthquakes) reminded companies that so-called cold spots are quite capable of aggregating to significant levels of insured loss. The severity of the recurrent earthquakes in Christchurch, and the associated insurance losses, demonstrates the uncertainty and subjectivity of the cold spot/hot spot distinction.
There are various ways of managing around the natural focus of catastrophe models on hot spots (exclusions, named perils within policy wordings, maximum total exposure limits, etc.), but so-called cold spots need to remain on insurers’ risk radars, and companies also need to remain aware of the possibility, and potential impact, of other non-modeled risks.
2. Remember that the model is only a fuzzy version of the truth.
It is human nature to take the path of least resistance; that is, to rely on model output and assume that the model is getting you pretty close to the right answer. After all, we have the best people and modelers in the business! But even were that true, skepticism tends to erode along the reporting chain: model output is treated with the most suspicion by the modeler, with rather less concern by the next layer of management, and so on, until the summarized output reaches the board and is treated as absolute truth.
We are all very aware that data is never complete, and there can be surprising variations of data completeness across territories. For example, there may not be a defined post or zip code system for identifying locations, or original insured values may not be captured within the data. The building codes assigned to a particular risk may also be quite subjective, and there can be a number of "heroic" assumptions made during the modeling process in classifying and preparing the modeling data set. At the very least, these assumptions should be articulated and challenged.

There can also be a "key person" risk, where data preparation has traditionally resided with one critical data processor, or a small team. If knowledge is not shared, then there is clear vulnerability to that person or team leaving. But there is also a risk of undue and unquestioning reliance being placed upon that individual or team, reliance that might be due more to their unique position than to any proven expertise.
What kind of model has been run? A detailed, risk-by-risk model or an aggregate model? Certain people in the decision-making chain may not even understand that this could be an issue and simply consider that "a model is a model."
It is worth highlighting how this fuzzy version of the truth has emerged both retrospectively and prospectively. Retrospectively, actual loss levels have on occasion far exceeded modeled loss levels: the breaching of the levees protecting New Orleans during Hurricane Katrina in 2005, for example. Prospectively, new releases or revisions of catastrophe models have caused modeled results to move, sometimes materially, even when there is no change to the underlying insurance portfolio.
3. Employ additional risk monitoring tools beyond the catastrophe model(s).
Catastrophe models are a great tool, but it is dangerous to rely on them as the only source of risk management information, even when an insurer has access to more than one proprietary modeling package.
Other risk management tools and techniques available include:
- Monitoring total sum insured (TSI) by peril and territory (see the sketch after this list)
- Stress and scenario testing
- Simple internal validation models
- Experience analysis
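As a minimal illustration of the first item above, the sketch below aggregates total sum insured by peril and territory from a simple exposure listing. The field names and figures are hypothetical; a real implementation would sit on top of the insurer's own exposure database.

```python
from collections import defaultdict

# Hypothetical exposure records: (peril, territory, total sum insured).
# All names and values are illustrative only.
exposures = [
    ("windstorm", "US", 250_000_000),
    ("windstorm", "Europe", 120_000_000),
    ("earthquake", "Japan", 90_000_000),
    ("flood", "Thailand", 35_000_000),
    ("earthquake", "New Zealand", 20_000_000),
    ("windstorm", "US", 60_000_000),
]

# Aggregate TSI by peril/territory combination.
tsi = defaultdict(float)
for peril, territory, sum_insured in exposures:
    tsi[(peril, territory)] += sum_insured

# Report the aggregated exposure, largest combinations first.
for (peril, territory), total in sorted(tsi.items(), key=lambda kv: -kv[1]):
    print(f"{peril:<12}{territory:<14}{total:>15,.0f}")
```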
Stress and scenario testing, in particular, can be very instructive because a scenario yields intuitive and understandable insight into how a given portfolio might respond to a specific event (or small group of events). It enjoys, therefore, a natural complementarity with the hundreds of thousands of events underlying a catastrophe model. Furthermore, it is possible to construct scenarios to investigate areas where the catastrophe model may be especially weak, such as consideration of cross-class clash risk.
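To make the idea concrete, a deterministic scenario can be as simple as applying assumed damage ratios to the exposure inside an event footprint and then allowing for deductibles and limits. The sketch below uses entirely hypothetical damage ratios and policy records; it illustrates the mechanics rather than any particular vendor footprint.

```python
# Hypothetical scenario: a single windstorm footprint with assumed mean
# damage ratios by zone. All figures are illustrative only.
damage_ratios = {"US_coastal": 0.15, "US_inland": 0.03}

# Simplified policy records: (zone, sum insured, deductible, limit).
policies = [
    ("US_coastal", 10_000_000, 250_000, 5_000_000),
    ("US_coastal", 4_000_000, 100_000, 2_000_000),
    ("US_inland", 8_000_000, 50_000, 6_000_000),
]

scenario_loss = 0.0
for zone, sum_insured, deductible, limit in policies:
    ground_up = damage_ratios.get(zone, 0.0) * sum_insured
    # Move from ground-up loss to insured loss via deductible and limit.
    insured = min(max(ground_up - deductible, 0.0), limit)
    scenario_loss += insured

print(f"Estimated scenario loss: {scenario_loss:,.0f}")
```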
Experience analysis might, at first glance, appear to be an inferior tool for assessing catastrophe loss. Indeed, at the most extreme end of the scale, it will normally provide only limited insight. But catastrophe models are themselves built and parameterized using historical data and historical events. This means that a quick assessment of how a portfolio has performed against the usual suspects, such as, for U.S. exposures, hurricanes Ivan (2004), Katrina (2005), Rita (2005), Wilma (2005), Ike (2008) and Sandy (2012), can provide some very interesting independent views on the shape of the modeled distribution. In this regard, it is essential to tap into the underwriting expertise and qualitative insight that property underwriters can bring to risk assessment.
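One simple form of such an assessment, sketched below with hypothetical figures, is to place the portfolio's actual (or as-if) losses from named historical events onto the modeled exceedance curve and read off the return period the model would assign to each event. Large discrepancies are a prompt for further questions rather than proof of model error.

```python
import numpy as np

# Hypothetical modeled exceedance curve for the portfolio: return periods
# (years) and the corresponding modeled losses. Illustrative figures only.
return_periods = np.array([2, 5, 10, 25, 50, 100, 250])
modeled_losses = np.array([5e6, 15e6, 30e6, 60e6, 90e6, 130e6, 200e6])

# Actual (as-if) portfolio losses from selected historical events.
historical = {"Katrina (2005)": 75e6, "Ike (2008)": 28e6, "Sandy (2012)": 55e6}

# Interpolate each historical loss onto the modeled curve to see the
# return period the model would assign to an event of that size.
for event, loss in historical.items():
    implied_rp = np.interp(loss, modeled_losses, return_periods)
    print(f"{event:<15} loss {loss / 1e6:>5.0f}m -> roughly 1-in-{implied_rp:.0f} years")
```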
4. Communicate the modeling uncertainty.
In light of the inherent uncertainties that exist around modeled risk, it is always worth discussing how to load explicitly for model and parameter risk when reporting return-period exposures, and their movements, to senior management. Pointing out the need for model risk buffers, and highlighting that they are material, can trigger helpful discussions in the relevant decision-making forums. Indeed, finding the most effective way of communicating the weaknesses of catastrophe modeling, without losing the headline messages in the detail and complexity of the modeling steps, and without senior management dismissing the models as too flawed to be of any use, is sometimes as important for the business as the original modeling process.
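A minimal sketch of such an explicit loading, assuming a simple multiplicative buffer agreed internally (all figures are hypothetical), might look like this:

```python
# Hypothetical modeled 1-in-200 loss and an explicit loading for model and
# parameter risk. Both figures are illustrative only.
modeled_1_in_200 = 150_000_000
model_risk_loading = 0.20  # 20% buffer on top of the modeled figure

reported_1_in_200 = modeled_1_in_200 * (1 + model_risk_loading)
buffer = reported_1_in_200 - modeled_1_in_200

print(f"Modeled 1-in-200 loss : {modeled_1_in_200:,.0f}")
print(f"Model risk buffer     : {buffer:,.0f}")
print(f"Reported 1-in-200 loss: {reported_1_in_200:,.0f}")
```

Presenting the modeled figure and the buffer side by side keeps the size of the allowance for model and parameter risk visible to senior management.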
The decisions that emerge from these internal debates should ultimately protect the risk carrier from surprise or outsize losses. When they happen, such surprises tend to cause a rapid loss of credibility with outside analysts, rating agencies and capital providers.