Natural catastrophe risk models have revolutionized the property/casualty re/insurance business over the past 30 years. They have allowed more efficient deployment of capital by providing a rigorous way of estimating potential losses, better quantifying the tail and increasing trust in the probabilities assigned to natural disasters and the damage and losses they produce.
All of these models have been developed from common assumptions: An event happens and produces impacts on a known (although somewhat uncertain) exposure (property or other fixed asset), which has a known (although, again, somewhat uncertain) vulnerability to the consequences (hazard) of the originating event. Using an intricate mix of physics (through natural science and engineering lenses) and statistics, such models produce insurance loss estimates that are, generally, robust and defensible.
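As a rough illustration only (not any vendor's actual implementation), the event–exposure–vulnerability chain described above can be sketched as a toy loss calculation. The piecewise-linear vulnerability curve, site values and hazard intensities below are invented for illustration:

```python
def damage_ratio(intensity, threshold=0.2, saturation=1.0):
    """Toy vulnerability curve: no damage below a hazard-intensity
    threshold, total damage at saturation, linear in between."""
    if intensity <= threshold:
        return 0.0
    if intensity >= saturation:
        return 1.0
    return (intensity - threshold) / (saturation - threshold)

def event_loss(sites, intensity_at):
    """Ground-up event loss: exposed value at each site times the damage
    ratio implied by the hazard intensity the event produces there."""
    return sum(value * damage_ratio(intensity_at[s])
               for s, value in sites.items())

# Two hypothetical sites hit by one event:
loss = event_loss({"A": 1_000_000, "B": 500_000},
                  {"A": 0.6, "B": 0.1})
```

The point of the sketch is the fixed pipeline: hazard intensity in, damage ratio out, applied to a static exposure. It is exactly this fixed structure that infectious disease models break, as discussed below.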
As new systemic and non-natural risks have emerged, establishing the potential future loss range of perils, such as terrorism and cyber, has required the introduction of social science disciplines (and greater levels of uncertainty) but did not greatly disrupt the established logic of the cat model; the components and controls remained familiar.
Not so infectious disease models. First introduced to the insurance sector to capture excess mortality from global pandemics in the life insurance business, they began as a combination of the stochastic elements of natural catastrophe models with a well-established class of epidemiological model: the Susceptible–Infectious–Recovered (SIR) compartmental model (and its many and varied siblings).
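A minimal deterministic SIR sketch, with purely illustrative parameters (a transmission rate `beta`, a recovery rate `gamma` and a closed population), shows the compartmental mechanics:

```python
def sir_run(beta=0.3, gamma=0.1, n=1_000_000, i0=10, days=365, dt=0.1):
    """Forward-Euler integration of the classic SIR equations.
    beta: transmission rate; gamma: recovery rate (so R0 = beta/gamma = 3).
    Returns final S, I, R and the peak infectious count."""
    s, i, r = float(n - i0), float(i0), 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # S -> I flow this step
        new_rec = gamma * i * dt          # I -> R flow this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = sir_run()
attack_rate = r / 1_000_000   # share of the population ever infected
```

With a reproduction number of 3, this closed-population run infects roughly 94% of the population, which is one reason the control measures discussed below matter so much to modeled outcomes.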
Unknowns
From a traditional cat modeling perspective, many unknowns remained. For example, the two components of “hazard” – location and intensity – were both poorly understood, thanks to a sparse and poorly documented experiential history and only a rudimentary understanding of the zoonotic viruses that are the dominant cause of epidemics and pandemics.
And the model architecture required was more Gaudí than Brutalism. There is no fixed exposure or vulnerability; both are dynamic and feed directly back into the model at its next time step. Nor are exposure and vulnerability governed by engineering equations; they are the assumed impacts of political decisions and human behavior, of travel webs and social networks.
The SARS-CoV-2 virus has brought epidemiological modeling to our living rooms (many doubling as home offices). Previously obscure epidemiological modelers have become household names, and the concepts of reproduction numbers, non-pharmaceutical control measures and even herd immunity have become all too familiar. Covid-19 is by far the best-documented pandemic ever, but even after many months of live information (of widely varying availability and quality) with which to calibrate forward-looking models of case counts and mortality, inconsistencies and uncertainties abound.
See also: Transformation of the Risk Landscape
Epidemic forecasting, by nature, is a tall order. In some cases, these model inconsistencies are due to different assumptions that necessarily change as new information becomes available. Another reason model outputs may not reflect future outcomes is because there is a feedback loop dynamic – models affect reality. If a model predicts a dire outcome, it may in fact prompt decision makers and even the general public to change their behaviors, thereby changing the final outcome.
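That feedback dynamic can be caricatured by letting the transmission rate fall as prevalence rises, a crude stand-in for people changing behavior as the outlook worsens. The damping constant `k` below is a pure assumption, not a calibrated behavioral parameter:

```python
def sir_with_feedback(beta0=0.4, gamma=0.1, k=50.0, n=1_000_000, i0=10,
                      days=365, dt=0.1):
    """SIR with a crude behavioral feedback: the effective transmission
    rate shrinks as prevalence (and, implicitly, public concern) rises.
    Returns the final recovered count and the peak infectious count."""
    s, i, r = float(n - i0), float(i0), 0.0
    peak = i
    for _ in range(int(days / dt)):
        beta = beta0 / (1.0 + k * i / n)  # behavior dampens transmission
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return r, peak

# The same epidemic with (k=50) and without (k=0) the feedback:
final_fb, peak_fb = sir_with_feedback()
final_raw, peak_raw = sir_with_feedback(k=0.0)
```

Even this toy version shows the forecasting dilemma: a model whose dire baseline (`k=0`) prompts behavioral change has, in effect, steered reality toward the damped trajectory and away from its own prediction.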
Further challenges are found in the conversion of pandemic model outputs to the short-term economic impacts of interest to P&C re/insurers. The literature on the economic impacts of pandemics is extremely sparse (although this will change) and dominated by economic simulations that sit on top of epidemic simulations, rather than empirical data. The consequences of government policy responses (like lockdowns) and sociological dynamics (fear, social distancing) are generally not economic outputs from models but input assumptions driving the direction of the reproduction number and, ultimately, the outcome of the epidemiological event.
As one moves from modeling a single event to the full probabilistic modeling familiar to the re/insurance industry, additional challenges must be addressed.
We think near misses are frequent in real life and must be captured via counterfactuals in the modeling domain. Two coronaviruses with very similar characteristics, emerging in very similar locations, can lead to very different global outcomes at the whim of individual actions – by patient zero, a head of state or many people in between – that are impossible to capture fully in a stochastic framework. Big challenges remain in quantifying the public policy and behavioral elements that shape the nature of the risk; these, too, need to be mapped out as they evolve over time and then linked to biological and epidemiological modeling frameworks.
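The probabilistic framing the industry is used to can be sketched as a toy stochastic event set: each simulated year draws an outbreak count and loss severities, and the annual maxima yield an occurrence exceedance probability (OEP) curve. The frequency and severity distributions below are invented for illustration:

```python
import math
import random

def poisson(rng, lam):
    """Draw a Poisson event count via Knuth's method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_years(n_years=10_000, freq=0.2, seed=42):
    """Toy event set: per-year Poisson outbreak counts with lognormal
    loss severities; returns the largest event loss in each year."""
    rng = random.Random(seed)
    annual_max = []
    for _ in range(n_years):
        losses = [rng.lognormvariate(0.0, 2.0)
                  for _ in range(poisson(rng, freq))]
        annual_max.append(max(losses, default=0.0))
    return annual_max

def oep(annual_max, threshold):
    """Occurrence exceedance probability: share of simulated years whose
    largest event loss exceeds the threshold."""
    return sum(1 for x in annual_max if x > threshold) / len(annual_max)

am = simulate_years()
prob_any_loss = oep(am, 0.0)   # roughly 1 - exp(-freq)
```

The hard part the text describes is not this mechanical simulation but justifying the frequency and severity assumptions behind it, and encoding the counterfactual near misses that history alone does not reveal.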
Lessons to learn
Progress is being made, however, and lessons from Covid-19 will help, although the temptation to model to the last big event must be closely managed. The next pandemic will almost certainly be different in character.
There have been significant advances in our understanding of the nature and spatial distribution of zoonotic viruses that pose the greatest risk of spilling into human populations and igniting pandemics. Improvements in biosurveillance have also shed new light on the rate of spillover, which is critical to characterizing high-frequency events, as well as the tail.
There are also continuing advances in modeling methodology, ranging from the incorporation of socio-political factors to the capture of population movements. And there is still work to be done. The assumptions required to construct a probabilistic pandemic model are hugely influential on outcomes but are, for now, based on expert judgments that are as much art as science and that vary (often in ways that are not readily quantifiable) from modeler to modeler. The use of structured expert judgment to quantify and constrain the uncertainties in such assumptions – and thus in model outcomes – is a promising area of development, given its successful deployment in other contexts. Alongside other innovations, it will help to build a level of trust in pandemic models approaching that found in nat cat models.
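Mechanically, structured expert judgment can be sketched as a performance-weighted combination of expert estimates, loosely in the spirit of Cooke's classical model (which, in full, derives the weights from calibration questions). The experts, weights and quantile values below are entirely hypothetical, and the simple quantile averaging here is a simplification of the real pooling schemes:

```python
def pooled_quantiles(expert_quantiles, weights):
    """Weighted average of each expert's (5th, 50th, 95th) percentile
    estimates, using normalized performance weights."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return tuple(
        sum(w * q[k] for w, q in zip(norm, expert_quantiles))
        for k in range(3)
    )

# Three hypothetical experts estimating, say, an annual spillover
# frequency; weights assumed to come from prior calibration questions:
experts = [(0.010, 0.05, 0.20), (0.020, 0.08, 0.40), (0.005, 0.03, 0.15)]
weights = [0.5, 0.3, 0.2]
low, mid, high = pooled_quantiles(experts, weights)
```

The attraction of the structured approach is less the arithmetic than the discipline: weights are earned on verifiable questions, so the spread of the pooled estimate becomes a defensible, documented quantity rather than one modeler's intuition.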
See also: Benchmarks, Analytics Post-COVID
Despite present and future scientific and modeling advances, their full benefits will not be realized if decision makers fail to use data and analytical tools effectively in their decision-making, whether to inform preparedness or to guide response.
In the context of the global re/insurance market, it must be recognized that while modeling infectious disease risk is challenging and will take time and resources to build the level of trust found in nat cat models, there are already pathways to gain an understanding of the risk. This present understanding is sufficient to support tangible innovation – policy experiments, insurance structures, refinements to preparedness and mitigation strategies – within both public and private sectors. Ultimately, further innovation will be necessary (and is entirely within our grasp) if we hope to better manage the financial and social consequences of future epidemics and pandemics.