The principal reserving challenge today is the opportunity cost of inefficiency, driven primarily by the ineffective use of human capital. This inefficiency makes it hard to redeploy resources toward continuous development and drains talent from reserving teams. The reporting process itself and its supporting activities are far too onerous and consume too much human capital.
Given this strain on resources and the continuous compromises made due to deficiencies, why has there been so little innovation to date? And why do we think things are going to change?
Uniquely complex
The reporting process is uniquely challenging due to its very different objectives, constraints and timescales compared with typical modeling processes. The governance controls and audit requirements are much more onerous. This adds significantly to the difficulty of maintaining the process - let alone considering innovative techniques and new approaches.
For example, some markets require insurers to document the models used and their parameters in enough detail for someone to independently recreate the results, a requirement that becomes even more challenging as complexity is added to the models involved.
The reserving process itself is an optimization exercise. We recognize that there is a range of reasonable best estimates at any given valuation date; the challenge is in optimizing the selection of our best estimates to meet reserving objectives over time. This is a non-trivial process control problem, and real progress can only be made with a step change in how we think about and perform reserving.
There is, of course, more to reserving than the reporting process. But until we address the fundamental issues here, there is little opportunity for growth. There has been a common perception that reserving is a statutory requirement and nothing more, but reserving is a critical part of business intelligence.
What has changed
The difference now is that advancements in technology provide the tools and resources necessary for a step change in capabilities. For example, workflow management solutions enable the production of much more decision support material in a shorter time, while maintaining and improving governance and controls. And the leaders in this space reduce the need for ad hoc analysis, have more data-driven analysis to support decision making and free up time to embed further capabilities.
These tools also provide a powerful means of integrating software and systems into an end-to-end process, supporting a best-in-class approach to the architecture and allowing much more flexibility to use the best tool for each particular job.
The availability of cheap computing power is opening up possibilities to leverage data assets. For example: robotic process automation is being used to produce more for less with increasing granularity; interpretation techniques are helping decision makers identify the pertinent information and prevent it from being lost in large volumes of analytical output; and machine learning is being leveraged to unlock value from unstructured data.
The ability to integrate a diverse range of systems and applications into a coherent process has greatly expanded the architectural possibilities, not just within reserving but across the organization. For this reason, real change in reserving practices appears feasible in the near future.
See also: How Machine Learning Halts Data Breaches
Impact of machine learning
The role of machine learning should be limited primarily to that of an enabler for targeted elements of the reserving solution, rather than a one-stop remedy. Further, machine learning needs to be considered in the context of a wider road map to a future target operating model. Investing in the right development at the wrong time is probably the biggest pitfall when it comes to machine learning and reserving.
Besides operational efficiency and process control, the main benefit of machine learning for reserving is improved insight, and at its core machine learning is designed to tackle optimization problems. In reserving, however, we are effectively running an optimization to hit a moving target, where producing meaningful output means weighing the cost of being wrong. This is very different from typical machine learning applications, where supervised learning methods fit a well-defined response simply by minimizing the error against historic data.
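To make that distinction concrete, here is a minimal sketch contrasting the two objectives when selecting a best estimate from a range of reasonable candidates. Everything in it is illustrative: the scenario distribution, the candidate range and the weights in the asymmetric cost function are assumptions, not a prescribed method.

```python
import numpy as np

# Hypothetical scenarios for the ultimate loss of one cohort (e.g. output of a
# stochastic reserving model) -- illustrative numbers only.
rng = np.random.default_rng(42)
ultimate_scenarios = rng.lognormal(mean=np.log(100), sigma=0.15, size=10_000)

# An assumed range of reasonable best estimates to choose between.
candidate_estimates = np.linspace(80, 130, 101)

def symmetric_error(estimate, scenarios):
    """Typical supervised-learning style objective: mean squared error."""
    return np.mean((scenarios - estimate) ** 2)

def asymmetric_cost(estimate, scenarios, under_weight=3.0, over_weight=1.0):
    """Illustrative 'cost of being wrong': under-reserving is assumed costlier
    (later reserve strengthening, scrutiny) than over-reserving (capital tied
    up). The weights are assumptions, not market figures."""
    shortfall = np.maximum(scenarios - estimate, 0.0)  # cost when estimate is too low
    surplus = np.maximum(estimate - scenarios, 0.0)    # cost when estimate is too high
    return np.mean(under_weight * shortfall + over_weight * surplus)

sym_costs = [symmetric_error(e, ultimate_scenarios) for e in candidate_estimates]
asym_costs = [asymmetric_cost(e, ultimate_scenarios) for e in candidate_estimates]

print(f"Estimate minimizing squared error:   {candidate_estimates[np.argmin(sym_costs)]:.1f}")
print(f"Estimate minimizing asymmetric cost: {candidate_estimates[np.argmin(asym_costs)]:.1f}")  # sits above the mean
```

The point is not the particular numbers but that the selected estimate moves once the objective reflects the cost of being wrong rather than a symmetric fit error.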
In pricing and risk modeling, for example, an insurer can assume that the risk differentials in its recent historic policy and claims data are representative of experience in the near future. This is a reasonable assumption in practice.
The difficulty in reserving is threefold. First, the information necessary to inform the answer far in the future may not be in the historic data, so building a predictive model of historic data is not going to provide the answer. Second, the estimates will change over time as the insurer generates more information on what the outcome will be. Finally, the reserving process and the way the insurer communicates results are going to be very different from today, requiring many more visualizations; back-testing diagnostics, for example, will be essential. This is why the road map is so important: it places developments in the right order to get the business where it needs to be.
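As a simple illustration of the kind of back-testing diagnostic this implies, the sketch below tracks how the booked ultimate for a single cohort has moved across successive valuation dates relative to the latest view. The figures and the flat data layout are invented purely for illustration.

```python
# Back-testing sketch: how has the booked ultimate for one cohort run off over
# successive valuations? All figures are illustrative.
booked_ultimates = {
    # valuation date -> booked ultimate loss for one accident-year cohort
    "2020-12": 105.0,
    "2021-12": 112.0,
    "2022-12": 109.0,
    "2023-12": 108.0,
}
latest_view = 108.0  # current best view of the ultimate for this cohort

print("Valuation   Booked   Error vs latest view")
for valuation, booked in booked_ultimates.items():
    error = booked - latest_view
    print(f"{valuation}     {booked:6.1f}   {error:+6.1f} ({error / latest_view:+.1%})")
```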
In terms of a single machine learning method that is going to magically solve all the problems in reserving, there is no silver bullet. Indeed, some developments will simply exacerbate existing problems if the right parts of the wider solution are not in place beforehand.
We are seeing a gradual move toward the use of machine learning in the projection of ultimate losses. In considering the end goal for machine learning in reserving processes, there is a useful analogy in process control applications in robotics. Consider the exercise of programming a quadcopter to fly through a hoop that is thrown through the air at random. The quadcopter needs to monitor the data feed from sensors tracking the position of the quadcopter and the hoop to determine the adjustments it should make to its speed and direction.
How the hoop is thrown, how the wind blows during flight and numerous other variables mean that the quadcopter cannot know where that hoop will be when it eventually flies through it. The algorithms optimize a decision, given all information available at the time, to minimize the probability of missing the target at that point in time, and repeat this at regular intervals until the target is reached. In reserving, we cannot know what the ultimate loss for any given cohort will be, any more than the quadcopter can know where the hoop will be. But we can optimize the output from our reserving processes to acknowledge that we're on a journey and minimize the cost of being wrong at any given time.
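A rough sketch of that repeated re-optimization, reusing the asymmetric "cost of being wrong" idea from the earlier sketch, is shown below. The narrowing scenario distribution, the weights and the quantile rule are all assumptions chosen to illustrate the loop, not a reserving method.

```python
import numpy as np

rng = np.random.default_rng(0)

def scenarios_at(valuation_year, n=5_000):
    """Stand-in for a stochastic reserving model re-run at each valuation:
    as the cohort matures, uncertainty about its ultimate loss is assumed to
    narrow. Purely illustrative distribution."""
    sigma = 0.25 / (1 + valuation_year)
    return rng.lognormal(mean=np.log(100), sigma=sigma, size=n)

def select_estimate(scenarios, under_weight=3.0, over_weight=1.0):
    """Pick the estimate minimizing the assumed asymmetric cost of being wrong;
    for this cost the optimum is the under/(under+over) quantile."""
    q = under_weight / (under_weight + over_weight)
    return np.quantile(scenarios, q)

# Re-optimize at each valuation date, just as the quadcopter re-plans in flight.
for year in range(5):
    estimate = select_estimate(scenarios_at(year))
    print(f"Valuation year {year}: selected estimate {estimate:.1f}")
```

As the simulated uncertainty narrows with each valuation, the selected estimate converges, mirroring the way repeated in-flight corrections steer the quadcopter toward the hoop.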
See also: The Risks of AI and Machine Learning
Road map to unlock reserving capabilities
Insurers that have had the greatest success so far in improving their reserving capabilities are those that have made the most progress on the journey toward their defined future operating model. Strategic planning of developments and clear objectives for these exercises up front are key differentiators for market leaders. There is a lot that can be done to realize the benefits of operational efficiency and process control before resorting to machine learning. It has its place, but it's not the complete solution.
Embedding a workflow management solution in the reserving processes, such as WTW’s Unify technology, is a critical enabler for the journey. The step change in automation capabilities will enable insurers to produce significantly more decision support material while mitigating the problems of maintaining governance, controls and audit, in addition to freeing resources for development activities. This tooling is essential for enabling any real growth in data-driven analytics and for supporting reserving without expenses spiraling out of control. Insurers will have the capability to readily integrate new tooling into the environment, with improvements in governance, control and audit. Robotic process automation will then complement these capabilities by producing increasingly sophisticated data-driven analysis and output in a timely manner for the same headcount.