Can the insurance industry rise to the challenge of giving businesses a safety net for their AI usage? In a recent World Economic Forum report, nearly 1,500 surveyed professionals identified AI as their organization’s biggest technology risk. Insurers, however, could view AI risk mitigation as a meaningful business opportunity. Deloitte projects that by 2032, insurers could write around $4.7 billion in annual global AI insurance premiums, reflecting a compound annual growth rate of roughly 80%.
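As a rough illustration of the arithmetic behind that projection, the short Python sketch below back-solves the starting premium base implied by an approximately 80% compound annual growth rate ending at $4.7 billion in 2032; the eight-year horizon beginning in 2024 is an assumption made purely for illustration.

```python
# Illustrative only: back-solving the premium base implied by a CAGR projection.
# The 2032 end value and ~80% growth rate come from the Deloitte projection cited
# above; the 2024 start year (an eight-year horizon) is an assumption for this sketch.

end_value_2032 = 4.7e9   # ~$4.7 billion in annual global AI insurance premiums
cagr = 0.80              # ~80% compound annual growth rate
years = 2032 - 2024      # assumed eight-year horizon

implied_start = end_value_2032 / (1 + cagr) ** years
print(f"Implied starting premium base: ${implied_start / 1e6:.0f} million")

# Year-by-year growth path under the same assumptions
for year in range(2024, 2033):
    value = implied_start * (1 + cagr) ** (year - 2024)
    print(f"{year}: ${value / 1e9:.2f} billion")
```

Under these assumptions, today's AI premium base would be on the order of tens of millions of dollars, which underscores how steep the projected growth curve is.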
To get there, many insurance firms will likely need to build and expand their capabilities, and soon. To put things into perspective, some estimates suggest AI could add more than 10%, or roughly $12.5 trillion, to global GDP by 2032. In the next few years, society may be hard-pressed to find any aspect of daily life that does not have an AI engine running in the background.
However, this revolutionary technology is not without both anticipated and unforeseen risks. Consider the following scenario: In the not-too-distant future, a person could take their self-driving car to a doctor’s appointment to get an AI-assisted diagnosis; a few weeks later, they could have AI-assisted surgery and eventually file an insurance claim through an AI chatbot. A lot could go wrong in this scenario: the autonomous car could collide with another vehicle, the initial diagnosis could be incorrect, or the chatbot could reject a valid claim outright. The risks stemming from AI in this example range from significant financial loss to potential fatality. And while some of these risks may seem futuristic, they are already starting to materialize.
Liabilities arising from the use and development of AI can be both significant and unpredictable. Yet in today’s competitive market, business leaders may feel pressure to adopt AI despite the risks of diving into unknown territory. Consequently, many leaders are seeking security against unforeseen events.
See also: How AI Is Shaking Up Insurance
Currently, a number of AI vendors provide some safeguards for their products, including indemnification against legal claims arising from the output of their generative AI tools. However, given the anticipated velocity of AI development, the magnitude and variety of risks that can unfold may go beyond what a few corporations can manage on their own, particularly those already on the receiving end of lawsuits.
From generative AI alone, businesses could face losses from risks such as cybersecurity threats, copyright infringement, erroneous or biased outputs, misinformation or disinformation, and data privacy violations. Having an insurance policy to protect against such issues could help assuage concerns and even encourage further AI adoption at scale.
Regulators around the world are likely to soon demand safeguards and risk management practices around AI use, which may well include insurance coverage.
The European Union is developing the world’s first comprehensive set of regulations governing AI, with provisions for fines of up to $38 million. Several U.S. states have also introduced bills or resolutions governing AI. At the federal level, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Even if these regulations do not mandate insurance, the prospect of hefty fines may drive companies to seek coverage for these risks. An increase in the severity and frequency of AI-related damages and losses could also compel businesses to seek insurance.
A few large reinsurers are already participating in the AI insurance market. Munich Re rolled out a dedicated AI insurance product, aimed primarily at AI startups, in 2018, and has since launched coverage for AI developers, adopters and businesses building their own AI models. Several insurtech startups are also beginning to operate in this space; Armilla AI, for example, launched a product that guarantees the performance of AI products.
That said, the lack of historical data on the performance of AI models, and the speed at which they are evolving, can make assessing and pricing these risks difficult. Insurers entering the market are developing in-house expertise and proprietary qualitative and quantitative assessment frameworks to better understand the risks inherent in these AI systems.
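To make the idea of a quantitative assessment framework concrete, here is a purely hypothetical Python sketch of a weighted risk-scoring rubric; the factor names, weights and scores are invented for illustration and do not reflect any insurer's actual methodology.

```python
# Hypothetical sketch of a simple quantitative AI risk-scoring rubric.
# Factor names, weights and scores are invented for illustration; real
# underwriting frameworks are proprietary and far more detailed.

WEIGHTS = {
    "model_transparency": 0.25,      # how explainable the model's outputs are
    "training_data_quality": 0.20,   # provenance, bias and coverage of training data
    "deployment_criticality": 0.30,  # e.g., medical diagnosis vs. marketing copy
    "governance_controls": 0.25,     # human oversight, monitoring, audit trails
}

def risk_score(factor_scores: dict[str, float]) -> float:
    """Combine 0-10 factor scores (10 = riskiest) into a weighted overall score."""
    return sum(WEIGHTS[name] * score for name, score in factor_scores.items())

# Example applicant: a high-stakes deployment with relatively weak governance
applicant = {
    "model_transparency": 7,
    "training_data_quality": 5,
    "deployment_criticality": 9,
    "governance_controls": 8,
}
print(f"Weighted risk score: {risk_score(applicant):.1f} out of 10")
```

In practice, such a quantitative score would likely be only one input alongside qualitative review, model audits and scenario analysis.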
See also: Cautionary Tales on AI
Most insurers are expected to take a wait-and-see approach, watching large global carriers as they establish pricing and loss history. Drawing on lessons learned from cyber insurance, carriers will likely demand stringent risk management practices and guardrails to limit their liabilities. Carriers may also rely on model audit and attestation firms and other outside AI expertise for help in understanding the “black box” better before pricing it.
As the world continues to evolve, new risks will emerge. In their role as providers of coverage for a wide range of risks, insurers will be called on to deliver protection and build trust in a society where AI is pervasive.