Balancing AI Innovation With Consumer Protection

Regulators in the EU and U.K. want insurers to balance AI's potential with protecting consumers from potential bias and harm.


The regulatory challenge around artificial intelligence (AI) is as broad as its potential applications. In response to the scale of the task ahead, the U.K. government’s white paper "A Pro-Innovation Approach to AI Regulation," published last year, emphasizes fostering and encouraging innovation. The ambition is clear: to create an environment that enables businesses to develop and invest in AI while protecting businesses and consumers from the technology's potential harms and worst excesses.

Regulation today

AI is currently regulated only indirectly in the U.K.: there are no U.K. laws designed specifically to address AI. Instead, a range of existing legal frameworks is indirectly relevant to the development and use of AI.

AI systems fundamentally rely on large amounts of data to train the models that underpin them. Personal data is often used to develop and operate AI systems, and individuals whose personal data is used retain all their usual rights under existing laws such as the General Data Protection Regulation (GDPR). These typically include rights of transparency, rights of data access and, perhaps most importantly, the right under Article 22 of the GDPR not to be subject to solely automated decision-making in relation to significant decisions, except in limited circumstances.
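To illustrate what that last right can mean in practice, here is a minimal sketch of a human-in-the-loop gate that routes any significant, solely automated decision to a reviewer before it takes effect. The field names and control flow are illustrative assumptions, not a prescribed compliance pattern.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A candidate underwriting decision produced by an AI system (hypothetical)."""
    applicant_id: str
    outcome: str            # e.g. "decline", "quote"
    solely_automated: bool  # no meaningful human involvement so far
    significant: bool       # legal or similarly significant effect (GDPR Art. 22)

def finalize(decision: Decision) -> str:
    """Gate decisions before they take effect.

    GDPR Article 22 restricts decisions based solely on automated
    processing that significantly affect individuals, so any such
    decision is routed to a human reviewer instead of auto-finalizing.
    (Illustrative control flow only.)
    """
    if decision.solely_automated and decision.significant:
        return f"route {decision.applicant_id} to human review"
    return f"finalize outcome '{decision.outcome}' for {decision.applicant_id}"

print(finalize(Decision("A-123", "decline", solely_automated=True, significant=True)))
```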

Impact on insurance

The current approach to AI regulation in the U.K., as outlined in the government's white paper, relies heavily on existing regulatory frameworks. Particularly relevant for the insurance industry are financial services regulation and the roles of the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA).

The FCA and PRA will use their existing powers to monitor and enforce against the misuse of AI, applying existing principles such as consumer protection and treating customers fairly. These principles come directly into play when an insurer relies on AI systems and predictive models to make pricing decisions.

Because the increased use of AI carries the risk that some customers are discriminated against or priced out of insurance markets, the FCA is carefully considering how to translate its existing principles into the regulation of firms' use, and misuse, of AI.
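One simple way to make that risk measurable is to compare model-produced premiums across customer groups. The sketch below computes the ratio of each group's average quoted premium to the overall average; the data, column names, and the 1.05 review threshold are assumptions for illustration, not an FCA-mandated test.

```python
import pandas as pd

# Hypothetical quotes produced by a pricing model (illustrative data).
quotes = pd.DataFrame({
    "group":   ["A", "A", "B", "B", "B"],
    "premium": [420.0, 415.0, 455.0, 470.0, 460.0],
})

# Ratio of each group's mean premium to the overall mean: a crude
# first check for customers being systematically priced higher.
overall = quotes["premium"].mean()
by_group = quotes.groupby("group")["premium"].mean() / overall

TOLERANCE = 1.05  # assumed review threshold, not a regulatory figure
for group, ratio in by_group.items():
    flag = "review" if ratio > TOLERANCE else "ok"
    print(f"group {group}: {ratio:.2f}x overall mean -> {flag}")
```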

See also: Balancing AI and the Future of Insurance

Embracing the future

Most companies accept that it won’t be possible to hold back the tide of AI. As such, many leading businesses are already focusing on how to integrate AI into their operations and apply it to their existing business models, recognizing the need to embrace rather than resist change.

In the insurance industry, not all of this is new. For many years, insurers have used algorithms and machine learning techniques for risk assessment and pricing. However, new developments are making these technologies increasingly powerful, and they are coupled with an explosion in other forms of AI, such as generative AI, that are more novel for insurance businesses. Consequently, a key challenge for the industry is adapting and upgrading existing practices and processes to account for advances in familiar technology while also embracing genuinely new capabilities.
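As a reminder of how familiar this territory is, the sketch below fits the kind of claim-frequency model insurers have long used in pricing: a Poisson GLM with an exposure offset. The synthetic data and feature names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic policy data (illustrative only).
rng = np.random.default_rng(0)
n = 5000
policies = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "vehicle_group": rng.integers(1, 11, n),
    "exposure": rng.uniform(0.25, 1.0, n),  # policy-years
})
rate = 0.08 * np.exp(-0.01 * (policies["driver_age"] - 40))
policies["claims"] = rng.poisson(rate * policies["exposure"])

# Classic actuarial approach: Poisson GLM with a log(exposure) offset.
model = smf.glm(
    "claims ~ driver_age + vehicle_group",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()
print(model.params)
```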

Regulation tomorrow

The EU AI Act is one of the world's first comprehensive horizontal laws designed specifically to regulate AI systems, and the first to command widespread global attention. It is an EU law, but importantly, it also has extraterritorial effect: U.K. businesses selling into the EU, or using AI systems whose outputs affect individuals in the EU, are potentially caught by the law.

The EU AI Act applies across all parts and sectors of the economy. Crucially, it is also a risk-based law: it does not try to regulate all AI systems, and it distinguishes among different tiers of risk for the systems it does regulate. The most important category under the act is high-risk AI systems, where the vast majority of the obligations lie.

Most of these obligations will apply to the provider or developer of the AI system, but there are also obligations that apply to the system’s user or deployer. The insurance industry is specifically flagged in the EU AI Act as an area of concern for high-risk AI systems: the act explicitly identifies AI systems used for risk assessment and pricing of individuals in life and health insurance, because of the potentially significant impact on individuals’ livelihoods.

Because the majority of the obligations will sit with the system’s provider, it is significant that businesses building their own AI tools, even tools that depend on existing models such as a large language model, will be classified as providers of those AI systems under the act.

As a result, businesses will need to understand the new law, first determining in which areas they are caught as a provider or deployer, before planning and building the required compliance framework to address their relevant obligations.

The regulation of AI had only one brief mention in July’s King’s Speech, specifically in relation to establishing requirements around the most powerful AI models. With domestic legislation still some way off, the EU AI Act is therefore the most pressing AI law likely to apply to insurers in the coming years. Other relevant bodies, such as the Association of British Insurers (ABI), have also provided guidance to their members on the use of AI.

See also: Two Warnings About AI

Navigating AI

For the insurance industry, as for other sectors, making the most of AI starts with mapping where AI systems are already in use: you can only control what you understand. As when businesses first began their GDPR compliance programs, the starting point is to identify where the organization is using personal data and, in particular, where it is using AI systems that are higher risk because of the sensitivity of the data they process or the criticality of the decisions they support.
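A concrete starting point for that mapping exercise can be as simple as a structured register of AI systems. The sketch below shows one possible record format and a first-pass prioritization; the fields and values are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal register of AI systems."""
    name: str
    owner: str                 # accountable business function
    purpose: str
    uses_personal_data: bool
    decision_criticality: str  # e.g. "low", "medium", "high"
    vendor: str | None = None  # None if built in-house
    notes: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="motor-pricing-model",
        owner="Pricing",
        purpose="Risk-based premium calculation",
        uses_personal_data=True,
        decision_criticality="high",
    ),
]

# Surface the systems that deserve attention first: sensitive data
# combined with critical decisions.
priority = [r for r in register
            if r.uses_personal_data and r.decision_criticality == "high"]
print([r.name for r in priority])
```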

Once the mapping stage is complete, organizations should begin building a governance and risk management framework for AI. The purpose is to ensure clearly defined leadership for AI within the business, for example a steering or oversight group with representation from different functions that articulates the business's overall approach to AI adoption.

Organizations will also need to decide how aggressive or risk-averse they want to be regarding the use of AI. This includes drafting key policy statements that clarify what the business is and is not prepared to do, as well as defining the most fundamental controls that need to be in place.

Following this, organizations will need more granular risk assessment tools for evaluating specific use cases proposed by the business. These should include a deeper dive into the associated legal risks, combined with controls on how to build each AI system in a compliant way that can then be audited and monitored in practice.
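To give a flavour of what such a use-case assessment might look like, the sketch below triages a proposed use case against a deliberately simplified version of the EU AI Act's risk tiers. The categories and the mapping are illustrative assumptions; real classification requires legal analysis of the act itself, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations
    MINIMAL = "minimal"

# Grossly simplified hints for illustration; the act's actual
# high-risk categories are defined in Annex III and must be
# assessed by legal counsel.
HIGH_RISK_HINTS = {
    "life_health_insurance_pricing",
    "creditworthiness",
    "employment_screening",
}

def triage(use_case: str, interacts_with_people: bool) -> RiskTier:
    """First-pass triage of a proposed AI use case (illustrative only)."""
    if use_case in HIGH_RISK_HINTS:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("life_health_insurance_pricing", interacts_with_people=True))
```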

The approach a business takes will also depend significantly on whether it buys its AI systems from a technology vendor or instead buys the data it needs to build an in-house AI system, and on whether it also sells AI to its own customers at the other end of the pipeline. An insurance broker, for example, might sell services to an insurer that depend on the use of AI. It will therefore need to consider how it manages risk at both ends of that pipeline: the contracts, the assurances it gets from its vendors, and whether it is prepared to give the same assurances to its customers.

The challenge with AI at present is that use cases are continuously emerging and demand is increasing. Governance frameworks and processes therefore need to be designed alongside the business processes that assess and prioritize the highest-value AI activities and investments. Governance, business, and AI teams need to work side by side to embed appropriate processes in emerging use cases, which can often save significant work later.

The EU AI Act is a complex piece of legislation, and constructing the frameworks needed to comply with it will be a significant challenge. Success will rely not only on the quality of the data and models used and on good governance, but also on adopting an approach to the new law that is proportionate and builds the confidence businesses need to make the most of future AI opportunities.


Chris Halliday

Chris Halliday is global proposition leader for personal lines pricing, product, claims and underwriting at WTW's Insurance Consulting & Technology business.
