5 Practical Steps to Get to Self-Service

Many insurance carriers are finding they have to overcome decades of information neglect.

To participate in the new world of customer self-service and straight-through processing, many insurance carriers find themselves having to deal with decades of information neglect. As insurers take on the arduous task of moving from a legacy information architecture and platform to a modernized one, they face many challenges. I'll outline some of the common themes and challenges, possible categories of solutions and practical steps that can be taken to move forward.

Consider the case of Prototypical Insurance Company (PICO), a mid-market, multiline property/casualty and life insurance carrier with regional operations. PICO takes in $700 million in direct written premiums from 600,000 active policies and contracts. PICO's customers want to go online to answer basic questions, such as "What's my deductible?"; "When is my payment due?"; "When is my policy up for renewal?"; and "What's the status of my claim?" They also want to request policy changes, view and pay their bills online, and report claims.

After hearing much clamoring, PICO embarks on an initiative to offer these basic self-service capabilities. As a first step, PICO reviews its systems landscape. The results are not encouraging. PICO finds four key challenges.

1. Customer data is fragmented across multiple source systems. Historically, PICO has used several policy-centric systems, each catering to a particular line of business or family of products. There are separate policy administration systems for auto, home and life, and each holds its own notion of the policyholder. This makes developing a unified, customer-centric view extremely difficult. The situation is further complicated because the level and amount of detail captured in each system is incongruent. For example, the auto policy system has many details about vehicles and some details about drivers, while the home system has very little information about people but many details about the home. As a result, the key fields available for matching a person in one system with the same person in another are very limited, as the sketch following this list illustrates.

2. Data formats across systems are inconsistent. PICO has been operating with systems from multiple vendors, each of which implemented its own data representation, some of them proprietary. To respond to evolving business needs, PICO has had to customize its systems over the years, which has diluted the meaning and usage of data fields: The same field represents different data, depending on the context.

3. Data is lacking in quality. PICO's business units are organized by line of business. Each unit holds expertise in a specific product line and operates fairly autonomously, which has resulted in divergent data entry practices. Moreover, the data models of decades-old systems weren't designed to handle today's business needs, so PICO has worked around them with creative solutions. While this creativity has provided flexibility in dealing with an evolving business landscape, it has come at the cost of increased data entropy.

4. Systems are available only in defined windows during the day, not 24/7. Many of PICO's core systems are batch-oriented: Updates made throughout the day do not appear in the system until after-hours batch processing has completed, and while that batch processing is running, the systems are unavailable both for querying and for accepting transactions.

Another aspect affecting availability is the closed nature of the systems. Consider the life policy administration system. While it can calculate cash values, loan amounts, accrued interest and other time-sensitive quantities, it doesn't offer these capabilities through any programmatic interface that an external system could use to access the results.
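To make the first challenge concrete, here is a minimal sketch of the kind of matching heuristic fragmentation forces on a carrier like PICO. The field names and scoring weights are hypothetical; a real program would more likely lean on dedicated entity-resolution or master-data-management tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PartyRecord:
    # Hypothetical common denominator across PICO's systems;
    # each source system populates only some of these fields.
    last_name: str
    first_name: str
    dob: Optional[str] = None          # auto system has it; home often doesn't
    postal_code: Optional[str] = None
    phone: Optional[str] = None

def match_score(a: PartyRecord, b: PartyRecord) -> float:
    """Crude weighted score for whether two records describe the same person."""
    score = 0.0
    if a.last_name.lower() == b.last_name.lower():
        score += 0.35
    if a.first_name.lower() == b.first_name.lower():
        score += 0.20
    # Optional fields contribute only when both systems captured them.
    if a.dob and b.dob and a.dob == b.dob:
        score += 0.30
    if a.postal_code and b.postal_code and a.postal_code == b.postal_code:
        score += 0.10
    if a.phone and b.phone and a.phone[-4:] == b.phone[-4:]:
        score += 0.05
    return score

auto = PartyRecord("Rivera", "Ana", dob="1980-04-12", postal_code="48331")
home = PartyRecord("Rivera", "Ana", postal_code="48331")  # no DOB captured
print(round(match_score(auto, home), 2))  # 0.65 -- ambiguous; needs review
```

The point of the sketch is the gap it exposes: with so few shared fields, scores cluster in an ambiguous middle range, which is exactly why fragmented customer data heads the list of challenges.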
These challenges will sound familiar to many mid-market insurance carriers, but they're opportunities in disguise: The opportunity to bring proven, established solution patterns to bear is there for the taking.

FOUR SOLUTION PATTERNS

Four solution patterns are commonly used to meet these challenges: 1) establishing a service-oriented architecture; 2) leveraging a data warehouse; 3) modernizing core systems; and 4) instituting a data management program. The particular solution a carrier pursues will ultimately depend on its individual context.

1. Service-oriented architecture

SOA consists of independent, message-based, contract-driven and, possibly, asynchronous services that collaborate. Creating such an architecture in a landscape of disparate systems requires defining:
  • Services that are meaningful to the business: for instance, customer, policy, billing and claim.
  • Common formats to represent business data entities.
  • Messages and message formats that represent business transactions (operations on business data).
  • Contracts that guide interactions between the business services.
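To ground the second and third items, here is a minimal sketch of a common policy format wrapped in a business message. Every name and field in it is hypothetical and greatly simplified; as noted next, a carrier would normally start from industry-standard formats rather than invent its own.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class Policy:
    # Hypothetical common format shared by all lines of business.
    policy_number: str
    line_of_business: str        # "AUTO", "HOME", "LIFE"
    effective_date: str
    expiration_date: str
    deductible: float

@dataclass
class Message:
    # Envelope for business transactions exchanged between services.
    operation: str               # e.g., "PolicyInquiryResponse"
    body: dict
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    sent_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

policy = Policy("AU-1234567", "AUTO", "2024-01-01", "2025-01-01", 500.0)
msg = Message("PolicyInquiryResponse", asdict(policy))
print(json.dumps(asdict(msg), indent=2))   # the wire format services agree on
```

Because every policy system maps its native representation into the one agreed format, a consumer such as the self-service portal never needs to know which back-end system answered.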
Organizations such as the Object Management Group and ACORD have made a lot of headway toward offering industry-standard message formats and data models.

After completing the initial groundwork, the next step is to enable existing systems to exchange the defined messages and respond to them in accordance with the defined contracts. Simple as it might sound, this so-called service enablement of existing systems is often not straightforward. Success here depends heavily on how well the technologies behind the existing systems lend themselves to service enablement, so an upfront assessment is entirely warranted.

Even assuming service enablement is possible, we're still not in the clear. SOA addresses only the issues of data format inconsistency and data fragmentation. It will not help with data quality, and it can offer only limited reprieve from system unavailability. Unless those are addressed in concert, this approach will provide only limited success.

2. Data warehouse

A data warehouse is a data store that accumulates data from a wide range of sources within an organization and is ultimately used to guide decision-making. Using a data warehouse as the basis of an operational system (such as customer self-service) may look like a choice, but it is really a false choice, for two reasons:
  • Building a data warehouse is a big effort. Insurers usually can't wait for its completion; they have to move ahead with self-service now.
  • Data warehouses are meant to power business intelligence, not operational systems. If the warehouse already exists, there's a good chance it was built on a dimensional model, and a dimensional model does not lend itself to serving as a source for downstream operational systems (the sketch below illustrates why). If, on the other hand, it's a "single version of truth" warehouse, the company is already well on its way to addressing the data challenges under discussion.
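To see why a dimensional model makes a poor operational source, compare the shape of the data in each. The schemas below are hypothetical simplifications: a star-schema warehouse stores periodic snapshots keyed by surrogate dimensions, while a self-service query needs the single current state of one policy.

```python
# Hypothetical star-schema content: periodic snapshot facts, one row per
# policy per month, keyed by surrogate dimension ids.
fact_policy_snapshot = [
    {"policy_key": 812, "date_key": 20240131, "premium_mtd": 98.50, "claim_count": 0},
    {"policy_key": 812, "date_key": 20240229, "premium_mtd": 98.50, "claim_count": 1},
]
dim_policy = {812: {"policy_number": "AU-1234567", "line": "AUTO"}}

# The self-service question "what's my deductible?" needs the current state
# of one policy -- a shape the snapshot grain above simply doesn't carry.
operational_policy = {
    "policy_number": "AU-1234567",
    "deductible": 500.0,
    "next_payment_due": "2024-03-01",
    "status": "IN_FORCE",
}

def current_deductible(policy_number: str) -> float:
    # Trivial against the operational shape; against the star schema it would
    # require dimension joins and still might not find the attribute at all.
    assert operational_policy["policy_number"] == policy_number
    return operational_policy["deductible"]

print(current_deductible("AU-1234567"))  # 500.0
```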
3. Modernizing core systems

Modern systems make self-service relatively simple. However, unless modernization is already well underway, it, too, cannot be waited for; implementation timeframes are simply too long.

4. Instituting a data management program

A data management program deals with the specific data challenges rather than the foundational reasons behind them. To overcome the four challenges described above, such a program could consist of a consolidated data repository, implemented using a canonical data model, on top of a highly available systems architecture, leveraging data quality tools at key junctions. Implementing such a program is much quicker than the previous three options. Furthermore, it can serve as an intermediate step toward each of them, and as an intermediate step it has a risk-mitigation quality that's particularly appealing to mid-sized organizations.

PRACTICAL STEPS

Here are the practical steps a carrier can take toward instituting a data management program that can successfully support customer self-service. The program should have the following five characteristics:

1. A consolidated data repository

The antidote to data fragmentation is a single repository that consolidates data from all systems that are a primary source of customer data. For the typical carrier, this will include systems for quoting, policy administration, CRM, billing and claims. A consolidated repository results in a replicated copy of data, a typical allergy of traditional insurance IT departments. Managing the replication through defined ETL processes will often preempt the symptoms of that allergy.

2. A canonical data model

To address inconsistencies in the data formats used by the primary systems, the consolidated data repository must use a canonical data model, and all data feeding into the repository must conform to it. To develop the model pragmatically, use a top-down and a bottom-up approach simultaneously; the combination provides the right balance between theory and practice. Industry-standard data models from organizations such as the Object Management Group and ACORD serve as a good starting point for the top-down analysis, while the bottom-up analysis can start from existing source-system data sets. A sketch of what conforming a source extract to such a model might look like follows.
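Here is a minimal sketch of that conforming step for one hypothetical source system. The source field names, the canonical shape and the cleanup rules are all illustrative; a real program would maintain hundreds of such mappings, ideally driven by metadata rather than hand-written code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CanonicalPolicy:
    # Target shape in the consolidated repository (illustrative).
    policy_number: str
    line_of_business: str
    holder_last_name: str
    effective_date: str          # ISO 8601 throughout the repository

def transform_auto_extract(row: dict) -> CanonicalPolicy:
    """Map one record from the (hypothetical) auto admin system's
    nightly extract into the canonical model."""
    # The source system stores dates as MM/DD/YYYY; the canonical
    # model standardizes on ISO 8601.
    eff = datetime.strptime(row["EFF_DT"], "%m/%d/%Y").date().isoformat()
    return CanonicalPolicy(
        policy_number=row["POL_NO"].strip(),
        line_of_business="AUTO",               # constant for this source
        holder_last_name=row["INSD_LNAME"].strip().title(),
        effective_date=eff,
    )

row = {"POL_NO": " AU-1234567 ", "EFF_DT": "01/01/2024", "INSD_LNAME": "RIVERA "}
print(transform_auto_extract(row))
```

Each source system gets its own small transform into the same target shape, so the repository never sees a source-specific format.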
3. "Operational Data Store" mindset -- a Jedi mind trick

Modern operational systems often use an ODS to expose their data for downstream use. The typical motivation is to eliminate the performance impact of external querying while still allowing external access to data in an operational (as opposed to analytical) format. Advertising the consolidated repository built on the canonical data model as an ODS shifts the organizational view of the repository from a single-system database to an enterprise asset that can be leveraged for additional operational needs. This is the data management program's equivalent of a Jedi mind trick!

4. 24/7/365 availability

To adequately position the data repository as an enterprise asset, it must be highly available. For traditional insurance IT departments, 24/7/365 availability might be a new paradigm. Successful implementations will require adopting high-availability patterns at multiple levels:

  • At the infrastructure level: clustering for fail-over, mirrored disks, data replication, load balancing and redundancy.
  • At the SDLC level: continuous integration, automated and hot deployments, and automated test suites.
  • At the integration-architecture level (for systems needing access to data in the consolidated repository): asynchronicity, loose coupling and caching.

5. Encryption of sensitive data

Once data from multiple systems is consolidated into a single repository, the impact of a potential security breach is amplified several-fold, and breaches will happen; it's only a matter of time, be they internal or external, innocent or malicious. To mitigate some of that risk, it's worthwhile to invest in infrastructure-level encryption (options are available in the storage, database and data access layers) of, at a minimum, sensitive data. A minimal sketch of what application-level field encryption can look like follows.
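The sketch below uses the Fernet recipe from the Python cryptography package to encrypt one sensitive field before it lands in the repository. Which fields count as sensitive and how keys are managed are deliberately left out; transparent storage- or database-layer encryption, as mentioned above, requires no application code at all.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In production the key would come from a key-management service,
# never be generated ad hoc or stored alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"policy_number": "AU-1234567", "ssn": "123-45-6789"}

# Encrypt only the sensitive field before writing to the repository.
record["ssn"] = fernet.encrypt(record["ssn"].encode()).decode()
print(record["ssn"][:20], "...")           # opaque token at rest

# Authorized reads decrypt on the way out.
ssn = fernet.decrypt(record["ssn"].encode()).decode()
print(ssn)                                 # 123-45-6789
```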
A successful data management program spans several IT disciplines. To ensure coherency across all of them, oversight from a versatile architect capable of conceiving infrastructure, data and integration architectures will prove invaluable.

Samir Ahmed

Samir Ahmed is an architect with X by 2, a technology consulting company in Farmington Hills, MI, specializing in software, data architecture and transformation projects for the insurance industry. He received a BSE in computer science and computer engineering from the University of Michigan.
