Businesses typically have a hard time quantifying potential losses from a data breach because of the myriad factors that need to be considered.
A recent disagreement between Verizon and the Ponemon Institute about the best approach to take for estimating breach losses could make that job a little harder.
For some time, Ponemon has used a cost-per-record measure to help companies and insurers get an idea of how much a breach could cost them. Its estimates are widely used.
The institute recently released its latest numbers showing that the average cost of a data breach has risen from $3.5 million in 2014 to $3.8 million this year, with the average cost per lost or stolen record going from $145 to $154.
The report, sponsored by IBM, showed that per-record costs have jumped dramatically in the retail industry, from $105 last year to $165 this year. The cost was highest in the healthcare industry, at $363 per compromised record. Ponemon has released similar estimates for the past 10 years.
But, according to Verizon, organizations trying to estimate the potential cost of a data breach should avoid using a pure cost-per-record measure.
ThirdCertainty spoke with representatives of both Verizon and Ponemon to hear why they think their methods are best.
Verizon’s Jay Jacobs
Ponemon’s measure does not work very well with data breaches involving tens of millions of records, said Jay Jacobs, Verizon data scientist and an author of the company’s latest Data Breach Investigations Report (DBIR).
Jacobs said that when Verizon applied the cost-per-record model to breach-loss data obtained from 191 insurance claims, the numbers it got were very different from those released by Ponemon. Instead of hundreds of dollars per compromised record, his math turned up an average of 58 cents per record, he said.
Why the difference? The cost-per-record method divides the sum of all losses stemming from a breach by the total number of records lost. The issue with this approach, Jacobs said, is that the cost per record typically is higher for small breaches and drops as the size of the breach increases.
Generally, the more records a company loses, the more it’s likely to pay in associated mitigation costs. But the cost per record itself tends to come down as the breach size increases, because of economies of scale, he said.
Many per-record costs associated with a breach, such as notification and credit monitoring, drop sharply as the volume of records increases. When costs are averaged across millions of records, per-record costs fall dramatically, Jacobs said. For massive breaches in the range of 100 million records, the cost can drop to pennies per record, compared with the hundreds or even thousands of dollars that companies can end up paying per record for small breaches.
“That’s simply how averages work,” Jacobs said. “With the megabreaches, you get efficiencies of scale, where the victim is getting much better prices on mass-mailing notifications” and most other contributing costs.
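To make that arithmetic concrete, here is a minimal sketch, using made-up numbers rather than Ponemon’s or Verizon’s figures, of how dividing a total breach cost by the record count produces per-record averages that shrink as breaches grow, assuming a fixed response cost plus per-record costs that benefit from scale:

```python
# Hypothetical numbers only -- not Ponemon's or Verizon's figures.
# Total cost is modeled as a fixed response cost plus per-record costs
# that benefit from economies of scale (notification, credit monitoring, etc.).

def estimated_breach_cost(records, fixed_cost=50_000, unit_cost=20.0, scale=0.75):
    """Return a modeled total breach cost for a given number of lost records."""
    return fixed_cost + unit_cost * records ** scale

for records in (100, 1_000, 100_000, 10_000_000, 100_000_000):
    total = estimated_breach_cost(records)
    print(f"{records:>11,} records: total ≈ ${total:,.0f}, "
          f"per record ≈ ${total / records:.2f}")
```

Under these assumed costs, a 100-record breach works out to roughly $500 per record, while a 100 million-record breach falls to about 20 cents per record, which is the averaging effect Jacobs describes.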
Ponemon’s report does not reflect this because its estimates are only for breaches involving 100,000 records or fewer, Jacobs said. The estimates also include hard-to-measure costs, such as those of downtime and brand damage, that don’t show up in insurance claims data, he said.
An alternative method is to take a more statistical approach to the available data and develop estimated average loss ranges for breaches of different sizes, Jacobs said.
While breach costs increase with the number of records lost, not all increases are the same. Several factors can cause costs to vary, such as how robust a company’s incident response plan is and whether it has pre-negotiated contracts for customer notification and credit monitoring, Jacobs said. Companies might want to develop a model that captures these cost variances as completely as possible and express potential losses as an expected range rather than as per-record numbers.
Using this approach on the insurance data, Verizon has developed a model that, for example, lets it say with 95% confidence that the average loss for a breach of 1,000 records will fall between $52,000 and $87,000, with an expected cost of $67,480. Similarly, the expected cost for a breach involving 100 records is $25,450, but average costs could range from $18,120 to $35,730.
Jacobs said this model is not perfectly accurate because of the many factors that affect breach costs. As the number of records breached increases, the overall accuracy of the predictions begins to decrease, he said. Even so, the approach is more scientific than averaging costs and arriving at per-record estimates, he said.
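As a rough illustration of this kind of model, the sketch below fits a regression on the logarithm of breach size, in the spirit of the approach the article describes, and reports an expected loss plus a range. The claim data is fabricated for the example, not drawn from the 191 insurance claims, and a simple two-sigma band stands in for a formal 95% prediction interval; numpy is assumed to be available.

```python
# Fabricated example claims -- not Verizon's 191 insurance claims.
# Fits log10(loss) against log10(records) and reports an expected loss plus a
# rough +/- 2-sigma band standing in for a formal 95% prediction interval.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical claims: breach sizes (records lost) and total losses (dollars).
records = rng.integers(100, 10_000_000, size=200)
losses = 2_000 * records ** 0.6 * rng.lognormal(mean=0.0, sigma=0.5, size=200)

# Ordinary least squares on the log-log scale.
x, y = np.log10(records), np.log10(losses)
slope, intercept = np.polyfit(x, y, 1)
residual_sd = np.std(y - (slope * x + intercept))

def forecast(n_records):
    """Expected loss and a rough ~95% range for a breach of n_records."""
    center = slope * np.log10(n_records) + intercept
    return 10 ** center, 10 ** (center - 2 * residual_sd), 10 ** (center + 2 * residual_sd)

for n in (100, 1_000, 100_000):
    expected, low, high = forecast(n)
    print(f"{n:>7,} records: expected ≈ ${expected:,.0f} "
          f"(range ${low:,.0f} to ${high:,.0f})")
```

Because the fit is done on log-transformed values, the resulting ranges widen in dollar terms as breach size grows, consistent with Jacobs’ caveat that the predictions become less precise for larger breaches.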
Ponemon’s Larry Ponemon
Larry Ponemon, chairman and founder of the Ponemon Institute, stood by his methodology and said the estimates are a fair representation of the economic impact of a breach.
Ponemon’s estimates are based on actual data collected from individual companies that have suffered data breaches, he said. The methodology considers all the costs a company can incur in a data breach and draws on more than 180 cost categories in total.
By contrast, the Verizon model looks only at the direct costs of a data breach collected from a relatively small sample of 191 insurance claims, Ponemon said. Such claims often provide an incomplete picture of the true costs a company incurs in a data breach, and claim limits are often smaller than the actual damages an organization suffers, he said.
“In general, the use of claims data as a surrogate for breach costs is a huge problem, because it underestimates the true costs” significantly, Ponemon said.
Verizon’s use of logarithmic regression to arrive at its estimates also is problematic because of the small sample size and the fact that the data was not derived from a scientific sample, he said.
Ponemon said the costs of a data breach are linearly related to the size of the breach. Per-record costs come down as the number of records increases, but not to the extent portrayed by Verizon’s estimates, he said.
“I have met several insurance companies that are using our data to underwrite risk,” he said.