Paige St. John: Insurers’ computer models deeply flawed
Nov 15, 2010
The following article was published in the Sarasota Herald-Tribune on November 15, 2010:
By Paige St. John
Property insurers today are largely dependent on the secret algorithms of just a few simulation programs to determine who qualifies for home insurance and how much it will cost.
The computer models created three decades ago as advisory tools have become so embedded, cautions University of Colorado professor Roger Pielke, a national expert on climate science, “they are treated like Black Box truth machines.”
But the catastrophe models at the core of just about every aspect of hurricane insurance, from rates to regulation, are flawed.
Their creators warn the programs are imprecise, more useful for spitting out a range of possibilities than the single numbers insurance companies commonly select and cite as fact.
But even the ranges are suspect. Studies conducted after Hurricane Katrina showed the models were stuffed with bad data. Industry leaders warn a “garbage-in, gospel-out” mentality has taken hold: Insurers plug in bad information about the properties they insure yet accept the risk calculations the model spits out as fact.
Even further from their intended use, models are being used not to seek the most accurate picture of hurricane risk but to chase the highest profits.
A Herald-Tribune review of regulatory filings and interviews with experts found insurers deciding which model to use, or how to use it, to produce higher rates. Several companies seeking rate increases used models that left out data about safety features on the homes they insure — factors that would have reduced their expected losses and undermined their request for higher premiums.
And seven of 18 insurers surveyed located their customers by ZIP code rather than by street address, a practice also shown to raise loss predictions.
While Florida regulators have limited control over what property insurers do with their models, they have no say over the offshore reinsurers that carry much of Florida’s hurricane risk and help drive its price.
“It’s the Wild West out there,” Howard Eagelfeld, a state actuary who sits on the only public agency in the U.S. given access to the confidential programs, testified at a national forum.
So much of what comes out of the “scientific” models is wrong, or subjective, or can be manipulated, that experts caution they undermine efforts to regulate rates and give consumers a false sense of certainty about the bill they pay.
Despite that danger, the realm entrusted to the model is growing. Since Katrina, catastrophe models have been expanded to include costs for political meddling, government ineptness and even human greed.
FLAWED FROM THE START
Catastrophe models that insurers use to estimate their risk of hurricane loss come primarily from three vendors.
The majority of what is within those models is confidential; external scrutiny is limited to a single public agency, a Florida commission sworn to keep most of what it sees secret.
Its annual reviews show that inside the black boxes are a mash of real-world observations, theories and statistical formulas. At the core are programs similar to the models weather forecasters use to predict landfall of an approaching hurricane — including a deceptively precise dotted line surrounded by a wide cone of doubt.
Modelers have less than 50 years of reliable hurricane experience from which to draw. Many assumptions must be made, producing results that span wide ranges.
Yet insurers and their regulators typically pick a single number from that cloud of possibilities, using it to set rates, buy reinsurance and determine where to cut coverage.
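A rough sketch with invented numbers shows how much gets lost in that choice. Nothing below reflects any vendor's actual model; it only illustrates the general idea of quoting one figure out of a simulated spread of outcomes.

    import random

    random.seed(1)

    # Toy illustration with invented storm frequencies and loss sizes, not any
    # vendor's actual model: simulate 10,000 hypothetical hurricane seasons for
    # a small book of business and compare the spread of outcomes with the
    # single "expected loss" number an insurer might quote.
    def simulate_season():
        storms = random.choices([0, 1, 2], weights=[70, 25, 5])[0]  # landfalls
        loss = 0.0
        for _ in range(storms):
            # Severity is highly skewed: most storms are modest, a few extreme.
            loss += random.lognormvariate(17, 1.5)  # dollars, illustrative only
        return loss

    seasons = sorted(simulate_season() for _ in range(10_000))
    mean_loss = sum(seasons) / len(seasons)
    p5, p95 = seasons[499], seasons[9_499]

    print(f"single 'expected' loss: ${mean_loss:,.0f}")
    print(f"90% of simulated seasons fall between ${p5:,.0f} and ${p95:,.0f}")

In a run like this, most simulated seasons produce little or no loss while a handful are catastrophic, which is exactly the spread that a single quoted number hides.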
“Models are seductive,” Nonnie Burns told participants at a national reinsurance conference in 2009, when she was Massachusetts’ insurance commissioner. “We’ve all been sucked into the fascination of models.”
The trap, said Burns and others, is believing the final number chosen from the range of estimates. For a specific insurance company riding out a specific catastrophe, it is almost always wrong.
United Property & Casualty filed documents with state regulators showing its catastrophe model, applied in hindsight, estimated Hurricane Charley would cost twice as much as it actually did.
The same program came close for hurricanes Frances and Katrina, but overshot Hurricane Jeanne by a factor of three.
Another major Florida insurer, the Tower Hill Group, said the state-approved model it uses to set rates came close to actual losses only once. It overshot hurricanes Frances, Ivan and Wilma by a factor of two.
Errors run the other direction, too. Tiny insurer Homesite told regulators its model underestimated four of the past five hurricanes by as much as 60 percent.
Despite the missed predictions, all three insurers told regulators they give the models “100 percent credibility” to support recent rate hike requests.
Florida insurance regulators ask every insurer to compare actual losses with what the models predict. Scores of agency files reviewed by the Herald-Tribune show few comply.
Former Renaissance Reinsurance CEO William Riker, a leader in the early adoption of models, accused the industry of what he called “delusional exactitude.”
“A lot of management, regulators, consumers, have gotten confused thinking these cat models are precise estimates of what can actually happen,” Riker told the Herald-Tribune in an interview before his death last May.
GARBAGE IN
If a model is imprecise by nature, it is even more so if the information fed into it is wrong.
After insurers reported widespread problems with their models underestimating Hurricane Katrina losses, modelers reviewed the data insurers had put into those models.
AIR Worldwide discovered property values for commercial buildings were off by as much as 90 percent. Other crucial details were missing, including roof type, building construction, window protection and height.
A similar study by competitor RMS found an 80 percent error rate, a company official told the Royal Gazette, a Bermuda newspaper, in 2008.
Among the most glaring examples: floating gambling casinos in Biloxi, Miss., had been coded into models as land-built concrete buildings. Tossed by the storm, an RMS official said, they behaved more like mobile homes.
After two decades of encouraging insurers to use and trust their models, AIR founder Karen Clark was alarmed.
“Even the minimal amount of information the companies were putting in was lacking,” she told the Herald-Tribune. “There is no way the models can even come close to giving accurate or precise loss estimates.”
Those errors do not just mislead insurers. They lead to added charges for policyholders.
Bad data is so widespread that a survey by financial consulting giant Ernst & Young found reinsurers — companies that sell coverage to insurers — commonly tack on surcharges as high as 25 percent to cover potentially missed risk. The penalty is passed to policyholders, who pay for their insurer’s inability to distinguish between a concrete bunker and a mobile home.
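Some quick arithmetic, using a hypothetical premium and reinsurance share (only the 25 percent figure comes from the Ernst & Young survey), shows how such a surcharge can reach a homeowner's bill.

    # Back-of-the-envelope arithmetic with invented figures; only the 25 percent
    # surcharge comes from the Ernst & Young finding cited above.
    annual_premium = 2_000.0        # hypothetical homeowner premium, in dollars
    reinsurance_share = 0.40        # hypothetical slice of premium spent on reinsurance
    data_quality_surcharge = 0.25   # surcharge a reinsurer may add for suspect data

    base_reinsurance_cost = annual_premium * reinsurance_share
    surcharge = base_reinsurance_cost * data_quality_surcharge

    print(f"reinsurance cost without surcharge: ${base_reinsurance_cost:,.0f}")
    print(f"data-quality surcharge:             ${surcharge:,.0f}")
    print(f"premium if the surcharge is passed on: ${annual_premium + surcharge:,.0f}")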
SKEWING RESULTS
How an insurer uses a model also greatly affects the results it gets. It is up to insurers whether to factor in storm surge, details of home construction and even the exact location of a property.
Experts say such flexibility allows insurers to fit a model to their particular situation or tolerance of risk.
But the Herald-Tribune also found insurers choosing models or using them in ways that boosted their bottom line, including to argue for rate hikes.
Filings with Florida regulators show several insurers sought rate increases this year after using catastrophe models that left out loss-reducing details such as roof shape or storm shutters.
Other insurers, including State Farm, modeled their policies at the ZIP code level rather than street address, a practice a former Lloyd’s of London executive said generally increases the estimated loss. An analysis by RMS showed the hurricane risk within a single Miami ZIP code differs by as much as 250 percent.
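One plausible mechanism, sketched below with invented numbers, is that a modeler who knows only the ZIP code treats the uncertainty conservatively, leaning toward the most exposed part of the ZIP. The weighting used here is an assumption for illustration, not a documented vendor method.

    # Invented numbers for a hypothetical ZIP code whose coastal strip is far
    # riskier than its inland blocks. The "conservative" ZIP-level treatment
    # below is an assumption for illustration, not a documented vendor method.
    street_level_risk = {            # hypothetical expected annual loss per home
        "coastal block": 3_500.0,
        "inland block A": 1_200.0,
        "inland block B": 1_000.0,
    }
    homes_per_block = {"coastal block": 100, "inland block A": 400, "inland block B": 500}

    true_total = sum(street_level_risk[b] * n for b, n in homes_per_block.items())

    # With only a ZIP code, weight every home halfway between the ZIP's average
    # risk and its worst block.
    zip_rate = 0.5 * max(street_level_risk.values()) + 0.5 * (
        sum(street_level_risk.values()) / len(street_level_risk))
    zip_total = zip_rate * sum(homes_per_block.values())

    print(f"street-level total expected loss: ${true_total:,.0f}")
    print(f"ZIP-level (conservative) total:   ${zip_total:,.0f}")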
Which model insurers choose is crucial, potentially doubling the estimated hurricane losses, according to a 2009 Florida State University study.
Confidential documents turned over to Florida regulators show Allstate in 2007 went so far as to develop a “Plan B” if it thought its catastrophe model would not support a rate hike.
If that were the case, according to a company PowerPoint presentation obtained and cited by state regulators, Allstate intended to switch to a later model version known to produce higher losses.
In a 2008 letter to Florida Senate leaders, Allstate lawyers said the company was not model shopping.
They said Allstate planned to eventually switch to the higher model anyway and it was just a matter of timing.
State Farm this year openly switched to a model that better supported its rate-hike request. After years of presenting other models, State Farm swapped to a seldom-used engineering-based catastrophe model to argue for a 21 percent rate increase on homes fortified against hurricanes.
The company said it always uses multiple models, and chose this one to best support its evidence that home mitigation, such as storm shutters and modern roof design, is not as effective as regulators contend.
A comparison by the Herald-Tribune shows State Farm’s new model generates statewide loss estimates 18 percent higher than its previous model. Records show Florida regulators questioned the model switch but nevertheless approved the rate increase last week.
MODEL CREEP
Until 2006, Florida largely limited catastrophe models to meteorology and engineering.
That changed when modelers successfully argued they had enough data from the 2004 and 2005 hurricanes to accurately predict what they call “demand surge.”
The label once included only the rise in price of materials and hourly wages that accompany the biggest storms. But with state approval came a large expansion of demand-surge factors.
The model RMS created in 2006 calculated not only for a bundle of shingles, but for price-gouging by contractors, claims fraud by policyholders and sloppy work by harried adjusters.
It also created what it called “Super Cat” charges for major storms RMS believed would trigger a series of follow-on disasters, as did Katrina in New Orleans.
They include the economic meltdown of a community, botched disaster response, political interference with insurers, and unforeseen events, such as the collapse of the levees. The Super Cat category drove up insurance costs primarily for commercial policyholders.
Eight cities, including Miami and Tampa, were designated as susceptible to such systemic meltdown, and their hurricane loss estimates increased accordingly.
For some Florida businesses, these new model factors did more to increase premiums than the controversial assumption that hurricane frequency had increased.
For a commercial property insurer, the Super Cat designation alone has the potential to more than double predicted hurricane losses, New Jersey-based insurance broker NAPCO warned clients in a 2006 review. RMS told the Herald-Tribune home losses typically increase less than 10 percent.
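A hypothetical stack of such loadings shows how quickly they compound on top of a base wind-loss estimate. The factor values below are invented; the real ones are confidential and are not reported in this story.

    # Hypothetical loading factors; the actual values inside the models are
    # confidential and are not disclosed in this story.
    base_modeled_loss = 100_000_000.0   # ground-up wind loss for a commercial book

    loadings = {
        "demand surge (materials and labor)": 1.20,
        "claims inflation (fraud, gouging, rushed adjusting)": 1.10,
        "Super Cat (systemic follow-on disasters)": 1.50,
    }

    loss = base_modeled_loss
    for name, factor in loadings.items():
        loss *= factor
        print(f"after {name}: ${loss:,.0f}")

    print(f"combined uplift over the base loss: {loss / base_modeled_loss - 1:.0%}")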
The model expansion allows insurers to shield yet more of their rates from open scrutiny by regulators. Where insurers once had to justify such charges, the charges are now an automatic function of the confidential model.
From 2006 to 2007, while the Florida modeling commission reviewed demand surge, it also ruled those discussions confidential to shield information from competitors.
“If it’s not logical, we would not approve it,” said commission chairman Randy Dumm, who runs an insurance risk institute at Florida State University.
Three commission members told the Herald-Tribune they believe demand surge is a real expense. Yet they also characterized efforts to model that cost as “evolving” and “not well-understood.” They noted models can include false assumptions that inflate losses.
Modeling commission actuary Martin Simons testified before Massachusetts regulators that the new RMS model presumed insurers will pay more to house storm victims in hotels — but applied that charge to all policies, even vacation homes whose owners do not collect temporary living expenses.
RMS in 2007 told the Florida model review commission it based its estimates for demand surge on data from hundreds of thousands of claims against its insurance company clients.
Though the company said at the time that there was no published literature to support its methods, RMS vice president Claire Souch last week pointed to recent studies by a French economist.
“You can be quite data-driven on this,” she said. “It’s human behavior, but it manifests itself through numbers.”
While the new charges are an attempt to make models better mirror the real world, they are “too blunt,” and should not be applied to all policies as a matter of course, said modeling expert Karen Clark.
Others worry that adding charges not based on independent research only makes models easier to manipulate.
“It leaves the door open for all sorts of incentives to take root,” said Pielke, the University of Colorado professor.
NO RULES IN BERMUDA
There are no rules for how Bermuda reinsurers — ultimately responsible for the bulk of private hurricane coverage and its price — use their models.
A survey by Bermuda regulators in 2008 found four of five reinsurers adjust their models in some way. Two-thirds increase the value of property as reported to them by the insurers they cover.
A similar number add in storm surge, though insurers do not generally pay flood damage. A third apply their own increases for price-gouging and inflation.
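The survey says which adjustments reinsurers make, not how big they are. The sketch below, with invented percentages, shows how even modest-sounding adjustments add up.

    # All adjustment sizes below are invented; the Bermuda survey cited above
    # reports only that such adjustments are made, not their magnitude.
    ceded_modeled_loss = 50_000_000.0   # insurer's modeled loss as reported to a reinsurer

    adjusted = ceded_modeled_loss
    adjusted *= 1.15   # assumed markup of reported property values
    adjusted *= 1.08   # assumed storm-surge load
    adjusted *= 1.05   # assumed load for price-gouging and inflation

    print(f"insurer's modeled loss:      ${ceded_modeled_loss:,.0f}")
    print(f"reinsurer's adjusted figure: ${adjusted:,.0f} (+{adjusted / ceded_modeled_loss - 1:.0%})")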
Though she said data quality is better among Florida home insurers than commercial carriers, Clark has pressed state regulators and national rating agencies for greater scrutiny of how insurers use models.
At the minimum, she said, regulators should require independent audits of data fed into models.
A rating agency, A.M. Best, agreed to ask insurers to declare what data quality checks they use.
Otherwise, Clark said, there has been no fundamental change in how rating agencies or regulators address data quality.
Find this story at: http://www.heraldtribune.com/article/20101115/ARTICLE/11151050/-1/news300?Title=Insurers-computer-models-deeply-flawed