As in most scientific fields, meta-analysis of a range of studies and statistics is crucial for testing both the reliability and validity of data before it is put to further use. This blog article examines how risk professionals working in the (re)insurance market can adopt this approach to inform decisions regarding flood risk.
In recent years, data has become crucial to the day-to-day functioning of businesses across the globe, spanning almost all industry sectors. The (re)insurance market is no different: key decisions are informed by data at every level of the business – from underwriting decisions made at the point of quote, to company-wide strategies set at executive level – to ensure that opportunities to turn a profit are maximised whilst the risk of loss is predicted and accounted for. This is where a data provider comes in, supplying a single dataset which gives a view of how likely an area is to flood.
This raises an important question: can one model alone provide the view of risk needed to make these important, and sometimes complex, decisions?
Some of the biggest scientific organisations, such as the Intergovernmental Panel on Climate Change (IPCC), understand the importance of analysing outputs from multiple models and taking these into consideration before publishing their findings. Indeed, when drafting its reports on climate change, the IPCC draws on results from over 40 datasets to improve the accuracy of its predictions.
Professionals working with models of this kind are aware that relying on just one data source can be problematic. The variety of methodologies that can be used to model flood events gives rise to differing views of risk from model to model, so risk levels may not always be accurate in every location.
This is exacerbated when looking at a national-scale model. In places where risk scores vary, it is prudent to have at least one other model for comparison. Whether the purpose is confirming what a preferred dataset is already outputting or analysing the nuances within differing results, this exercise gives risk professionals a wider pool of information on which to base the best course of action.
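As a minimal sketch of how such a comparison might be automated, the snippet below flags properties where two score datasets materially disagree, so an analyst can review them by hand. The dataset names, property IDs, 1–5 score scale and disagreement threshold are all illustrative assumptions, not real products or real data:

```python
def flag_disagreements(scores_a, scores_b, threshold=2):
    """Return property IDs where the two models' flood risk scores
    differ by at least `threshold` – locations worth a closer look."""
    flagged = []
    for prop_id, score_a in scores_a.items():
        score_b = scores_b.get(prop_id)
        if score_b is not None and abs(score_a - score_b) >= threshold:
            flagged.append(prop_id)
    return flagged

# Hypothetical portfolio scored by two fictional models (1 = very low
# risk, 5 = very high risk).
model_a = {"PROP-001": 1, "PROP-002": 4, "PROP-003": 3}
model_b = {"PROP-001": 1, "PROP-002": 2, "PROP-003": 5}

print(flag_disagreements(model_a, model_b))  # → ['PROP-002', 'PROP-003']
```

In practice the agreed properties confirm the preferred dataset's view, while the flagged ones are where a second opinion adds the most value.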
In addition, with Solvency II regulations to consider, having more than one dataset in your arsenal gives you a stronger understanding of your data and of the risk distribution across your wider portfolio. This benefit comes alongside a potential uplift in new business and increased confidence in underwriting decisions, which in turn helps to reduce losses.
We at Ambiental have spoken with a number of insurers looking to take this approach, and one of our large clients is already actively doing so, using our dataset alongside that of another leading supplier. This trend looks set to continue, particularly given the frequency of flood events in recent years and the competitive nature of the industry.
If you are looking to take on an additional dataset for the reasons outlined in this article, or for another reason which we haven’t covered, one of our team would be happy to hear from you.
Please send any queries regarding our data solutions through our contact form here, or give us a call on +44 (0)20 3857 8543.
About the Author
William Doheny is the Research Assistant at Ambiental Risk Analytics. His key role is to investigate and report on new innovations in flood data and analytics, and to assist in understanding emerging market trends.