About the author
James Tribe is a Content Manager for Oxbow Partners' Magellan™.
May 17, 2021, by James Tribe
In our latest interview with a technology executive, James Tribe, Content Manager for Magellan™, caught up with Adrian Rands, Founder and Chairman at Quantemplate, to discuss who Quantemplate is, what it does, and why it is different from other data ingestion tools.
James: What is Quantemplate and why did you start it?
Adrian: Quantemplate is a data automation and connectivity platform for (re)insurers. It handles all data related to risk – claims, premiums, exposures, etc. – bringing it into a unified environment from many different underwriting channels such as delegated underwriting, binding authorities, and MGAs, and cleaning it to fit into whatever schemas the client needs. We sit between the end-points of data sources and data visualisation, modelling, and regulatory reporting, automating and connecting everything between these two points.
Marek Nelken, Tom de Gay and I started Quantemplate back in 2013. I was previously at Howden Specialty, working as a reinsurance broker placing casualty treaties and binding authorities. On the side, I had been building technology for predictive forecasting of casualty reinsurance; more recently, I had been working at the quantitative hedge fund DE Shaw. Back in 2012, insurers were asking whether the tools we were building could help improve their analytics. But we realised the real technical challenge was not the analytics models but the limited access to high-quality underwriting data – predictive models are only as good as the data feeding into them. So we combined our knowledge of high-frequency trading and insurance modelling and founded Quantemplate. From there, we’ve spent the last eight years building up the core functionality and connectivity to support flexible data ingestion.
A lot of tools out there can help move data from A to B, but we’ve found that you need the foundational technology that can ‘interpret’, standardise and reformat data really well. This is how you get to the stage of doing the value-adding analysis that insurers get excited about. We now have several global insurers, such as AXA, Chubb, and Sompo, who route their global risk data through Quantemplate.
James: What differentiates you from other data ingestion tools?
Adrian: Quantemplate’s immediate benefit to (re)insurers comes from the automation of data cleansing and standardisation. Typically, data cleansing is conducted manually and as a result is a time-consuming and inefficient task within many insurance companies, especially in specialty and commercial lines. The automation offered by Quantemplate can deliver substantial cost and time savings from day one.
However, Quantemplate goes further than other data ingestion solutions: many solutions and processes around data transformation used in the market today work by truncating, aggregating, or slicing down the data available for analysis. In contrast, Quantemplate gives you the flexibility to apply best practice to all your data, rather than truncating it. So rather than just using the automation to deliver cost-savings, (re)insurers can utilise all the subtlety and colour of their risk data to help quantify exposure and deliver idiosyncratic insights.
James: So it’s not just about automating data ingestion?
Adrian: No. Currently, best practice is often only applied to a small segment of an insurer’s book, with swathes of data not used in analysis. This is because the data is not flexible enough to fit into the schema needed by internal and external systems. There’s a lot of technology out there that can connect these systems, but they still need a “translator” to make the data they’ve connected understandable between different solutions.
Quantemplate solves this by “translating” data, using AI and business rules to convert data into the schemas needed throughout the client’s technology ecosystem. We have pre-built integrations with major data partners like CapitalIQ, Praedicat, and Google Geo Coding, but the platform offers the flexibility to interpret and ingest data from any source. In fact, many connectivity providers in the market actually use our solution to translate their incoming data.
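To make the “translation” idea concrete, here is a minimal, purely illustrative sketch of the business-rules half of that approach: mapping the arbitrary column headers of an incoming bordereau onto a target schema. The field names and aliases below are invented for the example and do not come from Quantemplate’s platform.

```python
# Hypothetical sketch of rule-based schema "translation".
# Each target field lists the source headers it may arrive under;
# a real system would combine rules like these with ML-based matching.
HEADER_RULES = {
    "gross_premium": ["Gross Premium", "GWP", "Premium (Gross)"],
    "inception_date": ["Inception", "Start Date", "Eff Date"],
    "insured_name": ["Insured", "Assured", "Policyholder"],
}

def translate_row(row: dict) -> dict:
    """Map a row with arbitrary headers onto the target schema."""
    out = {}
    for target, aliases in HEADER_RULES.items():
        for alias in aliases:
            if alias in row:
                out[target] = row[alias]
                break
    return out

raw = {"GWP": "1200.50", "Assured": "Acme Marine Ltd", "Inception": "2021-01-01"}
print(translate_row(raw))
# {'gross_premium': '1200.50', 'inception_date': '2021-01-01', 'insured_name': 'Acme Marine Ltd'}
```

The point of the sketch is that the rules live in data, not code: adding a new underwriting channel means extending the alias lists, not rewriting the pipeline.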
This translation layer and user interface are a result of nearly a decade of evolution. The combination of robust data management and ingestion with open flexibility and connectivity is what sets Quantemplate apart from its competition.
James: If we think of data schemas like “languages” that currently require “translators,” why has the insurance industry resisted using an imposed “language” such as ACORD?
Adrian: In a way, the concept of a set standard is a shortcut, not a solution. It forces all data to be at the same level of granularity and in the same template, rather than allowing for flexibility. Variety is the spice of life – insurers need different data variables to quantify the risks associated with each niche class of coverage. Fringe data sets are key to achieving precision risk assessment, and therefore insurers will resist the push for standardisation. To continue the translation metaphor, businesses naturally evolve their own internal “languages” to interpret these data sets. The end-goal isn’t to get everyone to speak the same language, but to ensure it can be communicated to partners in a language that they understand.
The initial attempts to create standards were driven by the limitations of the technology, e.g. SQL requires set schemas. With Quantemplate’s platform, it’s possible to automatically read and interpret any kind of dataset. In that sense, it mimics the way underwriters in niche lines have always viewed data, whilst at the same time rendering this data usable across the business.
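The contrast Adrian draws with fixed SQL schemas can be illustrated with a toy example: instead of declaring column types up front, infer them from the data itself. This is only a sketch of the general idea; the heuristics and column names are invented for the example.

```python
# Illustrative only: tiny type inference over schemaless tabular data,
# the opposite of SQL's declare-the-schema-first approach.
import csv
import io
from datetime import datetime

def infer_type(values):
    """Guess a column's type from its string values."""
    def all_parse(fn):
        try:
            for v in values:
                fn(v)
            return True
        except ValueError:
            return False
    if all_parse(int):
        return "integer"
    if all_parse(float):
        return "decimal"
    if all_parse(lambda v: datetime.strptime(v, "%Y-%m-%d")):
        return "date"
    return "text"

def infer_schema(csv_text):
    """Read a headed CSV and infer a type for each column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return {col: infer_type([r[col] for r in rows]) for col in rows[0]}

sample = "policy_id,limit,bound\nP-001,250000.0,2021-03-01\nP-002,1000000.0,2021-03-15\n"
print(infer_schema(sample))
# {'policy_id': 'text', 'limit': 'decimal', 'bound': 'date'}
```

A production system would of course need far richer inference (currencies, locales, dirty values), but the sketch shows why a platform that reads data this way is not forced to impose one standard template on every source.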
We also believe the necessity for automated communication within insurance networks, for example between MGAs, carriers, and reinsurers will continue to greatly increase as risk management becomes more precise. As more business goes digital, data will become more niche, flexible and granular, a trend partly driven by rapidly emerging new data sources within the embedded economy and from IoT devices.
The end-state will likely look something like a social network, containing clients, brokers, MGAs, carriers, reinsurers, ILS funds, regulators, and peripheral third parties. That network is alive and changing, and a connectivity tool like Quantemplate is what’s essential for clear communications, rather than imposing set data schemas which limit the colour of this conversation.
James: What can customers look forward to from you in the near and longer term?
Adrian: In the immediate term, we’re building a comprehensive API suite, releasing new off-the-shelf connectivity with partners at the rate of around once a month. This includes new Insurtechs for bespoke pricing and modelling software, so customers can experiment with and test new pricing technologies easily. Downstream, we’re also creating off-the-shelf connectivity into the major policy admin and modelling systems like Duck Creek, RMS, and AIR.
In the longer term, there’s still lots of work to be done in improving connectivity between underwriting partners, for example allowing the instant exchange of data between MGAs and a panel of carriers, with built-in validation and data conversion. I can still remember walking around Lloyd’s with a stack of papers, looking around at queues of brokers all doing the same thing. Brokers and underwriters will undoubtedly continue to benefit from face-to-face negotiations, but the stacks of paper are best left behind.
During our conversation, Adrian and I often returned to the theme of spreadsheet automation. Spreadsheets have become a byword for underdevelopment in insurance, especially in commercial lines. But insurers must not get tunnel vision. As Adrian points out, spreadsheets will always have their uses, and we should not expect to see them disappear from underwriters’ repertoires any time soon. Robust data management must go beyond automating spreadsheet ingestion, to enable seamless data communication throughout the insurer’s ‘social network’.
The challenge for solutions like Quantemplate is in quantifying the ROI of enabling these sophisticated data “networks”. This is especially true for commercial and specialty lines where underwriters are accustomed to using a combination of niche datasets, esoteric models, and instinct to define risk. As more and more insurers turn their attention to connectivity, time will tell where the greatest value is to be had. But whatever approach they choose, as we argued in our recent Impact 25 report, becoming data-led is “table stakes” for insurers. The businesses that are content to focus on short-term cost-savings by automating around the edges of their “data comfort zone”, will be selected against by those that are not.
To find out more about Quantemplate’s clients and capabilities, see the latest company news and discover similar vendors, visit their profile on Magellan™ here.
Alternatively, visit Quantemplate’s website at www.quantemplate.com.
Our TechExec interviews are powered by Magellan™, Oxbow Partners’ insurance technology navigator.