Why proprietary insight trumps proprietary data
April 14, 2021 Chris Sandilands
A private equity client recently told us that they had cooled on an investment because they had discovered that the target did not have any proprietary data. As a result they were merely “buying a process”, and that meant that the business did not have a sustainable competitive advantage.
We disagree. In this post we put forward three counter-arguments, two anecdotal and one with more insurance context. We end with a thought on Open Finance.
Anecdote 1: Bellingcat
Listen to the Bellingcat podcast about the downing of MH17. (You should do this anyway, whether you are interested in insurance or not.)
Bellingcat is an “independent international collective of researchers, investigators and citizen journalists using open source and social media investigation to probe a variety of subjects.” Through forensic analysis of publicly available social media posts and cross-referencing of public domain materials (e.g. maps shown at Russian military press conferences with satellite data), they have proved beyond reasonable doubt that Russia provided the weapons that downed the aircraft and was coordinating the separatists on the ground.
If governmental crime can be proved using only third party data, then we think it is reasonable to suggest that competitive advantage in insurance can be built on it too.
Anecdote 2: SME credit insurance
The trade credit insurance market is an esoteric backwater in the insurance landscape, dominated by a number of specialist players like Euler Hermes, Atradius and Coface. These insurers have been around for decades meticulously collecting insolvency data from bankruptcy gazettes around the world. They were the oracle on credit risk at global scale.
And then companies like Xero, the cloud accounting platform, and Amazon Marketplace came along. Xero has millions of subscribers, mostly SMEs, and for each SME the company has real-time data on cash balances, payables and receivables. Amazon, likewise, has a live view of the sales performance of every merchant on its Marketplace.
We are not going to predict the future of the trade credit market in this article. Our proposition is merely that proprietary data does not create as secure a moat as some might think in a world where new data is being created at unprecedented pace. A forward-looking strategy and ability to react quickly to change are more important than a large historical dataset.
The insurance view
The view of our private equity client is not unusual in insurance. A few years ago, an executive at an insurance services and analytics business told us he was not interested in the new generation of analytics platforms because his firm invested only in proprietary datasets. (We note that the firm has recently started to acquire such analytics businesses.) On the Talent Equals podcast, Steven Mendel, CEO of Bought By Many, recently said that “data is only useful if you collected it and it’s proprietary to you.”
We think this is a surprising statement from Steven Mendel. In his recent book, Sam Gilbert, Bought By Many’s former Chief Marketing Officer, describes how the business analysed Google data to identify pockets of unfulfilled demand for insurance. This allowed the company to find its first hundred thousand customers quickly.
To be fair, the next thing Steven says is: “that doesn’t mean that other [i.e. non-proprietary] data is useless, but it’s nowhere near as useful as data that is yours, that is unique to you and only you have it.” Steven was probably talking up the bank of forty thousand survey responses about pet insurance that Bought By Many has recently collected, along with other sources of proprietary insight. But our view is more balanced: good analysis of third party data is in itself incredibly valuable and often differentiating – and it is hard to imagine any business that does this well not developing at least some proprietary data in parallel.
Is the reverse then also true? In other words, are companies with large proprietary datasets also excellent at analysing third party data? We think not. Consider, for example, large insurers, which typically hold huge repositories of policy-linked data – customer information, risk surveys and claims costs, for example. This data is often locked away in legacy admin systems or even lever arch files. In fact, it is precisely because of large companies’ low maturity in the use of this data that the new generation of data processing businesses has emerged.
Who is this new generation? There are many examples. Another podcast worth listening to is Forbes McKenzie, Founder of McKenzie Intelligence Services, on Voice of Insurance. Forbes is a former military intelligence officer turned insights provider to the insurance industry. MIS ingests data about events around the world, for example the Texas freeze or hurricanes, analyses damage, and allows insurers to estimate losses as a result.
In the podcast, Forbes describes how his software can, for example, identify from aerial imagery which buildings have tarpaulins over them. This allows the company to understand, at pace and scale, which structures have been severely damaged. In other words, the analysis creates a proxy for the original structural survey data. We would argue that the value in his business is the ability to source and interpret this data, and we would not expect him to rely on proprietary data sources.
Another business we have often talked about is Pharm3r, which provides liability underwriters with deep insight into drug and medical device manufacturers. We have written about Pharm3r at length on our blog and in our 2019 Impact 25 report. The key point for the purposes of this article is that the analysis relies principally on publicly available data about, say, reported side effects from drugs. You could, in theory, recreate this insight, but you’d probably have to get a PhD in molecular biology first, and perhaps set up and sell a pharma hedge fund – like its founder Libbe Englander.
Direction of travel: Open Finance
A final thought on the topic is the impact of Open Finance, which we recently wrote about. The FCA’s default principle is that financial services customers own and control both the data they supply and the data created on their behalf. Open Finance business models (e.g. Impact 25 2021 Member Optiopay) therefore centre principally on the processing of data they do not own. In other words, the moat is the ability to create a reasonable value exchange for the customer. The direction of travel is away from proprietary data and towards proprietary insight.
Conclusion
A business with a defendable proprietary dataset and market-leading analytics is, of course, the gold standard – but also a unicorn.
But it is misguided to assume that value can only be created from proprietary datasets – and indeed their moats may be shallower than you would expect in this world of fast-moving digital propositions. Instead, we believe that proprietary insight can come from either valuable datasets or differentiating analytical capabilities, and it is hard to imagine a business with market-leading analytics that would not be creating proprietary data and insight as it goes along.