Lloyd’s Market Association (LMA) & Oxbow Partners Event on Gen AI: Key take-aways
July 3, 2024
Yesterday we launched our report on the Maturity of Generative AI in the Specialty and Reinsurance markets at a packed joint event with the Lloyd’s Market Association in the Old Library at Lloyd’s.
Call me biased, but this was by far the best AI session I’ve attended, and I wrote six pages of notes. Many thanks to the presenters and panellists for sharing their insights – Elizabeth Jenkin (Underwriting Director at LMA), Rachel Turk (CUO at Lloyd’s), James Slaughter (Group CUO at Apollo), Marianne Harvey (COO at AEGIS London), Kanika Chaganty (CDO at Brit) and, of course, our own Miqdaad Versi.
This article summarises some of the most interesting insights.
Rachel Turk presentation
Rachel talked briefly about the risks posed by models and the need to oversee them. She drew a good parallel: insurers are used to overseeing similar ‘delegation risks’ in their coverholder relationships and model oversight can be thought about in a similar way.
Rachel believes – as do we – that humans will remain central to the specialty underwriting process for the foreseeable future, both because of the complexity of business at Lloyd’s and because there needs to be an audit trail for Lloyd’s and the PRA. Furthermore, data quality and availability will continue to hold back model effectiveness in the short term, and capital providers generally remain concerned about “black box” underwriting in this part of the market.
She then turned to “augmented underwriting” and cited an interesting D&O use case in which underwriting information (e.g. annual reports, financial data, analyst reports) could be collated and pre-analysed in “three minutes rather than three hours”. D&O underwriting has moved on a long way since I was a high layer D&O fac underwriter; I like to joke it’s where I learnt to use the =RAND() function in Excel to price risks.
As the market is evolving so quickly, Lloyd’s is taking a cautious approach to oversight, limiting formal written guidance and “listening to the market” to establish and evolve its frameworks.
Panel discussion
During his opening address summarising the findings of our report, Miqdaad had asked for a show of hands from the audience to see who thought Gen AI was going to be transformational for the industry. Acknowledging significant sample bias, the result was 100%.
James Slaughter (Apollo) opened the panel discussion by tempering this enthusiasm. The impact of Gen AI would likely be significant, he argued, but would take much longer than five years to materialise; market initiatives tend to take a long time, often much longer than anticipated. For Gen AI to be effective, the market needed to “build the data foundations, work out how to use models safely, and learn from some mistakes.”
Marianne Harvey (AEGIS London) then commented that the most important first step for any company was to establish its strategic position with regard to Gen AI. “Do you want to be reactive, a fast follower or a pioneering innovator?” She also noted the significant work needed to make progress on Gen AI including access to data, training and governance frameworks.
Kanika Chaganty (Brit) picked up the theme of education and the importance of developing colleagues’ data capabilities. As a data professional, she is excited that data topics have moved to the top of the executive agenda, even though many of these topics are not new. For example, data ethics, privacy and fairness are now all top-level questions. She noted that existing regulations would probably have to flex to adapt to the new realities.
On this theme, James observed that the market needed to shift from seeing data as a proprietary asset to a “democratised asset”. He described how Apollo had hired data professionals from other industries whose natural instinct was to share code and insight on platforms like GitHub – a mentality that was not traditionally ingrained in the market. Companies needed to accommodate a new generation with a different mindset.
Marianne then described a few promising early use cases: broadly, anything that was time-consuming and could be addressed in isolation to avoid constraints like legacy technology. She cited underwriting and claims operations, data extraction and gap-filling, triage decisioning and dynamic risk scoring. She was also excited about operational reporting and automating peer review processes, for example wording and contract comparisons. James added that the creation of “synthetic datasets” was a hugely exciting area for use cases around, say, product development.
Kanika introduced an interesting concept of the “data network effect” – broadly the exponential benefit of working with data both as a company and as a market.
The panel then discussed how they engaged with partners. Interestingly, the rationale for working with third parties was not just a known skill or knowledge deficit, but also to ensure that you were learning from others, now and in the future, in this highly dynamic area.
Elizabeth closed the panel with a question about what companies should be doing now to prepare.
Kanika responded with a neat phrase: “humans should not be afraid of AI, but should be afraid of humans using AI.” Insurance professionals need a “mindset shift” to embrace new technologies, and companies need to boost their training.
Marianne echoed the need for training and James noted that AI was a theme that should be used to attract new talent into the industry.
Miqdaad reminded the audience that Oxbow Partners would be delighted to discuss the subject with senior executives around the market to help them develop and execute their AI strategies.
About the author
Chris Sandilands is a Partner at Oxbow Partners. He leads engagements across strategy and transformation, focusing most of his time on global reinsurance and UK & Ireland retail insurance.