
Techbio: sourcing, valuing and sharing data in an AI world – takeaways from Genesis 2022

The life sciences industry saw unprecedented levels of activity in 2020 and 2021 in the fight against Covid. In 2022 we saw a different picture for the sector – with a heavy downturn in capital markets, and numerous pharma companies announcing significant scale-backs and cost-cutting measures.

However, last year was not all doom and gloom. In particular, whilst venture capital investment in the UK biotech sector in 2022 was lower than in the two previous record-breaking years, it still made 2022 the third-best year on record. Although securing investment is always challenging, significant sums of money are still going into selected companies.

Amongst these are techbios - deep tech, data-driven businesses that use advanced technologies such as AI and machine learning to transform drug discovery and patient care. Whilst these companies have been gathering momentum over the past few years, 2022 was the year when “Techbio” arguably became a more widespread term and cemented its own identity as a subsector. This was reinforced by events such as the BioIndustry Association (BIA)’s excellent “TechBio UK” event in October 2022 - you can read a summary of that event here.

With the rise of techbio expected to continue into 2023, I had the privilege of being invited to speak as part of a panel on a related topic at One Nucleus’ Genesis Life Sciences Conference in December 2022 – with a specific focus on sourcing, sharing and valuing data in the context of AI deals in the sector.  

The panel was hosted by Victoria English from MedNous and included representatives from Astellas Pharma (Bradley Hardiman), Ernst & Young (Chris Wayman), PharmEnable (Hannah Sore), imec (Peter Peumans) and Bristows (Claire Smith). Links to all the keynote sessions at Genesis are available at Genesis Keynote Videos, and if you want to check out this particular session, navigate to the section on "Value Creation and Sharing in a Converging World" at Genesis: Value Creation Session.

Here are my key takeaways from the discussion:

  1. Sources of Data: whilst there are many different sources of healthcare data, the problem is that they are disparate, siloed and can be of low quality. There was an informative discussion about specific sources of data within the NHS, for instance that Patient Reported Outcome Measures (PROMs) could be a better place to look for data for validation purposes, whereas HES (Hospital Episode Statistics) and SUS (Secondary Uses Service) data tends to be top-level information with fewer (and regularly changing) data fields, which makes it hard to use for analytics. Some companies may also want to think about seeking access to under-tapped resources such as dental data, which can be very revealing about someone’s health more generally.

Private health data tends to be more expensive and, although the quality can be much better, this is not always guaranteed to be the case. Some of this data is very difficult to access (e.g. where it is held by pharma companies). Amongst other things, data from in vivo models can be particularly hard to come by.

  2. Value of Data: so how should you share value in data-driven deals? This is undoubtedly a thorny issue. Speaker Chris Wayman of EY, who focuses on valuing data for clients, said it was rare for the NHS to get value for the data that it provides to the life sciences industry. For instance, he noted that universities are developing algorithms off the back of NHS data and spinning out the resulting technology, but the NHS does not often see any return from this.

Whilst most people would agree that the NHS should get fair value for its data, what this looks like in practice is still very difficult to determine. The value is not in the raw data itself, but lies in the insights that are gained by linking and analysing data sets. It may be appropriate for a university in this example to share with the NHS some of the value that the university receives from its spin-out, but much will depend on the circumstances. As more effort and resources are poured into the research by the spin-out company and, for example, by a pharma company that subsequently takes the product through development to market, the value chain builds significantly – but by that stage the value of the original data contributions will be seen as minimal.

Public bodies and other data providers may be looking for the “holy grail” of deals where they can share in the ultimate success of the products developed with the help of their data, by securing milestone and royalty payments as those products are progressed through the different stages of development and (hopefully) eventually are commercialised.  

The inherent difficulty of these deals for a techbio or biopharma company is that the insights derived from the use of AI and data can be varied and far-reaching, potentially leading to a range of new products and services which are more difficult to predict at the outset, and where the benefit derived from the original data is harder to trace. This can make companies nervous about agreeing to pay reach-through milestones or royalties. Of course, these sorts of outcome-based payments may be appropriate in individual cases, particularly where the data provided is instrumental to a product’s development and/or the data provider has added value by working closely in collaboration with the company in the R&D phase. However, given that techbios and other companies in the sector often have to rely on numerous, disparate sources of data, the issue becomes magnified: paying numerous royalty streams to different data sources (even if each royalty is set at a very modest rate) could become unsustainable.
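To make that royalty-stacking concern concrete, here is a rough, purely illustrative calculation (the sales figure, data sources and rates below are invented for the example and do not reflect any real deal): individually modest reach-through royalties owed to several data providers can still add up to a material share of product revenue.

```python
# Purely illustrative royalty-stacking arithmetic - all figures are hypothetical.
annual_net_sales = 500_000_000  # hypothetical net sales of the final product (£)

# Hypothetical reach-through royalty rates owed to different data providers,
# expressed as fractions of net sales.
data_royalties = {
    "hospital dataset A": 0.005,   # 0.5%
    "registry dataset B": 0.0075,  # 0.75%
    "biobank dataset C": 0.01,     # 1.0%
    "imaging dataset D": 0.005,    # 0.5%
}

stacked_rate = sum(data_royalties.values())
annual_payments = annual_net_sales * stacked_rate

print(f"Stacked royalty rate: {stacked_rate:.2%}")          # 2.75%
print(f"Annual royalty payments: £{annual_payments:,.0f}")  # £13,750,000
```

Layer any patent or platform royalties on top and the aggregate burden only grows, which is one reason the sustainability point arises so often in these negotiations.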

  3. Real-world data: there was also an engaging discussion on the panel about the use of real-world data in developing medicinal products and medical devices, and how this is taking off. We talked about the vast amounts of potentially useful data that is out there (e.g. gathered from wearables), the difficulty of accessing it (as it often lies in the hands of the likes of Apple), and whether, in any event, its quality is up to snuff (medically speaking).

We touched on other topics including patient tokenisation. In particular, we had a discussion about synthetic data – data that is created based on real-world data and shares the same attributes, but which, for example, uses a patient avatar (rather than a key code) so that the link to the original patient is severed. This reduces the privacy hurdles (and helps speed up access) whilst still providing researchers with clinically deep, granular and linked-up data sets to work with. In some people’s eyes, synthetic data could potentially have more value than traditional patient data.
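As a loose, hypothetical illustration of the distinction being drawn here – between key-coded (pseudonymised) data, where a link back to the patient is retained, and synthetic data, where new "avatar" records are generated and the link is severed – the sketch below contrasts the two. The field names, values and generation method are invented for the example and are not a description of any particular synthetic data product.

```python
# Toy illustration only: key-coded (pseudonymised) records vs synthetic "avatar" records.
# Field names, values and the generation method are invented for this example.
import random
import statistics
import uuid

real_cohort = [
    {"patient_id": "P001", "age": 67, "hba1c": 52},
    {"patient_id": "P002", "age": 59, "hba1c": 48},
    {"patient_id": "P003", "age": 72, "hba1c": 61},
]

def pseudonymise(record, key_store):
    """Key-coded data: the identifier is replaced, but a key linking back is retained."""
    code = str(uuid.uuid4())
    key_store[code] = record["patient_id"]  # re-identification key kept (held securely elsewhere)
    return {"patient_code": code, "age": record["age"], "hba1c": record["hba1c"]}

def synthesise(cohort, n):
    """Synthetic data: new avatar records sampled from the cohort's statistics.
    No output record corresponds to a real patient, so the link is severed entirely."""
    ages = [r["age"] for r in cohort]
    hba1cs = [r["hba1c"] for r in cohort]
    return [
        {
            "avatar_id": f"AVATAR-{i:04d}",
            "age": round(random.gauss(statistics.mean(ages), statistics.pstdev(ages))),
            "hba1c": round(random.gauss(statistics.mean(hba1cs), statistics.pstdev(hba1cs))),
        }
        for i in range(n)
    ]

key_store = {}
print([pseudonymise(r, key_store) for r in real_cohort])  # still re-identifiable via key_store
print(synthesise(real_cohort, 5))                         # no route back to any real patient
```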

Notably, real-world evidence is now coming into play in the regulatory approval process. For example, Roche’s product, Alecensa, received marketing approval based on the use of a synthetic control arm. This allows a pharma company to save time and money in clinical trials by recruiting patients to receive the new treatment (in the normal way) but not recruiting patients for a control group or standard-of-care arm. Instead, data previously collected from patients from other sources (including existing electronic health records) is used to provide the evidence needed to assess the drug’s relative performance against existing treatments.
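For a rough feel for the mechanics – this is not Roche’s actual methodology, and the numbers are entirely made up – the sketch below compares the response rate in a hypothetical single-arm study against an external control cohort assembled from previously collected patient records.

```python
# Rough sketch of an externally controlled comparison (hypothetical data, not any real trial).
from scipy.stats import fisher_exact

# Hypothetical counts of (responders, non-responders).
treatment_arm = (45, 55)      # patients recruited to receive the new treatment
external_control = (30, 90)   # historical / real-world patients on standard of care

odds_ratio, p_value = fisher_exact([list(treatment_arm), list(external_control)])

treatment_rate = treatment_arm[0] / sum(treatment_arm)
control_rate = external_control[0] / sum(external_control)

print(f"Treatment response rate: {treatment_rate:.0%}")       # 45%
print(f"External control response rate: {control_rate:.0%}")  # 25%
print(f"Odds ratio: {odds_ratio:.2f} (p = {p_value:.4f})")
```

In practice the external cohort would need to be carefully selected and adjusted (for example by matching on prognostic factors) before any such comparison carried regulatory weight; the sketch is only meant to show where the previously collected data slots in.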

For more on synthetic control arms and the transformative potential of AI in clinical trials, please see the previous article from my colleague, Erik Müürsepp, here.

This session was a follow-on from a previous One Nucleus panel discussion hosted and moderated by Bristows earlier last year on “Value recognition in AI transactions”. For a write-up, and bite-sized audio recordings from that event, see: Value Recognition in AI transactions, and for a fuller article on this topic see here: commercial models for AI life sciences deals.

The life sciences sector is evolving. Biotech companies are using cutting-edge techniques from data-driven tech to transform drug discovery and patient care. We call this interface of biology and technology "techbio".


Tags

techbio, data, life sciences, artificial intelligence, biotech, data protection and privacy, digital disruption, technology