Talking about the future with assurance takes faith. In a business setting, it is often preceded by carefully crafted legalese, the so-called safe-harbor or forward-looking statements, that comes clean about just how uncertain of the future we truly are. Yet a future unimagined is a future unrealized. It is our power to predict, or at least our desire to do so, that sets humans apart.
A collective peek into the future of healthcare is attempted each year at the JP Morgan (JPM) Conference, and this year’s January gathering was no different. Apropos of the fragility of predictions, it (thankfully) rained much less than had been forecast. Nonetheless, all the conventionally well-prepared biotech executives had their just-in-case umbrellas at the ready.
This year at JPM I had the privilege of guiding a discussion at the WuXi Global Healthcare Conference (v-restream begins at 148 minutes) on the role that advanced artificial intelligence (AI) and machine learning (ML) hope to play in healthcare. It was fascinating.
The conference had begun with R&D leaders expressing fundamental concerns about the limits that algorithm-training bias implies for AI/ML in Rx discovery. To unpack this in greater detail, our conversation focused initially on the key raw material: data. We moved swiftly beyond the generalities of “big data” into a focused discussion of what is commonly missing in opportunistic datasets: all too often, the “available big data” are neither of sufficient quality nor in an optimal format to generate insightful AI- or ML-derived observations, and when we elect to make data compromises we do so at great risk. The same supra-human abilities that enable AI/ML algorithms to detect subtle signal in large multi-dimensional datasets make them equally (and hauntingly) good at detecting, then erroneously focusing on, artifactual noise.
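The artifactual-noise risk can be made concrete with a toy simulation (my own sketch, not from the session, using synthetic data and NumPy): when candidate features vastly outnumber samples, pure noise reliably yields a “biomarker” that looks compelling in-sample yet fails on replication.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 hypothetical "patients", 10,000 candidate biomarkers of pure noise,
# and outcome labels with no true relationship to any feature.
n_samples, n_features = 50, 10_000
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples).astype(float)

# In-sample "discovery": correlation of every feature with the outcome.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
best = int(np.argmax(corr))
print(f"strongest in-sample |correlation|: {corr[best]:.2f}")  # looks like real signal

# Replication on fresh noise: the "biomarker" does not hold up.
X2 = rng.normal(size=(n_samples, n_features))
y2 = rng.integers(0, 2, size=n_samples).astype(float)
x2c = X2[:, best] - X2[:, best].mean()
y2c = y2 - y2.mean()
replica = abs(x2c @ y2c) / (np.linalg.norm(x2c) * np.linalg.norm(y2c))
print(f"same feature on new data:          {replica:.2f}")
```

The best in-sample correlation is substantial despite there being nothing to find, while the same feature shows essentially no association in fresh data, which is exactly why built-for-purpose datasets and held-out validation matter.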
Even in relatively mature fields, like population genomics, we are only just beginning to collect the right data needed to enable scaled AI/ML-derived insights. The same is true for preclinical, clinical, and even behavioral data. Building scale, quality, and continuity across carefully collected datasets has essentially just begun.
In addition to recent attention on improved data collection, rapid progress is beginning on the analytical side, some of it driven by customized hardware that processes image-based data with ever-greater ease. Software and algorithms, particularly deep learning approaches, are now progressing rapidly too, with new benchmarks being eclipsed daily.
Yet the raison d’être of the session was value creation: the possibility that AI/ML applications could enable smarter drug development, perhaps making R&D processes faster or cheaper, and ways in which real-world evidence could cross-correlate additional behavioral, contextual, and treatment-related insights, collectively leading to even more value-advantaged medical solutions.
Yet our starting point is somewhat grim. Even after decades of trying (without any AI/ML help), we have little to demonstrate that developing innovative medical solutions is getting any faster, nor have we reduced our rate of failure. In fact, the data suggest that, on an ROI basis, we as an industry are swiftly getting worse.
Can these AI/ML tools help? It is too early to be certain, but the trends are encouraging. As with many new tools, much trial and error awaits. Some will seek to use these methods with the wrong data and follow inferred fantasies to false lands. Others will seek to use them in model settings for which we have insufficient correlation-and-causation linkage. But transformative progress seems inevitable. The first big contributions await where built-for-purpose datasets are used, the gaps between those data are filled to facilitate longitudinal forecasting, and -omics, personal context, and individual behavior data are integrated into long-term intervention or prevention settings. The ability to eliminate costs (and steps) associated with “business as usual” will require not just technology advances but ecosystem changes involving regulators, policy, and more. But these too are likely to come as more and more evidence of value accrues from platforms built on sound datasets and validat(able) models.
Predicting the future is a fool’s bargain. But with some assurance (i.e. faith), one has to imagine that the healthcare contributions of AI and ML will (eventually) be material. And beyond a blind faith in technology, we will need equal attention on insightful ethics, regulation, and policy. Data ownership, dynamic consent, privacy, security, and data-use remuneration will all prove to be as central as any chipset or AI/ML algorithm. For the world we seek to serve will expect that our industry understands its needs and personally defined interests. This near-future will demand that we earn, honor, and respect the profound trust bestowed on those who work with the personal health and wellness data-diaries of the world. This is a setting in which the Hippocratic precept primum non nocere (first, do no harm) must always remain front and center.