AI and Drug Discovery: Attacking the Right Problems

I’ve been meaning to write some more about artificial intelligence, machine learning, and drug discovery, and this paper (open access) by Andreas Bender is an excellent starting point. I’m going to be talking in fairly general terms here, but for practitioners in the field, I can recommend this review of the 2020 literature by Pat Walters, which will take you through a number of important topics and where they seem to be heading.

Even if you’re not a computational drug discovery type, a look at Pat’s roundup might be instructive, because seeing the actual problems that the field is wrestling with will very quickly take the shine off a lot of hyped-up headlines and press releases. These include things like “How do we even estimate the uncertainty in our model, and how do we compare it to others?”, “How do we deal with molecules as three-dimensional objects with changing conformations, as opposed to two-dimensional graph-theory objects or one-dimensional text strings?”, “Since no one can actually dock a billion virtual molecules into a protein target, how can we reduce the problem to something theoretically manageable without throwing away the answers we want? And how will we know if we have?” and “What do we do when our model will only start to work if we feed it more data than we’re ever going to have?” The next time you see a proclamation that everything’s been made obsolete by AI-driven modeling, keep those in mind.
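To make the representation question a bit more concrete, here’s a minimal sketch using the open-source RDKit toolkit (assuming it’s installed), with aspirin as an arbitrary example: the very same molecule can be handled as a one-dimensional SMILES string, as a two-dimensional atom-and-bond graph, or as a set of three-dimensional conformers, and each of those views supports quite different kinds of models. None of this is from the Bender or Walters papers; it’s just an illustration of what the question is about.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# One-dimensional text string: a SMILES representation (aspirin here,
# purely as an arbitrary example).
smiles = "CC(=O)Oc1ccccc1C(=O)O"

# Two-dimensional graph object: atoms as nodes, bonds as edges.
mol = Chem.MolFromSmiles(smiles)
print(f"{mol.GetNumAtoms()} heavy atoms, {mol.GetNumBonds()} bonds")

# Three-dimensional objects with changing conformations: embed a handful
# of conformers, each carrying its own set of atomic coordinates.
mol_h = Chem.AddHs(mol)
conf_ids = AllChem.EmbedMultipleConfs(mol_h, numConfs=5, randomSeed=42)
AllChem.MMFFOptimizeMoleculeConfs(mol_h)
for cid in conf_ids:
    pos = mol_h.GetConformer(cid).GetAtomPosition(0)
    print(f"conformer {cid}: first atom at ({pos.x:.2f}, {pos.y:.2f}, {pos.z:.2f})")
```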

The Bender paper is a good place to start if you’re not knee-deep in such questions, though, and I especially appreciate a point it makes in its Figure 2. That figure shows the results of simulating improvements in the drug discovery process, with various estimates of the cost of capital, expected return from a new drug, patent lifetimes, and so on. It’s useful because very, very often you’ll hear the pitch for a new computational approach in terms of how it’ll speed everything up. No more stumbling around screening piles of molecules! No more tedious property optimization! But while those would be nice (and remember, we aren’t there yet), the real problem is having drug candidates fail in the clinic. All that other stuff is a roundoff error compared to the clinical failure rate.

That’s what the paper’s simulation found. Lowering the cost of the preclinical stages by 20% or making them 20% faster (which to a certain degree are the same thing) does indeed save you money. . .but those savings are overwhelmed by what you could realize if you could just reduce the clinical failure rate by 20%. The absolute best ways to do that would be through picking better targets and through picking compounds and targets that don’t throw up unexpected toxicity in humans. Those, sadly, are exactly the areas where AI/ML approaches are currently gaining the least traction, because it’s so hard to think up a useful way to attack them. Speeding up screening or estimating physical properties, for all their difficulties, are so much more tractable. Which accounts for the press releases talking these up as if they’re removing gigantic stumbling blocks to fast and easy drug discovery.
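To see why the clinical failure rate dominates, here’s a deliberately crude back-of-the-envelope sketch, with made-up numbers of my own rather than anything from the paper’s simulation (which also folds in cost of capital, patent lifetimes, and expected revenue): every candidate pays the preclinical and clinical bills, but only the survivors deliver an approved drug.

```python
# Crude expected-cost-per-approval model. Every dollar figure and rate
# below is an illustrative assumption, not a value from the Bender paper.

def cost_per_approval(preclinical_cost, clinical_cost, clinical_success):
    """Expected spend per approved drug: on average you must push
    1 / clinical_success candidates through the whole pipeline."""
    return (preclinical_cost + clinical_cost) / clinical_success

# Assumed baseline: $50M preclinical and $250M clinical per candidate,
# with ~10% of candidates that enter the clinic reaching approval.
baseline = cost_per_approval(50, 250, 0.10)

# Scenario A: preclinical work 20% cheaper (or 20% faster -- treated
# as the same thing here, as in the rough argument above).
cheaper_preclinical = cost_per_approval(50 * 0.8, 250, 0.10)

# Scenario B: clinical failure rate cut by 20% (failure 90% -> 72%,
# so success 10% -> 28%).
fewer_failures = cost_per_approval(50, 250, 1 - 0.90 * 0.8)

for name, cost in [("baseline", baseline),
                   ("20% cheaper preclinical", cheaper_preclinical),
                   ("20% lower clinical failure", fewer_failures)]:
    saving = (1 - cost / baseline) * 100
    print(f"{name:27s} ~${cost:5.0f}M per approval (saves {saving:.0f}%)")
```

Even with invented numbers, the asymmetry is hard to miss: trimming the early, relatively cheap stages barely moves the total, while having fewer candidates die late in the process changes it dramatically.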

This is not a new insight. But it’s a hard one to swallow, for several reasons. We have an awful lot of proxy measurements in this business (the Bender paper is very good on this topic). We have to have them, because measuring the most important things (does this drug work against a human disease, and to what degree, and without causing more problems than it solves) can only really be done in the clinic. We come up with mechanistic biochemical rationales, cell assays, animal assays, evaluation schemes for compound structures and physical properties, all sorts of things to try to increase our chances for success when the curtain goes up and the real show starts. Which is human dosing.

These proxies generate heaps of numerical data, so it’s understandable that computational approaches use them to try to make better predictions. But in the end, they’re all still just proxies. The paper’s Table 2 goes into detail on the strengths and weaknesses of the various assays and systems. The bottom line is that they’re all useful, and they’re still not enough. We all go into the clinic having done a lot of stuff that’s Necessary But Not Sufficient, and if you don’t hold your breath when the first human doses start, then you haven’t been doing this stuff long enough. What everyone wants are AI systems, computational techniques, and models that will reduce all that finger-crossing and tachycardia, but that’s unfortunately some ways off.

It’s hard to even think about the best ways to (for example) improve target prediction or human toxicity prediction computationally, other than just assembling more and more knowledge (which has been the program for the last few hundred years, and therefore does not make for a sexy stock prospectus). You’d need much better simulations of living biology than we have, and getting those to come into focus is going to take a lot of work and a lot of time. As it is, no one even bothers (for example) trying to predict side effects when a compound goes into a two-week tox study in rodents. You’re about to find out what they are, and pretty much anything that’s a real concern is going to come as a surprise to you anyway. And it’s not like side effects are constant across a population, either – variations in human physiology and immune systems make sure of that, and that’s a whole different level of difficulty. Here’s a summary:

The need to make decisions with sufficient quality is only compatible in some cases with the data we have at hand to reach this goal. If we want to advance drug discovery, then acknowledging the suitability of a given end point to answer a given question is at least as important as modelling a particular end point. . .

The problem is that modeling is easier to start doing than dealing with that suitability question. It can also be hard to explain this point to investors, to granting agencies, and to upper management, because improvements in things like assay quality and target selection are harder to quantify and come on slowly. This, to me, is the big question looming over a lot of AI/ML approaches to drug discovery, and I’m really glad to see a paper addressing it head-on.
