AI and the Hard Stuff

I noticed a headline this morning that BenevolentAI is laying off staff and reorganizing to deal with a recent clinical failure. I’ve written about them from time to time over the years (here’s an early post), and what mostly caught my eye about them was the level of hype in their press releases (as that last link will demonstrate). The company was developing a pan-Trk inhibitor for atopic dermatitis, and that link (from the company) says that “We identified the role of the Trk receptors as mediators of both itch and inflammation in (atopic dermatitis)”, although I’m not sure I’d put it that way. The Trk proteins (tropomyosin receptor kinase) were already known to be involved in such processes (and others), and people had already been working on such inhibitors in general and TrkA in particular for just this sort of indication. Perhaps it’s the pan-Trk-ness of the BenevolentAI compound that was going to make it stand out? 

It didn’t. The compound missed its endpoints last month, so it looks like that hypothesis has failed. They have a PDE10 inhibitor heading for the clinic in ulcerative colitis as well. That’s a bit more novel, although there certainly have been PDE10 inhibitors studied for other indications and other PDE subtypes studied for inflammatory bowel diseases. And this brings up something that I have said many times, and will now perhaps say louder for the folks in the back of the room:

There are no existing AI/ML systems that mitigate clinical failure risks due to target choice or toxicology.

And the kicker to that statement is that those two factors account for a huge number of clinical failures. So what we see now is AI being applied to lead compound generation, to patent-busting, to hit expansion, all sorts of early-stage issues where computational methods have a chance of helping out – and there’s nothing wrong with that at all. I like seeing it. But we do not have enough data and we do not have enough insight to use AI/ML to pick better targets that have a higher chance of succeeding in the clinic. Someday we may well. But that day is not today, and I am very willing to stand by that statement.

BenevolentAI used to say stuff like this: “. . .BenevolentAI has created a bioscience machine brain, purpose-built to discover new medicines and cures for disease. Proprietary algorithms perform sophisticated reasoning on over 50 billion ingested and contextualised facts to extract knowledge and generate complex insights into the cause of diseases that have, until now, eluded human understanding.” But then, their stock also used to be worth 10 euros/share. It’s now under 2, and that’s still a higher percentage of value remaining than there is in those claims about bioscience machine brains.

This is a hard, hard business, and I hope that AI helps us out, because God knows we need it. But getting that to work is another hard, hard business all its own.