Modern Phenotypic Drug Discovery

By this point, I’ve lost track of the number of times that phenotypic drug discovery has made some sort of comeback. The only competitor in that category is natural products drug discovery, which every couple of years gets written up as being ready for a resurgence of its own. In truth, neither of these has ever gone away, but both are subject to rethinking in light of new technologies and the lessons that have been learned over the years.

Background, for those outside the field: very broadly speaking, you can discover drugs in two ways. One is to work on the biochemistry of cells in health and disease until you think you have some mechanistic understanding: Aha, you realize after enough experiments, Disease X is caused/exacerbated by a step mediated by Protein Y, so a Protein Y inhibitor would be expected to be beneficial. That’s the “target-based” approach, and it has had some extraordinary successes and some equally extraordinary failures. A lot of really compelling ideas of this sort crash hard in the clinic. The other way to do it is phenotypic: you raise a bunch of cells, or a bunch of small organisms (up to about mouse-sized, at the most) and just start slinging compounds at them until you find some that do what you want. You don’t know the mechanism of action at all, just that something useful happened (you can, of course, use this information to start digging for that mechanism, which is what people do most of the time). This all implies that you’ve probably set up those cells or creatures to model some human disease to start with, and are then finding things that ameliorate it. And that’s one of the hardest parts, because recapitulating a disease in a useful model is not at all easy – and if you ignore that difficulty and plow ahead anyway, you run a huge risk of wasting your time in a truly comprehensive manner. There is no real phenotypic screen for Alzheimer’s, for example. But on the plus side, you can find totally new targets and mechanisms this way, which might furnish you not only with a new drug candidate but with a new understanding of the disease itself.

This article at Nature Reviews Drug Discovery makes these points well. It goes over historical and recent successes of the phenotypic approach, and discusses some of the areas it’s opening up for discussion and research. One of these is the long-vexed question of polypharmacology: what do you do when your active compound doesn’t seem to have a single target, but rather hits a whole list of stuff at varying potencies? Seen from a pure target-based viewpoint, this is a failure, and you’d better start working on something else. But to be honest, there are a lot of drugs out there (and not all of them ancient legacy compounds, by any means) that work this way, even if their developers didn’t think so at the time. So it’s not to be disparaged on principle, but it’s still a difficult area to make progress in, because of all the variables. A good enough phenotypic hit, though, makes its own case that it’s worthy of further investigation and development, even if it’s not “clean” by rigorous target-based standards. But as always, your phenotypic screen had better be a good one. That is, it had really better model the human disease in a useful way, and it had better have a good signal-to-noise ratio. The authors note that you’re much better off with assays that give a gain-of-function/gain-of-signal readout, as opposed to ones that could read out merely through cellular stress or cytotoxicity, which is an invitation to chase your tail.
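For those who like to put a number on “good signal-to-noise”, the standard screening-world metric is the Z’-factor (Zhang et al., J Biomol Screen, 1999), which compares the spread of your positive and negative controls to the assay’s dynamic range. Here’s a minimal Python sketch of the calculation – the control readings are invented for illustration, and the paper itself doesn’t prescribe this (or any particular) metric:

```python
import numpy as np

def z_prime(pos_controls, neg_controls):
    """Z'-factor for assay quality (Zhang et al., 1999).

    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values near 1 mean well-separated controls; values near or
    below 0 mean the distributions overlap and hits can't be
    reliably distinguished from noise.
    """
    pos = np.asarray(pos_controls, dtype=float)
    neg = np.asarray(neg_controls, dtype=float)
    separation_band = 3 * (pos.std(ddof=1) + neg.std(ddof=1))
    dynamic_range = abs(pos.mean() - neg.mean())
    return 1 - separation_band / dynamic_range

# Hypothetical plate-control readings (arbitrary signal units)
pos = [980, 1010, 995, 1002, 988]   # gain-of-signal positive controls
neg = [105, 98, 112, 101, 95]       # untreated negative controls
print(f"Z' = {z_prime(pos, neg):.2f}")
```

By the usual convention, a Z’ above about 0.5 marks an assay as robust enough for screening; an assay that can only hit that bar through a cytotoxicity readout is exactly the tail-chasing trap the authors warn about.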

Another area the paper brings up is screening lower-molecular-weight compounds than usual, down to fragment size. There are quite a few useful drugs out there with really low molecular weights – ibuprofen, aspirin, metformin, dimethyl fumarate, lacosamide, and more – and any screening program would be happy to have discovered something as useful as those. As the authors note, hits like these in phenotypic screens might be another case of polypharmacology, or they might be hitting pathways whose “tone” we have not understood well (and for which micromolar inhibitors might work out just fine). At any rate, there might be an opportunity for phenotypic fragment screening, and even for covalent fragments (which will call for even more attention to the validity of the underlying screening model, I’d say).
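To give a concrete sense of “fragment-sized”, here’s a short sketch using the open-source RDKit toolkit to apply the commonly cited “rule of three” cutoffs for fragment libraries (molecular weight under 300, with cLogP and hydrogen-bond donor/acceptor counts of three or fewer). The thresholds and example SMILES are conventional illustrations on my part, not anything specified in the paper:

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def is_fragment_like(smiles):
    """Rough 'rule of three' filter often used for fragment libraries.

    The cutoffs (MW < 300, cLogP <= 3, <= 3 H-bond donors and
    acceptors) are conventional, not prescribed by the NRDD paper.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) < 300
            and Crippen.MolLogP(mol) <= 3
            and Lipinski.NumHDonors(mol) <= 3
            and Lipinski.NumHAcceptors(mol) <= 3)

# A few of the low-MW drugs mentioned above, as SMILES
examples = {
    "aspirin":           "CC(=O)Oc1ccccc1C(=O)O",
    "metformin":         "CN(C)C(=N)NC(=N)N",
    "dimethyl fumarate": "COC(=O)/C=C/C(=O)OC",
    "ibuprofen":         "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}
for name, smi in examples.items():
    print(name, is_fragment_like(smi))
```

Some marketed drugs will fail a strict rule-of-three filter on one property or another, of course – the cutoffs describe screening library design, not what a useful drug is allowed to look like.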

The paper also discusses the question of target ID, which for most phenotypic programs feels like a natural progression. Most of us are innately biased towards thinking in terms of drug targets, so when a phenotypic compound emerges we want to know what it’s “really” doing. And most of the time, there is such a target in there somewhere, although finding it can be quite a haul. I know of several compounds that have been kicking around for years that are obviously doing something in the assays, but no one has ever been able to pin down quite what that is! This paper makes the case for getting out of a binary mindset about target identification. The authors point out, correctly, that target ID is a means to an end: you do not actually need to identify your target to go on to clinical trials and to the FDA for approval. I’m always struck by how many people are surprised by that, but it’s true. You also need to realize that knowing a target may not tell you nearly as much as you’d want about a compound’s mechanism of action, if (as can certainly happen) your new target lands in the middle of a bunch of not-well-worked-out biology.

There’s a good case to be made that modern chemical biology and imaging techniques have made it easier to progress a compound even when you’re not quite sure how it’s working. We can extract huge amounts of information about the cellular effects of a given compound, and if you do a good job of matching that profile against a closely related structure that’s phenotypically inactive, you can make a lot of headway. This doesn’t mean that you shouldn’t bother trying to find the target – as mentioned, that’s a great way to expand knowledge of the underlying disease, and it can lead to new programs spinning off of the phenotypic effort. But it does mean that you shouldn’t freeze in fear if you don’t have a target to point to. The FDA wants to see safety and efficacy, and that’s what we should want to see, too, for starters.
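As an illustration of that active-versus-inactive comparison, here’s a toy Python sketch that compares compound profiles by cosine similarity, assuming you’ve already reduced each compound’s cellular readout (morphological features, transcriptional signatures, whatever you have) to a standardized numeric vector. The vectors, compound names, and the idea of comparing to a reference inhibitor are all made up for illustration:

```python
import numpy as np

def profile_similarity(a, b):
    """Cosine similarity between two cellular-profile feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical standardized profiles (e.g., image-based features
# z-scored against vehicle controls) -- invented numbers.
active_hit      = np.array([ 2.1, -0.4,  3.0,  1.7, -2.2])
inactive_analog = np.array([ 0.1,  0.2, -0.3,  0.0,  0.1])
known_inhibitor = np.array([ 1.8, -0.6,  2.7,  1.5, -1.9])

# The inactive close analog acts as a structural control: a strong
# profile in the hit that vanishes in the analog argues the effect
# tracks with the pharmacology, not some generic property.
print("hit vs inactive analog:", profile_similarity(active_hit, inactive_analog))
print("hit vs reference compound:", profile_similarity(active_hit, known_inhibitor))
```

High similarity to a compound of known mechanism can also suggest where to start digging for the target, which is one way these profiling tools feed back into target ID without being required for it.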

But as the paper notes at the end, phenotypic screening is going to advance at the pace of good model development. Many of these same chem-bio tools can be brought to bear on that question as well, along with advances in cell culture, organoids, and other new assay technologies. You’re not (realistically) going to be able to recapitulate all the features of a human disease, so you will probably find yourself concentrating on particular features that you can make the case for driving a project on. I was very happy to see this paper reference Jack Scannell’s paper on translatability (blogged about here), because its point is crucial to the whole phenotypic screening endeavor. If your underlying assay is flawed, there is nothing you can do in any other part of the project to make up for it. A poorly translatable assay is a sign that you should spend your time trying to fix it, or go do something entirely different instead. It is not a sign that you should just keep on going because “it’s the best thing we’ve got”. If it isn’t good enough, it isn’t good enough. I don’t get to quote A. E. Housman much around here, but he’s right: “The toil of all that be / Helps not the primal fault; / It rains into the sea / And still the sea is salt.” If you don’t fix your assay up front, you are raining into the sea.