Quantum Computing Comes to the Rescue?

Let’s look at another aspect of the general computational enthusiasm that we’re seeing in drug discovery these days. As you read my opinions, though, keep in mind that I’ve seen several cycles of this over a >30 year career, so I can’t help but be informed (or perhaps misinformed?) by that experience. But not many of us have had thirty years of thinking about the subject of this press release from Novo Nordisk, which announces their foray into quantum computing.

This part of the blog post is where the background on that concept should go, but I’m manifestly unqualified to deliver it. Broadly put, it’s easy enough to define – this is the application to computational methods of the phenomena that can only occur meaningfully down at the quantum level, things like entanglement, superposition, and interference. There’s no one thing that encompasses all the possible techniques of quantum computing (as far as I know!) but all of the proposals plan to use these effects to do things that simply can’t be accomplished out here in the bulky world. You hear an awful lot of hand-waving talk about that step, stuff about “doing all the computations at once” and so on, and from what I do know about the subject you’d be better off ignoring that. But it does seem certain that quantum-based algorithms exist (or can exist) that would provide remarkable advantages over what can be accomplished classically. You’re not breaking out of the Church-Turing thesis, though: anything that a non-quantum computer can accomplish, theoretically, can be accomplished by a quantum one (of whatever sort), and vice versa. But there are opportunities for huge accelerations in how quickly those results can be obtained.
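If you want to see what “superposition” and “entanglement” look like when you actually type them in, here’s a minimal sketch using the open-source Qiskit library (assuming you have it installed). It prepares the textbook two-qubit Bell state – the “hello world” of quantum computing – and prints the resulting statevector. This is purely illustrative and has nothing to do with any particular drug-discovery application.

```python
# Minimal illustration of superposition and entanglement with Qiskit
# (pip install qiskit). This is the textbook Bell-state example, nothing more.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Two qubits, both starting in |0>
bell = QuantumCircuit(2)
bell.h(0)       # Hadamard gate: puts qubit 0 into an equal superposition
bell.cx(0, 1)   # CNOT gate: entangles qubit 1 with qubit 0

# The resulting state is (|00> + |11>)/sqrt(2): measure one qubit and you
# instantly know the other, which is entanglement in a nutshell.
state = Statevector.from_instruction(bell)
print(state)                       # amplitudes of |00>, |01>, |10>, |11>
print(state.probabilities_dict())  # roughly {'00': 0.5, '11': 0.5}
```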

Actually realizing those advantages, though – that’s been hard. There was a claim of “quantum supremacy” from Google not too long ago (a result that was obtained far more efficiently than any classical system could have), but not everyone in the field believed that, to put it lightly, and recent results bear out that skepticism. Building the hardware for a working quantum computer is extremely difficult, as is keeping it working once you’ve built it. And no matter how well you build it, you’re going to have to deal with quantum decoherence: the idealized particle-in-a-box of quantum chemistry courses can sit there platonically and not interact with its environment, but that’s not what’s going on out here in the real world, particularly the real world where you’d like to read out the states of all these qubits at the end of the process. You will gradually (or maybe not so gradually) lose the specialness of the quantum states that you have gone to such trouble to obtain, so a big part of any working quantum computer is going to be some really robust error correction techniques.
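To give a feel for what “error correction” buys you, the sketch below is the classical intuition behind the simplest quantum code, the three-qubit bit-flip (repetition) code: store one logical bit as three copies, let noise flip each copy with some probability, and recover the value by majority vote. Real quantum error correction is far more involved (you can’t simply copy quantum states), but the arithmetic of redundancy beating noise is the same; the error rate here is a made-up number, for illustration only.

```python
# Classical intuition behind the 3-qubit bit-flip code: encode one logical
# bit as three copies, flip each copy independently with probability p,
# then decode by majority vote. All numbers are illustrative.
import random

def transmit(bit: int, p: float) -> int:
    """Send one logical bit through a noisy channel with 3-fold redundancy."""
    copies = [bit ^ (random.random() < p) for _ in range(3)]
    return int(sum(copies) >= 2)  # majority vote

p = 0.05           # per-copy flip probability (invented)
trials = 100_000
raw_errors = sum(random.random() < p for _ in range(trials))
coded_errors = sum(transmit(0, p) != 0 for _ in range(trials))

print(f"unprotected error rate:   {raw_errors / trials:.4f}")    # ~0.05
print(f"majority-vote error rate: {coded_errors / trials:.4f}")  # ~3p^2 ≈ 0.007
```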

Another big part of any such system is going to be software that takes maximum advantage of the quantum technology you’re using. That’s not so simple, either. There are some algorithms that have proven to mesh extremely well with quantum phenomena (I discussed some of those in this post, which I think is the last time I dove into this topic), but it’s safe to say that a lot of effort has been going into identifying more such techniques in the expectation that we’re going to have hardware to run them on at some point. And that brings us, at last, to Novo Nordisk!
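As a concrete example of what “mesh extremely well” means: Grover’s algorithm searches an unstructured list of N items in roughly (π/4)·√N queries, where a classical search needs about N/2 on average. The back-of-the-envelope comparison below (plain Python, arbitrary problem sizes) is just to show why people get excited about that scaling, not a claim about any particular hardware or application.

```python
# Rough query-count comparison for unstructured search: classical brute
# force versus Grover's quadratic speedup. Problem sizes are arbitrary.
import math

for n_bits in (20, 30, 40):
    N = 2 ** n_bits                         # size of the search space
    classical = N / 2                       # expected classical queries
    grover = (math.pi / 4) * math.sqrt(N)   # optimal number of Grover iterations
    print(f"{n_bits}-bit search space: classical ~{classical:.2e} queries, "
          f"Grover ~{grover:.2e}")
```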

The press release announces an effort “to establish the first full-scale quantum computer for the development of new medicines”, but as you will see if you read on, that’s not all. Very quickly we find out that this machine is also expected to provide breakthroughs in the study of climate change and the “green transition”. In fact, the whole thing turns into a dump-truck delivery of buzzy phrases. Personalized medicine? Yep! Human microbiome studies? Of course! Large-scale genomics data? Naturally! New sustainable materials, decarbonization, cybersecurity, energy solutions – it’s all in there. And to be fair, sure, one could imagine that wildly more capable computational resources could indeed come in handy in all those fields. To the credit of Novo Nordisk and the Danish government, this is a twelve-year initiative, the first seven of which are expected to be spent in just figuring out what sort of quantum computer to build, which seems at least fairly realistic.

But in each case, you’re going to run into the same fundamental problem: faster computers are only going to be wonderfully helpful for processes whose rate-limiting steps involve the speed of computation. That might sound obvious or tautological at first, but think about it. This all gets back to the same things that I keep saying about (for example) AlphaFold – that as great an accomplishment as it is, it is not going to lead to an immediate revolution in drug discovery because our biggest problems do not depend on knowledge of protein structure and thus can’t really be accelerated by it much. Similarly, having something that can sort through massive mounds of (say) genomics data would be useful – but what would be even more useful is understanding what those results mean and what to do with them. That’s a slower process, for sure. Look at that AlphaFold-driven work that I was writing about the other week: if you suddenly gave that team access to a working quantum computer, what would they be able to accomplish with it under present conditions? Generate fairly unreliable docking-and-scoring results much more quickly? What does that buy you? (Please note – I’m not putting down that paper or the people who wrote it at all! If you read it, you can sense their own frustrations with the state of the computational tools that we have now, and I’m sure that just speeding those up would not be the first thing they’d wish for).
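To put some (entirely made-up) numbers on that rate-limiting-step point: if computation is only a modest slice of a project’s critical path, even an infinite speedup of that slice barely moves the overall schedule. This is just Amdahl’s law applied to drug discovery, and the sketch below uses invented percentages purely for illustration.

```python
# Amdahl's-law style illustration: overall project speedup when only the
# computational fraction of the work gets faster. All numbers are invented.
def overall_speedup(compute_fraction: float, compute_speedup: float) -> float:
    """Total speedup if only the 'compute_fraction' portion is accelerated."""
    return 1 / ((1 - compute_fraction) + compute_fraction / compute_speedup)

# Suppose computation is 10% of a discovery program's critical path.
for s in (10, 100, 1e6):
    print(f"compute is 10% of the work, sped up {s:g}x -> "
          f"project runs {overall_speedup(0.10, s):.2f}x faster")
# Even an effectively infinite compute speedup tops out around 1.11x overall.
```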

So there are, in the end, a number of important choices to be made when you start talking about quantum computing for the life sciences. First off, you have to decide what sort of quantum computational technique you’re going to be using. Then you have to decide how to realize that in actual hardware. After that, you have to find algorithms that will take maximum advantage of your new machine. Then you have to very carefully pick the problems for which those algorithms will fit the best, and finally, you will want to narrow down to the problems that have been waiting for just the sorts of results you will now be able to deliver. What we’ll be left with after these selection steps will only become obvious with time. And effort. And lots and lots of money.