Effective Altruism’s Philosopher King Just Wants to Be Practical

Academic philosophers these days do not tend to be the subjects of overwhelming attention in the national media. The Oxford professor William MacAskill is a notable exception. In the month and a half since the publication of his provocative new book, What We Owe the Future, he has been profiled or excerpted or reviewed or interviewed in just about every major American publication.

MacAskill is a leader of the effective-altruism movement, whose adherents use evidence and reason to figure out how to do as much good in the world as possible. His book takes that fairly intuitive-sounding project in a somewhat less intuitive direction, arguing for an idea called “longtermism,” the view that members of future generations—we’re talking unimaginably distant descendants, not just your grandchildren or great-grandchildren—deserve the same moral consideration as people living in the present. The idea is predicated on brute arithmetic: Assuming humanity does not drive itself to premature extinction, future people will vastly outnumber present people, and so, the thinking goes, we ought to be spending a lot more time and energy looking out for their interests than we currently do. In practice, longtermists argue, this means prioritizing a set of existential threats that the average person doesn’t spend all that much time fretting about. At the top of the list: runaway artificial intelligence, bioengineered pandemics, nuclear holocaust.

Whatever you think of longtermism or EA, they are fast gaining currency—both literally and figuratively. A movement once confined to university seminar tables and niche online forums now has tens of billions of dollars behind it. This year, it fielded its first major political candidate in the U.S. Earlier this month, I spoke with MacAskill about the logic of longtermism and EA, and the future of the movement more broadly.

Our conversation has been edited for length and clarity.


Jacob Stern: Effective altruists have been focused on pandemics since long before COVID. Are there ways that EA efforts helped with the COVID pandemic? If not, why not?

William MacAskill: EAs, like many people in public health, were particularly early in terms of warning about the pandemic. There were some things that were helpful early, even if they didn’t change the outcome completely. 1Day Sooner is an EA-funded organization that got set up to advocate for human challenge trials. And if governments had been more flexible and responsive, that could have led to vaccines being rolled out months earlier, I think. It would have meant you could get evidence of efficacy and safety much faster.

[Read: How future generations will remember us]

There is an organization called microCOVID that quantifies your risk of getting COVID from various activities. You hang out with someone at a bar: What’s your chance of getting COVID? It provides actual estimates of that, which was great and, I think, widely used. Our World in Data—which is kind of EA-adjacent—was a leading source of data over the course of the pandemic. One thing I should say, though, is that it makes me wish we’d done way more on pandemics earlier. These are all pretty minor in the grand scheme of things. I think EA did very well at identifying this as a threat, as a major issue we should care about, but I don’t think I can necessarily point to enormous advances.

Stern: What are the lessons EA has taken from the pandemic?

MacAskill: One lesson is that even extremely ambitious public-health plans won’t necessarily suffice, at least for future pandemics, especially a deliberate pandemic caused by an engineered virus. Omicron infected roughly a quarter of Americans within 100 days. And there’s just not really a feasible path whereby you design, develop, and produce a vaccine and vaccinate everybody within 100 days. So what should we do for future pandemics?

Early detection becomes absolutely crucial. What you can do is monitor wastewater at many, many sites around the world and screen it for all potential pathogens. We’re particularly worried about engineered pathogens: If a COVID-19-scale pandemic emerges from natural origins once every hundred years or so, the chance of one goes up dramatically given advances in bioengineering. You can take viruses and enhance their destructive properties so they become more infectious or more lethal. That’s known as gain-of-function research. If this is happening all around the world, then you should just expect lab leaks quite regularly. There’s also the even more worrying phenomenon of bioweapons. It’s really a scary thing.

In terms of labs, possibly we want to slow down or not even allow certain sorts of gain-of-function research. Minimally, we could require labs to carry third-party liability insurance. If I buy a car, I have to buy such insurance; if I hit someone, I’m covered for the harm to their health, because that’s an externality of driving a car. Similarly, if a lab leaks a pathogen, it should have to pay the costs. There’s no way you can actually insure against billions dead, but you could at least have some very high cap, and that would disincentivize unnecessary and dangerous research without disincentivizing necessary research, because if the research really is that important, you should be willing to pay the cost.

Another thing I’m excited about is short-wavelength UV lighting, a form of lighting that can sterilize a room while remaining safe for humans. It needs more research to confirm safety and efficacy, and certainly to get the cost down; we want it at something like a dollar a bulb. Then you could require it as part of building codes. Potentially no one ever gets a cold again. You eradicate most respiratory infections as well as the next pandemic.

Stern: Shifting gears away from the pandemic: I was wondering whether there are major lobbying efforts under way to persuade billionaires to convert to EA, given that the potential payoff of persuading someone like Jeff Bezos to donate a significant part of his fortune is just massive.

MacAskill: I do a bunch of this. I’ve spoken at the Giving Pledge annual retreat, and I do a bunch of other speaking. It’s been pretty successful overall, insofar as other people are coming in—not at the scale of Sam Bankman-Fried or Dustin Moskovitz and Cari Tuna, but there’s definitely further interest, and it is something I’ll keep trying to do. Another organization is Longview Philanthropy, which has done a lot of advising for new philanthropists to get them more involved and interested in EA ideas.

I have never managed to speak with Jeff Bezos, but I would certainly take the opportunity. It seems to me that his giving so far has been relatively small-scale, and it’s not clear to me how EA-motivated it is. But it would certainly be worth having a conversation with him.

Stern: Another thing I was wondering about is the issue of abortion. On the surface at least, longtermism seems like it would commit you to—or at least point you in the direction of—an anti-abortion stance. But I know that you don’t see things that way. So I would love to hear how you think through that.

MacAskill: Yes, I’m pro-choice. I don’t think government should interfere in women’s reproductive rights. The key distinction is that when pro-life advocates say they are concerned about the unborn, they are saying that, at conception or shortly afterward, the fetus becomes a person, and so having an abortion is morally equivalent or very similar to killing a newborn infant. From my perspective, having an early-term abortion is much closer to choosing not to conceive. And I certainly don’t think the government should be going around forcing people to conceive, so it certainly shouldn’t be forcing people not to have abortions. There is a second thought: Well, don’t you say it’s good to have more people, at least if they have sufficiently good lives? And there I say yes, but the right way of achieving morally valuable goals is not, again, by restricting people’s rights.

Stern: I think there are at least three separate questions here. The first being this one that you just addressed: Is it right for a government to restrict abortion? The second being, on an individual level, if you’re a person thinking of having an abortion, is that choice ethical? And the third being, are you operating from the premise that unborn fetuses are a constituency in the same way that future people are a constituency?

MacAskill: Yes and no on the last question. In What We Owe the Future, I do argue for a view that I still find kind of intuitive: It can be good to bring a new person into existence if their life is sufficiently good. Instrumentally, I think it’s important for the world not to have the dip in population that standard projections suggest. But there’s nothing special about the unborn fetus in particular.

On the individual level, having kids and bringing them up well can be a good way to live, a good way of making the world better. But there are many ways of making the world better: You can also donate; you can also change your career. Obviously, I don’t want to belittle having an abortion, because it’s often a heart-wrenching decision, but from a moral perspective I think it’s much closer to failing to conceive that month than to the pro-life view, on which it’s more like killing a child who has already been born.

Stern: What you’re saying on some level makes total sense but is also something that I think your average pro-choice American would totally reject.

MacAskill: It’s tough, because I think it’s mainly a matter of rhetoric and association. The average pro-choice American is also probably concerned about climate change, and that involves concern for how our actions will affect generations of as-yet-unborn people. So the key difference is that the pro-life person wants to extend the franchise just a little bit, to the 10 million unborn fetuses that are around at the moment. I want to extend the franchise to all future people! It’s a very different move.

[Read: Is colonizing Mars the most important project in human history?]

Stern: How do you think about trying to balance the moral rigor or correctness of your philosophy with the goal of actually getting the most people to subscribe and produce the most good in the world? Once you start down the logical path of effective altruism, it’s hard to figure out where to stop, how to justify not going full Peter Singer and giving almost all your money away. So how do you get people to a place where they feel comfortable going halfway or a quarter of the way?

MacAskill: I think it’s tough because I don’t think there’s a privileged stopping point, philosophically, at least not until you’re at the point where you’re doing almost everything you can. With Giving What You Can, for example, we chose 10 percent as a target for the portion of their income people would give away. In a sense it’s a totally arbitrary number. Why not 9 percent or 11 percent? It does have the benefit of being a round number. And it also is the right level, I think: If you get people to give 1 percent, they’re probably giving that amount anyway, whereas 10 percent is achievable yet really does make a difference compared with what they otherwise would have been doing.

That, I think, is just going to be true more generally. We try to have a culture that is accepting and supportive of these kinds of intermediate levels of sacrifice or commitment. It is something that people within EA struggle with, including myself. It’s kind of funny: People will often beat themselves up for not doing enough good, even though no one else would ever beat them up for it. EA is really accepting of the fact that this stuff is hard, and that we’re all human and not superhuman moral saints.

Stern: Which I guess is what worries or scares people about it: the sense that, once I start thinking this way, how do I not end up beating myself up for not doing more? So I think where a lot of people end up, in light of that, is deciding that it’s easiest just not to think about any of it, so they don’t feel bad.

MacAskill: Yeah. And that’s a real shame. I don’t know. It bugs me a bit. It’s just a general issue with how people respond when confronted with a moral idea. It’s like, Hey, you should become vegetarian. People are like, Oh, I should care about animals? What if you had to kill an animal in order to live? Would you do that? What about eating sugar that’s whitened with bone char? You’re a hypocrite! Somehow people feel like unless you’re doing the most extreme version of your view, it’s not justified. Look, it’s better to be a vegetarian than not to be a vegetarian. Let’s accept that things are on a spectrum.

On the podcast I was just on, I found myself saying, “Look, these are all philosophical issues. They’re irrelevant to the practical questions.” It’s funny that I’m saying that more and more.

Stern: On what grounds, EA-wise, did you justify spending an hour on the phone with me?

MacAskill: I think the media is important! Getting the ideas out there is important. If more people hear about the ideas, some people are inspired, and they get off their seat and start doing stuff, that’s a huge impact. If I spend one hour talking to you, you write an article, and that leads to one person switching their career, well, that’s one hour turned into 80,000 hours—seems like a pretty good trade.