The unbearable weight of the future – a book review of What We Owe the Future
What if, several years before the emergence of the COVID-19 pandemic, someone warned you about the risks of a global pandemic? What if, prior to the release of the remarkable ChatGPT AI, that same person warned you that AI was developing far faster than people thought and might pose a risk to the human race? You might have laughed at them at the time, then wondered at their prescience a few years later.
Well, it turns out such prophets do exist. The long-termism movement has been worrying about existential threats to humanity's survival for years, and it was funding pandemic research and AI research before COVID-19 and ChatGPT emerged. I set out to learn more about this prescient movement by reading the recently released What We Owe the Future by William MacAskill.
MacAskill, an associate professor of philosophy at Oxford, argues that we ought to value the lives and experiences of those living in the future as much as we value the lives of those living today. He further argues that because humanity could survive for millions of years or longer, the number of future humans is very large (around 10^54 people). In any cost-benefit analysis, that means even very small risks to their existence or their happiness count for a great deal.
That means we should spend more resources on preventing existential risks to humanity, like AI taking over the world or global pandemics. It also means preventing things that would cause suffering for long periods, such as an extended period of technological stagnation or the locking in of values that prevent moral or technological growth. On their face, these seem like reasonable arguments, yet I would argue they contain deep flaws with profound consequences.
Making the improbable probable
Part of the genius of MacAskill's approach is that, because there are so many future lives at stake, they can outweigh any costs or downsides that might occur in the present without the need for any fancy maths.
It reminds me of two children bickering in a playground. One says he’s won because he crossed his fingers twice. The other kid says he crossed his fingers three times. They go back and forth till one kid yells, “nuh uh, I win cause I crossed my fingers infinity times.” After all, everybody knows you can’t outnumber infinity.
Yet, like that kid who didn't actually cross his fingers an infinite number of times, MacAskill is weighing up people who may never exist, or whose lives may be filled with intense suffering.
MacAskill sets out a broadly utilitarian decision-making framework in which you take the action that provides the most total benefit to everyone, present and future. He is explicit about one assumption – that we should value the suffering or enjoyment of a person equally regardless of whether they live in the present or the distant future.
We should, in other words, act as though a future person's pain were our own. Yet how should one person think about their own future pain or gain? Should a student go to university (costing $50,000) now in exchange for an income $10,000 higher for each future year of work? A student following MacAskill's approach might simply add up the extra income earned over a working life ($10k x 40 years) and decide it is far more than the cost of going to university ($50k in direct costs plus three years' worth of study).
Economics, however, says it is not so simple. Despite adopting a similar utilitarian value framework, it says we should value future gains less than present gains or losses. One reason is that future gains are inherently uncertain. The student may not live long enough to work until retirement age. The career for which the student trains may become obsolete. Another reason is opportunity cost. The student could instead invest that $50,000 and earn on average 9% in the stock market.
According to economics, we should therefore discount future income flows to adjust for opportunity costs and uncertainty. Even if we assume the cash flows are certain, we should still discount them by 9% a year for the opportunity cost. The $10,000 the student earns in year four is equivalent to $10,000/(1.09)^4, or $7,084, in present value terms. In other words, I could invest $7,084 today and expect on average to have $10,000 in four years' time, so I would be indifferent between having $7,084 now or $10,000 in four years' time. If I add up the present value of the extra income earned from year four through year thirty, I get $77,423. The first analysis significantly exaggerated the benefits of studying because it ignored the time value of that money. The student should still go to university, since even the discounted benefits outweigh the costs.
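To make the arithmetic concrete, here is a minimal Python sketch of both calculations. The figures – the $50,000 cost, the $10,000 income boost, the 9% rate, and earnings running from year four through year thirty – are the assumptions of my example above, not anything from the book:

```python
# Naive vs discounted value of the degree, using the assumptions above:
# $50,000 upfront, an extra $10,000 per year, a 9% discount rate, and
# earnings from year 4 (after three years of study) through year 30.

RATE = 0.09
EXTRA_INCOME = 10_000
COST = 50_000

# Naive sum: multiply the raw income boost by 40 working years.
naive_benefit = EXTRA_INCOME * 40  # $400,000

# Discounted sum: each year's income is divided by (1 + r)^t.
discounted_benefit = sum(EXTRA_INCOME / (1 + RATE) ** t for t in range(4, 31))

print(f"Naive benefit:      ${naive_benefit:,.0f}")       # $400,000
print(f"Discounted benefit: ${discounted_benefit:,.0f}")  # ~$77,423
print(f"Net of the cost:    ${discounted_benefit - COST:,.0f}")
```

The naive figure is more than five times the discounted one, which is why skipping the discounting step so badly distorts the decision.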
By the same token, let us assume the government could spend $1 million now on AI research that has a 1% chance of preventing AI from extinguishing the human race in the year 2050. Forecasters think there will be 10 billion people alive then. Alternatively, the government could spend that $1 million now on life-saving drugs to save a million people living today. Should the government invest in the AI research or the life-saving drugs?
I know with certainty that the million people saved by the drugs exist, but I don’t know for sure what the population of Earth will be in 2050. The UN forecasts that it will be 10 billion with a high degree of uncertainty. It might be lower than 10 billion because birth rates keep falling. It might be zero if aliens invade and wipe us out before 2050. It might be way higher than 10 billion if we discover new technologies that let us increase our population.
Even if there are ten billion humans living in 2050, their lives may not be positive. Pollution may be so bad that they suffer more than they enjoy their lives. In that case, each life of torture would actually count as a cost in MacAskill's cost-benefit analysis.
We cannot just imagine that 10 billion happy people will exist in 2050 and argue that their happiness outweighs any cost in the present. We need to discount those future lives for this uncertainty. If I apply a discount rate of only 1% over the roughly 28 years to 2050, those 10 billion uncertain lives are worth 7.6 billion certain lives. Since there's only a 1% chance the AI research prevents our extinction, I expect on average to save 76 million lives. That's still far more than the 1 million lives saved with the drugs, so we should pay for the AI research.
The equation changes if we assume the extinction event doesn't happen until much later. If the world of ten billion people is saved in a thousand years' time, their uncertain existences are only worth 477,118 certain lives, and with only a 1% chance of saving them we expect to save just 4,771. We also have to account for the lives of their children – say another 10 billion people fifty years later. Because fifty more years have passed, their lives are less certain still and we expect to save only 2,901 of them. Repeating the calculation, we expect to save 1,764 of the first generation's grandchildren. The maths gets complicated, but you can see that the diminishing returns mean we would expect to save more lives by buying the drugs. The change in timing has made the AI research non-viable.
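Here is the same kind of sketch for the lives-saved calculation. Again, the inputs – 10 billion people, a 1% chance of success, a 1% annual discount rate for uncertainty, fifty-year generations – are my illustrative assumptions:

```python
# Expected lives saved by the hypothetical AI research: discount future
# lives for uncertainty at 1% a year, then weight by the 1% chance
# that the research actually works.

POPULATION = 10_000_000_000
P_SUCCESS = 0.01
RATE = 0.01

def expected_lives_saved(years_away: int) -> float:
    return P_SUCCESS * POPULATION / (1 + RATE) ** years_away

# Extinction averted in 2050, roughly 28 years away: ~76 million lives.
print(f"2050: {expected_lives_saved(28):,.0f}")

# Extinction averted in 1,000 years: ~4,771 lives, then ~2,901 for the
# children fifty years later, ~1,764 for the grandchildren, and so on.
for generation in range(3):
    print(f"Gen {generation}: {expected_lives_saved(1_000 + 50 * generation):,.0f}")
```

Summing the whole geometric series of generations gives only about 12,000 expected lives – far short of the million people the drugs would save today.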
MacAskill, quite sensibly, never predicts when these extinction events might happen. Because he ignores the uncertainty about whether those future lives will exist, the timing doesn't affect his calculations. Yet my simple example shows how much it matters.
Derek Parfit, an Oxford philosopher who heavily influenced MacAskill, and Tyler Cowen, an economist loosely affiliated with the rationalist movement, argued against applying social discount rates purely to account for the passage of time. Yet in the same paper they acknowledged that we should still discount for probability where, for example, a prediction is less likely to come true the further into the future it reaches.
As foreshadowed, there are several sources of uncertainty that MacAskill does not account for:
- Existential risk: Long-termists are fond of estimating the risk of extinction in any given year; we might be wiped out by one of those other threats before our action has a chance to save us from AI. Extinction risk may also vary over time. It may decrease as we become a multi-planetary species, but it may also increase as we develop new ways to kill ourselves. As such, quantifying the aggregate extinction risk is a purely speculative exercise.
- Population variance: Population growth forecasts have not been particularly accurate historically. Malthusian predictions made in the 1980s turned out to be wildly pessimistic because we developed new agricultural technologies that allowed the planet to sustain more people than expected. Why should we expect our forecasts for a thousand years' time to be more accurate than our forecasts a few decades ahead?
- Welfare variance: MacAskill is counting the net enjoyment and suffering of our descendants. Yet we have no way to objectively quantify this welfare, or to measure how it has varied over time historically. Again, attempts to forecast future welfare are purely speculative.
Parfit and Cowen also dismiss the argument for applying discount rates to account for opportunity costs. They argue these costs should be considered directly. I agree. Yet MacAskill makes no attempt to consider the opportunity costs of acting now.
One could compare the benefits of long-termist actions against alternative uses of the same resources in the present, as I did earlier with the life-saving drugs. There are also opportunity costs that arise over time. What if, instead of researching AI risk mitigation now, I invested the $1 million and conducted the research in a decade's time, when we have a clearer picture of the technology underpinning AI? Then not only would we have more money to spend, but our research would also be more effective. Of course, you run the risk that AI wipes us out or becomes socially entrenched before the research is completed, but you could account for that risk in your deliberation.
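A quick sketch of that delay trade-off, reusing the 9% market return from the university example (again my assumption, not the book's):

```python
# Invest the $1 million for a decade at the 9% return assumed earlier,
# then fund the research with the larger pot.
BUDGET = 1_000_000
MARKET_RETURN = 0.09
YEARS_DELAYED = 10

future_budget = BUDGET * (1 + MARKET_RETURN) ** YEARS_DELAYED
print(f"Budget after {YEARS_DELAYED} years: ${future_budget:,.0f}")  # ~$2.37m
```

More than double the money, and research into a better-understood technology to boot – offset only by whatever extinction or lock-in risk accrues during the wait.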
Hand waving is not philosophy
The benefit of ignoring the uncertainty about future lives is that it makes the maths much simpler. The happiness of trillions upon trillions of people outweighs any cost that might be incurred in the present to achieve it. Even a 1% chance of saving 10^54 future lives is, in expectation, worth 10^52 lives – outweighing the billions alive today by more than forty orders of magnitude.
Using such simplistic logic, you could justify any number of atrocities for the greater good. Some have already done so. The prominent long-termist Nick Bostrom has argued for instituting a global, invasive surveillance state to prevent extinction-level terrorist attacks. Others have argued we should save the rich instead of the poor, as the rich are more likely to influence the future. Another branch of long-termists, the pro-natalists, wish to steer the future by having more offspring.
Many long-termists may disagree with these particular conclusions, but they demonstrate how fluent practitioners have used – and can continue to use – the framework to justify terrible things in the name of people who may never be born.
By incorporating the inherent uncertainty of the future into our cost-benefit analyses, we can limit the scope for such arguments. Because the benefits of saving future lives would no longer be astronomically high, proponents would actually need to quantify the probabilities, costs and benefits.
This is ultimately to the benefit of the long-termism movement. You don't want factions to emerge arguing for otherwise indefensible positions; that could cause immense reputational damage to a promising movement. You can't influence the future if people think you're a bunch of nutcases.
Locking in values to prevent value lock-in
A recurring thought I had whilst reading this book was about what values you use to measure benefits gained over a very long period. Values do, after all, change over time. I was surprised an Oxford philosopher did not address this obvious question.
The book asks us to imagine we leave a shard of glass on a beach. According to modern ethics, it should not matter to us whether a child is hurt by it in a year's time or in five hundred years' time. But what if we asked a person from five hundred years ago whether they cared that a gay person might be cut by that glass? They might not have cared, given that the punishment for homosexuality back then was death.
Will those alive in the distant future value the quality of their lives the same way we would? I think it's doubtful, and there are realistic scenarios where this divergence could skew our calculations.
What if new sentient life emerges? MacAskill assumes that an AI takeover would be a bad thing, as viewed from his supposedly neutral moral vantage. Yet that AI would be a sentient being (or possibly beings, if an army of robots replaces humanity) which could adopt values that are morally superior to our own. It might craft a brighter future because its programming is not bound by our behavioural biases. Or it might create a dystopian hellhole. We just don't know, and it's hard to quantify the likelihood of either scenario.
What if our future selves don't value a high population because they care more about the environment? MacAskill does discuss the suffering of farmed and wild animals. He accepts that the lives of caged animals are of negative utility because they are filled with suffering, yet he argues that because animals have fewer neurons than humans, they suffer proportionately less. Adjusted for brain size in this way, he argues, humans outweigh farmed animals in neurons by a factor of thirty to one. He reaches no conclusion about "whether the welfare of humans and farmed animals combined is negative".
We could use MacAskill's own trick against him, weighing the suffering of all those future chickens to argue against human overpopulation. A single chicken produces 1.4 pounds of meat. Americans eat 274 pounds of meat a year, equivalent to about 195 chickens. (Yes, I know Americans eat a variety of meats, but I'm keeping it simple.) According to MacAskill, a chicken has 200 million neurons compared to a human's 80 billion, so 400 chickens are worth one human. An American eats that many chickens in roughly two years – which means that over an eighty-year life, one American consumes some forty "human-equivalents" of chicken suffering, set against the enjoyment of a single human life. In other words, an overpopulated world of humans relying on factory-farmed chickens for food generates more suffering than enjoyment.
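For the sceptical, here is that chicken arithmetic spelled out, using only the figures quoted above:

```python
# Neuron-weighted chicken arithmetic, using the figures quoted above.
CHICKEN_NEURONS = 200_000_000
HUMAN_NEURONS = 80_000_000_000
MEAT_PER_CHICKEN_LBS = 1.4
US_MEAT_PER_PERSON_LBS = 274

chickens_per_human = HUMAN_NEURONS / CHICKEN_NEURONS                   # 400
chickens_eaten_yearly = US_MEAT_PER_PERSON_LBS / MEAT_PER_CHICKEN_LBS  # ~196

years_per_human_equivalent = chickens_per_human / chickens_eaten_yearly
print(f"{years_per_human_equivalent:.1f} years")  # ~2.0 years
print(f"{80 / years_per_human_equivalent:.0f} human-equivalents per lifetime")  # ~39
```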
Based on that simplistic calculation, any action that stops the extinction of the human race actually increases the net suffering in the universe unless we can get rid of factory farming forever.
Conflicting aims
MacAskill says we should aim to prevent existential risks and also to prevent value lock-in. These goals may, on occasion, conflict.
To prevent value lock-in, MacAskill says that maintaining a world of multiple cultures is positive, and argues we should have more charter cities. Yet a world of multiple societies with different values is one with far more co-ordination problems. Countries with sharply different value systems may be more likely to come into conflict. They may not all agree to reduce existential risk – authoritarian regimes, for example, may create engineered pathogens, increasing the risk of another pandemic.
Long-termism is still grappling with problems such as these. Its leaders should set out a framework guiding us through situations where our goals conflict.
Long-termism unearths underappreciated risks, but has hidden risks of its own
The future is inherently uncertain. The fog of the future has led humanity to be short-sighted in its aims. I'm glad someone is thinking about the distant future and how to safeguard it. The long-termists should be praised for their prescience with regard to AI risk and pandemics.
Long-termism is a nascent philosophy, and its methods are crude. It is telling that MacAskill never presents a complete cost-benefit analysis showing that any one action has a net positive impact on the world.
Their utilitarian calculations are a form of modelling. Like all models, the results depend heavily on the assumptions and methodology. A methodology is not good or evil, but a methodology that lacks rigour can be abused. It reminds me of investment bankers and their financial models, which can justify whatever conclusion you pay for.
Long-termism currently sits in the hands of philosophers. What would happen if we placed it in the hands of politicians without developing safeguards? Nationalists could use it to start wars, justifying the deaths of millions by invoking the benefits to countless trillions of future lives. If their calculations lack rigour, they can point out that the movement's own leader never bothered with a full cost-benefit calculation himself.
The long-termists must develop ways to make their calculations more rigorous, so that their methods cannot so easily be abused to justify immoral behaviour. One necessary step is to acknowledge the uncertainty around the value of the future lives they seek to save. By quantifying that uncertainty, they can stop acting like kids shouting that infinity is on their side.