Disclaimers! First: I (somewhat indirectly) work for Tyler Cowen.
Second, this is a reply, not a review. My review is simple: Stubborn Attachments is a fascinating, thought-provoking work of political philosophy. Given its depth and originality, it’s also remarkably accessible. I strongly recommend it.
Third: there is plenty of interesting material in the book that I will not address at all. This includes narrow arguments about redistribution and environmental policy, as well as more abstract arguments about ethical disagreement and decision-making. I will not touch on these because I either simply agree, or if I have reservations, they’re not all that interesting.
On to the fun stuff.
Introduction: Cowen’s argument in a nutshell, and map of my response
Essential to Cowen’s position is the claim that the discount rate for the value of the wellbeing of future people should be zero. In other words, the fact that someone doesn’t exist yet does not at all diminish the ethical value of their wellbeing. John, who is alive today, living a life of, say, 100 net utils, is worth exactly the same as Linda, who will live two hundred years from now, living a life of 100 net utils.
Presumably, there will be many, many more people alive in the future than are alive today. So, when we think about hard things like public policy and social organization, we shouldn’t aim to maximize the wellbeing of people alive today. Instead, we should maximize the wellbeing of all people—present and future. Given that there will be so many more future people, in practice this means our focus should be on maximizing the wellbeing of future people.
Luckily, says Cowen, we know how to maximize the wellbeing of future people: maximize economic growth. The reason for this is the miracle of compound returns. The difference between a 2% annual rate of economic growth and a 3% rate amounts to a massive difference in overall wealth (ie, wellbeing) over a long enough period of time. The impact of compound returns is so great that it dwarfs almost any other ethical considerations we might have. We should therefore be stubbornly attached to the ethical value of economic growth, above almost all else. ‘Almost’ because, Cowen argues, there is one category of relevant constraint: human rights. We shouldn’t violate human rights, even in favor of economic growth (I’ll address Cowen’s reasoning for this later on).
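A quick back-of-the-envelope calculation makes the compounding point concrete. The 200-year horizon and the starting wealth of 1 are my own illustrative choices, not Cowen's:

```python
# Toy illustration of compound returns: a one-percentage-point difference
# in annual growth, sustained over two centuries, yields a ~7x difference
# in total wealth.
def wealth_after(years, rate, start=1.0):
    """Wealth after compounding an annual growth rate for a number of years."""
    return start * (1 + rate) ** years

w2 = wealth_after(200, 0.02)   # ~52x starting wealth
w3 = wealth_after(200, 0.03)   # ~369x starting wealth
print(round(w3 / w2, 1))       # the 3% path ends up ~7x richer
```

This is the whole force of the compounding argument: a difference that looks trivial over one year dominates everything else over a long enough horizon.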
In sum, then, Cowen’s position: maximize economic growth, unless it means violating human rights. Ignore all else.
This argument is prima facie compelling. There are, however, three broad reasons to call it into question. In increasing order of both how interesting they are and how problematic they are for Cowen’s case, the reasons: (1) it is not clear that wealth = wellbeing; (2) the argument for a human rights constraint fails; and (3) uncertainty about the future is more problematic to Cowen’s view than he lets on. The third reason is by far the most complex, interesting, and potentially damaging to Cowen’s view, and will take up more than half of this reply.
I am not sure that these objections are quite sufficient to justify rejection. However, in the spirit of Cowen’s call for a reader who “will feel I have not pushed hard enough on the tough questions, no matter how hard I push” (25), I will myself push these objections as hard as I can.
Objection 1. Wealth is good?
There are many reasons to call into question that mo’ money = mo’ utils. To Cowen’s credit, he handles these well. He replaces GDP with ‘Wealth Plus’ as the relevant measure of wealth. Wealth Plus = GDP + “measures of leisure time, household production, and environmental amenities”. He goes on to argue that mo’ Wealth Plus does, indeed, correlate positively with wellbeing. In one compelling passage, he considers the possibility of a poor society happily “living in harmony with nature”, before bursting that bubble: “poor societies from the past have collapsed repeatedly through military weakness, ecological catastrophe, famine, tyranny, and natural disasters, among other factors” (32). I’m fully with Cowen here: being one with the trees sounds nice until you consider what it actually involves in practice.
I’m also on board with Cowen’s ‘pluralism’ about wellbeing. Wellbeing isn’t just happiness (whatever that is); it also includes “justice, fairness, beauty, the artistic peaks of human achievement, the quality of mercy” (17), etc. I have no qualms with the claim that the flourishing of these virtues, on net, correlates positively with Wealth Plus. The problem, if there is one, is that Cowen focuses almost exclusively on, let’s say, the gentler virtues. There are virtues that he tellingly neglects in his list: courage, strength, passion, aliveness, conviction, perseverance, honor, integrity.
The general point is that there’s an alternative view of ethics that Cowen doesn’t consider but that isn’t altogether implausible. Maybe the most well-known version of it is Nietzsche’s ‘blonde beast’ ethic (minus the racist component—brunette beasts welcome!). The extreme version of this position is hard to take seriously, but surely these virtues do matter to some extent. And it isn’t so clear that these Nietzschean virtues correlate positively with Wealth Plus. To put it bluntly: in the long run, wealth makes us fat overthinkers. When I compare a short, difficult life of striving for survival with a long, easy life of staring at a phone, I’m not so sure which is better. Sure, push comes to shove, I’ll probably take the latter. But allowing for intermediates, I doubt the best option is all the way to the wealthy side of the spectrum. ‘What doesn’t kill you makes you stronger’, like most clichés, has some truth to it. Hardship minimization is self-defeating, and wealth maximization more or less amounts to hardship minimization.
Having laid out this objection, I have to say: I don’t think it does much harm to Cowen’s position. Yes, these virtues do matter, but wealth doesn’t eradicate them altogether. It’s hard to quantify these matters, but on net, even when taking this objection into account, wealth maximization is probably still pretty close to the way to go. Even so, it is a weakness of Cowen’s book that he doesn’t consider these virtues and incorporate them into his analysis. This is especially so considering that by Cowen’s own account in his previous book, The Complacent Class, success is self-defeating for roughly the same reason that wealth diminishes the Nietzschean virtues: it makes us soft.
Objection 2. Against the rights constraint
My objection to Cowen’s rights constraint is two-fold: it is both redundant and too incomplete to constitute a real position. I’ll first quote Cowen’s exposition of the constraint before taking on each part of the objection.
Cowen’s proposed rights constraint is simple enough: “Inviolable human rights, where applicable, should constrain the quest for higher economic growth”. Later: “think of such rights as binding and absolute. That means: just don’t violate human rights” (emphasis in original). And later still: “these negative rights, restrictive though they may be, represent a stripped-down set of bare-bones constraints, a series of injunctions about the impermissibility of various forms of murder, torture, and abuse”. However, just as he’s establishing how absolute these rights are, Cowen adds that “we should violate rights to prevent extremely negative outcomes which involve the extinction of value altogether, such as the end of the world, as is sometimes postulated in philosophical thought experiments” (56–57).
Objection 2a. Deontology vs. utilitarianism
Cowen’s position, while intuitive, needs justification. Yes, of course the common sense view is that we should respect human rights; and yes, of course common sense also tells us that exceptions must be made for philosophers’ sadistic fantasies. The problem is that standard deontological arguments for rights are difficult to square with exceptions. By deontological reasoning, killing a baby isn’t something we should refrain from because it leads to bad consequences, regardless of how bad; it’s something we shouldn’t do because it’s wrong. The consequences—even the end of the world—are irrelevant. Conversely, utilitarian reasoning tells us to maximize utility. This makes rights redundant. If respecting them maximizes utility, respect them; if it doesn’t, don’t. A genuine compromise needs to address this apparent incompatibility, not just describe the outcome we already knew we wanted.
Cowen does attempt a justification: uncertainty. I’ll paraphrase Cowen’s own examples. Suppose we’re presented with the option of killing a baby in exchange for raising GDP by $5 billion. Economists estimate the value of a life at about $5 million (I’ll put my disbelief that economists have found the value of a life aside). Utilitarianism at first seems to say: easy choice, kill the baby. But really, how can we be so sure that we have our estimates right? Maybe killing the baby won’t work. Maybe this baby would grow up to generate way more than $5 billion for the economy. Taking uncertainties into account, it makes sense not to kill the baby. By contrast, consider another example: kill a baby or aliens destroy the Earth. No uncertainty here, so the choice goes the other way. Kill the baby. (113–115)
The problem with this utilitarian/rights compromise is that it’s not a compromise at all. Cowen’s approach is 100% utilitarian—he’s just not willing to admit it. The uncertainty justification is an epistemic one and so is irrelevant to the ethical question as such. Keeping uncertainty in mind is important for how we implement utilitarianism, but it gives us no exceptions to it.
Cowen’s argument is, in fact, a familiar justification for rule utilitarianism: we don’t usually have sufficient knowledge to know how something will turn out, nor the resources to gather the evidence necessary to sufficiently raise our level of justified credence, so we should act in accordance with rules that usually lead to good outcomes. If, in special circumstances, we have more knowledge than usual and can be confident about that, it’s okay to violate the rules, since their justification is precisely that we usually don’t have that knowledge. None of this involves an exception or even constraint on utilitarianism. It’s rather a guideline for carrying it out in practice. The underlying ethics is still a simple mo’ utils = mo’ good, no exceptions necessary.
The relevant thought experiment, which Cowen doesn’t consider: suppose killing a baby nets humanity +1 util, all consequences considered. Suppose, also, that we have 100% justified epistemic confidence in this. By putting uncertainty aside, this thought experiment gets to the heart of the ethical question. Of course, I’m not sure how Cowen would choose. Given his framework, however, I posit that his answer should be to kill the baby. This is because Cowen’s ethical reasoning, besides the irrelevant argument from uncertainty, is utilitarian.
I do agree with Cowen’s call to respect rights in practice. I merely object to the framing of his ethics as one which has room for human rights—it simply doesn’t. That is, of course, assuming he opts to kill the baby in my version of the thought experiment. If he chooses not to, then he’s got more theorizing to do, because his justification from uncertainty can’t do the work he wants it to.
Assume Cowen is a good utilitarian and goes for killing the baby. Then, it might seem, my objection is really a semantic one, or one of emphasis at best. This is true, but should not distract from how misleading Cowen’s framing is. If the justification from uncertainty is all Cowen has to defend human rights, then any rule that usually leads to comparably good outcomes has the same standing as the ‘don’t violate human rights’ rule. There’s no a priori reason not to value ‘teach calculus to 16-year-olds’ just as highly as ‘don’t violate human rights’—it’s just that we have higher justification for believing that the latter yields consistently good outcomes in cases of uncertainty. In other words, if the argument from uncertainty is really Cowen’s only defense of the rights constraint, then human rights quite literally have no special place at all in his ethical system.
Objection 2b. The problem with Awesomism
Put objection 2a aside for the moment. Besides what I quoted above, Cowen doesn’t say what the human rights are that should constrain economic growth. His motivation, I take it, is to leave the matter open for readers to consider and debate. I appreciate the gesture. However, until he specifies the rights, I’m not sure that Cowen has, in fact, proposed anything at all.
There is a problem that I haven’t seen directly addressed by political philosophers, but which I suspect is a difficult and important one (caveat: I’m far from an expert on political philosophy. It’s entirely possible that it has been addressed and I just haven’t seen it. If so, oops on this whole section). Suppose I am asked for my political philosophy and I say: “I’m an Awesomist. I believe in a political system in which everyone has what they need to live awesome lives”. Intuitively, we don’t want to disagree with this position; rather, we want to say that it isn’t a serious position at all (if it were, we should all be Awesomists!). But if we’re going to say Awesomism isn’t a real position, we need a reason. What requirements for being a serious position does it fail to meet?
I wish I had a helpful answer to my own question. I don’t. But I suspect that, if we tug at it long enough, we’ll find that many political views that we usually think of as perfectly legitimate (even if wrong) positions are, in fact, not real positions at all, in just the way that Awesomism isn’t.
I’m not sure which positions come out as real positions when faced against the question I’ve raised. But I’m fairly confident that ‘don’t violate human rights’ isn’t one of them, especially not when it’s supposed to be compatible with an imperative to maximize economic growth. There’s just no reason to think that it’s possible to both respect human rights and (non-trivially) maximize economic growth unless we know what these rights are supposed to be. What’s more, there’s no reason to assume it’s possible even to just ‘respect human rights’, unless we have some indication that these rights aren’t mutually incompatible.
Is the right of free movement a human right? I assume Cowen thinks not, since he is against open borders. But then what’s the underlying theory that makes some things rights but not freedom of movement? Until we hear more, we have no reason to assume there is (or isn’t) a convincing underlying theory that can deliver this result. Is freedom of contract a human right? Freedom from property violation? Cowen isn’t an anarchist, so I assume he doesn’t consider these to be rights in any absolute sense. Is there a non-ad-hoc rights theory that delivers just the rights Cowen thinks are ‘human rights’ but not others? Maybe, maybe not. But until he gives us a reason for thinking there might be, we have no reason to take his call for respecting human rights any more seriously than Awesomism.
Unlike Objection 1, which I don’t think does much harm to Cowen’s position, I take the two objections to the rights constraint to be decisive (2a more so than 2b; the latter is serious, but not insurmountable). However, Cowen’s real stubborn attachment is the one to economic growth. Dropping the rights constraint doesn’t really change his overall position. So, unless there are stronger objections to the economic growth imperative than Objection 1, his important argument stands. To my mind, the strongest objections to the argument for growth are epistemic. I turn to those now.
Objection 3. Epistemic objections from the future
Cowen spends a full chapter on epistemic objections. What he tackles in it, he does compellingly. The problem is with what he leaves out. Cowen ends the introduction to his epistemic chapter with a promise to “work through some examples of the radical uncertainty about the future” (105). He then proceeds to consider all manner of uncertainty scenarios, except those that apply to uncertainty about the future in particular. Cowen’s arguments are all about why uncertainty in general shouldn’t paralyze us. Despite the introductory promise, he fails to consider reasons why, even if he’s right about general uncertainty, uncertainty about the future has special features that are uniquely problematic to his view.
Before I move on, I need another disclaimer: most of what follows involves considerations about expected value, rational decision, and credence. I have not actually studied these fields! The issues I raise are simple enough that, despite my ignorance, I believe they stand. However, if there are discoveries in these fields that invalidate any of my points, I would love to hear them! Disclaimer out, back to the argument.
The force of the epistemic objections from the future rests on what the alternative is to Cowen’s proposal. What makes his proposal interesting and valuable is that, by focusing on compound returns, he gives us a definite prescription for what we should do to maximize wellbeing. If we were convinced that the future did not matter (or mattered less than Cowen suggests), compound returns would fall out of the equation (or be significantly diminished in impact), and his argument for maximizing growth would lose its force. If the (long-term) future did not matter, there might still be some reasons for maximizing (short-term) growth, but they would no longer be decisive, and would have to be weighed against reasons to maximize wellbeing in other ways (eg, redistribution). This isn’t just a yes-or-no matter, of course. All else equal, the force of Cowen’s prescription is weakened proportionally with the extent to which we have reasons to devalue the future relative to the present.
We can now put the issue as follows: there are two competing proposals. One is Cowen’s prescription: maximize economic growth because that yields the highest expected value over the long-term. The rival is the presentist prescription: maximize present wellbeing (however that can be optimally done, which presumably involves other goals besides maximizing growth). Reasons to diminish the value of the future relative to the present weigh favorably toward the presentist prescription.
A super simple framework for calculating expected value: suppose we have a proposed course of action X, intended to yield average value Y for all people affected. The expected value of X is Y times Z, where Z = the number of people affected. This is a fine start, but it is, of course, too simple. For one, we should incorporate a measure of efficacy credence: our level of justified belief in the proposition that X will lead to the desired outcome. For example, if our efficacy credence is 0.5, the expected value of X is halved. We can model this as a coefficient on Y, always greater than 0 but less than 1.
That’s not all. To weigh Cowen’s prescription against its rival, we need to complicate Z as well. Cowen’s argument for prioritizing the future assumes that there will be many more future people than present people, ie, that Z will be much greater when we maximize for the future than when we maximize for the present. Any reasons we have to doubt that Z will be much greater in the future should also be incorporated by diminishing the weight of the greater Z involved in the expected value calculation for Cowen’s prescription. A coefficient on Cowen’s Z could model the impact of these reasons for doubt.
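The bookkeeping described above can be sketched in a few lines. Every number below is a made-up placeholder, there purely to show how the coefficients interact, not an estimate of anything:

```python
# A minimal sketch of the expected-value framework: Y times Z, with
# credence-based coefficients on each. All inputs are hypothetical.
def expected_value(avg_value, people, efficacy=1.0, population_discount=1.0):
    """Expected value of a course of action X.

    avg_value: Y, the intended average value per person affected
    people: Z, the number of people affected
    efficacy: credence (0-1) that X yields the desired outcome
    population_discount: coefficient on Z reflecting doubts that
        the affected population will really be that large
    """
    return avg_value * efficacy * people * population_discount

# Cowen's prescription: enormous future Z, but discounted efficacy and Z.
cowen = expected_value(avg_value=100, people=10**12,
                       efficacy=0.3, population_discount=0.1)
# Presentist prescription: modest Z, higher efficacy, no Z discount.
presentist = expected_value(avg_value=100, people=10**10, efficacy=0.6)
print(cowen > presentist)  # with these invented inputs: True
```

The point of the sketch is only taxonomic: objections 3a–3c below each push one of these coefficients down, and the comparison between the two prescriptions turns on how far.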
Even with these complications, the framework remains much too simple. For example, we might think it 50% likely that X will lead to the desired outcome, but 80% likely that it will lead to an outcome at least 70% as valuable as the desired outcome. I trust decision theorists and formal epistemologists have models for this kind of stuff, but fear not—I have neither the competence nor the inclination for getting mathy here. Even if I did, the questions involved just have so much uncertainty about them that approximating credence values seems unavoidably arbitrary. I’ve outlined this super simple expected value model to give a taxonomy for what sorts of concerns should impact the expected value of the competing prescriptions and how, not to do any actual calculations.
Instead of calculations, I will consider some reasons to (a) doubt that Z will be much greater in the far future than in the present; (b) diminish the efficacy credence of Cowen’s prescription relative to that of the presentist prescription; and (c) diminish the efficacy credence of the presentist prescription relative to that of Cowen’s prescription. I’ll then consider a final complication (d): an epistemic spin on a metaphysical objection to the value of future people. I’ll (e) conclude this section with an (unsuccessful) attempt at a rough, informal weighing.
With one exception, the objections that Cowen addresses in his epistemic chapter apply as much to his prescription as to the presentist one. Because of this, though they do help us overcome outright paralysis, they do not help us decide between the two competing prescriptions. The one exception is the subject of (c). For reasons that I hope are by now clear, it is (a) – (c) that are most obviously relevant to the choice between Cowen’s prescription and the presentist one.
Objection 3a. Existential risk
It may turn out that humanity will not be around all that much longer. First, there are some general reasons to think this. About 99.9% of species that have ever existed have gone extinct. It’s not a stretch to infer that our time is likely coming at some point. There may be any number of natural events that wipe us out, such as an asteroid strike, epidemic, etc. Though we don’t have strong reasons for thinking that this will happen very soon, the fact that we have some reason to think it will happen is enough to diminish Z’s coefficient in the expected value calculation of Cowen’s prescription.
More likely, we will self-destruct long before a natural event has the chance to wipe us out. Nick Bostrom likens humanity’s situation to an urn filled with balls representing technological developments. Most of the balls in the urn are white, representing safe technological developments. A few are gray, representing potentially detrimental technologies. And a very few are black, representing technological developments so dangerous that we are almost guaranteed to go extinct shortly after discovering them. Presumably, we haven’t pulled out a black ball yet. But as we pull out balls faster and faster, it is only a matter of time before we inadvertently grab for a black one.
It isn’t even clear that we haven’t already pulled out a black one (or, in any case, a very dark gray one). It has only been a few decades since we discovered nuclear weapons. It may well be that a species like ours, on a planet like ours, is on a nearly inevitable course for self-destruction upon discovering nuclear warfare, and the only real uncertainty is how long it takes. October 1962, North Korea, and Donald Trump’s current proximity to the button are all visceral reminders that this very well could be the case.
Environmental catastrophe caused by global warming is another example. There is a funny frustration in the dialogue about global warming: if only people would just listen, we could solve this! But just because we know how to alleviate global warming if we could get people to act the right way, doesn’t mean we’re anywhere close to figuring out how to get people to act the right way. It may just be that humanity doesn’t nearly have the resources (broadly understood) necessary to have even a fighting chance at solving the collective action problem posed by global warming in time. We could be on a (for all practical purposes) irreversible course toward environmental self-destruction. This isn’t the place to estimate what our credence in this possibility should be, but it’s not negligible.
These are just the black balls we may have already pulled out. At an increasing rate of technological growth, there are many potential black balls on the way. Maybe most alarming is the impending AI apocalypse Musk, Bostrom, Yudkowsky, Harris, Hawking, and others have been warning us about. I won’t go into the arguments for AI danger here. Suffice it to say, many of the brightest and best informed about AI think there’s a decent chance that recursively self-improving AI is on its way and that when it arrives, it will mark the end of humanity as we know it. The end may be annihilation by paper clip. Or it may be a merging with superintelligent AI, which we might happily consider attainment of immortality, godhood, or simply the next step in our evolution. Regardless of whether the outcome is heaven or hell, if these worries/hopes are right, humanity as we know it will cease to exist.
I have myself at times remarked: every generation has its doomsayers. We’re no different. That general observation goes some way to alleviating fears of existential risk, but frankly not very far. The arguments, as they stand, are strong. Taken together, they vastly diminish the impact of the much greater Z in the expected value calculation for Cowen’s prescription. Not enough to counteract it altogether, to be sure. But it’s quite the dent.
Objection 3b. Diminishing Cowen’s efficacy credence
Cowen’s master plan for maximizing utility into the far future is to maximize sustainable economic growth. There are three considerations that should diminish our confidence that this will work out, relative to our confidence in the presentist’s prescription working out.
Transformative change: It is a truism that major transformations in human socioeconomic organization are speeding up. After a gap of thousands of years between the agricultural and the industrial revolutions, we seem to be well into an information revolution of comparable impact only two hundred years later. The logic behind this general trend has caused Ray Kurzweil to popularize the notion of a ‘singularity’ beyond which we cannot make reasonable predictions about the future, and Robin Hanson to speculate about the economics and social organization of a world in which human minds are cheaply duplicable.
We need not sympathize with any specific speculation about the future to sign on to the general prediction that the not-so-distant future will be drastically, almost unrecognizably different from the present. If it is, this suggests the possibility that Cowen’s argument for maximizing growth just does not apply to this future world. While Cowen makes a strong case that maximum ‘Wealth Plus’ correlates with maximum utility, we should entertain some doubt that this will remain the case for a future in which society has in some fundamental way transformed. Granted, ‘growth is good’ has been ‘applicable’ to civilization since its inception. What’s more, economic growth has so far been the very engine that spurs transformations of this sort. But massive transformations always involve the end of some previously constant feature of social organization. There is no a priori reason that the next such constant to go is one which will take the applicability of the ‘max wealth = max wellbeing’ rule with it. The more seriously we take this doubt, the more we must diminish our efficacy credence in Cowen’s prescription relative to that of the presentist one.
The ‘sustainable’ in sustainable economic growth: ‘Maximize economic growth’ is susceptible to the objection from Awesomism unless we have some indication that we know how to implement it. Cowen does tell us something about implementation. Certain institutions—well-defined property rights, meritocratic incentive structures, political stability, among others—are known to yield greater economic growth. This is a fine answer, in the short-term. But Cowen’s argument for maximizing economic growth is based on the assumption that we can sustain economic growth over a long period of time. This is the only way to ensure compound returns, the heart of Cowen’s prescription. So, for Cowen’s argument to plausibly survive the Awesomist objection, he needs to provide a reason for thinking that we know how to maximize not just economic growth, but sustainable economic growth over the long-term.
This is a serious objection. For suppose we take an action today that effectively increases economic growth. Do we have a reason for thinking that people in the future will continue to take actions that increase economic growth? If we don’t, then we don’t get our compounding. To paraphrase Reagan (apologies!), if growth-friendly institutions are never more than one generation away from extinction, Cowen’s argument fails.
Cowen’s reply here could simply be that economic growth is self-reinforcing: the more we achieve it, the more likely we are to sustain the institutions that foster it, even over a very long time horizon. This strikes me as more likely to be the case than not. But how reliable is this rule? And how high should our confidence be in it? Here, in particular, I need to cite my own ignorance: it may be that there’s significant literature on the long-term self-reinforcing character of growth-friendly institutions and I just don’t know it. Even so, I doubt our confidence should be 1 or even close to 1. Cowen’s own The Complacent Class makes a compelling case that success is, in at least some sense, self-defeating, not self-reinforcing. More generally, the cyclical view of history that Cowen argues for in that book, if correct, suggests that economic growth is more likely to peak and decrease rather than progress upward indefinitely. The possibility that this cyclical/self-defeat view is correct also diminishes the relative efficacy credence of Cowen’s prescription.
General uncertainty about the future: Beyond any specific argument, there is a general rule to consider: given certain constraints, and all else equal, the further away the future is, the less confident we should feel about our predictions for it.
These three considerations, along with 3a (all four of which are admittedly related—care should be taken not to double-count when tallying their weight), get stronger the further away into the future we are. We might reasonably say, then, that they do not so much weaken Cowen’s prescription as simply suggest that it should be modified to focus on the short- and medium-term, as opposed to the long-term, future. However, as Cowen’s prescription retreats from the far future, its advantage in the Z value diminishes, so it’s not obvious that this retreat on net supports his position. It should also be noted that as the temporal target for Cowen’s proposal retreats closer to the present, the course of action it prescribes likely moves closer to the presentist prescription. So Cowen’s prescription also suffers in its distinctness from the presentist prescription as its temporal target moves closer to the present.
Objection 3c. Diminishing the presentist’s efficacy credence
When it comes to considerations of epistemic confidence, Cowen’s prescription has one major advantage over its rival: we know what it is! I have thus far not said what the presentist’s proposed course of action is. There is a very good reason for this: I don’t know how to maximize wellbeing for the present and near future. There is a tremendous amount of reasonable disagreement about this. Cowen convincingly argues that we don’t have the same uncertainty once we extend the time horizon, precisely because long-term compounded returns on economic growth are so far and away better than anything else we might conjure. Putting all of the other considerations of this section aside, his argument holds and it’s powerful.
It is, however, possible to overstate this relative benefit of Cowen’s position. For one, I’m not sure there is as much reasonable disagreement about the presentist position as might initially seem. Much disagreement arises out of intellectual dishonesty motivated by ideological commitment and out of disagreement on what the end goal should be. If we control for this, there might not be that much more disagreement about the best way to maximize Wealth Plus for the present than there is about the best way to maximize it for the indefinite future. And it’s important to control for this, since these sources of disagreement apply equally to Cowen’s prescription. No matter how reasonable Cowen’s argument, plenty will disagree with it simply on ideological grounds, or because they disagree with the utilitarian framework. So whatever disagreement about the presentist course of action stems from these sources should be filtered out in the comparison.
Note, also, that Cowen's prescription is only relatively more definite than the presentist's. Cowen says nothing about how to implement the growth-friendly institutions. Just as we don't know whether we can reach a convincing answer to 'how should we best maximize present wellbeing?', we also don't know whether we can reach one for 'how should we implement our agreed-upon growth-bearing institutions?' So there is indeterminacy in the content of both prescriptions. The indeterminacy is greater for the presentist position, which implies a relative gain in the efficacy credence of Cowen's prescription. But the advantage is one of degree, not a matter of the presentist prescription lacking something that Cowen's has.
Objection 3d. The ontological objection, epistemic edition
There’s an altogether different objection to valuing future wellbeing that I call the ‘ontological objection’, and it goes something like this: future people don’t exist. The value of the wellbeing of things that don’t exist is zero. We might as well value the wellbeing of Santa Claus.
I don’t think this objection works. I also don’t know how to give a knockdown argument against it. All I really have is that it’s intuitively hard to swallow. Yes, these future people don’t exist, but they will, and when they do, their lives will be as real as yours or mine. This is not the case for Santa Claus. But a defender of the ontological objection can simply reply: “that they will exist is irrelevant; the fact remains that they don’t, and so neither does their ‘wellbeing’”. There’s an impasse here that I don’t see how to cross.
(Note: if we take contemporary physics seriously, there seems to be good reason to think that time is in some sense a metaphysical illusion. This could serve as the basis for an argument against the ontological objection. I won’t press this because I don’t actually understand contemporary physics, but I note it for those who do.)
Why am I bringing up the ontological objection if I don't think it works? Because even if it fails, we should not simply dismiss it; we should assign it the (low) credence its weakness warrants. Suppose we're justified in being 85% sure that the ontological objection fails. That amounts to a 15% discount rate on the wellbeing of future people, motivated by our epistemic situation regarding the objection, not by actually valuing those lives (ethically, metaphysically) at 85% of present lives.
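The arithmetic here is just expectation-taking. A minimal sketch, using the hypothetical 85% credence from above:

```python
# Sketch of the credence-weighted discount described above.
# The 0.85 credence is the hypothetical figure from the text, not an estimate.

def epistemic_value(future_utils: float, credence_objection_fails: float) -> float:
    """Expected ethical value of future wellbeing, given our credence that
    the ontological objection fails (i.e., that future wellbeing counts at all)."""
    return future_utils * credence_objection_fails

# Linda's 100 net utils, valued under an 85% credence:
print(epistemic_value(100, 0.85))  # 85.0 — an effective 15% discount
```

The discount falls out of the expectation over our uncertainty, not out of any claim that Linda's life is intrinsically worth 85 utils.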
Note that there’s no comparably plausible argument to the effect that the lives of future people matter but the lives of present people do not. So there’s nothing to counteract the force of the epistemic version of the ontological objection.
Objection 3e. Weighing the epistemic considerations
An initial comparison of expected value between Cowen’s prescription and the presentist prescription puts Cowen’s well ahead. Not only is Cowen’s Z much larger, but so is his Y: compounded economic growth is likely to yield higher average living standards, even when there are many more people. Cowen’s prescription is then strengthened by consideration 3c: it is more definite in content than the presentist prescription, which means there is greater doubt about whether the presentist prescription is even executable.
However, quite a few epistemic considerations count against Cowen's prescription that do not apply to the presentist alternative. The first is perhaps the strongest: our confidence that Cowen's Z is really so much bigger is deeply diminished by the many plausible arguments for existential risk. Add to that: greater uncertainty about the future in general, doubts about how to establish genuinely sustainable institutions for long-term economic growth, and the possibility that humanity may transform to such an extent that Cowen's arguments become inapplicable. Finally, though the ethical value of present lives and of future lives seems intuitively equal, there is a slightly plausible argument that future lives are worth nothing, and no comparably plausible argument going the other way.
I don't know if these considerations, together, diminish the force of Cowen's argument enough for the presentist prescription to overtake his. My overview has been unsystematic and subject to interpretation, and I just don't know how to add them up, even informally. There are also likely many other epistemic considerations, going in either direction, that I've missed. Even so, I hope at least to have shown that the matter is not as obvious as Cowen suggests. We need to examine these epistemic considerations much more closely before we can determine with any confidence whether Cowen's argument withstands the fact that the fog of uncertainty is so much thicker for the future than for the present.
Conclusion
I have covered three broad objections to Cowen's main argument in Stubborn Attachments. The first objection targets Wealth Plus as a stand-in for wellbeing. I do not think this objection is very serious, but it is worth considering and perhaps sufficient to put a minor dent in Cowen's wealth-centric vision for utilitarianism. The second objection argues that Cowen's rights constraint is unnecessary because it is fully covered by his utilitarianism. This objection I consider the strongest, insofar as I do not see how Cowen can overcome it. It is also the weakest, however, in that it changes nothing about Cowen's position when properly understood. The point of the second objection is precisely that the rights constraint adds nothing; so, of course, removing it takes away nothing.
The third, epistemic, objection is the most complex and also potentially the most damaging. Unlike the rights objection, the epistemic objection goes to the heart of Cowen’s view: if successful, it invalidates Cowen’s entire position. I have tried to show that it is at best inconclusive how Cowen’s argument stands against this objection.
As a final word, I will hint at another reason to favor Cowen's view, though he does not state it in the book. Human beings need future-facing motivation. A particularly strong form of this is inspiration. A grandiose, future-oriented goal is inspiring. It's the sort of thing that, if spread effectively throughout the culture, would likely yield significant increases in productivity, innovation, and, yes, Wealth Plus.
My final point, then, is this: the sort of people who believe in and work toward a long-term future of compounding growth are likely to have a productivity advantage over the sort of people who believe that the best for the present is the best we can do, period. Perhaps it takes people believing in something like Cowen's prescription for wellbeing to be maximized, whether that prescription is actually correct or not. If this is true, then by arguing against Cowen's prescription, I may do some small amount of damage to net wellbeing. Sorry about that.