Moral Duties and Public Goods: A Reply to Long

Moral duty?

Roderick Long’s essay “On Making Small Contributions to Evil” tackles a deep problem in ethics and rational actor theory. Suppose you, individually, decide to stop recycling – will that make any detectable difference to the environment? Suppose you stop eating meat for ethical reasons – will even one fewer animal be killed as a result? Suppose you vote in the general election – will that make any detectable difference to the result? Most likely: no, no, no.

This seems like a problem. It feels like there are ethical imperatives involved here – certainly, we should recycle if it is good for the environment. At the same time, are there really ethical imperatives without consequences? Does that make sense?

Eradication of evil as a public good

Long’s essay conceptualizes this issue as one about the provision of public goods. Public goods are goods that are non-excludable (that is, it is very difficult to exclude anyone from using them) and non-rivalrous (their utility isn’t diminished by one more person’s use). A beautiful sunset is an example: anyone can enjoy it, and one more viewer doesn’t diminish anyone else’s enjoyment. Same goes for lighthouses, national security, and asteroid defense.

Public goods tend to be underprovided by the market because of the “free-rider problem”: if the good is non-excludable, why would you ever pay for it? Given the choice, would you pay in for national security? Why bother? Even if you don’t, so long as your neighbors do, you’ll still benefit. And if your neighbors don’t, it won’t be provided whether you pay in or not.
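To make the free-rider logic concrete, here’s a minimal sketch of a standard n-player public goods game (the group size, endowment, and multiplier are illustrative assumptions on my part, not figures from Long’s essay). Whatever the neighbors do, keeping your money beats paying in – yet universal contribution beats universal free-riding:

```python
# A minimal public goods game: contributions go into a pot, the pot is
# multiplied, and the proceeds are split evenly among all n players,
# contributors and free-riders alike.
n, endowment, multiplier = 10, 100.0, 3.0  # illustrative; assumes 1 < multiplier < n

def payoff(my_contribution, others_total):
    pot = (my_contribution + others_total) * multiplier
    return (endowment - my_contribution) + pot / n

# Suppose all nine neighbors pay in full. Free-riding still wins:
print(payoff(0.0, 9 * endowment))        # 370.0: keep your money, enjoy the good
print(payoff(endowment, 9 * endowment))  # 300.0: pay in like everyone else

# Suppose nobody else pays. Free-riding *still* wins:
print(payoff(0.0, 0.0))                  # 100.0
print(payoff(endowment, 0.0))            # 30.0

# Yet if everyone reasons this way, each ends up with 100 instead of the
# 300 that universal contribution would have delivered.
```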

Long’s insight: any social benefit in which small, individually insignificant contributions add up to the full result can be thought of as a public good. This includes the eradication of many mass evils. Suppose the end of the meat industry would be a morally good event (just go with it; nothing here rests on this specific choice of example). By virtue of this, everyone would benefit from it happening. You would benefit. So should you stop eating meat? Eh. Whether you do or don’t has no effect on whether enough people will for the full effect to take place. So you have no real reason to, and neither does anyone else. Predictably, then, the good goes unprovided and the (presumably evil) meat industry prevails.

Long’s solution and the Make a Difference Principle

Long suggests that we have an “imperfect duty” to contribute to public goods. By imperfect, he simply means we don’t have to contribute to all public goods since that would be impossible. We have moral wiggle room to pick and choose our pet public goods to support.

This certainly solves the puzzle. We now have a reason to support public goods, and so they can (should, anyway) be provided despite the free-rider problem. But is Long’s proposed moral duty justified? At first glance, it feels a bit ad hoc. But maybe that’s not fatal. Even if ad hoc, it seems plausible.

There’s a more serious problem with Long’s solution than its being ad hoc. Consider the Make a Difference Principle (MDP): suppose action X is said to be a moral duty because X contributes to moral good Y. Then it is a necessary condition for X’s actually being a moral duty that X can be reasonably expected to make a detectable difference with respect to Y.

In other words: I cannot be said to have a moral duty to (for example) donate to charity in order to alleviate world poverty if it turns out that my donating to charity actually makes no noticeable impact on world poverty.

Long’s solution entails a denial of MDP. Yet the intuitive pull of MDP is huge – much more immediately gripping than that of Long’s proposed imperfect duty. How can we have a duty to do something that makes no impact on the target aim?

Long makes a case for the imperfect duty. He justifies it, first, by painting a picture of someone (“Juliette”) who refuses to contribute to public goods because it would make no difference. He notes that our pre-theoretic intuitions tell us that her attitude is “too lax”, which is fair enough – arguably, they do. Then, after putting forward the moral duty claim, he says of Juliette’s rejection of it that it:

…seems to do insufficient justice to our nature as social beings and our capacity to regard our fellow humans as partners in cooperative enterprise.  (Moreover, if we can’t have good reason to undertake any task whose success requires cooperation from others unless we can be sure of their cooperation, then we should likewise have no good reason to undertake any task whose success requires cooperation from our future selves – whose cooperation, given free will, likewise cannot be guaranteed – in which case we couldn’t even manage to walk across the room.)  Hence we have some grounds for accepting a duty to contribute to public goods.

And that’s all he says in support of the moral duty.

Long’s defense presupposes that taking Juliette’s side involves believing that “we can’t have good reason to undertake any task whose success requires cooperation from others unless we can be sure of their cooperation….” But this just isn’t true. We can justify Juliette (and reject Long’s duty) via the MDP, which is consistent with unguaranteed cooperation.

Suppose you and I are working on a collaborative project that I support and that requires both of us to get off the ground. If I bail, and it turns out that you intended to participate in it after all, then I’ve done something that very much made a difference: I derailed the whole project.

MDP doesn’t say we shouldn’t collaborate unless we’re sure everyone else will. It says something much weaker: that we have no duty to collaborate if we can be reasonably sure that our individual collaboration will make no difference regardless of what everyone else does. But even this much weaker claim suffices to justify Juliette. So, insofar as MDP is a major problem for Long’s proposed moral duty, his own defense is a non sequitur.

Even if Long’s case doesn’t quite work, there does seem to be something counterintuitive about siding with Juliette and rejecting Long’s moral duty. Let’s explore.

“What if everyone thought that way?”

Everyone’s favorite answer to MDP: “what if everyone thought that way?” First, it must be stressed: this, too, is a non sequitur. It does not follow from MDP that everyone must think that way. If I (for example) litter in an already dirty street and justify myself by saying that my action didn’t make the street detectably dirtier, it is changing the topic to ask “what if everyone thought that way?” My justification is an individual justification; it is not a call for everyone to behave like me. I do not have a say over how everyone else acts. And this is the situation each of us is in when we evaluate the reasons we have for taking or not taking any action – we consider our own options, not those of a collective.

There’s a more reasonable way of putting the “what if everyone…” objection. Suppose MDP is true. Then, surely, it must be rational to live by it. Now suppose most people became more rational, did live by MDP, and so – with no duty holding them back – littered everywhere. The world would become unlivable. Everyone would be worse off. So, in this scenario, more people behaving rationally left those people (and the world) much worse off. How can this be? Surely, if rational action is really “rational”, then the more rational actors there are, the better. So it must not be rational to avoid contributing to public goods; there must be some reason, such as Long’s moral duty, to contribute. Since this is inconsistent with MDP, MDP must be false.

There are a number of ways of addressing this way of putting the objection.

1 – Though the implicit “more rationality must lead to better outcomes” principle (hereafter “Z”) has some intuitive bite, it’s nowhere near that of MDP. Z seems plausible; MDP seems (at a glance) positively obvious. If they conflict, then it’s Z that goes.

2 – Z is, upon reflection, obviously false. The whole point of the prisoner’s dilemma is that counterexamples to Z exist: two players each making the individually rational choice end up worse off than if both had chosen “irrationally” (see the sketch after this list).

3 – There are other ways of incentivizing contribution to public goods, such as law (monopolistic or polycentric, though if you’re asking, I’ll take the latter please), emergent norms, and social disapproval. I personally don’t litter, even in already dirty streets, but not because I think I have a “moral duty”. Rather, internalized shame from social disapproval has made it habitual for me to avoid littering. This habit is perfectly rational: not having people mad at you pays.

Similarly, some people compost not just to avoid shame, but because similar social pressures make it feel positively good to do so. Here’s another example: suppose a massive protest against the criminal justice system would be a moral good. Plenty of people who feel no moral duty to attend would do so anyway because they’d enjoy it, it would be a way to express their frustration and rage, etc.

The point is: there are plenty of ways to rationally incentivize public goods that don’t involve Long’s moral duty and don’t violate MDP.

4 – It’s likely that a sudden spike in rational action would, as one of many consequences, yield a proliferation of new, creative ways of incentivizing contributions to public goods, or of making them less necessary. Whether this would compensate for the proliferation of rational actors avoiding contribution is hard to say. But it’s certainly not obvious that more rational actors would, on net, yield less of what public goods are intended to provide.
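On point 2: here’s the prisoner’s dilemma counterexample to Z spelled out with illustrative payoff numbers (the specific values are my own assumption; any payoffs with the same ordering will do):

```python
# A standard prisoner's dilemma. Entries are (row player, column player) payoffs.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Defecting is the rational choice: it strictly dominates cooperating,
# whatever the other player does...
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]

# ...yet two rational players land at (1, 1), worse than the (3, 3) that
# two "irrational" cooperators would have enjoyed. So much for Z.
assert payoffs[("defect", "defect")] < payoffs[("cooperate", "cooperate")]
```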

So no, more rational actors don’t necessarily mean more littering. And even if they did, that’s not enough to shoot down MDP.

The substitute issue

Suppose you accidentally wander into a philosophical thought experiment. You’re surrounded by a mob determined to kill some innocent person. For whatever reason, they give you the gun. You know the victim is innocent. You also know that, if you refuse to shoot, they’ll immediately just give the gun to someone else who will happily shoot. Do you have a moral duty not to shoot?

If you do, that’s a counterexample to MDP. Killing this person wouldn’t make a difference, since they’re about to die anyway.

My first response is to say, well, it might make a difference. Suppose I stall to give the person more time alive; or, if they’re suffering, I could do it as quickly and painlessly as possible, since someone else might not.

Of course, we can always correct for these responses in philosophical thought experiments. Stipulate that I know everyone would kill the same way, with the same speed, and I can’t stall.

I bite the bullet here, and I have no recourse but intuition. Rejecting MDP is more counterintuitive than denying that I have a moral duty not to shoot the victim when whether I shoot or don’t shoot makes no difference (incidentally, yes, I’d flip the switch in the basic trolley problem).

Here’s a tougher version of the substitute issue. You and ten others are hanging out by a pond in Sicily when you notice a drowning child. You all know how to swim. Assume that, if any of you were alone in this situation, you’d have a moral duty to save the drowning child, and all of you know that. This means you can reasonably expect that one of the other ten will save the drowning child if you don’t. So it makes no difference whether you save the drowning child, which means you have no moral duty to do so. But the same reasoning applies to everyone else. Assuming no one has any other reason for saving the child (e.g., they’d feel good doing it, they’d like the praise, instinct, etc.), the child will drown.

Is that last assumption plausible? Even if it were, I can poke holes here: maybe the fastest swimmer has a moral duty to save the child because his speed might make a difference. But, again, we can stipulate away these complications to satisfy the bottomless sadism of the thought experiment gods.

The mechanics of this scenario are actually fairly complex. The moral duty disappears because each person at the pond can reasonably expect that someone else would save the child. But can they? Shouldn’t they reasonably expect that everyone else will realize this, which means no one will save the child? At this point, the moral duty’s back – although, as soon as everyone realizes that, it’s gone again. This is why you never go in against a Sicilian when death is on the line.

Let’s assume the best case: the seemingly unstable situation above resolves with someone saving the child. Great – the child is saved. Now assume the worst: no one rushes in to save the child. Well, then each of those people has a moral duty to do so (either because they have perfect knowledge and can predict that no one else will, or because they lack perfect knowledge but can see that no one else is going in). Either way, it’s hard to see how this scenario gets MDP to deliver the wrong verdict. (At worst, it may lead to eleven people starting to rush in, stopping when they see everyone else do so, then rushing in again, then stopping, and so on, doing this awkward dance all the way to the drowning child, at which point they knock their heads together. Everyone drowns.)
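Incidentally, the back-and-forth just described has a standard game-theoretic form: the “volunteer’s dilemma”. Framing the pond case that way (my framing, not Long’s, with illustrative payoff numbers) shows why adding bystanders makes rescue less likely in the symmetric mixed-strategy equilibrium:

```python
# Volunteer's dilemma: if at least one of n bystanders rescues (at cost c),
# everyone gets benefit b > c; if no one does, everyone gets 0. In the
# symmetric mixed equilibrium each bystander is indifferent between
# rescuing (payoff b - c) and abstaining (payoff b * P(someone else rescues)),
# which gives p = 1 - (c/b)**(1/(n-1)).
b, c = 10.0, 2.0  # illustrative benefit of the rescue and cost of swimming out

for n in (1, 2, 5, 11):
    if n == 1:
        p_rescue, p_nobody = 1.0, 0.0  # alone, rescuing clearly makes a difference
    else:
        p_rescue = 1 - (c / b) ** (1 / (n - 1))
        p_nobody = (1 - p_rescue) ** n
    print(f"n={n:2d}: each rescues with p={p_rescue:.2f}; no one rescues with p={p_nobody:.2f}")
```

More bystanders, less rescuing – which matches the awkward-dance intuition above without requiring MDP to misfire.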

I hope the case is made: in order for this type of scenario to even have a chance at making MDP problematic, it has to become so rigid as to lose all relevance and intuitive pull. Even then, it’s not much of a chance.

Sorites paradox

If I lost a dollar, that would almost definitely make no difference to my life. I probably wouldn’t even notice. So, according to MDP, you have no moral duty to refrain from stealing a dollar from me. Cool. So you do. After stealing that one dollar, it remains the case that if I lost a dollar, it would make no difference. So it’s okay for you to steal another dollar. Repeat a few times and it follows that, so long as you do it one dollar at a time, it’s permissible for you to steal all my dollars, or at least as many as it takes to get me to the point where losing another dollar would make a noticeable difference (at which point, losing a penny would probably still not make a difference, so you can actually continue).
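To see how fast that step-by-step reasoning compounds, here’s the argument as a loop (the savings figure and the 1% detectability threshold are arbitrary assumptions for illustration):

```python
# Sorites by subtraction: each single-dollar theft falls below the
# detectability threshold, so MDP blesses it; the blessings add up.
savings = 10_000  # dollars

def makes_detectable_difference(loss, total):
    return loss / total >= 0.01  # hypothetical: a loss under 1% goes unnoticed

stolen = 0
while savings > 1 and not makes_detectable_difference(1, savings):
    savings -= 1  # individually "permissible" theft, per MDP
    stolen += 1

print(f"Stolen one dollar at a time: {stolen}; savings left: {savings}")
# 9,900 individually undetectable thefts later, the sum is anything but.
```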

I could fight this on plausibility. How are you doing this? At some point along this progression, surely I notice and either move away from you or change my password or whatever. But, of course, thought experiments can control for all this.

Actually, the problem is deeper than even this scenario suggests. The motivating idea behind MDP is that actions without consequences have no weight. Suppose working out for one minute makes no noticeable difference to whether I get fit and stay healthy. Then, assuming I have no other reason to work out, why bother working out for one minute? But this logic applies to all individual minutes. So I shouldn’t work out at all.

This is, of course, the sorites paradox. It ruins everything. Sorites reasoning can be used to prove that red is green and that I am a hard-boiled egg. There’s no consensus in the philosophical community as to how to deal with it. I’m rather fond of the problem and have investigated a pretty wide chunk of the more commonly proposed solutions: I’m unconvinced by all of them. It’s a real bastard.

Which is why the sorites doesn’t worry me much here. As I said, sorites reasoning breaks everything. That it also breaks MDP isn’t surprising. It probably wouldn’t be that hard to conjure a sorites argument proving that Long’s moral duty is a Dementor. So I don’t think it can be held against MDP that it also falls prey to the sorites paradox.

Integrity

You may still be unconvinced. You know that you shouldn’t litter, regardless of whether it makes a difference, or whether it could or could not be incentivized, or whether anyone’s looking, or whatever. If something is bad, you should refrain from contributing to it, end of story.

I think what’s still left to be said – and why you may feel that way (if you do) – has to do with integrity. Integrity, I submit, is the virtue of acting consistently with one’s values. Generally speaking, I think people feel that they should maintain integrity.

Now here’s the thing: we maintain integrity for ourselves. Integrity feels good. More importantly, it contributes to inner harmony, wisdom, fortitude, and peace of mind. Should we maintain integrity? To the extent that we should strive to live fulfilled and balanced lives, yes. But we don’t have a moral duty to integrity.

It’s the difference between what we owe others and what we owe ourselves. As an example, I owe it to others not to intentionally physically harm them (unless it’s in self-defense, third-party self-defense, or in a thought experiment). I also owe it to myself not to lie to myself about my own happiness. But the former obligation is of a different sort. If I violate it, that’s an ethical violation of a moral duty. If I violate the latter, that’s at best a spiritual violation. It’s between me and myself. Integrity is self-help, not ethics.

So there is weight behind our intuition to contribute to public goods. In some sense of “should”, we should contribute to those public goods we value. But that “should” is personal, not moral, and it is not a duty. I don’t owe it to anyone but myself.
