Individually consequentialist and collectively deontologist - or why you should still do the right thing if you're unsure of the impact
Or why you should act virtuously even in situations where your individual impact is likely to be very small
The concept of a carbon footprint really bugs some people (on the left[1]) because it implicitly places the responsibility for tackling climate change onto individuals, and de-emphasises the role of governments, politicians and corporations. I had an argument with someone the other day[2] in which they rejected the entire premise, saying that people should not worry about their carbon footprints and should just live as they want, because each individual contribution is insignificant compared to the scale of the problem. There are good reasons to be sceptical of carbon footprints in particular - BP appears to have invented the concept, and recycling was pushed by the plastics industry as a solution to plastic waste - but if you apply this principle more generally, it leads you to all sorts of undesirable conclusions. What is the point of voting, given that the marginal vote almost certainly makes no difference? What is the point of a boycott, given that the marginal purchase makes no difference? Why become a vegan or vegetarian, given that the marginal choice of meal makes no difference to the number of animals slaughtered or suffering in any year? Why bother with the hassle and discomfort of donating blood when the marginal unit makes no difference and hospitals never totally run out? Why, given that any individual action is swamped by the collective decisions of 8 billion other people, should you make any pro-social decision? Fortunately, there is a good reason to make virtuous decisions even when your individual impact may be small or unclear[3].
Consequentialism is the idea that you should judge the morality of an action by its consequences. It’s generally contrasted with deontological ethics, which says you judge the morality of an action by whether it adheres to a given set of rules and principles. Ethicists are mostly split between these two systems[4]. I think that in practice most people are a mish-mash of lots of different systems depending on the context (academic philosophers are probably not representative), but consequentialism is very popular among effective altruists in particular.
The appeal of consequentialism is very easy to see in certain idealised situations - e.g. if you are virtuously harbouring Jewish people during WWII and a Nazi hunter knocks on your front door and asks whether you know where any Jewish people can be found, it is plainly preferable to lie (normally wrong!). The beneficial consequences of this particular lie massively outweigh any inherent immorality of the act of telling it.
However, it can be less obvious what consequentialism tells you to do in the examples in the opening paragraph. On average, and even on the margin, your actions have effectively no consequences - the same party will win the election no matter what you do; companies will pollute the environment regardless of your purchasing habits. Your actions are functionally severed from their consequences because they are diluted within the collective. So what should you do? Nothing? But our collective decisions are the output of our combined individual actions. If far more people did become vegan, the meat industry would clearly decline. If enough people did vote for a specific party, they would win the next election. If more people did refuse to fly, CO2 emissions would go down.
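The dilution argument can be made concrete with a toy model (all the numbers here are hypothetical, chosen only for illustration): suppose each of 8 billion people emits some baseline amount of CO2, and any one of them could cut a fixed amount. One person's cut is a vanishingly small fraction of the total, but a coordinated minority's cut is not.

```python
# Toy model of a collective action problem (all numbers hypothetical).
N = 8_000_000_000          # individuals
baseline = 5.0             # tonnes of CO2 each emits per year
cut = 1.0                  # tonnes one individual could choose to cut

total = N * baseline       # total annual emissions in this model

# Marginal impact of one person acting alone: effectively invisible.
marginal_share = cut / total
print(f"{marginal_share:.2e}")   # → 2.50e-11

# Aggregate impact if a coordinated 10% of people act: clearly visible.
movement = int(N * 0.10)
aggregate_share = movement * cut / total
print(f"{aggregate_share:.1%}")  # → 2.0%
```

The same arithmetic applies to votes, boycotts or blood donations: the marginal share rounds to zero, while the coordinated share does not, which is exactly the gap the rest of the post is about.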
The point I’m making here is that consequentialism can fail when you are in a collective action problem. Many pressing issues are collective action problems, and outside of thought experiments the consequences of your actions are likely to depend to some extent on the actions of others. While the consequences of your individual action can verge on the non-existent, they can be very large in the aggregate. The way to solve this is with a deontological rule that can provide a Schelling point for coordination.[5] In the above examples, this means you should vote[6], you should boycott companies whose actions you oppose, you should buy vegan, and you shouldn’t fly (or should fly as little as possible). In all these cases, there is a clear action that people can coordinate on, which should produce a significant effect[7].
I think the best way to resolve this issue with consequentialism is a hybrid approach: we prioritise the consequences of our actions where those are mostly independent of the behaviour of others, but we follow moral rules in areas of high uncertainty, collective action problems, or situations where the eventual outcome is contingent on the actions of others. The other contribution of consequentialism is to devise the rules of thumb that can be applied in those situations[8].
For most of these problems - climate change, animal welfare - the major source of improvements will be technological, legal and regulatory. Individual consumption behaviours are less important than technological developments and regulations. But they do still matter, and you shouldn’t buy a petrol car, order ten steaks and stay at home on polling day.
If you really liked it, feel free to buy me a coffee.
[1] We can take it as a given that it bugs many people on the right.
[2] ‘The other day’ in this case meaning ‘last summer’ because I write so infrequently.
[3] There are some other reasons, e.g. credibility (nobody is going to believe you take animal welfare seriously if you eat lots of caged chicken; nobody is going to take you seriously on climate change if you fly everywhere and drive an SUV etc.). But that’s an obvious one, and I haven’t seen the main argument of this post elsewhere.
[4] Also a fair few people are virtue ethicists. Results from this survey: https://survey2020.philpeople.org/survey/results/4890
[5] https://en.wikipedia.org/wiki/Focal_point_(game_theory)
[6] Not necessarily for any particular party.
[7] There is a slight caveat that polarisation means some people may coordinate around opposing stances, e.g. boycotts / endorsements of brands based on political statements.
[8] For example, to figure out that if everyone took action X in situation A, the outcome would be Y - therefore the rule people should follow in this situation (or similar ones) should be “do X”. The examples