Normative Ethics and Moral Psychology I: The Reality of Other People
- A world without morality?
One way into the question ‘What is morality?’ is to ask another: ‘What would the world look like were there not to be such a thing as morality?’
Our language would be different: we would have much less need for such words as ‘right’, ‘wrong’, ‘just’, ‘unjust’, ‘required’, ‘permitted’, ‘forbidden’, ‘good’, ‘bad’, ‘evil’, ‘wicked’, ‘nasty’, ‘brave’, ‘rude’, ‘lazy’, ‘honourable’, ‘shameful’…
Our emotions would be different: we would have much less occasion to feel such things as guilt, shame, remorse, pity, indignation, gratitude, betrayal…
Our practices would be different: we would think differently about how we spend our money, what jobs we do (or refuse to do), what kinds of people we make friends with, what qualities we strive to engender in our children, the way in which we design our political and legal institutions…
Our thought, our discourse, our common life – all are imbued with morality (moral concepts, moral vocabulary, moral attitudes). We don’t always use the word ‘moral’ to denote these things; indeed, outside of philosophy, we (nowadays) are quite chary of using the word moral.
(NB. ‘Ethics’ is generally used synonymously with ‘moral philosophy’; some philosophers like to distinguish the term ‘ethical’ from ‘moral’ but if it isn’t explicitly specified, the terms can safely be treated as interchangeable.)
- Metaethics and ethics
Like everything else that occupies so large a place in human life, morality raises all sorts of fascinating philosophical questions. Some of these are questions about ethics, i.e. metaethical. Others are questions in ethics, i.e. ethical or (more generally) normative. These questions can often be raised and answered separately. Sometimes, these questions – metaethical and ethical – can be related.
But metaethical anxieties can infect the whole project of ‘normative’ ethics. One might entertain the following thought:
‘Is there much point doing ethics? The whole thing seems to rest on such precarious foundations. The most basic assumption of the project seems indefensible, viz. that there are moral facts. As JL Mackie has shown us, the very idea of such things is weird: how on earth could there exist such things in a universe that is fully described by the natural sciences? But if there are no such things as moral facts, then surely much of our everyday moral thought and discourse is – not to put too fine a point on it – simply false.’
One can try to answer these anxieties by doing metaethics. Or one can ‘deflate’ them: refuse to treat settling the metaethical questions as a precondition for doing first-order ethics. A helpful analogy comes from the Austrian philosopher and economist Otto Neurath:
We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction. [Anti-Spengler, 1921]
In other words, everything in philosophy is open to question; but not everything can be called into question at the same time. In order to raise questions in some parts of philosophy, we simply have to make assumptions about other parts. Those assumptions can themselves be, later, called into question. But at least part of what we do in philosophy is to hold some assumptions as (provisionally) fixed, while reflecting on the problems that come into focus when we make those assumptions.
But those of us who want to do first-order ethics don’t need to be merely defensive in staking out space for ethical reflection. We can take the battle to the sceptic’s camp. If sceptical positions such as Mackie’s can give us reason to doubt our first-order convictions, then why can’t the argument run in the opposite direction? In other words, why can’t our everyday ethical judgements give us reason to doubt any metaethical claim that purports to impugn them? (This is a serious suggestion; what, if anything, is wrong with it?)
- ‘There’s nowt so queer as folk?’
One way of ‘refuting’ Mackie is to point to some ‘queer’ entities. Christine Korsgaard writes:
According to Mackie, it is fantastic to think that the world contains objective values or intrinsically normative entities. […] For when you met an objective value, according to Mackie, it would have to be […] able both to tell you what to do and to make you do it. And nothing is like that. […] But Mackie is wrong […]. Of course there are entities that meet these criteria. […] it is the most familiar fact of human life that the world contains entities that can tell us what to do and make us do it. They are people, and the other animals. [Christine M. Korsgaard, The Sources of Normativity (Cambridge: Cambridge University Press, 1996), 166]
Korsgaard is trying to remind us of a familiar aspect of human existence, that we (usually) take other human beings and (sometimes) non-human animals as ‘intrinsically normative’, i.e. as sources of reasons to do or not to do certain things.
This points towards a general way of characterising the ‘ethical’: it’s that part of our thought and life that involves acknowledging that I’m not alone in the universe. I share my universe with other beings who are, in some important way, like me. And this fact is important. To acknowledge, really to acknowledge it, means living in a radically different way to someone who fails to do so.
Cf. a remark of Iris Murdoch’s: ‘Love is the extremely difficult realisation that something other than oneself is real. Love, and so art and morals, is the discovery of reality.’ [Iris Murdoch, ‘The Sublime and the Good’, Chicago Review, Vol. 13 Issue 3 (Autumn 1959), 51.]
The idea so far is that to acknowledge the reality of other people means taking them – their interests, their desires, their capacity for joy or suffering – as a source of reasons for doing certain things and for not doing others.
But this immediately raises further questions. Firstly, even if it turns out that we are – for whatever reason – required to take the reality of other people and animals seriously, that leaves all the important questions unanswered. What exactly would this involve? This question will take us into the central question of these lectures, viz. is there a best way of understanding what constraints other people and animals place on our action? Or, to put it in the standard way, is there one true moral theory, and if so, what is it? This will be the stuff of lectures 3–8, and we’ll be considering three of the traditional contenders: the utilitarianism inspired by Bentham, Mill and Sidgwick; the deontology inspired by Kant; and the virtue ethics inspired by Hume and Aristotle.
In the next lecture, we shall consider a more basic sceptical question. Why acknowledge the claims of other people at all? Why be moral when I could be an egoist instead?
Recommended readings for next week:
– Thomas Nagel, The Possibility of Altruism (Princeton: Princeton University Press, 1970), 3–6.
– Bernard Williams, Morality: An Introduction to Ethics (Cambridge: Cambridge University Press, 1972), 3–13.
Normative Ethics II: Egoism and Altruism
Last week, we considered the view that ethics basically involves acknowledging the reality, and therefore, the claims of other people. This is something we do anyway – though not always self-consciously under the label ‘ethical’ – and would find hard to stop doing, even in the face of sceptical arguments from metaethics. To put it briefly, the idea is this: ethics involves altruism. To be in the world of ethics is to recognise the demands or requirements of altruism. Altruism, as used here, is not the name of a feeling.
As Thomas Nagel puts it: ‘Altruism […] depends on a recognition of the reality of other persons, and on the equivalent capacity to regard oneself as merely one individual among many.’ [The Possibility of Altruism (Princeton: Princeton University Press, 1970), 3]
To use a different (and not inconsistent) formulation by Bernard Williams, it is ‘a general disposition to regard the interests of others, merely as such, as making some claim on one, and, in particular, as implying the possibility of limiting one’s own projects.’ [‘Egoism and Altruism’, in Problems of the Self (Cambridge: Cambridge University Press, 1973), 250]
Today we consider a question raised by this characterisation of ethics, viz. why do this? The word ‘why’ can be read in two ways.
(1) Sceptical. What is the authority of these so-called requirements? Why not ignore them?
(2) Explanatory. Assuming there are such requirements, what is their basis?
If we can successfully answer (2), maybe that will also answer (1): i.e. once we know the basis of the demands of altruism (and therefore the rest of ethics), we will be able to say whether and why those demands have authority.
- The sceptical challenge to morality
The basic question can be stated in these ways: ‘Why be moral?’ ‘What is the justification for being moral?’ ‘Is there reason to be moral even if it would be more convenient to be immoral?’
One way of stating this question in a sharp and provocative way is by imagining oneself a defender of morality against a sceptic. Think of the parallel suggestion in Descartes’ Meditations: one begins in philosophy by thinking of what one might say to the sceptic (without), or to allay the sceptical voice (within). There the scepticism is about the possibility of knowledge of the external world; here the scepticism is about the authority of morality. For Williams as for others, ‘morality implies altruism’. So, the figure who occupies the role of the sceptic in morality is the egoist.
Egoism, says Williams, is ‘the position of an amoralist who rejects, is uninterested in, or resists this aspect of moral considerations, and hence moral considerations; and is concerned solely with his own interests.’ [‘Egoism’, 251] The amoralist ‘acknowledges some reasons for doing things’. He is ‘like most of us some of the time. If morality can be got off the ground rationally, then we ought to be able to get it off the ground in an argument against him; while […] he may not be actually persuaded, it might seem a comfort to morality if there were reasons which, if he were rational, would persuade him.’ [‘The Amoralist’, in Morality: An Introduction to Ethics (Cambridge: Cambridge University Press, 1972), 4]
Williams’s question: are there ‘any rational considerations by which an egoist who is resistant to moral claims could, under the unlikely assumption that he was prepared to listen, be persuaded to be less resistant to them’? [‘Egoism’, 250]
We can put the interest of this question in historical terms. Morality, given its traditional relationship with authority and religion, can come to seem especially suspect to people in modern times. But if morality can be shown to be a demand of reason, and immorality a kind of irrationality, then morality can be shown to be ‘coherent and honorable in the conditions of modernity.’ [Bernard Williams, ‘Fictions, Philosophy and Truth’, 40]
- Altruism as a demand of reason
Thomas Nagel has a simple answer: ‘We should be / are justified in being / have reason to be moral because it would be irrational not to be.’ This answer unites Nagel with both Plato and Kant (and against Hume), who all thought of morality as deriving its authority from reason.
Compare these four cases:
(1) Contradiction. S believes that p. S also believes that not-p.
(2) Entailment. S believes that p. S also believes that p implies q. S does not believe that q.
(3) Instrumental reasoning. S wants X. S knows that Fing is the only way to get X. S doesn’t want to F.
(4) Altruism. S wants X. S knows that Fing is the only way to get X. S also knows that Fing would hurt T. S doesn’t take this to be a reason not to F.
(1) and (2) are cases of theoretical irrationality; (3) is a case of practical irrationality. Nagel’s view: So is (4), i.e. a failure to be altruistic – in the sense defined above – is a failure of (practical) rationality.
If Nagel can successfully argue for this, his argument will have enormous significance. To be sceptical about altruism (and thus, morality) involves being sceptical about rationality itself. And scepticism about rationality is, if not impossible, very hard to sustain – among other reasons, it seems to be self-undermining. A sceptic who says, ‘give me a reason to be rational’, has already presupposed the very thing he is trying to call into question.
- The method
Two questions (asked in slightly different forms by Nagel and Williams):
– What can the egoist consistently think and do while remaining an egoist?
– Does the (consistent) egoist’s existence have any appeal?
- A bad argument against the amoralist
The amoralist is ‘parasitic’ on altruism. He needs, and presupposes the existence of, altruistic dispositions among people in society in order to pursue his interests.
The argument can be reconstructed like this:
(1) The amoralist can achieve his ends only if society exists.
(2) Society exists only if altruistic dispositions are widespread.
(3) Therefore, the amoralist can achieve his ends only if altruistic dispositions are widespread.
The upshot seems to be that the amoralist cannot, without inconsistency, deny the value of altruistic dispositions – after all, their existence is a necessary condition of his pursuing and realising his ends. If he is to be an egoist, he has to be an altruist!
Neat, but the last couple of steps don’t follow even if the premises are granted. There has been an argumentative sleight of hand. All the ‘parasite’ argument shows is that the amoralist cannot, without inconsistency, deny the value of other people – or most people – having altruistic dispositions, just enough of them to make his egoistic pursuits possible. It doesn’t demonstrate the value of his having altruistic dispositions himself. (Note that (2) only said ‘widespread’, not ‘universal’.)
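The sleight of hand can be made explicit in schematic form (my own reconstruction; the abbreviations are mine, not Williams’s):

```latex
% E = the amoralist achieves his ends
% S = society exists
% W = altruistic dispositions are widespread (among others)
% M = the amoralist himself has altruistic dispositions
\begin{align*}
\text{(P1)}\quad & E \rightarrow S \\
\text{(P2)}\quad & S \rightarrow W \\
\text{(C1)}\quad & E \rightarrow W && \text{valid: hypothetical syllogism from (P1), (P2)} \\
\text{(C2)}\quad & E \rightarrow M && \text{invalid: } W \text{ does not entail } M
\end{align*}
```

The argument licenses only (C1): the amoralist must concede that widespread altruism in others is a necessary condition of his pursuits. The step to (C2), which would require altruism of him, is exactly the step the premises do not support.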
We could change ‘widespread’ to ‘universal’, but now (2) sounds even less plausible than before. Williams considers the possibility of asking our notional amoralist to mull over the fact that ‘if everyone were like him, he could not exist’, or that ‘it is not fair for him [sc. the amoralist] to rest his egoism on others’ altruistic shoulders’.
A good try, but dialectically impotent – i.e. it’s unlikely to persuade the one person you’re trying to persuade. You can just hear the amoralist answering: ‘yes, and that’s why everyone shouldn’t be like me, but why does that mean I can’t be as I am?’ and ‘why should I care about what’s fair?’ As Williams puts it, ‘Such an argument could not possibly have any force with the egoist unless he had already given up being one’.
Note that the amoralist has a more radical response available: namely, to give up – at least in this case – the commitment to consistency. Robert Nozick writes: ‘Suppose that we show that some X he [sc. the amoralist] holds or accepts or does commits him to behaving morally. He now must give up at least one of the following: (a) behaving immorally, (b) maintaining X, (c) being consistent about this matter in this respect. The immoral man tells us, “To tell you the truth, if I had to make the choice, I would give up being consistent.”’
Of course, we’ve closed off this line of response by setting up the initial question – with Williams – in terms that rule out giving up consistency. But if we hadn’t made that stipulation…
- An optimistic argument: Temporal neutrality and altruism
Nagel thinks he has a stronger argument against the egoist. He starts his book The Possibility of Altruism with the straightforward idea that there is such a thing as practical reason. Rational norms apply to actions as they apply to beliefs. Suppose we grant this. What must be the case for this to be true? A question of this kind is an attempt to construct what is called a ‘transcendental argument’. Roughly: we take some fact that we hold to be the case and then work, as it were, backwards, identifying the a priori conditions that make this fact possible. (This is hard; do not be perturbed if this doesn’t yet make sense.)
It is possible, says Nagel, to give a reflective account of the form of good reasoning, both theoretical and practical. In the practical case, once we have made its formal structure explicit, we’ll find that it depends on a conception of the person. In order to engage in practical reasoning (i.e. reasoning governed by certain formal principles), we have to be able to think of ourselves as (1) a single person extended through time, and (2) merely one person among equally real others. Both these principles are equally requirements of reason.
Take this example:
Unbeknownst to me, there’s some kind of electrical fault in my flat. Ten months from now, it’ll cause a fire that will destroy most of my property. Right now I know only that my home insurance policy is due for renewal. Looking at the size of the premium, I decide there’s something else I want more right now. I don’t renew the policy. In ten months’ time, I regret it.
Let’s say I’m an egoist. Is it rational for me to renew the insurance? Intuitively: yes. Would it be irrational of me not to? Again, intuitively: yes. But what must be true in order for it to be (egoistically) rational? Answer: the person in whose interests I’m acting by making that self-sacrifice is still me. If not, my action would actually be altruistic.
Now consider actions that are rational because of events in the distant future: e.g. paying money towards pensions. The man who will collect that pension will be (unlike me now) old. He will look different, and no doubt want different things. Does that mean I have no reason to save for ‘his’ retirement? Intuitively, no.
Nagel’s crucial move is this: if it is rational for me to care about my future self (and how could it not be?) even though that self is very different from my present self, why can’t it be equally rational for me to care about other people, even though they are different from me?
The principle of ‘temporal neutrality’ is just as plausible as the principle of interpersonal impartiality that underlies altruism. If I ought, rationally, to conceive of my present self as just one ‘slice’ of a being that retains its identity through time, then I ought, just as rationally, to conceive of my (temporally extended) self as one person among others, a person whose interests place no more special claim on me than anyone else’s.
The worry: all this argument has shown is that there is a structural analogy between other people and my future selves. To say that I am similarly bound by other people’s interests is simply to beg the question against the egoist, i.e. to assume what needs to be shown. The egoist is someone who thinks there is a deep difference between my self, even my future selves, and other people, and their future selves. What, if anything, is wrong about affirming the first of Nagel’s principles of practical rationality without also affirming the second?
- A war of attrition?
Williams, by contrast, is pessimistic about the prospects of decisively defeating (in this context, refuting) the amoralist with a single argument. What may be effective, however, is a war of attrition. We try to wear down the amoralist by isolating them into some very narrow territory. We do this by asking, as we said above, what thoughts they can have about themselves consistently with their remaining an amoralist.
Here are some thoughts that are ruled out:
– ‘No one has a right to dislike/resent me’
But ‘right’ is a moral term that brings in ideas of interpersonal justification that the amoralist rejects.
– ‘Aren’t I brave!’
Again, ‘brave’ is a moral term – is the amoralist entitled to it? Also, the perception of oneself as brave in this context presupposes something at least controversial: viz. that the only reason other people aren’t amoralists too is cowardice. (Is this true, as a matter of fact? Recall the considerations we assembled last week against psychological egoism.)
– ‘Other people admire me for my amoralism.’
Let slide the point that the amoralist is suddenly appealing to the judgements of other people. Even so, there’s the question of whether – assuming this is sometimes true – people will continue to admire the amoralist when he starts to ‘tread directly on their interests and affections’ (‘The Amoralist’, 9). And maybe the admiration is just wish-fulfilment, and even this ‘does not mean that they would be like him if they could, since a wish is different from a frustrated desire’ (ibid).
None of the considerations here is decisive. All we’re doing is showing that it may be harder to sustain a consistent amoralism than we may have thought at the start.
- The challenge of the psychopath
The main question for the amoralist: ‘Does he care for anybody? Is there anybody whose sufferings or distress would affect him?’ (ibid)
Suppose we answer ‘No’. It seems that what we have on our hands is a psychopath. ‘If he is a psychopath, the idea of arguing him into morality is surely idiotic, but the fact that it is idiotic has equally no tendency to undermine the basis of morality or of rationality.’
Should we agree with this? What is the implicit principle that Williams is rejecting here? Probably something like ‘If we can’t rationally argue a psychopath into morality, so much the worse for morality or rationality’. As opposed to this, Williams wants to say, roughly, ‘So much the worse for the psychopath, because society has other means at its disposal than rational argument, e.g. violence, incarceration, institutionalisation…’
- The challenge of the gangster
Suppose then that we answer ‘Yes’. Does this mean the amoralist has just given up the game? Not necessarily. Williams invites us to consider some examples of people whose concern for others allows them to remain (in a recognisable sense) amoral: ‘Some stereotype from a gangster movie might come to mind, of the ruthless and rather glamorous figure who cares about his mother, his child, even his mistress.’ (‘The Amoralist’, 10)
Why is he still amoral? Because ‘no general considerations weigh with him’, ‘he is extremely short on fairness and similar considerations.’ ‘Although he acts for other people from time to time, it all depends on how he happens to feel.’ (‘The Amoralist’, 10) Note the implicit contrast with paradigmatically moral people: they are moved by general, as opposed to specific considerations; they care about fairness; their disposition to act for other people is more reliable, and depends much less on how they happen to feel.
Such a figure ‘provides a model in terms of which we may glimpse what morality needs in order to get off the ground’ (‘The Amoralist’, 10–11). This gangster figure already has what we need, viz. ‘the notion of doing something for somebody, because that person needs something.’ When he acts in this way, the thought he has is ‘“they need help”, not the thought “I like them and they need help”’. The move from here to what we normally think of as morality is a matter of extending and making more robust and reliable what is already there – i.e. the basic dispositions of sympathy for the needs and interests of other people – not a ‘discontinuous step onto … “the moral plane”’.
- What has been shown?
Either one cares for someone else or one doesn’t. If one doesn’t, one is (or is relevantly like) a psychopath and nothing of importance is shown about either morality or rationality by the fact that we can’t rationally argue a psychopath (or similar figure) into morality. If one does, one is already in the world of morality and rational argument can, when successful, extend and strengthen already existing sympathies.
The old hope: Egoism, and its radical form amoralism, will be irrational; rationality requires altruism.
Williams’s more modest hope: A consistent, thoroughgoing amoralist is possible; amoralism could be rational (i.e. rationally permissible); it’s just not all that appealing. To make it more appealing as an alternative, the amoralist must admit – in however weak, capricious and rudimentary a form – that the interests of others make some claim on him.
There is a limit to how much we can say to persuade the amoralist. But maybe it’s not that important what we say to him.
When the philosopher raised the question of what we shall have to say to the skeptic or amoralist, he should rather have asked what we shall have to say about him. The justification he is looking for is in fact designed for the people who are largely within the ethical world, and the aim of the discourse is not to deal with someone who probably will not listen to it, but to reassure, strengthen, and give insight to those who will. [Bernard Williams, Ethics and the Limits of Philosophy (Abingdon: Routledge, 2006), 26]
A question to think about: (why) isn’t it enough to have done this much?
Next week: From altruism to utilitarianism
Recommended reading: Krister Bykvist, “The Nature and Assessment of Moral Theories”, in Utilitarianism: A Guide for the Perplexed (Continuum, 2009).
Normative Ethics III: From Altruism to Utilitarianism
Morality involves altruism; but is altruism rational? Why not be an egoist?
Optimism: Rationality requires altruism; the egoist (and therefore, the amoralist) is irrational.
Pessimism: Rationality permits both egoism and altruism. However, most people – as a matter of empirical fact rather than rational necessity – are already altruists, insofar as they care about at least a few people other than themselves. The question is whether they have reason to care about others too. Here, rationality has no answer to give that applies to all people. Altruism, as such, doesn’t commit us to being any particular sort of altruist. Sorting this out is the task of ethical theorising.
- John Stuart Mill against the intuitionists
John Stuart Mill, the best known of the classical utilitarians (Jeremy Bentham and Henry Sidgwick were the other two), can be classified as a sort of hopeful pessimist. He was a pessimist insofar as he didn’t like a priori arguments that tried to derive substantive conclusions from formal norms, such as those of rationality. In his own time, those who tried to do something along these lines were called intuitionists.
Nineteenth-century intuitionists held that we have a special faculty – intuition – that enables us to recognise moral truths. Mill had three objections to this. First, he thought it unscientific. What evidence is there that we have such a faculty in addition to all the others? Second, he thought it contrary to empiricism: intuitively grasped principles were supposed to be self-evident, i.e. to understand them is to know them to be true, so that no further empirical observation is needed to confirm them. Nothing about morality, Mill thought, could be known that way. Third, he thought that the moral principles yielded by this supposed faculty were unsystematic – a chaotic bunch of principles in tension with one another. This means that in the sorts of delicate situations where we are most in need of principled guidance, intuitionism – and the ‘common-sense morality’ it yields – is of little use.
- Mill’s method
Mill’s third objection – about the messy, unsystematic character of common-sense morality – pointed towards his own view about the sort of thing he was looking for: a first principle of morality, more basic than any other, which could (in principle) give authoritative guidance for all situations. Mill dismissed, then, the possibility that any (non-trivial) moral principle could be self-evident; that meant that any principle, including his own, had to be argued for, or proved. How? He saw two possibilities: deductive and inductive.
The first principle of morality couldn’t itself be deduced, because it would have to be deduced from something more basic than itself, and the whole idea of its being a first principle is that there is nothing more basic to deduce it from. This means that the deductive style of proof (‘All men are mortal; Socrates is a man; therefore, Socrates is mortal’) was not available to him.
Does this mean it simply has to be accepted on faith? Certainly not:
There is a larger meaning of the word proof, in which this question [sc. about the first principle of morality] is as amenable to it as any other of the disputed questions of philosophy. […] Considerations may be presented capable of determining the intellect either to give or withhold its assent to the doctrine; and this is equivalent to proof. [1.5]
- Morality by induction
Lots of things can’t be deductively proved. For instance, I can’t give you a deductive proof for the claim that ‘There is no elephant currently running wild around the Sidgwick Site’. (Or rather, I could, but it would involve premises that were at least as questionable as the conclusion.) But I can still prove it to you. How? Well, I could lead you downstairs and we could have a look. What better proof is there of this proposition? But is it a deduction? No. One can’t deduce this, but one can observe it; what one needs here is not deduction but perception, in this case through the senses. This points us to a general principle: even if a truth can’t be deductively proven, one can give reasons to accept it by appealing to the relevant faculties.
This gets us to Mill’s notorious ‘proof’ of utilitarianism. The word proof is in inverted commas because Mill himself was very clear that it wasn’t a deductive proof (something he said couldn’t be given); it was, rather, an attempt to adduce ‘considerations […] capable of determining the intellect either to give or withhold its assent to the doctrine’. Clearly, this wasn’t an empirical claim that one could literally see the truth of. What faculty then was Mill to appeal to?
Mill thought that the clue was to be found in one of his preferred formulations of his principle: ‘Happiness is the only thing desirable as an end’. What faculty does one appeal to here if not desire?
- The structure of the proof
(1) Happiness is desirable.
(2) The general happiness is desirable.
(3) Nothing other than happiness is desirable.
From which it follows that the only thing desirable is the general happiness. This is roughly equivalent to the form of utilitarianism Mill was trying to defend, whose canonical statement reads: ‘[A]ctions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.’ [2.2]
- ‘Desirable’ and ‘visible’
I’ll restrict my attention to the first two steps of the ‘proof’. They’re both set out in a notorious paragraph of Utilitarianism:
The only proof capable of being given that an object is visible, is that people actually see it. The only proof that a sound is audible, is that people hear it: and so of the other sources of our experience. (1) In like manner, I apprehend, the sole evidence it is possible to produce that anything is desirable is that people do actually desire it. If the end which the utilitarian doctrine proposes to itself were not, in theory and in practice, acknowledged to be an end, nothing could ever convince any person that it was so. No reason can be given why the general happiness is desirable, except that each person, so far as he believes it to be attainable, desires his own happiness. This, however, being a fact, we have not only all the proof which the case admits of, but all which it is possible to require, that happiness is a good: (2) that each person’s happiness is a good to that person, and the general happiness, therefore, a good to the aggregate of all persons. Happiness has made out its title as one of the ends of conduct, and consequently one of the criteria of morality. [4.3]
A bad objection to (1): ‘Visible’ means ‘can be seen’; ‘desirable’ means ‘ought to be desired’. They are not analogous – Mill has only shown that happiness can be desired, when his argument needs to show that happiness ought to be desired.
Response: Mill, being a fluent speaker of English, obviously knew that ‘desirable’ doesn’t mean ‘can be desired’. His point rests on a different feature of the analogy between ‘visible’ and ‘desirable’, viz. how we prove claims about each. If I want to give you evidence that Newnham College is visible from the Sidgwick Site, I will simply point to it from a suitable place so that you can see it. Similarly, if I want to prove to you that happiness is desirable, I can only try to show you that you think this already; this is implicit in your desiring it. Your desire for your own happiness involves a judgement about its object, as good. (Recall the version of this claim in Plato’s Meno: ‘All desire is for the good’, i.e. when we desire something, we are judging it – rightly or wrongly – to be good. Recall also my earlier remark about one of the functions of philosophical argument, not to get you to new knowledge but to remind you of things you already believe/know.)
- Utilitarianism and impartiality
A bad objection to (2): Mill seems to be saying that every human being’s happiness is regarded as a good for every other human being. This is empirically false.
Response: Mill doesn’t mean this, and he clarifies as much in a letter from 1868 to someone who had misread him as saying this. ‘I merely meant in this particular sentence to argue that since A’s happiness is a good, B’s a good, C’s a good, etc., the sum of all these goods must be a good.’
A better objection: Doesn’t that just commit the fallacy of composition? A is a large man, B is a large man, C is a large man, but the set of A, B and C is not a large set.
Response: Only if ‘good’ works like that sort of predicate, and only if Mill is in fact speaking of sets. But he isn’t: his word is ‘sum’. Cf. if A is an amount of water, B is an amount of water and C is an amount of water, then A+B+C is also an amount of water. Mill’s point is simply that goodness, like water, is additive. Still controversial, but not obviously fallacious.
An even better objection: Even if we accept this interpretation, and grant (what is not obvious!) that goodness is additive, it is not clear that Mill has what he needs. If A’s happiness is a good for A, B’s happiness is a good for B and C’s happiness is a good for C, it may well follow that A’s+B’s+C’s happiness is a good, but it doesn’t follow that it is a good for A and/or B and/or C. (In other words, the egoist is back, granting (1) but not (2).)
Mill’s response: Yes, the view does require something like a principle of impartiality. Does this beg the question against the egoist? Yes. But the egoist is not my main concern here. I’m assuming an interlocutor who is at least somewhat altruistic (e.g. my intuitionist opponents – whom I’m trying to convince), and who therefore already takes morality seriously. I’m appealing to the basic idea of impartiality that’s involved there. The point isn’t to convince an amoralist to be moral, but to clarify just what sort of altruist one should be, assuming one is already an altruist.
Further objection: Doesn’t that mean the really basic first principle of morality is not the utilitarian principle but a principle of impartiality?
Mill’s response: In a sense, yes. But not in the sense that utilitarianism can be deduced from it. The whole thing is a single unified principle that can’t be deduced from anything more basic:
It [sc. impartiality] is involved in the very meaning of Utility, or the Greatest Happiness Principle. That principle is a mere form of words without rational signification, unless one person’s happiness, supposed equal in degree […], is counted for exactly as much as another’s. Those conditions being supplied, Bentham’s dictum, ‘everybody to count for one, nobody for more than one,’ might be written under the principle of utility as an explanatory commentary. [5.36]
- The structure of utilitarianism
We can understand the most common variety of utilitarianism as a combination of five, possibly separable, claims, each independently plausible but not uncontroversial.
(1) (Act-)Consequentialism. The rightness of an action should be understood in terms of the goodness of the state of affairs it brings about; the latter is basic while the former is derivative. The stress on ‘it’ distinguishes this view from rule-consequentialism, on which the rightness of an action should be understood in terms of the goodness of the state of affairs brought about by general compliance with a rule commending it. (Yes, this is complicated.)
(2) Welfarism. The goodness of a state of affairs derives from its being good for various subjects (people, animals).
(3) Impartiality. The goodness of a state of affairs is its goodness for the aggregate of subjects in it, not the good for any particular individual. No one’s good counts for any more than anyone else’s.
(4) Aggregation. Goods can be interpersonally aggregated; good is additive.
(5) Hedonism. What makes something good for a person is its being, or being conducive to, pleasure and/or the absence of pain.
We could call the classical view ‘impartial aggregative hedonistic welfarist act-consequentialism’, but that’s a mouthful. Let’s just stick to utilitarianism.
- How to criticise utilitarianism?
Two ways to criticise utilitarianism:
(1) Show that the arguments for it are unsound (they’re invalid; or they have false – or highly implausible – premises). This doesn’t as such refute the theory, as there could always be another, sound, argument for that conclusion. But it can put the defender of the theory on the back foot.
(2) Show that there are sound arguments against it. This can take a variety of forms. One of the most popular is to show that the view has disturbing, bizarre, offensive, absurd (etc) implications, and that these implications constitute a reason to reject the theory that has them.
- Managing your expectations
Utilitarians and their opponents have been arguing for a couple of centuries now. No one has really managed to refute anyone else once and for all. What we have managed to do – as is usual in philosophy – is much more modest. We have managed to get much clearer about the structure of utilitarian views, showing that they have a number of strands that don’t all have to stand or fall together. We have explored what is involved in being a utilitarian, both at the level of theory – i.e. what other views do we have to accept – and at the level of practice – i.e. what would be involved in trying to live as a utilitarian? Could it be done? Would we want to even if we could?
Next week: Critiques of utilitarianism. Recommended reading: Bernard Williams, ‘Utilitarianism’, in Morality: An Introduction to Ethics (Cambridge: Cambridge University Press, 1972), 82–98. Optional: Bernard Williams, ‘A Critique of Utilitarianism’, in JJC Smart and Bernard Williams, Utilitarianism: For and Against (Cambridge: Cambridge University Press, 1973).
Normative Ethics IV: Out of Utilitarianism
The structure of utilitarianism: a theory of right action + a theory of the good. In a simple equation:
Utilitarianism = Consequentialism + Hedonism
Consequentialism: The right action is the one which produces the best consequences, i.e. the state of affairs with the most goodness, impartially considered. In other words, we are concerned with overall goodness, and the goodness experienced by each person (etc) counts as much as the goodness experienced by any other. There can be variants of this: ‘act-consequentialism’, where what makes the action right is its consequences; ‘rule-consequentialism’, where what makes the action right is the consequences of the widespread following of a rule prescribing it.
Hedonism: Goodness consists in happiness, to be understood as pleasure and the absence of pain; this can be aggregated across persons.
- Against hedonism
Suppose there was an experience machine that would give you any experience you desired. Super-duper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life experiences? […] Of course, while in the tank you won’t know that you’re there; you’ll think that it’s all actually happening […] Would you plug in? [Robert Nozick, Anarchy, State, and Utopia (Cambridge, MA: Harvard University Press, 1974): 44-45]
Fill in any further details you like, as long as it’s clear that the experiences inside the machine, while utterly convincing to you while you’re in it, are not real. What, honestly, do we feel about this? If we (all of us? some of us?) are reluctant to ‘plug in’, why is this?
Nozick’s suggestions: We want to do certain things and not just have the experience of having done them. We want to be certain people – to plug in is to commit a form of ‘suicide’. We are limited in the experience machine to a human-created reality. A valuable life involves possessing certain character traits, the exercise of certain capacities, having relationships to others and the world, none of which seems obviously reducible to psychological states alone.
Nozick thinks our responses to the prospect of plugging in to the experience machine help us discover that there are things which matter to us more than simply having certain experiences, e.g. veridical experiences, real agency, risk…
The argument seems to be something like this: If all that mattered to us was pleasure, then we would want to plug into the experience machine. However, we do not want to plug in. Hence, there must be things which matter to us besides pleasure. Therefore, we are not (and maybe could not be?) hedonists – that is, we cannot bring ourselves to believe hedonism. Now this is not the same as saying that hedonism is false. But recall Mill on the nature of proof in ethics. What’s sauce for the goose…
Utilitarians who think this argument works needn’t worry: all they need do is adopt some other theory of the good, e.g. preference satisfaction.
- Against consequentialism: no absolute prohibitions?
P1. If utilitarianism is true, then whether some act is wrong depends on the amount of happiness it produces.
P2. Some acts (e.g. the murder of an innocent) are wrong regardless of how much happiness they produce.
C. Utilitarianism is false.
Two ways of understanding this argument: (1) Utilitarianism has counterintuitive implications: it says some actions are right when (intuitively) they are wrong, and vice versa. (2) Or, more interestingly, utilitarianism, even when it gets it right, gets it right for the wrong reasons.
(1) is important: when a theory relies for its appeal on its ability to get the ‘right’ (i.e. intuitive) answer in some cases, it can’t simply ignore the cases where it gets the ‘wrong’ answer. But (2) is more philosophically interesting.
- Difficult cases for utilitarians
(a) Rights violations: The only way to save five lives is to take the life of one innocent person (e.g. some variants of the ‘trolley problem’; forced organ transplants; executing an innocent man to prevent a riot). Here utilitarianism seems to get the wrong answer.
(b) Counting the wrong sorts of consequences: A sadist gets intense pleasure from tormenting people, enough (let’s stipulate) to outweigh the pain of his victims. Utilitarianism seems to get the wrong answer. But even if his pleasure didn’t outweigh their pain, it seems that utilitarians are wrong to think it even relevant; here, it seems to get the right answer – just about! – but for the wrong reasons.
(c) Distributive justice. Faced with choices between cases where utilities aren’t equally distributed, utilitarianism seems to be committed to saying that an equal distribution with lower total utility is worse than a highly unequal distribution with slightly higher utility. Again, this is counterintuitive given certain everyday ideas about the importance of distributive justice. Consider these possible distributions:
I: A=33, B=33, C=33 (Total utility: 99; Equal distribution)
II: A=100, B=0, C=0 (Total utility: 100; Highly unequal distribution)
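The aggregation at work here can be made fully explicit; a minimal sketch, using the hypothetical utility numbers from the two distributions above (the names and figures are just the illustration’s, not anything in the utilitarian literature):

```python
# Hypothetical utility distributions I and II from the example above.
dist_I = {"A": 33, "B": 33, "C": 33}   # equal distribution
dist_II = {"A": 100, "B": 0, "C": 0}   # highly unequal distribution

def total_utility(dist):
    """Utilitarian aggregation (claims (3) and (4) above):
    sum each person's utility; the distribution itself is invisible."""
    return sum(dist.values())

print(total_utility(dist_I))   # 99
print(total_utility(dist_II))  # 100

# Ranked by total utility alone, II must come out better than I,
# however the utility happens to be spread among A, B and C.
assert total_utility(dist_II) > total_utility(dist_I)
```

The point of the sketch is just that nothing in the aggregation step records who gets what: the very feature that generates the counterintuitive ranking.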
In all these cases, the particular kind of impartiality that utilitarianism requires makes it difficult to take into account the importance of (what John Rawls labels) ‘the separateness of persons’. There are certainly ways around this: either (1) the utilitarian accepts the conclusion and points out that utilitarianism was supposed to be a radical theory and we should not be surprised if it sometimes says things contrary to common sense; or (2) the utilitarian revises the theory (or builds in further elements into it) which prevent it having this counterintuitive implication.
The general worry is that (1) comes up against a point we have mentioned before: the support for utilitarianism is itself based in intuitive judgements, so its defenders cannot so blithely dismiss the intuitions that support its critics. And (2) comes up against the worry that the more the utilitarian amends the theory to get the ‘right’ answer, the less utilitarian the theory looks.
- Utilitarianism and impartiality
Another way of looking at what might be objectionable about utilitarian ideas of impartiality is this. One of the ways in which it does not take seriously the separateness of persons is that it doesn’t put any special value on the difference between myself and other people. (Recall this idea coming up when we were discussing the limits of Thomas Nagel’s argument for altruism.)
Bernard Williams’s classic essay, ‘A Critique of Utilitarianism’, is remembered mostly for its two examples, designed to clarify some features of the structure of utilitarian theories.
George: George, who has just taken his Ph.D. in chemistry, finds it extremely difficult to get a job. He is not very robust in health, which cuts down the number of jobs he might be able to do satisfactorily. His wife has to go out to work to keep them, which itself causes a great deal of strain, since they have small children and there are severe problems about looking after them. The results of all this, especially on the children, are damaging. An older chemist, who knows about this situation, says that he can get George a decently paid job in a certain laboratory, which pursues research into chemical and biological warfare. George says that he cannot accept this, since he is opposed to chemical and biological warfare. The older man replies that he is not too keen on it himself, come to that, but after all George’s refusal is not going to make the job or the laboratory go away; what is more, he happens to know that if George refuses the job, it will certainly go to a contemporary of George’s who is not inhibited by any such scruples and is likely if appointed to push along the research with greater zeal than George would. Indeed, it is not merely concern for George and his family, but (to speak frankly and in confidence) some alarm about this other man’s excess of zeal, which has led the older man to offer to use his influence to get George the job… George’s wife, to whom he is deeply attached, has views (the details of which need not concern us) from which it follows that at least there is nothing particularly wrong with research into CBW. What should he do? [Bernard Williams, ‘A Critique of Utilitarianism’]
Jim: Jim finds himself in the central square of a small South American town. Tied up against the wall are a row of twenty Indians, most terrified, a few defiant, in front of them several armed men in uniform. A heavy man in a sweat-stained khaki shirt turns out to be the captain in charge and, after a good deal of questioning of Jim which establishes that he got there by accident while on a botanical expedition, explains that the Indians are a random group of the inhabitants who, after recent acts of protest against the government, are just about to be killed to remind other possible protestors of the advantages of not protesting. However, since Jim is an honoured visitor from another land, the captain is happy to offer him a guest’s privilege of killing one of the Indians himself. If Jim accepts, then as a special mark of the occasion, the other Indians will be let off. Of course, if Jim refuses, then there is no special occasion, and Pedro here will do what he was about to do when Jim arrived, and kill them all. Jim, with some desperate recollection of schoolboy fiction, wonders whether if he got hold of a gun, he could hold the captain, Pedro and the rest of the soldiers to threat, but it is quite clear from the set-up that nothing of the sort is going to work: any attempt at that sort of thing will mean that all the Indians will be killed, and himself. The men against the wall, and the other villagers understand the situation, and are obviously begging him to accept. What should he do? [Bernard Williams, ‘A Critique of Utilitarianism’]
Williams’ arguments in this essay are frequently (tiresomely!) misunderstood. Here is an argument that Williams is not making:
P1. If utilitarianism is true, then George should take the job and Jim should shoot one of the Indians.
P2. Intuitively, George should not take the job and Jim should not shoot one of the Indians.
C. Therefore, utilitarianism is false.
Williams is not asserting P2. He doesn’t think the utilitarian conclusion is absurd in Jim’s case, and can see that a good case can be made for it in George’s (note that the utilitarian considerations there are put into the mouth of a well-meaning friend of George’s). In any case, this would not be a good argument, as the intuitions appealed to in P2 are not all that strong, and it wouldn’t take much for the utilitarian to unsettle them (e.g. by describing them as mere squeamishness or a self-indulgent concern with ‘clean hands’).
In both cases, Williams wants us to look at the situation from George’s and Jim’s point of view. He wants us to ask: can they come to take a utilitarian view of their own situations? Are they rationally required to do so?
Take George first. Suppose that his pacifist commitments go very deep. One way in which they manifest themselves is in what he is willing and what he is unwilling to do. He can then see that things may well go worse if his zealous rival takes the job. But he can also see that one other thing will be true: that he, George, will not be the one doing those things. If he does take the job, on the other hand, he will be the one doing those things. The utilitarian outlook is one on which what matter are outcomes, not agents. In one scenario, someone is pursuing the research zealously; in another, someone is pursuing the research less zealously. As it happens, it is someone else; as it happens, it’s me, George. But from the impartial point of view utilitarianism requires, that fact – that it is I who will be doing these things – makes no difference.
Similarly, Jim’s situation – as described by the utilitarian – is this: he has to make a choice between two possible worlds. In one, one person will be dead. In another, twenty people will be dead. But Jim can reasonably think: that’s not the only thing that distinguishes these two situations. In the first situation, I, Jim, will have killed someone. In the other, Pedro will have killed twenty. The utilitarian description makes it nearly impossible to include this fact.
Williams: ‘what the outcome will actually consist of will depend entirely on the facts, on what persons with what projects and what potential satisfactions there are within calculable reach of the causal levers near which he finds himself. His own substantial projects and commitments come into it, but only as one lot among others—they potentially provide one set of satisfactions among those which he may be able to assist from where he happens to be. He is the agent of the satisfaction system who happens to be at a particular point at a particular time: in Jim’s case, our man in South America. His own decisions as a utilitarian agent are a function of all the satisfactions which he can affect from where he is; and this means that the projects of others, to an indeterminately great extent, determine his decision.’
- Integrity and alienation
Williams’s argument is sometimes called ‘the integrity objection’. This is why. Both George and Jim, and indeed all of us, have some basic commitments, aims, projects, and principles, and we tend to think that our reasons for acting one way rather than another ultimately come down to those commitments, etc. The argument can be reconstructed as follows:
P1. George and Jim have reason not to act against their deepest principles.
P2. If utilitarianism is true, then George and Jim have reason to act against their deepest principles.
C. Utilitarianism is false.
P1 expresses what, in ordinary language, we call integrity. Integrity needn’t always be a moral virtue: whether it is depends on the content of one’s principles. But even when it isn’t, it is still possible to admire people for their unwillingness to compromise on their principles. And utilitarianism makes this an irrational thing to admire them for.
In both cases, of course, we can imagine that George and Jim eventually decide to do the utilitarian thing: George takes the job, Jim kills one Indian. They will then have shown a lack of integrity. So what? People make compromises all the time.
Everything turns on what kind of view they take of their compromise. Take Jim again: suppose he did shoot one Indian. The utilitarian can allow that he will feel some guilt at what he did; indeed, they can even put it into their calculus with everything else. But this guilt needs to be understood as analogous to a kind of physical pain (as if the Captain had asked Jim not to shoot another person, but to shoot himself in the foot). But guilt is partly cognitive: it involves the thought, ‘I did something wrong.’ And if the utilitarian is right, this thought is false, and irrational.
The utilitarian can of course point out that there are utilitarian justifications for his feeling this way, even though the feeling is essentially irrational (because anti-utilitarian). But the thing is that Jim cannot take this view of his own guilt: ‘my feelings are irrational – because obviously I did the right thing – but it is overall optimal that I feel this way.’ This is what Williams labels alienation: the state in which one treats one’s own principles, sentiments, beliefs, attitudes as if they were not one’s own, but simply someone’s, anyone’s.
Williams: ‘It is absurd to demand of such a man when the sums come in from the utility network which the projects of others have in part determined, that he should just step aside from his own project and decision and acknowledge the decision which utilitarian calculation requires. It is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his actions and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.’
- Whither utilitarianism?
In general, utilitarians think that Williams has scored one or two points. Most importantly, they see that it may not always be possible, or even desirable, for agents to deliberate in explicitly utilitarian terms.
This means that utilitarianism may often have to be ‘self-effacing’ – i.e. even though it’s true, it may be best, by its own lights, that people oughtn’t to believe it (the phrase is from Derek Parfit, Reasons and Persons). Williams thinks this is pretty much giving the game up. Utilitarians tend to think not: why can’t there be truths that one oughtn’t to believe? This requires a certain kind of moral realism, one which allows there to be a separation between the correct criterion for right action and the best decision-making procedure. Williams is generally thought to have refuted the claim of utilitarianism to be an adequate decision-making procedure. But this leaves open the possibility that it may still be the correct criterion for right action. (But isn’t this just alienation at another level? Something is true of your actions, and can be recognised as such from a third-person point-of-view, but can’t and oughtn’t to be recognised as such from a first-person point-of-view.)
Williams doesn’t by any means think his arguments simply refute all forms of utilitarianism once and for all. But as we have been seeing, very few arguments in philosophy do that anyway. What they are supposed to do is to remind us of those many concepts, values and considerations we currently have that are at odds with a utilitarian way of looking at things. And also, to make explicit certain costs of adopting a utilitarian outlook.
Are those costs outweighed by other theoretical benefits? Utilitarians haven’t stopped thinking so. If you’re a fully committed utilitarian, these arguments will be unlikely to shake your conviction. But if you haven’t yet made up your mind?
Next week: Deontology
Recommended reading: Frances Kamm, ‘Non-Consequentialism’