
Truth in Ethics, Truth in Science - Different?

Torbjörn Tännsjö


The notion of truth is the same in ethics and science. However, the data in science and in ethics are different: in science we rely on observation, in ethics we rely on considered moral intuitions. There is little agreement about when we should trust our ethical intuitions. It is remarkable, however, that neuroscience and psychology have recently shed new light on how our moral intuitions arise.

Is truth in ethics different from truth in science? One way of understanding the question is this: Is the notion of truth in ethics different from the notion of truth in science? The answer to that question is straightforward: no. In my opinion, shared by many but not all thinkers, there is just one notion of truth, and it is the same in all fields. In saying that it is true that there is a table in front of me, that it is true that 7+5=12, or that it is true that one should not torture innocent children just for fun, we are using the same notion of truth. However, this notion is a ‘thin’ or ‘deflationary’ one; it doesn’t mean much. For example, if I say that there is a table in front of me, and then go on to say that it is true that there is a table in front of me, the further implicature may be that I am certain. However, I add no information.[1]

But are there any truths in ethics? The answer to this question is not straightforward. Many people (expressivists) believe that when we say that an action is wrong, we merely express a con attitude towards the action in question; we express no proposition capable of being either true or false. To be sure, even expressivists can make sense of the word ‘true’ when it is used in ethical contexts. When, according to the expressivist, I say that it is true that it is wrong to torture innocent children just for the fun of it, this is merely a way of saying that it is wrong to torture innocent children just for the fun of it; and to say this, according to the expressivist, is not to express any proposition, but merely to express an attitude. But this means that, even if the expressivist has access to the word ‘true’ in ethical contexts, the expressivist, denying that there are moral propositions, must deny that there are truths (true propositions) in ethics. I disagree. I believe that, when we say that it is wrong to torture innocent children just for the fun of it, we do express a proposition capable of being true or false. As a matter of fact, I believe that this proposition is true. And I take it to be true (or false, if it happens to be false) independently of my conceptualisation or thinking. But this is not the place to argue the case. I will just take this ‘realistic’ understanding of ethics for granted. This means that I will make both the semantic assumption that, when using moral language, we express genuine propositions capable of being true or false, and the ontological assumption that some ethical propositions are true, i.e. I will assume that there are ethical facts. This allows me to ponder a further, epistemic, question: can we know the truth in ethics? If so, how do we gain ethical knowledge? Are the methods we use similar to, or different from, the ones we use in science?

Knowledge

Just as we asked whether truth in ethics is different from truth in science, we can ask the same thing about knowledge. Is knowledge in ethics different from knowledge in science? Once again I am prepared to claim that the notion of knowledge is the same. What does it mean, then, for a person, S, to know that p? According to received wisdom, dating back to Plato’s dialogue Theaetetus, it means roughly that three conditions are satisfied: (i) S believes that p, (ii) it is true that p, and (iii) S is justified in the belief that p. Some complications with this definition have been noted by Russell and Gettier, but we need not bother about them in the present context.
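
Put compactly, the classical analysis can be displayed as follows (the notation is mine, not Plato’s):

    K_S(p)  ⟺  B_S(p) ∧ T(p) ∧ J_S(p)

where K_S(p) says that S knows that p, B_S(p) that S believes that p, T(p) that p is true, and J_S(p) that S is justified in believing that p. The Gettier complications just mentioned concern cases where all three conditions on the right-hand side hold and yet, intuitively, S does not know that p.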

Is there knowledge in ethics? I believe there is. This means that I believe we have moral beliefs, that some of them are true, and that we are justified in having them.

What does it mean, then, to be ‘justified’ in a belief? Plato found no satisfactory answer to this question, and philosophers still disagree. Once again I will be a bit dogmatic and just opt for one common answer, without arguing that it is the right, or best, one: a person, S, is justified in the belief that p if, and only if, p coheres with the rest of S’s beliefs. This goes for ethics just as well as for science. What I just said is not only a bit dogmatic but also simplistic. However, I will not go into detail here. I will only note that the notion of coherence employed in the present context is a bit special. Coherence is not merely a matter of logical consistency, but also a matter of explanatory relations between propositions.[2] All this means that justification is a matter of degree: the more closely connected your beliefs are, the stronger your justification. We can put this in terms of what must be given up if you jettison a particular belief. The more other beliefs you have to give up when you give up on p, the better your justification for p, according to this notion of justification. It should be noted that justification is different from truth. You may be justified in holding false beliefs. Our hope is, however, that our adherence to scientific methods leads us closer to the truth. Once again, I see little difference here between ethics and science.

Evidence

Some of our justified beliefs are such that we have no evidence for them. Observational beliefs are typically of this kind. I am justified in my belief that there is a table in front of me, yet if asked for evidence to the effect that there is a table in front of me, I am at a loss. I see that there is a table in front of me, period. It is not that my seeing the table gives me evidence for the proposition that the table is there. In a sense one could say that it does: I could say that I seem to see a table in front of me, that a table being in front of me is the best explanation of why I seem to see it, and hence that the fact that I seem to see it is evidence for its presence. However, I need no evidence for this proposition, so I don’t take my seeing the table as evidence for its presence. How could I? How do I know that I see a table in front of me? Am I more certain about my seemingly seeing a table than about a table being there? I think not.

This does not mean that my observation of the table is incorrigible, however. I may learn that a psychologist is now and then making fun of me and my philosophy lectures by cleverly projecting a hologram in front of me, just to get a chance to mock me when I say that I am certain that there is a table in front of me. If I do learn this, then my justification for the belief that there is a table in front of me is lost (undermined). Furthermore, if I try to touch the table and find that my fingers run smoothly through it, then I have to give up my belief that there is a table in front of me. I now have evidence against the proposition that there is a table in front of me. I will return below to the possibility that further knowledge undermines our firm beliefs.

Now, even if we hold some justified beliefs for which we lack evidence, this is not true of scientific theories. Typically, we are only justified in our belief in a scientific theory if we have evidence for it. And, typically, the evidence for a scientific theory lies in observation. We say that something we observe, say that p, is evidence for a theory T if T gives the best explanation of p (we make an ‘inference to the best explanation’, as Gilbert Harman has famously put the point (Harman, 1965)).

Can we say something similar of ethics?

How to test ethical theories - Similarities with science

Typical ethical theories state which actions are right and which are wrong, and also why they are right or wrong. Two examples of such theories are discussed in this article: utilitarianism and the sanctity-of-life doctrine. According to utilitarianism, an action is right if and only if it maximises the sum-total of well-being in the universe; if it is not right, it is wrong. And the fact that an action maximises the sum-total of well-being in the universe, if it does, is what makes it right.

The sanctity-of-life doctrine (as I here conceive of it) concurs in the idea that one should maximise the sum-total of well-being in the universe, but claims that the end doesn’t justify the means. It is wrong actively and intentionally to kill an innocent human being, even if killing this innocent human being means that the sum-total of well-being in the universe is maximised. The fact that an act is an act of intentional and active killing of an innocent human being (murder), if it is, makes it wrong, irrespective of its consequences.
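
The logical structure of the two theories can be made vivid with a minimal sketch in Python. It is a toy formalization of my own, not part of either theory’s canonical statement: the names (Action, well_being, is_murder, and the two predicates) are hypothetical, the reading of the sanctity-of-life doctrine is one natural gloss on the text, and the numbers stand in for the sum-total of well-being that, in real cases, we could never actually measure.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        well_being: int   # stipulated sum-total of well-being in the universe if performed
        is_murder: bool   # stipulated: active, intentional killing of an innocent human being

    def utilitarian_right(action: Action, alternatives: list[Action]) -> bool:
        # Utilitarianism: right iff no alternative yields a greater sum-total of well-being.
        return action.well_being >= max(a.well_being for a in alternatives)

    def sanctity_of_life_right(action: Action, alternatives: list[Action]) -> bool:
        # Sanctity-of-life doctrine, on one natural reading: murder is wrong
        # irrespective of its consequences; among the remaining (permissible)
        # options, one should still maximise the sum-total of well-being.
        if action.is_murder:
            return False
        permissible = [a for a in alternatives if not a.is_murder]
        return action.well_being >= max(a.well_being for a in permissible)

The sketch brings out that the two theories agree in taking the maximisation of well-being seriously; they part ways solely over whether an act’s being murder settles the matter regardless of its consequences.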

How should we go about testing these and other ethical theories? Some philosophers, of a rationalist bent, have thought that morality can be derived from reason itself, i.e. they have believed that, once we understand each moral theory thoroughly and clearly, we can simply grasp which one is true. Few stick to this belief nowadays, however, and wisely so, I think. When we assess putative moral theories, we must proceed in a manner similar to how we assess scientific theories. We have to put our moral hypotheses to the test.

We test our scientific theories against our observations. In a similar vein, we have to test our moral hypotheses, not against observations, but against our considered moral intuitions. A moral intuition is an immediate reaction to an action with which one is presented, to the effect that the action is right or wrong. It is ‘immediate’ in the sense that it is not the result of any conscious process of reasoning. I will return to the requirement that our moral intuitions should be considered.

A scientific theory that is at variance with (the content of) our observations is rejected. A scientific theory must be empirically adequate. In a similar vein, an ethical theory must give the right answer to moral questions; it must conform to our considered moral intuitions.

However, empirical adequacy and conformity with our considered moral intuitions, respectively, are just necessary requirements, not sufficient ones. The theory must also, in order to gain support from the observation (intuition), give the best explanation of (the content of) our observations and considered intuitions. This means that it must be general, simple, theoretically fruitful, and so forth. Once again, I see no difference here between ethics and science. On a structural level, what goes on in the testing of moral and scientific theories is the same. And yet, if we look closer at the ethical case, an important difference surfaces: in science we normally rely on real experiments; in ethics we must rest satisfied with thought experiments.

The trolley cases

If we want to put utilitarianism and the sanctity-of-life doctrine to a crucial test, then we have to turn to thought experiments. The reason that we must resort to thought experiments rather than real-life cases is that it is impossible to form any definite and reliable moral intuitions with respect to real cases. An action can be wrong, according to both theories, because it does not maximise the sum-total of well-being in the universe. But we cannot know for sure, about any action, whether it maximises the sum-total of well-being in the universe. We can certainly not observe this.

Furthermore, according to the sanctity-of-life doctrine, an action is wrong if it is an act of murder. But we cannot know for sure whether an act is an act of murder or not (we cannot know for sure whether it is intentional killing or not). In abstract thought experiments we need not bother with such details. We can simply assume that an action has better consequences than another, alternative one, and we can stipulate that a certain action was an act of murder, and so forth. Then we can tease out our intuitions in relation to the examples.

Now, if we want to choose between utilitarianism and the sanctity-of-life doctrine, it might be a good idea to turn to the so-called trolley cases, developed and elaborated upon by the philosophers Philippa Foot (Foot, 1967) and Judith Jarvis Thomson (Thomson, 1976). As we will see, these examples seem to allow for crucial tests between these two theories.

Here is the first, the simple switch case. A trolley is running down a track. In its path are five people who have been tied to the track. It is possible for you to flip a switch, which will lead the trolley down a different track. There is a single person tied to that track. Many believe that they should flip the switch. This is in agreement with utilitarianism, of course. More lives are saved. But it is also consistent with the sanctity-of-life doctrine, since the killing of the single person is not intended; it is a merely foreseen consequence of your saving the five. If there had been a third track, with no one on it, you would have opted for that one, I assume.

Here is the second, the so-called footbridge case. You are on a bridge under which the trolley will pass. There is a big man next to you, and your only way to stop the trolley is to push him onto the track, killing him to save five. Few think this would be right, even among those who are prepared to flip the switch in the original example. According to utilitarianism, however, this is what you ought to do. But according to the sanctity-of-life doctrine, you should not push the man since, if you do, you kill him deliberately, and you use him merely as a means to the rescue of the five.

It may seem, then, that people at large have intuitions that square better with the sanctity-of-life doctrine than with utilitarianism. But here comes a third version of the example.

In the third case, often referred to as the loop case, you can, as in the simple switch case, divert the trolley onto a separate track. On this track is a single big man. However, beyond the big man, this track loops back onto the main line towards the five, and if it weren’t for the presence of the big man, flipping the switch would not save the five. Now many people, even among those who hesitate to push the big man, are willing to flip the switch. But this is at variance with the sanctity-of-life doctrine and in accordance with utilitarianism.

It seems, then, that neither utilitarianism nor the sanctity-of-life doctrine can gain support from the intuitions of people at large. I claimed above, however, that we should seek evidence in our considered intuitions. Is there a way of critically assessing our spontaneous reactions to the examples? Well, one question we should ponder is how we have arrived at our intuitions. And this is where neuroscience comes in.
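
Before turning to the neuroscience, the dialectical situation can be summarised by feeding the three cases into the toy predicates sketched earlier. The encodings are again mine: well-being is represented simply as the negative of the number of deaths, and, following the text’s reading of the doctrine, pushing the man and diverting in the loop case are coded as intentional killings, while diverting in the simple switch case is not.

    # Hypothetical encodings of the three trolley cases (numbers and labels are mine).
    cases = {
        'simple switch': [Action('do nothing', -5, False), Action('flip the switch', -1, False)],
        'footbridge':    [Action('do nothing', -5, False), Action('push the man',    -1, True)],
        'loop':          [Action('do nothing', -5, False), Action('flip the switch', -1, True)],
    }

    for case, options in cases.items():
        for act in options:
            print(f"{case:13} | {act.name:15} "
                  f"| utilitarian: {utilitarian_right(act, options)!s:5} "
                  f"| sanctity of life: {sanctity_of_life_right(act, options)}")

Run this way, utilitarianism endorses the sacrifice in all three cases; the sanctity-of-life doctrine permits the simple switch but forbids both the footbridge push and the loop diversion. The majority pattern just described (switch: yes; footbridge: no; loop: yes) matches neither column, which is exactly the predicament at issue.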

Neuroscience enters the picture

Joshua D. Greene at Harvard University and his collaborators have studied extensively how we reach our verdicts in the trolley cases. Here are, in a very simplified form, some of their results about what happens when people react to the trolley cases. It seems as though a dual-process model makes the best sense of how we function: controlled cognitive processes drive our utilitarian judgements, while non-utilitarian judgements (don’t push the man) are driven by automatic, intuitive emotional responses. Different parts of our brains are responsible for these different responses, as can be seen from neuroimaging. ‘Utilitarian’ responses are associated with increased activity in the dorsolateral prefrontal cortex, a brain region associated with cognitive control (Greene et al., 2004). By cheering people up before we confront them with the examples, it is possible to move them closer to the utilitarian camp (Greene et al., 2004). By keeping people busy with intellectual tasks while they give their verdicts on the trolley cases, it is also possible to move people closer to the non-utilitarian camp. Moreover, those who reach the utilitarian verdict have to overcome their own emotional resistance to the conclusion, which takes some time, and so forth (Greene et al., 2004). And people suffering from focal bilateral damage to the ventromedial prefrontal cortex (VMPC), a brain region necessary for the normal generation of emotions and, in particular, social emotions, easily reach the utilitarian solution when asked about the cases (Koenigs et al., 2007).

When we know more about the origin of our moral intuitions, can this help us to select the right moral hypothesis: utilitarianism, the sanctity-of-life doctrine, or some other doctrine? The results from neuroimaging and experimental psychological studies do not contradict our intuitions; neither do they provide any evidence against them. It is not like the case in the opening of this paper, where I can feel that there is no table in front of me. But perhaps they can help us to undermine the justification for some of the intuitions, in the same way that my knowledge that psychologists sometimes project holograms in front of me undermines my justification for my belief that there is a table in front of me. Which ones have their justification undermined, in that case?

This is a tricky question. It is obvious that some immediate intuitions among people at large just have to yield; you have to admit that even if you are among the majority, since there is no plausible theory consistent with all the intuitions. But if you want to get rid of some, but not all, intuitions, which ones should yield and which ones should be retained?

One could argue that we should try to muster the same emotional response to the loop as the one we exhibit in relation to the footbridge, and opt for the sanctity-of-life doctrine. Or one could argue that our gut feelings, just because they are immediate and probably the result of selective pressure way back in our human history, lack credibility, and hence opt for the utilitarian solution.

There is something to each line of argument. However, the proper way of approaching our intuitions, it seems to me, is to see what our reactions to the examples are once we know about the origin of each kind of emotion. We should not rely on our intuitions before we know all that can be known about their origin. We should expose them, then, to a kind of cognitive psychotherapy.

This is not enough, however. We need philosophical therapy as well. We must ascertain that we have correctly understood the examples. We are easily misguided when we ponder thought experiments; we read things into them that should not be there. The scientists who have studied our reactions have tried to compensate for this, but they may not have been entirely successful.

It is also important to make some distinctions that are simply absent in the abstract description of the examples. We are here invited to assess what course of action is ‘morally permissible’. It is not quite clear what this means. One question is what kind of response is right and what kind of response is wrong when we abstract from long-term consequences (by assuming that there are no such consequences of importance). Another question is: what sort of people should we be, people who push or people who don’t push the big man onto the tracks? A utilitarian may well admit that, in the long run, it is better that people at large are such that they don’t push, and yet hold that, in the situation, we ought to push. Some may be less willing to make this kind of distinction and claim that the crucial question is what sort of people we should be. But then they cannot respond to the trolley cases in a reasonable manner!

Philosophical subtleties like these are lost in the experiments. When they are added and comprehended, together with information about how our intuitions are formed, then, I submit, we are allowed to rely on the kind of (firm) intuitions we still hold. They are what I have called ‘considered’ intuitions. Our justification for them is not undermined by any knowledge we have been able to gain. Hence, quite reasonably, we take them to be indicative of the truth.

Can we expect inter-subjectivity in our thus considered moral intuitions? Perhaps, in the very long run, but I doubt it. In this, ethics may well be different from science. Observations in science may be highly theory-laden and thus controversial, but there is always a possibility of moving to neutral ground when we account for them. A physicist may claim that he has observed the path of a positron in a cloud chamber. Another scientist claims that there is no such thing as a positron; he sees no trace of any positron. Now, there is a way of switching to a less theory-laden level of description of the content of their respective observations. Perhaps they can agree, at least, that there are certain traces of certain shapes, which they are both watching. The person who believes he sees traces of a positron can urge the other scientist to explain what, if not a positron passing, the traces both see are traces of. However, in ethics there is no similar neutral ground, no clearly observable traces to which we can move.

This means, then, that different people may very well be justified in their beliefs in competing moral hypotheses. I may be justified in my belief in utilitarianism, while the Pope is justified in his belief in the sanctity-of-life doctrine, provided we have each scrutinised our intuitions properly, and provided we have not just deduced them from our respective favoured theory. This may well be so, but since utilitarianism and the sanctity-of-life doctrine contradict one another, they cannot both be true.

The possibility of such epistemic relativism may prompt us to believe that, after all, there is no truth in ethics. The idea that we should give up on some of our intuitions, because they have been undermined by knowledge about their origin, may come to be generalised to all our moral intuitions. We may be tempted to accept moral nihilism and moral scepticism.[3]

I think we ought to resist this temptation, but I must admit that, in the present context, I have not given any good argument to this effect.

Conclusion

The notion of truth (just like the notions of knowledge and justification) is the same in ethics and science. We gain moral knowledge in a way similar to how we gain scientific knowledge: we have evidence for ethical theories when these theories best explain our data. However, the data in science and ethics are different. In science we rely on observation, in ethics we rely on considered moral intuitions.

There is little agreement about when we should trust our ethical intuitions. It is remarkable, however, that neuroscience and psychology have recently shed new light on how our moral intuitions arise.

We should ponder these data and submit our intuitions to cognitive psychotherapy. When they resist this kind of therapy, when they do not go away once we know how we have come to hold them, we are justified in relying on them. They have then become considered moral intuitions. We are then justified in our moral beliefs. If they happen to be true, furthermore, then we know them.

All this means that theoretical moral knowledge is possible, at least in principle.

Footnotes

1. For an elaboration of my view, see (Tännsjö, 2000), and also (Tännsjö, 2006).
2. For elaborations of this view, see for example (Rawls, 1971), (Lehrer, 1974), (BonJour, 1985), and (Tersman, 1995).
3. There is much recent philosophical literature about the neuroscientific findings about our reactions to the trolley cases. See for example (Singer, 2005) and (Tersman, 2008).

References

1. BonJour, L. (1985). The Structure of Empirical Knowledge (Cambridge, Mass.: Harvard University Press).

2. Foot, P. (1967). 'The Problem of Abortion and the Doctrine of the Double Effect', Oxford Review; reprinted in Virtues and Vices (Oxford: Oxford University Press, 1978).

3. Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M. & Cohen, J.D. (2001). 'An fMRI investigation of emotional engagement in moral judgment', Science, 293(5537), pp. 2105-2108.

4. Greene, J.D., Nystrom, L.E., Engell, A.D., Darley, J.M. & Cohen, J.D. (2004). 'The neural bases of cognitive conflict and control in moral judgment', Neuron, 44(2), pp. 389-400.

5. Harman, G. (1965). 'The Inference to the Best Explanation', Philosophical Review, Vol. 74, pp. 88-95.

6. Jarvis Thomson, J. (1976). 'Killing, Letting Die, and the Trolley Problem', The Monist, Vol. 59.

7. Koenigs, M. et al. (2007). 'Damage to the prefrontal cortex increases utilitarian moral judgements', Nature, doi: 10.1038/nature05631.

8. Lehrer, K. (1974). Knowledge (Oxford: Oxford University Press).

9. Rawls, J. (1971). A Theory of Justice (Cambridge, Mass.: Harvard University Press).

10. Singer, P. (2005). 'Ethics and Intuitions', The Journal of Ethics, Vol. 9, pp. 331-352.

11. Tännsjö, T. (2000). 'The Expressivist Theory of Truth', Theoria, Vol. 66, pp. 256-272.

12. Tännsjö, T. (2006). 'Understanding Through Explanation in Ethics', Theoria, Vol. 72, pp. 178-213.

13. Tersman, F. (2008). 'The Reliability of Moral Intuitions: A Challenge from Neuroscience', Australasian Journal of Philosophy, Vol. 86 (forthcoming).