My last post was about one misunderstanding of science that annoys me. Here’s another.
I’ve just been rereading Anthony Kenny’s “What I Believe” (prompted by a reply to Bob’s comment on the last post). In the chapter on morality he summarily dismisses utilitarianism by saying:
Either determinism is true or it is not. If it is, then there is only one course of action which is a genuinely possible choice for us…. If, on the other hand, determinism is false, then there is no such thing as the totality of the consequences of one’s action; for the total future of the world depends on the choices of others as well as one’s own.
This falls foul of a mistake I’ve come across many times. It confuses the idea of non-determinism (that the future cannot, even in principle, be determined) with the idea that prediction is impossible.
We are very good at predicting the future. You wouldn’t survive crossing a road if you weren’t. Of course, you cannot tell exactly what will happen: you couldn’t say where each car will be at a particular moment in the future, or how fast it will be travelling. You couldn’t even say which lane each car will be in. But you don’t need to. What you rely on is what science calls a probability distribution over outcomes.
In a world where outcomes are continuous, any particular outcome has a likelihood of essentially zero. But we can group outcomes into categories: a category of outcomes where car 1 is in lane 2, for example, or one where car 3 is travelling between 50 and 52 mph. Given any grouping of future states, we can ask how likely each group is. These are probabilities we can estimate, and our survival depends on having at least a good go at it (though we’re not perfect, by any means). This is perfectly consistent with non-determinism, and it is also consistent with determinism, under the (pretty obvious, I think) assumption that we cannot know enough to predict a specific future.
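The point about continuous outcomes can be sketched in a few lines of simulation. This is purely illustrative: I'm assuming, for the sake of the example, that a car's speed can be modelled as a draw from a normal distribution around 50 mph. The exact-value probability comes out as zero, while the grouped ("between 50 and 52 mph") category gets a perfectly usable probability.

```python
import random

random.seed(42)

# Hypothetical model: treat a car's speed as a continuous quantity,
# drawn from a normal distribution around 50 mph (illustrative numbers).
N = 100_000
speeds = [random.gauss(50, 3) for _ in range(N)]

# The probability of any *exact* speed is essentially zero...
exact = sum(1 for s in speeds if s == 51.0) / N

# ...but the probability of a *category* of outcomes is easy to estimate:
# e.g. "car 3 is going between 50 and 52 mph".
in_band = sum(1 for s in speeds if 50 <= s <= 52) / N

print(exact)    # 0.0
print(in_band)  # roughly 0.25 for this distribution
```

For a normal distribution with mean 50 and standard deviation 3, the 50–52 band covers about a quarter of the probability mass, and the simulation recovers that without ever predicting a single exact future.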
There’s another example in a book I’m reading, “Introducing Persons” by Peter Carruthers, though it’s harder to quote because it is an extended argument. In it, he basically says that you aren’t justified in thinking that other people have minds on the basis of what they do. In other words, you might suppose that someone has a mind because you can see their actions, work out their perceptions, and posit an intermediate set of mental states. So what is the problem with this? Well, according to Carruthers, we can’t perfectly predict what they will do, we can’t know exactly what mental states they are in, we are “constantly surprised”, and therefore we can’t posit any mental states.
Really, is it so hard for philosophers to understand uncertainty? I know it takes a while in math lessons to really get probability (so you don’t think that 5 coin flips coming up heads make a tails more likely*). And it is still notoriously difficult to develop gestalts around problems such as the Monty Hall Problem or the Tuesday’s Child Problem. But still, this seems to border on wilful misunderstanding to me. These philosophers deal with far more complex ideas all the time.
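The Monty Hall Problem is a nice case where simulation beats intuition: even if the argument doesn't convince you, a few thousand random trials will. Here's a minimal sketch (the function name and trial count are my own choices):

```python
import random

random.seed(0)

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of staying vs switching over many games."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

Staying wins about a third of the time and switching about two thirds, which is exactly the answer people find so hard to internalise.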
In the hands of other debaters, well, I’m happy to concede it might just be an honest point of confusion.
* Incidentally this is a difference between a mathematician and an engineer. Given a coin that flips heads 5 times in a row, a mathematician will tell you the next flip is equally likely to be heads or tails. The engineer will tell you (correctly) it is more likely to be heads.
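The engineer's reasoning can be made precise as a Bayesian update. This is a sketch under an assumption the footnote doesn't state: that the engineer starts with a uniform prior over the coin's bias. With a uniform Beta(1, 1) prior and 5 heads in 5 flips, the posterior over the heads-probability is Beta(6, 1), and the predictive probability that the next flip is heads is its mean, 6/7.

```python
# The mathematician *assumes* the coin is fair, so p(heads) stays 0.5.
# The engineer treats the coin's bias p as uncertain and updates on
# the evidence. With a uniform Beta(1, 1) prior and 5 heads, 0 tails:
heads, tails = 5, 0
alpha, beta = 1 + heads, 1 + tails       # Beta posterior parameters
p_heads_next = alpha / (alpha + beta)    # posterior predictive mean

print(p_heads_next)  # 6/7, about 0.857
```

So after five heads the engineer should put roughly 86% on the next flip being heads: the streak is evidence (though not proof) that the coin isn't fair.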