My last post was about one misunderstanding of science that annoys me. Here’s another.

Just rereading Anthony Kenny’s “What I Believe” (motivated by a reply to Bob’s comment in the last post) – in the chapter on morality he summarily dismisses utilitarianism by saying:

Either determinism is true or it is not. If it is, then there is only one course of action which is a genuinely possible choice for us…. If, on the other hand, determinism is false, then there is no such thing as the totality of the consequences of one’s action; for the total future of the world depends on the choices of others as well as one’s own.

This falls foul of a mistake I’ve come across many times. It confuses the idea of non-determinism (that the future cannot, even in principle, be determined) with the idea that prediction is impossible.

We are very good at predicting the future. You wouldn’t survive crossing a road if you weren’t. Of course, you cannot tell exactly what will happen: you couldn’t say where each car will be at a particular moment in the future, or how fast it will be travelling. You couldn’t even say which lane each car will be in. But you don’t need to. What you can estimate is how likely the various outcomes are – in science this is called a probability distribution.

In a world where outcomes are continuous, any particular outcome has a likelihood of basically zero. But we can group outcomes into categories: a category of outcomes where car 1 is in lane 2, for example, or where car 3 is going between 50 and 52 mph. Given any possible grouping of future states, what is the likelihood of each group? These are probabilities we can estimate. And our survival depends on having at least a good go (though we’re not perfect, by any means). This is perfectly consistent with non-determinism, and is also consistent with determinism, under the (pretty obvious, I think) assumption that we cannot know enough to predict a specific future.
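To make that concrete, here’s a minimal sketch (the numbers are made up: I’m assuming car speeds roughly normal around 55 mph) of why any single exact outcome has probability zero while a category of outcomes is perfectly predictable:

```python
import random

random.seed(0)

# Hypothetical model: car speeds are continuous, so any *exact* speed has
# probability ~0, but a *category* of outcomes (e.g. 50-52 mph) does not.
N = 100_000
speeds = [random.gauss(55, 5) for _ in range(N)]  # assumed mean and spread

exact = sum(1 for s in speeds if s == 51.0) / N       # one exact outcome
banded = sum(1 for s in speeds if 50 <= s <= 52) / N  # a category of outcomes

print(exact)   # effectively 0
print(banded)  # a usable probability, roughly 0.12 under these assumptions
```

The exact-outcome count is (essentially always) zero, but the banded category gets a stable, usable probability.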

There’s another example in a book I’m reading, “Introducing Persons” by Peter Carruthers, though it’s harder to quote because it is an extended argument. In it, he basically says that you aren’t justified in thinking that other people have a mind on the basis of what they do. In other words, you might suppose that someone has a mind because you can see their actions, work out their perceptions, and posit an intermediate set of mental states. So what is the problem with this? Well, according to Carruthers, we can’t perfectly predict what will occur, we can’t know exactly what mental states they are in, we are “constantly surprised”, and therefore we can’t posit any mental states.

Really, is it so hard for philosophers to understand uncertainty? I know it takes a while in Math lessons to really get probability (so you don’t think that 5 coin flips coming up heads makes a tails more likely*). And it is still notoriously difficult to develop gestalts around problems such as the Monty Hall Problem or the Tuesday’s Child Problem. But still, this seems to border on wilful misunderstanding to me. These philosophers deal with far more complex ideas all the time.

In the hands of other debaters, well, I’m happy to concede it might just be an annoying point of confusion.

—

* Incidentally this is a difference between a mathematician and an engineer. Given a coin that flips heads 5 times in a row, a mathematician will tell you the next flip is equally likely to be heads or tails. The engineer will tell you (correctly) it is more likely to be heads.

One thing I’ve noticed about some folks is that they confuse determinism with things being pre-determined. But chaos (mathematical) is completely deterministic, yet unpredictable.

We also need words like estimatable, guessable and approximatable. I think they would help.

The problem with both economics and statistics is that we don’t have the appropriate intuition modules built into these brains of ours. We need better design.

Shane, yes, it is important to understand the difference between predictability and determinism. I don’t know many folks who would say the exact future of the universe can be determined through any practical scheme. In fact, as you point out, for a particular definition of ‘predictability’ we can show very simply that even simple things aren’t predictable. Even the deterministic orbit of two planets around the sun, under the influence of gravity, displays this feature. It is ‘chaotic’ in the mathematical sense, as you say.

Sabio, yes, a clearer nomenclature would be useful, but I think those kinds of distinctions between words are short-lived unless the speaker understands what lies behind them. I think the fundamental problem is that folks don’t understand that a probabilistic prediction is still a completely valid and strong prediction. For example, if you roll a fair die, the outcome is *highly* predictable, if you let me give you a probabilistic prediction: I can tell you exactly the set of outcomes, and exactly how likely each one is.
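The die example can be written down completely – a toy Python sketch of what a full probabilistic prediction looks like (the set of outcomes and their exact probabilities):

```python
from fractions import Fraction

# A complete probabilistic prediction for a fair die: every outcome
# and its exact probability.
outcomes = {face: Fraction(1, 6) for face in range(1, 7)}

assert sum(outcomes.values()) == 1  # the distribution covers everything

# Grouped outcomes are predictable too, e.g. "an even number":
p_even = sum(p for face, p in outcomes.items() if face % 2 == 0)
print(p_even)  # 1/2
```

Nothing about the individual roll is determined, yet the prediction is exact.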

any comment that starts “Either XX is true or it is not.” rubs me the wrong way (since we’re on the subject). no nuance built into it. i get that there is “a probabilistic prediction is still a completely valid and strong prediction.” and many state this doesn’t take into account chaos theory, but it does, as Shane stated: it “is completely deterministic, yet unpredictable.” we know this even in human terms, that most ppl will act a certain way given a certain set of circumstances, and largely we go on with these set patterns. but every now and again something happens that defies immediate explanation… only for us to take a look at it and say, oh, here’s how that happened and here are the factors.

as Søren Kierkegaard stated, “the problem with life is that you have to live it forward, but can only understand it backward.” the future is still unwritten, yet we somehow aren’t too surprised by what words appear on the page.

* Incidentally this is a difference between a mathematician and an engineer. Given a coin that flips heads 5 times in a row, a mathematician will tell you the next flip is equally likely to be heads or tails. The engineer will tell you (correctly) it is more likely to be heads.

@Ian, I am an engineer by education but I cannot see myself agreeing with you here.

If I remember my “Introduction to Probabilities 101”, the flipping of a coin is an independent event, assuming a fair coin and a fair flipper. To flip a coin six times is equivalent to flipping six coins simultaneously. So having flipped 5 Heads in a row and wanting a sixth Tail is equivalent to an event {H,H,H,H,H,T}, which is exactly as likely as {H,H,H,H,H,H}.
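That independence claim checks out by brute-force enumeration – a small Python sketch, assuming a genuinely fair coin:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 2^6 equally likely sequences of six fair-coin flips.
seqs = list(product("HT", repeat=6))
assert len(seqs) == 64

# Condition on the first five flips being heads: what is P(sixth is T)?
given = [s for s in seqs if s[:5] == ("H",) * 5]
p_tails = Fraction(sum(1 for s in given if s[5] == "T"), len(given))
print(p_tails)  # 1/2 - HHHHHT and HHHHHH are equally likely
```

Under the fairness assumption, the run of heads tells you nothing about the sixth flip; the whole disagreement below is about whether that assumption is earned.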

@imarriedaxtian, you would be right if you *could* assume that it was a fair coin, but the run so far suggests – only *suggests*! – that there may be a hidden variable that is influencing the results, so absolute reliance on the assumption that it’s a fair coin being fairly flipped is perhaps not the wisest move. If it *is* a fair coin, then you have as much chance (as you say) for it to be H as T, but if there is a hidden variable that is biasing it towards H, you are better betting on H.

I’m a geneticist by trade, and regularly see families where they perceive there is a pattern in the inheritance that I *know* is not there. Now, usually I’m right (I’m a *really* good geneticist ;-), but sometimes we need to consider the possibility that other factors may be at play that we haven’t properly considered. Sometimes it leads to deeper understanding (and Nature Genetics papers, which always look good on one’s appraisal! :-)

@Imax, As Shane said. In my experience mathematicians are more likely to *assume* the coin is fair, engineers are more likely to deal with the fact that no coin is actually fair. After 5 heads, if I don’t assume the coin is fair, I’d suggest that the probability distribution is skewed slightly towards heads for the next flip.

Or put another way, my hypothetical mathematician assumes he knows the probability distribution, the hypothetical engineer tries to infer the probability distribution from the data.

@zero1 – Yes, exactly. I might post something on “chaos theory” at some point, if it would be interesting (I’ve used quotes because what gets called chaos theory is really just one element in the study of complex systems).

Ian, that would be a good idea – a lot of people get confused between chaos theory, complexity & self-organisation, and quantum indeterminacy. Throw Goedel’s theorem into the mix, and you have a perfect storm for some really *bad* philosophy…

Juicy stuff! But this is supposed to be my religion blog :-) I’ve just subscribed to your genetics blog and read through a bunch of posts. Interesting stuff. Great to see another scientist-cum-theologian!

OK, enough with the science on here – I’ll ramp it up over at AnswersInGenes and put in a hat-tip! :-)

Thanks for the comment; I wonder who we could enlist for the full frontal assault on creationism?…

It wasn’t a criticism, Shane, I just have to try and work out how to relate quantum indeterminacy to theology now :-)

Of course – the god of the quantum gaps… I am continually bewildered by the cabbage delivered by some theologians in relation to quantum theory – as if indeterminacy allows god to “play dice”, to quote the great man. That said, quantum stuff often is deeply weird, and how we navigate that is a tricky one. Don’t worry – I like robust discussion :-)

@Shane

you would be right if you *could* assume that it was a fair coin, but the run so far suggests – only *suggests*! – that there may be a hidden variable that is influencing the results, so absolute reliance on the assumption that it’s a fair coin being fairly flipped, is perhaps not the wisest move.

The problem here is that the sample space is too small for me to conclude that there is a hidden variable influencing the result. My other unarticulated assumption is that I “know” something about coin flipping in the real world, have done a fair amount of flipping before, and my chance of repeating the same experiment is practically zip (i.e. repeating a sequence of 5 flips resulting in heads again and betting the sixth one will be heads). But if it is easily repeatable, then of course you and Ian would be right.

So I will stick to my position posted earlier and say that we cannot say anything about the outcome of the sixth toss based just on the result of 5 previous tosses.

PS I linked over to your site. Looks like I will be a regular visitor there too :-)

@Ian,

After 5 heads, if I don’t assume the coin is fair, I’d suggest that the probability distribution is skewed slightly towards heads for the next flip.

As I have said to Shane, I would not be in a hurry to assume a skewed probability distribution on such a small sample space.

“I would not be in a hurry to assume a skewed probability distribution on such a small sample space.”

Ah, but isn’t that the whole point? Why is a pre-determined and idealised probability distribution your default? Why would you have to have evidence to, as you said to Shane, “conclude that there is a hidden variable influencing the result”? There are always hidden variables, lots of them. No real coin is exactly fair.

Anyway, to statistics. If we have two outcomes H and T, with p(H) = x and p(T) = 1−x, and we observe the sequence HHHHH, then we can explicitly work out the probability distribution of x over (0,1), f(x), via:

f(x | HHHHH) = k f(HHHHH | x) * f(x)

where k is a normalizing factor, such that the integral of f(x | HHHHH) is 1 [i.e. Bayes’ rule].

So f(HHHHH | x) is x^5. In the absence of any information about coin flipping, we get f(x) = 1, so f(x | HHHHH) is 6x^5. Which clearly shows we’re much more likely to have a heads-loaded coin: the posterior puts 63/64 of its mass above x = 0.5, so heads-loading is 63 times as likely as tails-loading.
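As a sanity check, here’s a sketch that integrates the flat-prior posterior f(x | HHHHH) = 6x^5 numerically (simple midpoint Riemann sums; nothing clever):

```python
# Numerically check the flat-prior posterior f(x | HHHHH) = 6 x^5
# with midpoint Riemann sums over (0, 1).
N = 100_000
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]
post = [6 * x**5 for x in xs]

total = sum(p * dx for p in post)                     # should integrate to 1
mass_heads = sum(p * dx for x, p in zip(xs, post) if x > 0.5)
p_next_h = sum(x * p * dx for x, p in zip(xs, post))  # posterior predictive

print(round(mass_heads, 4))  # 0.9844 = 63/64: almost certainly heads-loaded
print(round(p_next_h, 4))    # 0.8571 = 6/7: the next flip favours heads
```

The posterior predictive 6/7 is the textbook rule of succession result for five heads in a row.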

However, as you said, we do know about other coins so f(x) isn’t 1. f(x) is the probability distribution of the loadedness of the coin, before we flip it the first time. Your conjecture is that we’ve flipped a lot of coins, and they’re mostly fair (in fact, I disagree strongly with this – you’ve probably never flipped one particular coin enough to get anywhere near a decent confidence in its fairness – you have flipped a lot of different coins, which is a completely different thing).

So if that is the case, let’s say that f(x) is normally distributed* with a mean at x=0.5 and a small variance. Reasonable? If that is the case, then f(x) is symmetric about 0.5, and f(HHHHH | x) very definitely is not. So the result will be asymmetric towards the current coin being loaded for H, for any small variance in f(x) except zero – which would be tantamount to saying that you believe all coins are always fair.
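That argument is easy to check numerically – a sketch with an assumed normal-shaped prior centred on 0.5 (σ = 0.05 is an arbitrary “small variance”; any nonzero value gives the same qualitative result):

```python
import math

# Grid-based Bayes update: prior shaped like a normal centred at 0.5
# (effectively truncated to (0, 1)), likelihood x^5 for the observed HHHHH.
N = 10_000
xs = [(i + 0.5) / N for i in range(N)]
sigma = 0.05  # assumed small spread

prior = [math.exp(-((x - 0.5) ** 2) / (2 * sigma**2)) for x in xs]
likelihood = [x**5 for x in xs]
post = [pr * lk for pr, lk in zip(prior, likelihood)]
z = sum(post)
post = [p / z for p in post]  # normalise (Bayes' rule)

# Posterior probability that the next flip is heads:
p_next_heads = sum(x * p for x, p in zip(xs, post))
print(p_next_heads > 0.5)  # True - the symmetric prior ends up skewed to heads
```

With σ = 0.05 the posterior predictive lands around 0.52: a slight but definite skew towards heads, exactly as argued.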

—

* Because this is a bounded distribution, it cannot be an actual normal distribution, but the intuition is enough for us here. In reality it will also have non-zero probabilities for exactly x=0 and x=1, since we have seen coins with two identical sides.