Let’s think about the birth of Christianity. How did it happen? We don’t know, which is to say there are a lot of different things that could have happened. Let’s use an illustration to picture this.

Complex diagram, eh? I want this rectangle to represent all possible histories: everything that could have happened. In math we call this rectangle the ‘*universe*’, though here it is meant metaphorically: the universe of possibilities. In the rectangle, each point is one particular history. So there is one point which is the actual history, the one-true-past (OTP in the diagram below), but we don’t know which it is. In fact, we can surely agree we’ve no hope of ever finding it, right? To some extent there will always be things in history that are uncertain.

When we talk about something happening in history, we aren’t narrowing history down to a point. If we consider the claim “Jesus was the illegitimate child of a Roman soldier”, there is a range of possible histories involving such a Jesus. Even if we knew with 100% certainty that it were true, there would still be a whole range of different histories including that fact.

Napoleon moved his knife in a particular way during his meal on January 1st, 1820, but he could have moved that knife in any way, or been without a knife, and the things we want to say about him wouldn’t change. His actual knife manipulation is part of the one-true-past, but totally irrelevant for Napoleonic history^{1}.

So any claim about history represents a whole *set* of possible histories. We draw such sets as circles. And if you’re a child of the new math, you’ll recognize the above as a Venn diagram. But I want to stress what the diagram actually means, so try to forget most of your Venn diagram math for a while.

At this point we can talk about what a probability is.

There are essentially an infinite number of possible histories (whether the number is literally infinite is a question for the philosophy of physics, but even if it is finite, it is so large as to be practically infinite for the purpose of our task). So each specific history would be infinitely unlikely. We can’t possibly say anything useful about how likely any specific point is: we can’t talk about the probability of a particular history.

So again we turn to our sets. Each *set* has some likelihood of the one-true-past lying somewhere inside it. How likely is it that Jesus was born in Bethlehem? That’s another way of asking how likely it is that the one-true-past lies in the set of possible histories that we would label “Jesus Born in Bethlehem”. The individual possibilities in the set don’t have a meaningful likelihood, but our historical claims encompass many possibilities, and as a whole those claims do have meaningful likelihood. In other words, when we talk about how likely something was to have happened, we are always talking about a set of possibilities that matches our claim.

We can represent the likelihood on the diagram by drawing the set bigger or smaller. If we have two sets, one drawn double the size of the other, then the one-true-past is twice as likely to be in the one that is drawn larger.

So now we can define what a probability is for a historical claim. A probability is a ratio of the likelihood of a set, relative to the whole universe of possibilities. Or, in terms of the diagram, what fraction of the rectangle is taken up by the set of possibilities matching our claim?

If we can somehow turn likelihood into a number (i.e. let’s say that the likelihood of a set *S* is a number written *L(S)*), and if the universe is represented by the set *U*, probability can be mathematically defined as:

*P(S) = L(S) / L(U)*

But where do these ‘likelihood’ numbers come from? That’s a good question, and one that turns out to be very hard to give an answer for that works in all cases. But for our purpose, just think of them as a place-holder for any of a whole range of different things we could use to calculate a probability. For example: if we were to calculate the probability of rolling 6 on a die, the likelihood numbers would be the number of sides: the likelihood of rolling a 6 would be 1 side, the likelihood of rolling anything would be 6 sides, so the probability of rolling a six is ^{1}/_{6}. If we’re interested in the probability of a scanner diagnosing a disease, the likelihoods would be the numbers of scans: on top would be the number of successful scans, the number on the bottom would be the total number of scans. We use the abstraction as a way of saying “it doesn’t much matter what these things are, as long as they behave in a particular way, the result is a probability”.
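The abstraction above can be sketched in a few lines of code. This is only an illustration of the idea, with invented figures: the function name and the scan counts are my own, not anything from the post.

```python
# A minimal sketch: a probability is the ratio of two "likelihood"
# numbers, whatever those numbers happen to count.

def probability(likelihood_of_set, likelihood_of_universe):
    """P(S) = L(S) / L(U): a set's likelihood relative to the whole."""
    return likelihood_of_set / likelihood_of_universe

# Die example: the likelihoods are counts of sides.
p_six = probability(1, 6)        # one face out of six

# Scanner example: the likelihoods are counts of scans.
p_detect = probability(95, 100)  # say, 95 successful scans out of 100
```

The point of the abstraction is that the same division works whether the likelihoods are sides, scans, or anything else that behaves the right way.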

Now that we’ve reached probabilities, we’ve used these ‘likelihoods’ as a ladder and can move on. We only really worry about how a probability is calculated when we have to calculate one, and then we do need to figure out what goes on the top and bottom of the division.

Another diagram.

In this diagram we have two sets: two claims, or two sets of possible histories. The sets may overlap in any combination. If no possible history could match both claims (e.g. “Jesus was born in Bethlehem” and “Jesus was born in Nazareth”), then the two circles wouldn’t touch [kudos if you are thinking “maybe there are ways both could be kind-of true” – that’s some math for another day]. Or the claims might be concentric (“Jesus was born in Bethlehem”, “Jesus was born”): any possibility in one set will always be in the other. Or they may, as in this case, overlap (“Jesus was born in Nazareth”, “Jesus was born illegitimately”).

I’ve been giving examples of sets of historical claims, but there is another type of set that is important: the set of possible histories matching something that *we know happened*. Of all the possible histories, how many of them produce a New Testament record that is similar to the one we know?

This might seem odd. Why does our universe include things we know aren’t true? Why are there possibilities which lead to us never having a New Testament? Why are there histories where we have a surviving comprehensive set of writings by Jesus? Can’t we just reject those outright? The unhelpful answer is that we need them for the math to work. As we’ll see, Bayes’s Theorem requires us to deal with the probability that history turned out the way it did. I’ll give an example later of this kind of counter-factual reasoning.

So we have these two kinds of set. One kind represents historical claims; the other represents known facts. The latter are often called Evidence, abbreviated *E*; the former are Hypotheses, or *H*. So let’s draw another diagram.

where *H∩E* means the intersection of sets *H* and *E* – the set of possible histories where we both see the evidence and where our hypothesis is true (you can read the mathematical symbol ∩ as “and”).

Here is the basic historical problem. We have a universe of possible histories. Some of those histories could have given rise to the evidence we know, some might incorporate our hypothesis. We know the one-true-past lies in *E*, but we want to know how likely it is to be in the overlap, rather than in the bit of *E* outside *H*. In other words, how likely is it that the Hypothesis is true, given the Evidence we know?

Above, I said that probability is how likely a set is, relative to the whole universe. This is a simplification we have to revisit now. Probability is actually how likely one set is, *relative to some other set that completely encompasses it* (a superset, in math terms).

We’re not actually interested in how likely our Hypothesis is, relative to all histories that could possibly have been. We’re only interested in how likely our hypothesis is, given our evidence: given that the one-true-past is in *E*.

So the set we’re interested in is the overlap, where we have the evidence and the hypothesis is true. And the superset we want to compare it to is *E*, because we know the one-true-past is in there (or at least we are willing to assume it is). This is what is known as a conditional probability. It says how likely *H* is, given that we know or assume *E* is true: we write it as *P(H|E)* (read as “the probability of H, given E”). And from the diagram it should be clear the answer is:

*P(H|E) = P(H∩E) / P(E)*

It is the ratio of the size of the overlap, relative to the size of the whole of *E*. This is the same as our previous definition of probability, only before we were comparing it to the whole universe *U*, now we’re comparing it to just the part of *U* where *E* is true^{2}.
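We can make the counting concrete with a toy model. This is a hedged sketch: the tiny “universe” of equally-weighted histories and the two facts per history are invented purely to show the ratio at work.

```python
from itertools import product

# A toy, equally-weighted "universe" of possible histories. Each history
# is a pair of facts: (hypothesis holds, evidence appears).
universe = set(product([True, False], repeat=2))

H = {h for h in universe if h[0]}  # histories where the hypothesis is true
E = {h for h in universe if h[1]}  # histories that produce the evidence

def p(s, within):
    """Probability of set s relative to a superset, by simple counting."""
    return len(s & within) / len(within)

p_H = p(H, universe)   # plain probability: relative to the whole universe
p_H_given_E = p(H, E)  # conditional probability: |H∩E| / |E|
```

The conditional probability is the same calculation as the plain one; only the superset we divide by has changed, from *U* to *E*.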

We could write all probabilities as conditional probabilities, because ultimately any probability is relative to something. We could write *P(S|U)* to say that we’re interested in the probability of *S* relative to the universe. We could, but it would be pointless, because that is what *P(S)* means. Put another way, *P(S)* is just a conveniently simplified way of writing *P(S|U)*.

So what is a conditional probability doing? It is zooming in: we’re no longer talking about probabilities relative to the whole universe of possibilities (most of which we know aren’t true anyway), but about probabilities relative to things we know are true, or are willing to assume are true. Conditional probabilities throw away the rest of the universe of possibilities and focus on one area: for *P(H|E)*, we zoom into the set *E* and treat *E* as if it were the universe of possibilities. We’re throwing away all those counter-factuals and concentrating on just the bits that match the evidence.

The equation for conditional probability is simple, but in many cases it is hard to find *P(H∩E)*, so we can manipulate it a little, to remove *P(H∩E)* and replace it with something simpler to calculate.

Bayes’s Theorem is one of many such manipulations. We can use some basic high school math to derive it:

*P(H|E) = P(H∩E) / P(E)*

*P(H|E) P(E) = P(H∩E) = P(E∩H) = P(E|H) P(H)*

*P(H|E) P(E) = P(E|H) P(H)*

*P(H|E) = P(E|H) P(H) / P(E)*

*Step-by-step math explanation*: The first line is just the formula for conditional probability again. If we multiply both sides by *P(E)* (and therefore move it from one side of the equation to the other) we get the first two parts on the second line. We then assume that *P(H∩E)* = *P(E∩H)* (in other words, the size of the overlap in our diagram is the same regardless of which order we write the two sets), which means that we can get the fourth term on the second line just by changing over E and H in the first term. Line three repeats these two terms on one line without the *P(H∩E)* and *P(E∩H)* in the middle. We then divide by *P(E)* again to get line four, which gives us an equation for *P(H|E)*.

What is Bayes’s Theorem *doing*? Notice the denominator is the same as for conditional probability *P(E)*, so what Bayes’s Theorem is doing is giving us a way to calculate *P(H∩E)* differently. It is saying that we can calculate *P(H∩E)* by looking at the proportion of *H* taken up by *H∩E*, multiplied by the total probability of *H*. If I want to find the amount of water in a cup, I could say “it’s half the cup, the cup holds half a pint, so I have one half times half a pint, which is a quarter of a pint”. That’s the same logic here. The numerator of Bayes’s theorem is just another way to calculate *P(H∩E)*.
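The water-in-a-cup logic can be checked numerically. These probabilities are made up purely for illustration; any consistent assignment would behave the same way.

```python
# A numerical check of the identity behind Bayes's Theorem.
p_H = 0.4        # P(H)
p_E = 0.5        # P(E)
p_H_and_E = 0.2  # P(H∩E); must not exceed either of the above

# "Half the cup times half a pint": P(E|H) is the proportion of H taken
# up by the overlap, and multiplying back by P(H) recovers P(H∩E).
p_E_given_H = p_H_and_E / p_H
assert abs(p_E_given_H * p_H - p_H_and_E) < 1e-12

# Both routes to P(H|E) agree: the direct ratio and Bayes's Theorem.
p_H_given_E_direct = p_H_and_E / p_E
p_H_given_E_bayes = p_E_given_H * p_H / p_E
assert abs(p_H_given_E_direct - p_H_given_E_bayes) < 1e-12
```

The theorem adds nothing new here; it is just a second route to the same number, useful only when its inputs are easier to find than *P(H∩E)* itself.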

So what is Bayes’s Theorem *for*? It lets us get to the value we’re interested in — *P(H|E)* — if we happen to know, or can calculate, the other three quantities: the probability of each set, *P(H)* and *P(E)* (relative to the universe of possibilities), and the probability of seeing the evidence if the hypothesis were true, *P(E|H)*. Notice that, unlike the previous formula, we’ve now got three things to find in order to use the equation. And either way, we still need to calculate the probability of the evidence, *P(E)*.

Bayes’s Theorem can also be useful if we could calculate *P(H∩E)*, but with much lower accuracy than we can calculate *P(H)* and *P(E|H)*. Then we’d expect our result from Bayes’s Theorem to be a more accurate value for *P(H|E)*. If, on the other hand we could measure *P(H∩E)*, or we had a different way to calculate that, we wouldn’t need Bayes’s Theorem.

Bayes’s Theorem is not a magic bullet; it is just one way of calculating *P(H|E)*. In particular, it is the simplest formula for reversing the condition: if you know *P(E|H)*, you use Bayes’s Theorem to get *P(H|E)*^{3}.

So the obvious question is: if we want to know *P(H|E)*, what shall we use to calculate it? Either of the two formulae above needs us to calculate *P(E)*: in the universe of possible histories, how likely are we to have ended up with the evidence we have? Can we calculate that?

And here things start to get tricky. I’ve never seen any credible way of doing so. What would it mean to find the probability of the New Testament, say?

Even once we’ve done that, we’d only be justified in using Bayes’s Theorem if our calculations for *P(H)* and *P(E|H)* are much more accurate than we could manage for *P(H∩E)*. Is that true?

I’m not sure I can imagine a way of calculating either *P(H∩E)* or *P(E|H)* for a historical event. How would we credibly calculate the probability of the New Testament, given the Historical Jesus? Or the probability of having both New Testament and Historical Jesus in some universe of possibilities? If you want to use this math, you need to justify how on earth you can put numbers on these quantities. And, as we’ll see when we talk about how these formulae magnify errors, you’ll need to do more than just guess.

But what of Carrier’s (and William Lane Craig’s) favoured version of Bayes’s Theorem? It is derived from the normal version by observing:

*P(E) = P(E∩H) + P(E∩~H)*

in other words, the set E is just made up of the bit that overlaps with H and the bit that doesn’t (~H means “not in H”), so because

*P(E∩H) = P(E|H) P(H)* and *P(E∩~H) = P(E|~H) P(~H)*

(which was the rearrangement of the conditional probability formula we used on line two of our derivation of Bayes’s Theorem), we can write Bayes’s Theorem as

*P(H|E) = P(E|H) P(H) / [ P(E|H) P(H) + P(E|~H) P(~H) ]*

Does that help?

I can’t see how. This is just a further manipulation. The bottom of this equation is still just *P(E)*, we’ve just come up with a different way to calculate it^{4}. We’d be justified in doing so, only if these terms were obviously easier to calculate, or could be calculated with significantly lower error than *P(E)*.
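To see that the expanded denominator really is just *P(E)* computed another way, here is a small check with made-up numbers (the three input probabilities are invented for illustration only).

```python
# Carrier's favoured form replaces the denominator P(E) with a sum over
# the two ways the evidence can arise: inside H and outside it.
p_H = 0.3              # P(H)
p_E_given_H = 0.8      # P(E|H)
p_E_given_not_H = 0.1  # P(E|~H)

# P(E) = P(E|H)P(H) + P(E|~H)P(~H): the split of E into its two parts.
p_E = p_E_given_H * p_H + p_E_given_not_H * (1 - p_H)

# The expanded form of Bayes's Theorem, with that sum as the denominator.
p_H_given_E = (p_E_given_H * p_H) / p_E
```

Nothing about the expansion removes the need for good inputs; it just trades one hard-to-estimate quantity, *P(E)*, for two others, *P(E|H)* and *P(E|~H)*.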

If these terms are estimates, then we’re just using other estimates that we haven’t justified. We’re still having to calculate *P(E|H)*, and now *P(E|~H)* too. I cannot conceive of a way to do this that isn’t just unredeemable guesswork. And it is telling that nobody I’ve seen advocate Bayes’s Theorem in history has actually worked through such a process with anything but estimates.

This is bad news, and it might seem that Bayes’s Theorem could never be useful for anything. But there are cases when we do have the right data.

Let’s imagine that we’re trying a suspect for murder. The suspect has a DNA match at the scene (the Evidence). Our hypothesis is that the DNA came from the suspect. What is *P(H|E)* – the probability that the DNA is the suspect’s, given that it is a match? This is a historical question, right? We’re asked to find what happened in history, given the evidence before us. We can use Bayes here, because we can get all the different terms.

*P(E|H)* is simple – what is the probability our test would give a match, given the DNA was the suspect’s? This is the accuracy of the test, and is probably known. *P(E)* is the probability that we’d get a match regardless. We can use a figure for the probability that two random people would have matching DNA. *P(H)* is the probability that our suspect is the murderer, in the absence of evidence. This is the probability that any random person is the murderer (if we had no evidence, we’d have no reason to suspect any particular person). So the three terms we need can be convincingly provided, measured, and their errors calculated. And, crucially, these three terms are much easier to calculate, with lower errors, than if we used the *P(H∩E)* form. What could we measure to find the probability that the suspect is the murderer and their DNA matched? Probably nothing – Bayes’s Theorem really is the best tool to find the conditional probability we’re interested in.
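The DNA example can be put into code. Every number here is invented and round (the test accuracy, the random-match rate, and the population size), chosen only to show how the terms slot into the formula.

```python
# A hedged sketch of the DNA-match example with hypothetical figures.
p_E_given_H = 0.999    # P(match | DNA is the suspect's): test accuracy
p_random_match = 1e-6  # chance a random, unrelated person matches
population = 1_000_000
p_H = 1 / population   # prior: with no evidence, everyone equally suspect

# P(E): a match could come from the suspect, or by chance from anyone else.
p_E = p_E_given_H * p_H + p_random_match * (1 - p_H)

p_H_given_E = p_E_given_H * p_H / p_E
```

With these invented numbers the posterior comes out at roughly 50%: even a very accurate test, combined with a one-in-a-million prior, leaves real doubt. That is exactly why each input has to be measured rather than guessed.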

While we’re thinking about this example, I want to return briefly to what I said about counter-factual reasoning. Remember I said that Bayes’s Theorem needs us to work with a universe of possibilities where things we know are true, might not be true? The trial example shows this. We are calculating the probability that the suspect’s DNA would match the sample at the crime scene – but this is counter-factual, because we *know* it did (otherwise we’d not be doing the calculation). We’re calculating the probability that the DNA would match, assuming the suspect were the murderer, but again, this is counter-factual, because the DNA did match, and we’re trying to figure out whether they are the murderer. This example shows that the universe of possibilities we must consider has to be bigger than the things we know are true. We have to work with counter-factuals, to get the right values.

So Bayes’s Theorem is useful when we have the right inputs. Is it useful in history? I don’t think so. What is the *P(E)* if the *E* we’re interested in is the New Testament? Or Josephus? I simply don’t see how you can give a number that is rooted in anything but a random guess. I’ve not seen it argued with any kind of rational basis.

So ultimately we end up with this situation. Bayes’s Theorem is used in these kind of historical debates to feed in random guesses and pretend the output is meaningful. I hope if you’ve been patient enough to follow along, you’ll see that Bayes’s Theorem has a very specific meaning, and that when seen in the cold light of day for what it is actually doing, the idea that it can be numerically applied to general questions in history is obviously ludicrous.

—

But, you might say, in Carrier’s book he pretty much admits that numerical values are unreliable, and suggests that we can make broad estimates, erring on the side of caution, and do what he calls an *a fortiori* argument – if a result comes from putting in unrealistically conservative estimates, then that result can only get stronger if we make the estimates more accurate. This isn’t true, unfortunately, but to see why, we’ll have to delve into the way these formulae magnify errors in the estimates. We can calculate the accuracy of the output, given the accuracy of each input, and it isn’t very helpful for *a fortiori* reasoning. That is a topic for another part.

As is the little teaser from earlier, where I mentioned that, in subjective historical work, sets that seem not to overlap can be imagined to overlap in some situations. This is another problem for historical use of probability theory, but to do it justice we’ll need to talk about philosophical vagueness and how we deal with that in mathematics.

Whether I get to those other posts or not, the summary is that both problems significantly reduce the accuracy of the conclusions you can reach with these formulae, if your inputs are uncertain. It doesn’t take much uncertainty on the input before you lose any plausibility for your output.

—

^{1} Of course, we can hypothesize some historical question for which it might not be irrelevant. Perhaps we’re interested in whether he was sick that day, or whether he was suffering a degenerative condition that left his hands compromised. Still, the point stands: even those claims encompass a set of histories; they don’t refer to a single point.

^{2} Our definition of probability involved *L(S)* values, what happened to them? Why are we now dividing probabilities? Remember that a Likelihood, *L(S)*, could be any number that represented how likely something was. So something twice as likely had double the *L(S)* value. I used examples like number of scans or number of sides of a die, but probability values also meet those criteria, so they can also be used as *L(S)* values. The opposite isn’t true, not every Likelihood value is a probability (e.g. we could have 2,000 scans, which would be a valid *L(S)* value, but 2,000 is not a valid probability).

^{3} Though Bayes’s Theorem is often quoted as being a way to reverse the condition *P(H|E)* from *P(E|H)*, it does still rely on *P(E)* and *P(H)*. You can do further algebraic manipulations to find these quantities, one of which we’ll see later to calculate *P(E)*. Here the nomenclature is a bit complex. Though Bayes’s Theorem is a simple algebraic manipulation of conditional probability, further manipulation doesn’t necessarily mean a formula is no longer a statement of Bayes’s Theorem. The presence of *P(E|H)* in the numerator is normally good enough for folks to call it Bayes’s Theorem, even if the *P(E)* and *P(H)* terms are replaced by more complex calculations.

^{4} ~~You’ll notice, however, that P(E|H)P(H) is on both the top and the bottom of the fraction now. So it may seem that we’re using the same estimate twice, cutting down the number of things to find. This is only partially helpful, though. If I write a follow up post on errors and accuracy, I’ll show why I think that errors on top and bottom can pull in different directions.~~ I did write that other post, but I didn’t talk about this, as my views on this are rather half-baked, and not easily demonstrable. For the purpose of this discussion this claim is not very relevant, however, so I’ll withdraw it.

Reblogged this on The Heretical Philosopher and commented:

Here’s a good explanation of Bayes theorem, and its limitations. It isn’t a magic explanation; it is a limited tool to be used in association with an appropriate probability model.

Thanks Neil. I did notice that, despite my previous post about textbook writing, this explanation is structured very linearly.

I appreciate you taking the time to do this Ian. As I have said, I am a BT novice and your post presented a good bit of info in an easy-to-follow format. If you do more posts regarding probabilities, the impact of errors, philosophical vagueness, etc., I will be more than happy to read those as well!

FYI-Links to your review have been posted on Carrier’s FtB page. Carrier basically stated he would only engage with you if you commented over on his blog. Oh well…

Here’s a typo: “That’s a good question, and one that turns out to b very hard to give an answer for that works in all cases” (the typo is b rather than be). Searching for the sentence should locate it for you.

Also the very next sentence, actually: “But for our purpose, just think of them as a place-holder for a any of a whole range of different things we could use to calculate a probability” “for a any of” should read “for any of”.

…just so I’m not leaving only proofreading edits, let me say this was a fabulous post, really helpful. (Did just cut down on the probability that I’ll buy Carrier’s book, though, so perhaps not helpful from *his* point of view.) I for one hope you do get to the follow-up posts on evidence & accuracy (even if I do sort of see, I think, where you’re going with it).

Thanks again.

Thanks a lot Stephen! I appreciate the copyediting help – it is a very weak point for me.

Pingback: Problematizing Richard Carrier’s Treatment of the Historical Jesus

I think you will find that the purpose of Carrier’s book “Proving History: Bayes theorem and the search for the historical Jesus” is not, as you say, to use Bayes’s Theorem to prove Jesus didn’t exist. Rather Carrier seems to be trying to show that Bayes theorem is the best way to argue about historical claims, and to show that some generally accepted criteria for historicity, widely used in historical Jesus studies, can be shown to be invalid using Bayes theorem. Carrier has promised a forthcoming book where he will try to assess the probability that Jesus didn’t exist, which is also different from proving Jesus didn’t exist.

However, your post is relevant because you claim that Bayes theorem cannot be used for historical reasoning at all. It would have helped if you had taken one or many or all of Carrier’s applications of Bayes theorem to history, or the presently used historicity criteria, and showed in particular why Carrier’s analysis is invalid. Your promise to show later that errors make the Bayes method useless for history seems too ambitious. A wide range of error does not necessarily make a result useless, does it? If the best we could do was determine that the chance Jesus did not exist was between 15% and 50%, this would be quite an advance in knowledge, since the present scholarly consensus is that the chance Jesus didn’t exist is something like a million to 1. Even if the Bayesian result was between 0% and 10% that Jesus didn’t exist, Jesus studies experts (who presently use only possibly invalid methods) would have to think harder about their claims in the future. But I look forward to your error analysis post.

“Your promise to show later that errors make the Bayes method useless for history seems too ambitious.”

The point is, why use Bayes’s Theorem at all in domains that don’t have the right data? Do you think, in general, we should pick a random theorem and put guesses in, and claim that that is the basis of all history? Or should such extraordinary claims require some evidence of validity?

How about I write a book about how all history is fundamentally modelled by a hidden Markov system (they’ve been proved, don’t you know!), and then go on to demonstrate why, if we assume the Markov system is the one-true-way to do history, how all mythicist arguments are rubbish. Would you believe that? Of course not. Not only because it disagreed with your biases, but also because you’d rightly ask: please at least demonstrate that a hidden Markov model is the best model of history.

His *premises* are invalid, for specific reasons in both my posts: he misrepresents what Bayes’s Theorem is, misunderstands priors (i.e. the source data needed) and fails to explain what the theorem is doing.

If he’d like to show why his premises are valid, he’s welcome to do the ground work. It would be a tour-de-force paper or monograph if he were to do it (not to mention unifying frequentist and Bayesian interpretations to boot). And would have massive implications far from the tiny niche study of the Historical Jesus. Not to mention having real commercial impact in data analysis. That he hasn’t is significant, to me.

Many of his *conclusions*, however, I think are correct, and are significantly represented in the academy. I haven’t arrived at those conclusions from Bayes’s Theorem, because you can’t, and I don’t believe he has either. I agree that Authenticity Criteria are suspect (a position which is a growing part of the scholarly landscape), and that the Questing approach to historical Jesus studies is dubious at best (that, I’d say, is probably a majority position among scholars now, certainly a huge minority if not). But just because your conclusions are correct, doesn’t mean your argument isn’t worthless.

Bayes’s Theorem isn’t some piece of magic. I’ve used it, professionally, in my research as a scientist. It doesn’t work in the way Carrier says it does, simple as that. I’d encourage anyone interested to learn real probability theory and see. In the same way as I encourage creationists who’ve learned their biology from Ken Ham to actually pick up an undergraduate textbook on evolution.

“since the present scholarly consensus is that the chance Jesus didn’t exist is something like a million to 1”

This is only the scholarly consensus in the imaginary world of mythicist pseudoscholarship. I’ve posted several times on this blog about the straw-man version of the scholarly consensus that mythicists seem to be arguing against. It might be worth actually going to the HJ program at the SBL and seeing what scholars actually say. Or reading a contemporary scholarly account, such as Allison’s Constructing Jesus, or LeDonne’s new volume on Authenticity and Criteria.

1) I was assuming Ehrman in his recent book represented the scholarly consensus, and it seemed to me that a million to 1 would represent Ehrman’s view.

2) I still think a specific example of why Carrier is wrong would help more than general statements about what can’t be done with the theorem. Or an example of a misuse of the maths to get a clearly wrong result; that would help.

3) will you do the error analysis you promised? Thanks.

Do you think, in general, we should pick a random theorem and put guesses in, and claim that that is the basis of all history?

No I don’t think that in general a randomly selected theory would be the basis of all history. However, I think it is possible that the question “What is the chance that we would have all the evidence we do concerning Christianity, if Jesus were not an historical person” might be answered by a mathematical theory. As I say, I think I’d like to see some more detailed demonstration, some specific example perhaps, about why Bayes theorem fails when applied to history.

Can you point to the specific blogs about mythicist pseudo-Scholarship? I assume you are right about that since I notice even Carrier himself says that most mythicist arguments are rubbish. But that seems to be a different question than whether Bayes theorem is of any use here.

“What is the chance that we would have all the evidence we do concerning Christianity, if Jesus were not an historical person”

But that isn’t the question, is it? The question is: what is the chance of Jesus not being a historical person, given we have the evidence we have. We don’t care, actually, about the chance of getting the evidence from a non-existent person. That is just a step on the way to Carrier’s version of Bayes’s Theorem.

Your question could be stated: what is the probability of a non-existent person leaving evidence? Or, put another way, of all the non-existent people, what proportion of them have left evidence? Can you answer that, even in principle? You certainly do need to, to calculate the denominator of Carrier’s version of Bayes’s Theorem (and even then, the other part of the same term is to figure out P(~H) – the probability that any given person doesn’t exist). But how? You can pick a random number – but how do you know you didn’t fail to include a large number of other non-existent people?

“1) I was assuming Ehrman in his recent book represented the scholarly consensus, and it seemed to me that a million to 1 would represent Ehrman’s view.”

I don’t pretend to speak for Ehrman, but I would be absolutely amazed if he’d put the chances within orders of magnitude of that region. Given what I know of his work generally, I am almost certain he’d not estimate *any* historical probability anywhere near that point. I suspect you’re reading a fictional caricature of where scholars are. Maybe you should take some time to actually get familiar with the way biblical / early church scholarship works, outside of mythicism.

2) I’ve been thinking about a post, working through Bayes’s Theorem several times to show totally different conclusions to the same question. I can get the probability of Jesus existing to be both 0 and 1 and anything in between fairly easily with minor changes on how we look at the universe of possibilities. Posts take time.

3) Possibly, but again it takes time. Depends if there’s any need. Nobody so far has pushed back on the main point: lack of applicability, with an error-based argument, so I don’t feel the need to go there. If you’re really interested in using Bayes’s Theorem with probability density functions, then there are plenty of undergrad probability textbooks that cover that material.

Re Ehrman: probably a million to 1 is an exaggeration of his view; maybe it is only a thousand to 1. I was not basing this on any caricature of anyone’s views but on what Ehrman said in his book on the topic, in particular his mention of holocaust denial as similar to historical Christ denial.

It would certainly help me if you gave the argument for zero probability and probability 1. I could then see if your assumptions in each case are reasonable or not (reasonable meaning based on reasons and argument and things we know). If you can show that both results are equally reasonable, then you will have proved your point that Bayes theorem is no good here.

My assumptions won’t be reasonable. Except the one that happened to confirm your predetermined conclusion, perhaps. But even then you shouldn’t find it so. I find *none* of them reasonable. That’s the whole point. They are all just made up numbers. I’m confused how I ended up being the one having to make up the made up numbers to somehow prove that numbers that Carrier doesn’t even bother to make up are not reasonable!

And what if I have no predetermined opinion? I didn’t ask you to make up numbers and unreasonable estimates. I asked you to show by example why Carrier’s method can’t work. I am still puzzled by some of the things you have said, so I assume I might be misunderstanding you, and am looking for more information. For example, I don’t understand why you say my question could be stated as: what is the probability of a nonexistent person leaving evidence. I assume you mean your restated question is silly (?), but I don’t see why you think it is the same as the question I asked.

My question could, in theory, be answered by: there is no chance that we would have the evidence we do have if Jesus didn’t exist. In which case Jesus’s historical existence would be proved. I can’t see how your version of my question could be answered like that.

Okay, thanks for the clarification.

So fundamentally, Carrier’s approach is unreasonable because he does not base his calculations on any reasonable data. If he or anyone else could work through a calculation based on reliable data, that would be great.

You said “What is the chance that we would have all the evidence we do concerning Christianity, if Jesus were not an historical person”, which is P(E|~H) in Carrier’s denominator. So how do you calculate this? What does it mean? A probability is the ratio of the likelihood that a particular thing is true, relative to the whole. What is the thing in this case? That Jesus did not exist and yet left evidence (E and ~H). What is the whole? That Jesus did not exist. How do you put numbers on those without guessing?

You could, for example, use data from comparable characters: you estimate P(E|~H) by asking, of comparable characters who were mythical, what proportion left this evidence? So how do you figure out how many mythical comparable characters didn’t leave evidence?

So instead you probably have to look at comparable characters who did leave some subset of the evidence, and ask what proportion left some other kind of evidence. So what goes into your ‘some’ evidence here, and why would another choice be less reasonable?

My opposition to the approach is simply saying: try to calculate these values without just making up random numbers or unreasonable assumptions. When we have fuzzy, improperly defined sets like this, there is just no way to do it with any credibility.

I am wondering if your objection is not to the use of Bayes’s Theorem specifically but to any “use” of probability in the study of history? Everybody accepts that Alexander the Great existed, because they say the evidence is overwhelming. This could be expressed as P(E|~H) = zero (where H = Alexander existed as an historical person). In the case where P(E|~H) = 0, we do not need any prior probabilities, and we don’t need P(E) or P(H), to calculate that P(H|E) = 1. So do you object to a statement such as: it is at least 99.99% sure that Alexander the Great existed because we can’t explain all the evidence if he didn’t exist?

To correct Grizel above, Carrier stated he would look at your original post on Sept. 12 and 17, both times saying in a week or so. I’d give him until the end of the month before declaring he’s avoiding your argument.

It is entirely possible to talk about probabilities informally in that sense. But probabilities have specific meanings in terms of statistics, and it is in that context that theorems in probability are proved. You can use probabilities in other ways, but to then make bold claims about what the theorems show is simply fallacious.

You can’t, for example, say that the probability of the existence of Alexander the great is 99.9% on the basis of the definition of conditional probability, unless one can reliably calculate the terms in that definition. Or, more specifically, unless one can calculate the terms more accurately than the result.

When we say Alexander existed, we are estimating P(H|E), we are not doing so on the basis of an independently verifiable estimate P(E|~H). I don’t believe you started from P(E|~H), and then thought – oh look that means P(H|E) = 1. You started thinking “Alexander probably existed, what does that tell me about what the terms in Bayes’s Theorem should be”. In short, you arrived at estimates for all the terms simultaneously based on your historical intuitions, and then later jiggled them around.

If you look at the P(E|~H) = 0 assumption independently, you’ll get into tendentious territory right away. On what numerical basis would you conclude P(E|~H) = 0? Nobody else who’s left that kind of evidence hasn’t existed? How do you know? Who else is directly comparable? How do you know that isn’t just a fluke? How do you know that, if you just had a couple more samples, you wouldn’t have non-existent people showing that evidence? The idea that a historian is even concerned about such questions, from a numerical point of view, seems dubious to me.

Not to mention that P(E|~H) = 0 would be a mathematical conclusion that would normally require a proof, not something you could establish by statistical argument. In real math P(E|~H) might be very small, but even if it is vanishingly small, you would still need the priors. One common mistake among folks learning this stuff is assuming that small values vanish. They don’t. Often all the values in these calculations are tiny. Carrier’s claim that we can ignore a bunch of other possibilities as long as they have low probabilities is a good way to fail a course on this.

So we can talk about probabilities, as a kind of idiom for our confidence, but to pretend we are doing, or could do, math with the results is wrong. Carrier, Craig, and others who use Bayes’s Theorem use it merely to give structure to their foregone conclusions. There is no evidence in any of their work that they’ve seriously attempted to do anything but guess at the terms involved. And therefore, not surprisingly, their conclusions match their long-standing claims.

H = suspect A committed the murder.

E = medical evidence pins down the time of death to a certain 12 hours; the evidence shows a struggle with the murderer and a fatal knife stab wound. Suspect A was in custody in the county jail during the relevant 12 hours, i.e. suspect A has an ironclad alibi.


Hence P(E|H) = 0. The chance we would have this evidence if suspect A committed the murder is zero. Also P(E|~H) > 0. Hence P(H|E) = 0 on any reasonable view of the priors. I can’t see anything wrong with that.
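The alibi claim is easy to check numerically. A minimal sketch (the non-zero numbers below are invented; only the zero in P(E|H) matters):

```python
# If P(E|H) = 0, Bayes's Theorem gives P(H|E) = 0 for any prior P(H).
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

# Ironclad alibi: zero chance of this evidence if suspect A is guilty.
for prior in (0.01, 0.5, 0.99):
    print(posterior(prior, 0.0, 0.3))  # 0.0, whatever the prior
```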

But of course, space aliens may have done it, arranged the evidence, and the MIB used their memory sticks, etc., but we do ignore that sort of stuff.

Similarly I am convinced Alexander the Great existed because the chance of having coins showing his head as king or ruler, plus the other archaeological and written evidence, is zero if he didn’t exist.

As far as I know, every time I have seen a coin with a person’s head on it, inscribed with Elizabeth II or Mao or the King of Thailand or the president of France, etc., the people depicted have existed. So I do have some statistical evidence about coins carrying a ruler’s image. If I were an historian I assume I would know a lot more about whether people in ancient times put imaginary rulers on their coins, so I don’t think historians have nothing to go on when saying this evidence would not exist if Alexander was fictional.

Like, if it was an image of Buddha or an elephant-headed, seven-armed woman, I have other information and would not assume it was automatically a real person on the coin.

Thanks again Michael, for carrying on the discussion. All good stuff. Four observations about your post (I want to write more, but I’m too verbose as it is).

1.

I suspect I may have read too many detective novels, because I came up with a dozen situations for how your accused person might be guilty, given the evidence. None of them involved space aliens. The probability can definitely not be assumed to be zero…

2.

Let’s imagine two years after the case is abandoned, a cold case detective discovers that the murder victim is on an unrelated list of people who owe money to a notorious local thug, a person the accused has known ties to. A bit more digging shows that the officer on duty that week in county lock-up has since been fired and charged with corruption, after evidence arose that he was smuggling drugs into the prison for that same thug. A forensics review shows that the original accused person is a direct DNA match to skin particles found under the victim’s nails, and that the mugshot of the accused, taken when they were booked in, shows scratch marks on the neck. The remaining evidence in the case is as you describe it. Do you think you have a reasonable case to put to the DA? I do.

You said P(E|H) = 0, where E is the original evidence in the case. Now we have more evidence F; presumably you’d say P(E∩F|H) > 0 – i.e. there is now some chance of getting all this evidence if the accused is guilty. Certainly P(E∩F|H) cannot be zero, right?

But mathematically this cannot be the case, because P(A∩B) ≤ P(A). Adding new information can *never increase* the probability of something. P(E|H) has to be at least as large as P(E∩F|H), for any F, no matter what evidence F could possibly come to light at any point in the future. There are classic experiments showing this is a widespread cognitive bias, one that Carrier, to his credit, specifically warns against in the book. Most folks will estimate the probability that their friend slapped her husband as lower than the probability that their friend slapped her husband after finding he’d had an affair.
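The inequality P(A∩B) ≤ P(A) can be checked mechanically. A crude sketch, treating probability as the proportion of a finite ‘universe’ of equally likely outcomes (the sizes and the seed are arbitrary):

```python
import random

random.seed(0)

# Treat probability as the proportion of a finite universe of 1000
# equally likely outcomes, and check P(E ∩ F) <= P(E) on random events.
universe = list(range(1000))
for _ in range(1000):
    e = set(random.sample(universe, random.randint(1, 999)))
    f = set(random.sample(universe, random.randint(1, 999)))
    assert len(e & f) / 1000 <= len(e) / 1000  # never violated

print("P(E∩F) <= P(E) held in all 1000 trials")
```

The same holds for the conditional versions, since conditioning on ~H (or H) just restricts the universe before the proportions are taken.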

3. This is one reason why, I think, your P(E|~H) is a red herring. You’re correctly intuiting that P(H|E∩F) could be higher or lower than P(H|E), depending on what new evidence comes to light. Your intuitions about the impact of evidence on the situation are intuitions about P(H|E), and P(E|~H) is just working backwards from there.

4. The other reason P(E|~H) is a red herring is that, if we can assume P(E|~H) = 0, we are no longer making a probability argument at all, much less an argument from Bayes’s Theorem. Think back to the Venn diagram: the only way P(E|~H) = 0 is if P(E∩~H) = 0, and the only way that can happen is if E∩~H = ∅ (the empty set), which can happen if and only if E∩H = E. No probability or statistics needed: the assumption that the evidence cannot be seen unless the hypothesis is true is merely another way of saying that the evidence implies the hypothesis. It is a gimmick, then, to dress this up in Bayesian probability. Much better to say that one’s conclusion is a tautological restatement of one’s assumption. The most important reason to be clear here is that we don’t want to give the impression that, if P(E|~H) were almost but not quite 0, our conclusion would still hold. It might, but we would then need to engage our priors, and things could well turn out differently depending on those values.

I agree it is hardly probability, and a special case, if in fact P(E|H) = 0. I also thought you could come up with scenarios in which you could get around the alibi, but I tried to suggest they were all too fantastic to be considered (a shorthand way of saying I was too lazy to specify things like many cops could say he was in lock-up, not just one).

And I am puzzled about whether probability can be taken as a measure of degree of knowledge, rather than strict frequency, but isn’t that a very old topic that has been sorted out? Anyway I am 1/3 of the way through Carrier’s book and will read it critically in light of your comments.

Yes, I got that you were trying to suggest it was clear, but I think the key takeaway is just the question: is there any possible additional evidence that could come to light (no matter how unlikely) that could change your mind? If there is, then P(E|~H) is not zero.

Probability is rather abstract, and can be made to represent a bunch of different things. Frequencies are one, confidence is another. Probability can also be used to model vagueness, what is sometimes called ‘fuzzy logic’ (though other math is more often used). It is perhaps easier to see why frequencies need to be accurate. And I think ‘confidence’ can feel like an intuitive, non-numeric kind of thing, so it is appealing to try to map such intuitions onto numbers and think probability theory is meaningful then. But ‘confidence’ in Bayesian probabilities means something quite specific statistically, and it is dubious that the conclusions one can derive from the math are valid if you use the intuitive rather than the statistical definition of ‘confidence’.

But none of this is to say that understanding Bayesian probability (or probability and decision theory more generally) isn’t useful in informing one’s thinking about all sorts of matters, historical and otherwise. I wholeheartedly agree with Carrier on that. I just think we need to be careful when someone claims that the involvement of Bayes’s Theorem (for example) lends their conclusions any additional credibility. Such claims are, as far as I can see, ways to pretend that there is a level of mathematical rigour in one’s method that simply isn’t there.

P(E∩~H) = 0 does not imply E∩~H = ∅. It only implies that E∩~H has measure zero.

True, thanks for the spot. It doesn’t change the argument, that I can see, to have U be a set with no non-empty subsets of zero measure. Given we’re still talking about the application of this to history, can you think of a practical implication of making such an assumption?

Doesn’t fix the fact that I was wrong to assume it without comment, obv.

“if a result comes from putting in unrealistically conservative estimates, then that result can only get stronger if we make the estimates more accurate.”

This is true, so I have no idea why you claim it isn’t:

“This isn’t true, unfortunately, but for that, we’ll have to delve into the way these formulas impact errors in the estimates.”

The reason why it is true is that Bayes’s formula is monotonically increasing in P(H) and P(E|H) and monotonically decreasing in P(E|~H) (using the version of the formula that Carrier does, because the 3 inputs are independent in it). So if one plugs into the formula a maximum possible value for P(H) and P(E|H) and a minimum possible value for P(E|~H), then one will get a maximum possible value for P(H|E), and similarly for a minimum possible value of P(H|E). Moreover, if one tightens the possible range for the 3 inputs, then the possible range for P(H|E) will be smaller as well. This justifies the a fortiori argument.

Now, you may want to argue that P(H|E), for some values of the inputs at least, is very sensitive to changes in the errors in some of the inputs (P(H) in particular), but this isn’t usually the case, and in any event the a fortiori argument is still valid.
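A quick numerical check of those monotonicity claims (the input values are made up; only the directions of change matter):

```python
def posterior(p_h, p_e_h, p_e_not_h):
    # P(H|E) in the three-input form: P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    num = p_e_h * p_h
    return num / (num + p_e_not_h * (1 - p_h))

base = posterior(0.3, 0.5, 0.2)
assert posterior(0.4, 0.5, 0.2) > base  # increasing in P(H)
assert posterior(0.3, 0.6, 0.2) > base  # increasing in P(E|H)
assert posterior(0.3, 0.5, 0.3) < base  # decreasing in P(E|~H)
```

So plugging the extremes of each input’s range into the formula does give the extremes of P(H|E), which is the a fortiori argument.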

Finally, you made a related statement:

“You’ll notice, however, that P(E|H)P(H) is on both the top and the bottom of the fraction now. So it may seem that we’re using the same estimate twice, cutting down the number of things to find.”

It doesn’t cut down the number of things, since we’ve replaced the unknown P(E) with P(E|~H). You continue:

“This is only partially helpful, though. If I write a follow up post on errors and accuracy, I’ll show that errors on top and bottom pull in different directions, and so while you have fewer numbers to estimate, any errors in those estimates are compounded.”

I guess by “different directions” you mean the increases in P(E|H)P(H) increase the denominator and numerator, so the fraction is increased by one and decreased by the other. But these effects don’t compound, they offset!

This is also true for P(E|H) by itself, but changes in P(H) can increase or decrease the denominator, depending on the relative sizes of P(E|H) and P(E|~H), so under certain circumstances errors in P(H) coming from the top and bottom of the fraction can compound, but this was true even for the original version of Bayes’s formula, since P(E) is not independent of P(H).

“P(E) is not independent of P(H)”

I meant “P(E) is not independent of P(H)P(E|H).”

I think I know what your objection to an a fortiori argument is: If the lower bounds for the inputs to the Bayes formula are 0, in particular, for P(E|~H) and either P(H) or P(E|H), then the range for P(H|E) would be 0 to 1, i.e., BT doesn’t tell you anything. This would be true even if the upper bounds were very tight, and wouldn’t matter if you made them tighter. For example, if one estimates P(H)<5% and P(E|~H)<2%, your range for P(H|E) is 0 to 1. Changing those bounds to P(H)<2% and P(E|~H)<1% doesn't do anything to tighten the range for P(H|E).

So this method will only work if you can bound at least P(E|~H) or both P(H) and P(E|H) away from zero.
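That collapse is easy to demonstrate with the numbers from the example above (ε stands in for a lower bound of zero, approached from above):

```python
def posterior(p_h, p_e_h, p_e_not_h):
    num = p_e_h * p_h
    return num / (num + p_e_not_h * (1 - p_h))

eps = 1e-9  # standing in for "greater than 0, but unboundedly small"

# Upper bounds from the example: P(H) < 5%, P(E|~H) < 2%, P(E|H) <= 1.
lo = posterior(eps, eps, 0.02)  # push everything toward the lower bounds
hi = posterior(0.05, 1.0, eps)  # push P(E|~H) toward its lower bound
print(lo, hi)  # essentially 0 and essentially 1: the bounds tell us nothing
```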

Carrier hasn’t responded to my knowledge, but I’m still hoping he does, whether he admits error or argues back. As the comments of Malcolm reinforce, I’m not able to independently judge whether finer points of probability are cogent or not, so I rely on his and your integrity to debate honestly and admit correction. Hope you write another post about this and/or reply to Malcolm’s latest comments here.

Malcolm, so sorry it’s taken me a week to respond. It’s been a crazy week!

So errors. Your last response is coming at the same point I’d want to make but from a slightly different angle.

It helps, I think, to keep in mind the source of the probabilities and their scale, so as to be able to show what kinds of error one can expect and how they will move through the formula. So we’re talking reference classes: the grounds on which these things are estimated. I’d contend that the probabilities are highly contingent on small changes in reference class: the reference classes involved are either so highly constrained that their innate sampling error dwarfs any ‘conservatism’ in the number Carrier picks, or else are broad enough that the actual probabilities are so small (or large) that the problem is at least ill-conditioned, even in cases where zero isn’t in bounds. And where zero is in bounds then clearly, as you say, we get no information out.

As for the ‘pulling in different directions’, that came from a rather more half-baked thought, which was reified into my post in a way that is, as you say, not true. I spent a bit of time trying to figure out the actual process one uses to come up with these numbers. The psychology of it, if you like. I have a half-baked notion that what is being done is a kind of iterative process that effectively adjusts P(E|~H) to suitably offset the numerator, because the numerator and the same term in the denominator are independent with respect to the way the probabilities are estimated. Half-baked, but there’s something there.

In both cases, the wording in the PS to this post is unhelpful, so I’m going to edit that section to better reflect that.

Mark, thanks for coming back.

“I’m not able to independently judge whether finer points of probability are cogent or not”

I think that’s a bit of a sideshow, to be honest. The question for me is simply: where can you get the data from in a way that is independent of the conclusion one wants to reach? Can you honestly estimate the inputs in a way that doesn’t use the same biases that you have about the output? Or is what is really happening that you’re estimating the inputs based on a prior estimate of the output? Because, after all, the output is the thing we’re most used to thinking about, and which we have the best gestalt for.

I agree with many of his conclusions. But I agreed with them before, not after, the math. I cannot see how an honest enquirer (whether there are any of them is another debate) could come to opinion-changing conclusions using Carrier’s methods. And if that is the case, then the method is merely a fig-leaf for one’s biases.

Carrier sometimes makes the point (from memory at least once quite strongly) we need to examine our assumptions more. I agree, and I think that the chapter on the application of Bayes’s Theorem to the criteria can help build a kind of probabilistic intuition that can definitely help. But I despair of the impression he cultivates that it is a strict input-output process.

Ian, You say that “either way, we still need to calculate the probability of the evidence, P(E)” and “how likely are we to have ended up with the evidence we have?”. Does it really matter when we can use the odds form

P(H|E)/P(~H|E) = P(H)/P(~H) × P(E|H)/P(E|~H)

to establish if H is more probable than ~H?

Consider the well-known drug test example: the test has a 5% chance of giving a false positive and let’s say no chance of giving a false negative. Let hypothesis H = you are a heroin user, and let E = you were tested and returned a positive result for heroin, i.e. P(E|H) = 1 and P(E|~H) = 0.05.

Now I don’t know exactly the ratio P(H)/P(~H), i.e. I don’t know how many users and non-users are tested, but can’t I say, for the sake of argument, that it is 1 to 1000? Then the odds that you are a heroin user, given that you tested positive, are 1/1000 × 1/0.05 = 1/50. We may disagree about this, or maybe the test is not randomly applied, but at least we know what it is we must argue about, as Carrier says. I can’t see that the method has no use for history just because there is error in the terms. As long as the arguments and assumptions are clearly laid out, we are better off, it seems to me.
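For what it’s worth, the arithmetic in the odds form checks out (the 1-in-1000 prior odds are the assumption above, not data):

```python
from fractions import Fraction

# Odds form of Bayes's Theorem: posterior odds = prior odds × likelihood ratio.
prior_odds = Fraction(1, 1000)       # assumed: 1 user per 1000 people tested
likelihood_ratio = Fraction(100, 5)  # P(E|H)/P(E|~H) = 1 / 0.05 = 20
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 1/50
```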

[Do you want me to respond to the odds form of Bayes’s Theorem – do you really think we’re not actually estimating P(E) when we’re estimating those terms, just because it is arranged in that way?]

In the drug test example, of course it is the case, because the terms you supply are those we can more confidently ascertain. I see no evidence that Carrier, Craig or anyone else is actually proceeding from probabilities that are known to conclusions that aren’t.

Isn’t it obvious that such reasoning is post-hoc rationalisation of their prior positions? We don’t run the drug test by saying “well, I kind of think he’s probably guilty, so… I think the probability the test gives false negatives is about this, and the probability of false positives is probably somewhere around that, and the prior probability of him being a drug user is the other, so, yup, it turns out he’s probably guilty”.

Now there’s nothing wrong with that; that’s the way historians work. If you want to look at historical reasoning probabilistically, it is an iterative process of estimating a very large number of probabilities, drawn (inevitably) from mismatched and poorly specified reference classes.

So understanding how probabilities change under probabilistic reasoning (not just Bayes’s theorem, but in general) can definitely inform a historian’s (or anyone’s) thinking. Knowing things like Bayes’s Theorem, or even more foundationally, things like P(A∩B) ≤ P(A), is very useful I think. If you can build up the intuition to get Monty Hall or Tuesday’s Child, then I can’t see how it can’t be useful in any kind of reasoning under uncertainty.
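Monty Hall, for instance, is a nice case where simulation can stand in for intuition. A quick sketch (the door numbering, seed, and trial count are arbitrary):

```python
import random

random.seed(42)

def monty_hall(switch, trials=100_000):
    """Fraction of wins over many simulated Monty Hall games."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```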

But to suggest that one actually performs these calculations to reach one’s conclusions is nonsense, as far as I can see. Certainly Carrier doesn’t do it in the book. And to suggest (by implication) that scholars who don’t or can’t express their complex reasoning as three probability numbers to run through Bayes’s Theorem are doing something lesser is just insulting.

Ian, The ratio P(H)/P(~H) tells us the specific values for P(H) and P(~H), since they sum to 1. The ratio P(E|H)/P(E|~H) does not tell us the specific values for P(E|H) and P(E|~H). Isn’t it possible that we know only that the evidence is 20 times more likely given H than if H is not true? In that case we get our answer without knowing the value of P(E), don’t we?

I don’t want to minimize the difficulty of using probability theory for history, but it seems to me you might be making it sound more difficult than it is, when you say we have to know the probability of the evidence, P(E).

Even worse, you say “What would it mean to find the probability of the New Testament, say?” which indeed sounds like an unanswerable question. Now, maybe I am the only person who could misunderstand what you meant by that, but it sounded the same as asking, in the drug test case, “What is the probability that someone went to all the trouble of applying the drug test (so that we have a positive or negative result)?”, a question which also sounds unanswerable.

Of course the real question is: given that we have the result of the test, how likely is the result given H or ~H? Or: given that various people, at various times, wrote about Jesus Christ, how likely are the contents of what they wrote, given that Jesus of Nazareth did or did not exist, and given all our knowledge of the culture et cetera in which they wrote?

“Isn’t it possible that we know only that the evidence is 20 times more likely given H, than if H is not true?”

It might help if you could give an example of how one could know that without using some method that could also allow us to calculate P(E|H) and P(E|~H). Not a synthetic one, a historical one. Hypotheticals are the root of my problem, after all.

“Even worse, you say “What would it mean to find the probability of the New Testament, say?” which indeed sounds like an unanswerable question. ”

I was trying to draw attention to the fact that you have no useful reference class here. What is U? Can you give me an example that allows us to do more than guess at the P(E|H) terms, and that crucially covers all the features of the evidence? It’s no good defining your reference class as “Jewish messiahs” and your evidence as “claims of divinity”, because then you’re simply atomizing the evidence. The evidence consists of the details of all the claims, not just one claim or the claims in general. One can, of course, use probability theory to combine claims and figure out joint probabilities over them. But you can’t do that with anything nearly as simple as Carrier’s Bayes’s Theorem. You need to figure out joint probability distributions over the different features of the evidence [e.g. P(A,B,C) = P(A|B,C)P(B|C)P(C)], and the rules for updating existing beliefs based on new evidence are much more complex.
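To illustrate what that chain-rule bookkeeping looks like even in a toy case (every number below is invented purely for illustration):

```python
from itertools import product

# Three binary "features of the evidence", with invented conditional tables.
p_c = 0.30                                # P(C)
p_b_given_c = {True: 0.60, False: 0.10}   # P(B|C)
p_a_given_bc = {(True, True): 0.90, (True, False): 0.40,
                (False, True): 0.50, (False, False): 0.05}  # P(A|B,C)

def joint(a, b, c):
    """Chain rule: P(A,B,C) = P(A|B,C) · P(B|C) · P(C)."""
    pc = p_c if c else 1 - p_c
    pb = p_b_given_c[c] if b else 1 - p_b_given_c[c]
    pa = p_a_given_bc[(b, c)] if a else 1 - p_a_given_bc[(b, c)]
    return pa * pb * pc

# Even three features need seven independent numbers (2**3 - 1), and the
# eight joint probabilities must sum to 1; the bookkeeping grows as 2**n.
total = sum(joint(a, b, c) for a, b, c in product([True, False], repeat=3))
print(round(total, 10))  # 1.0
```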

So my question was deliberately ridiculous, but completely serious. And if you want to give a counter, maybe you could give an example of a basis on which to calculate a P(E) (or P(E|H), or P(E|~H)) term that isn’t horribly sensitive to slight changes in the reference class.

In the drug case example, it is very clear what the reference class is: the set of all drug tests. Therefore E is obviously the set of positive drug tests. If one uses the drug test as one piece of evidence among many, then one will obviously need to move beyond it to the kind of questions above, and in some medical domains we can do this. Bayesian networks, for example, are used in automated diagnosis systems, but beyond rather simple toy cases you don’t do the math for these by hand. So I struggle to see how this can be applied in history, even if you did have meaningful values for the inputs.

—

Basically we can go round and round this theory as long as you like, but until someone (Carrier, Craig, you) is willing to actually put up some numbers to scrutiny, you’re arguing over the color of the Emperor’s New Clothes. In theory, the Emperor might have clothes of the most beautiful gold thread, but if you want to convince someone, you sooner or later need to produce them. So far your examples of Alexander and the criminal have foundered on a basic mistake about joint probabilities, and your drug case example has a well-defined reference class. If the point you want to make is that thinking in Bayesian terms is helpful as a way of informing intuition, then I’ve conceded that long ago. So where do we go from here? More discussion of the nature of the hypothetical garments?

I will think about your last reply for a while. Just to clarify: I assume you know the CIA uses BT to assess things like the “likelihood of war”. See https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol16no2/html/v16i2a03p_0001.htm.

Do your objections cover that sort of use of BT?

Ian, You say “until someone (Carrier, Craig, you) is willing to actually put up some numbers to scrutiny, you’re arguing over the color of the Emperor’s New Clothes …”

By way of clarification, can you apply your criticisms to the numbers, and conclusions, given by Carrier on pp. 57-59? He first considers a hypothetical story of a 3-hour worldwide darkness in 1983, where lots of evidence exists, and the Gospel story of a 3-hour worldwide darkness at the crucifixion. In each case h = the darkness happened and ~h = the story is false for whatever reason.

I am particularly interested in how you think Carrier’s biases or foregone opinion shaped his estimated probabilities, and how small changes change the conclusions. Carrier calculates that the 1983 story, given enough evidence, could be at least 99% probable, and the Gospel story no better than 0.01%. The 1983 story is given to show that good evidence can overcome the high probability that such stories of supernatural events are made up or the result of errors or confusion (a criticism often made against David Hume’s “Of Miracles”).

I think the article you linked does a pretty good job of saying why BT is very difficult to use in intelligence gathering. It didn’t say the CIA used the technique; its examples were intuitive, not quantitative; and it raised significant issues, such as non-independence, which it didn’t resolve. There was nothing I read in there to suggest the CIA have solved the issues needed to make the results useful. Issues that, as the paper makes clear, are hardly unique to me.

I don’t have a problem with putting numbers on guesses, incidentally. Saying “you think this is likely – on a scale from zero, impossible, to ten, a certainty, how likely do you think it is?” My problem comes from doing too much with that, and thinking the results that come out are as meaningful (or more meaningful) than what you put in.

Zlotnick, in the paper, is right that independent evidence can often be more accurately appraised in isolation. But he then glosses over the fact that real evidence is not independent, and without the ability to calculate the joint probability over sets of evidence, running it through a Bayesian update is invalid.

“I am particularly interested in how you think Carrier’s biases or foregone opinion shaped his estimated probabilities … the Gospel story no better than 0.01%.”

I can’t figure out if you’re playing me here. You think Carrier concluded that the gospel story was overwhelmingly unlikely because Bayes’s Theorem told him so? I can’t see how anyone can fail to see that, if you’d asked Carrier about the likelihood of the gospel story being right before he did the calculation, he would have said something like “one in ten thousand, at best”: some numeric odds that, fed back into Bayes’s Theorem, would give values for the other terms that you could post-hoc rationalize. Do you really think his judgement on the day of darkness wasn’t a foregone conclusion? There is nothing at all in that passage, that I can see, that suggests Carrier is using evidence independent of his conclusion. Do you really think he is? I confess I find it very hard to see how you could… it seems incredibly naive to me.

It doesn’t matter if Carrier wanted some particular result before applying Bayes’s Theorem, provided his use of it is valid. Anyone can guess or dream a result before they prove it, but that doesn’t make the proof wrong. I was asking where Carrier’s method was wrong. If you said “Well, I disagree with that probability, it could be anything from 0 to 1”, or “that probability must be at least 100 times larger than Carrier assumes”, or “changing this probability by a tiny amount changes the whole result”, then I could understand what you are getting at. But you would have to give cogent reasons for disagreeing, since Carrier gives reasons for his choices, which appear to be good reasons on the face of it.

In summary, for the gospel story, Carrier puts P(h|b) = 0.01 (which he considers very generous, i.e. a large value) based on our general experience that fantastic stories like this are usually wrong. He puts P(e|h,b) = 0.01 where the main feature of the evidence is the lack of any Roman accounts, lack of any non-Gospel derived accounts, lack of any Indian accounts, Egyptian accounts etc of a 3 hour darkness of the sun. He claims to use b, his background knowledge of the period and knowledge of what has survived. He puts P(e|~h,b) = 1.0 on the theory that the evidence, silence outside the synoptic Gospels, is exactly what we expect if the darkness didn’t happen.
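For concreteness, those three numbers can be run through Bayes’s Theorem directly (a quick sketch in Python; the 0.01 / 0.01 / 1.0 values are Carrier’s estimates as summarized above, not mine):

```python
# Bayes's Theorem:
# P(h|e,b) = P(h|b)P(e|h,b) / [P(h|b)P(e|h,b) + P(~h|b)P(e|~h,b)]
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_h * p_e_given_h
    return numerator / (numerator + (1 - p_h) * p_e_given_not_h)

# Carrier's estimates for the gospel darkness, as summarized above:
p = posterior(0.01, 0.01, 1.0)
print(f"{p:.6f}")  # 0.000101 -- i.e. roughly the "no better than 0.01%" figure
```

The arithmetic itself is trivial; the whole question is where the three inputs come from.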

The view you share here appears to entail a skepticism about historical claims in general, at least, without further elucidation on your part. If you believe that we can’t achieve reliable beliefs about history using precise reasoning as probability theory requires, do you believe that when we are less precise both in our process (i.e. deviating from probability theory) and our judgement of evidence, we can achieve reliable beliefs?

To me, probability theory in combination with other actions (actually gathering evidence, counting all evidence available, considering all relevant models, generating expert consensus, etc.) is part of the most optimal process humans have discovered for achieving correspondence between our beliefs and reality. If in the case of historical claims we stick with imprecise heuristics and criteria (argument from analogy, argument to the best explanation, etc.) which rely on statements of probability hidden behind words (“likely”, “very likely”, etc.), should we expect our conclusions to be more reliable? I can’t see why we should expect this; these methods appear to be open to the same abuse you suggest Bayes’s Theorem is (argument to the best explanation suffers from subjective probability estimation problems such as estimating ad-hocness, reference class judgements, model expectation problems, etc.), and as I think Carrier and others have demonstrated, these methods suffer from problems probability theory doesn’t (threshold issues, and in the case of various historicity criteria, invalidity). I think Carrier also has a great point, that putting our reasoning into a precise framework makes it much easier to spot errors in our own and others’ arguments, and I believe it will do a lot to draw out the unstated assumptions and invalid reasoning people rely on.

We should focus on resolving the problems you detail in this blog post, like ways to get better at estimating P(E|H) and P(E|~H), and putting Carrier’s suggested a fortiori method on a more solid mathematical footing if possible (I eagerly await your actual criticisms of his argument). This seems to me a better approach to fixing problems in historical epistemology than looking at the problems with Bayesian historical reasoning and judging the method as having less value than what is currently available.

When we understand what the ideal solution to the problem is, i.e. probability theory, shouldn’t our aim be to follow it as closely as possible, inventing ways to mitigate the method’s abuse and imprecision?

Carrier has responded, with a mighty assist from a commenter, MalcolmS. http://freethoughtblogs.com/carrier/archives/2616/. I think I understand the debate well enough to say that Carrier states that he doesn’t view or explain BT as “a strict input-output process” and I agree with him.

Ian,

It seems that you missed the point of Carrier’s application of BT to the Gospel story about darkness over the earth in his Chapter 3. He starts out by considering, without using BT, the likelihood of such a darkness, in the situation where we have lots of documentary evidence and the situation where we don’t, and argues that in one case the probability of it actually having occurred may be very high and in the other case very low, even though the claimed event is the same in both cases and has very low a priori probability of being true. Then he applies BT to the same questions, plugging in estimates for the unknowns that correspond to his earlier argument, and arrives at the same answer as before. This is not a defect of his method but his whole purpose: to show that BT produces the same result as what you would ordinarily get if you reason properly from a set of initial assumptions (or initial probabilities that you can argue for, which he does). You really need to start at the beginning of the chapter and read all the way through to page 60 or so to see what he’s doing.

I have a few criticisms of this example, though, regarding reference classes and a fortiori reasoning, which I’ll address in my next comment.

Mark: “I think I understand the debate well enough to say that Carrier states that he doesn’t view or explain BT as “a strict input-output process” and I agree with him.”

If he views it that way, I agree with him too. Very much so. And I wouldn’t have had a problem if I’d have got that more clearly from him. Maybe I just missed the obvious.

Malcolm, Michael – judging by both your responses, I suspect I misunderstood which bit of Michael’s comment was most important.

I read this section in Carrier as being intended to show that P(H|E) is not just dependent on P(H), that increasing the evidence can increase the probability. We can make the cases more similar, differing only in the number of independent reports. The probability is bounded below by the prior (the probability of the event, in the absence of reports), and above by 1, but will never reach 1 (under some fairly uncontroversial additional assumptions).
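That intuition is easy to sketch numerically. Assuming (hypothetically) a 1% prior and reports that are each nine times more likely given the darkness than without it, and treating the reports as genuinely independent:

```python
# Posterior after n independent reports, via the odds form of Bayes's Theorem:
# posterior odds = prior odds * (likelihood ratio)^n
prior = 0.01          # hypothetical prior probability of the event
lr = 0.9 / 0.1        # hypothetical likelihood ratio per independent report

posteriors = []
for n in range(6):
    odds = (prior / (1 - prior)) * lr ** n
    posteriors.append(odds / (1 + odds))

print([round(p, 4) for p in posteriors])
# Starts at the prior (n = 0), rises with every extra report,
# and approaches -- but never reaches -- 1.
```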

To the extent that that is the intuition being communicated, that’s fine.

But what about numbers? So take the prior (I could do the same performance with any of the numbers).

What is the prior probability of having a worldwide darkness for a few hours? How do you calculate that? How about this: there have been, say, 10^12 days on earth, in which we have no good reason to believe there have been any temporary worldwide darkness events – so the prior is zero. No wait, ‘worldwide’ is a bit unfair – maybe it was hyperbole – maybe it just meant everywhere that we heard about that day – it isn’t a claim about whether the Clovis were experiencing a darkness – so local darkness, could be dense thunderclouds – that happens around Passover about 25% of the time. No way, that’s the worst kind of reinterpretation! The claim was “worldwide darkness”, not an unusually cloudy day. The right approach is to make a list of valid causes that we would admit as constituting ‘darkness’ – so a volcanic obscuration cloud would pass, and… erm, I’m struggling here. Hang on, why choose the volcanic cloud and not a solar eclipse? Well, solar eclipses are common. But, Carrier is explicit, we’re not talking about the prior of a worldwide darkness, we’re talking about the prior of “when those kind of claims” are true. What’s that prior? Well, that’s easier. Collect all of the claims of worldwide darkness, and figure out how many were true. So how many are there? Erm. I can think of one: the darkness following the Krakatoa eruption (we can’t discount it based on it being widely evidenced – remember we’re talking about a *prior*, we haven’t got to the evidence yet). So it looks like we’re batting 100% for our prior. Maybe you can dig out nine unsubstantiated claims of the same thing, so maybe it should be 10%. But no, that doesn’t work, because we can’t *assume* those other historical accounts weren’t also true (and we can’t use the lack of evidence for them to help, because our whole cause is to try to figure out if this kind of thing could happen without leaving lots of evidence). Clearly that doesn’t help. So how do we get a prior?

We just guess. “Merely for convenience I will employ the value of 1% for the prior probability that such a story would be caused by a real unprecedented darkness” p.57

Ah, but a sneaky tell just dropped, there in the word ‘unprecedented’ – the prior cannot be calculated based on data, by definition.

But this is all a bit silly, right? Surely we can just agree that the prior probability of a claim of worldwide darkness is less than 1%, right? Okay, I’ll agree, but then I agree with Carrier that the chances of the gospels being right in this case are almost nil. And those two agreements are deeply connected!

Cam, thanks for commenting, and welcome to the blog.

I actually agree with the first 80% of your comment. So clearly, weak assumptions imprecisely bounded and hidden behind ‘likely’ or ‘almost certain’ are no better than numerical estimates of those same values. Clearly probabilistic reasoning helps us avoid certain major cognitive biases inherent in the way we intuit probability. I’ve blogged here before about the counter-intuitive nature of probability. Forcing yourself to put numbers to things definitely improves your intuition. I locked myself away and wrestled for a week as a grad student with the Tuesday’s Child problem, and at the end of about 70 hours of just thinking about that, and its implications, I had a huge epiphany around information theory and Bayesianism. So I am happy to endorse the value of really forcing oneself to deal with one’s faulty probabilistic intuition.

What I think is a problem, however, is the idea that Bayesian analysis is therefore the best way to do history.

For a start, real situations involve many claims and many features. So evidence isn’t ‘E’, it is an almost infinite sequence of Es, many of which are highly correlated, but not determined by one another. So we can’t just feed these through Carrier’s form of Bayes’s Theorem. We could use Bayesian chains for some of this, but in real history the reference classes can’t be the same for each step, so even that doesn’t work for long. Secondly, because in history claims about both evidence and hypotheses are linguistic claims, and therefore vague, we’ve got a problem with normal probability calculus: it can model confidence, or vagueness, but not both at the same time. So maybe we should model history with Bayesian statistics over fuzzy sets, or some such. Obviously neither of these is going to happen (and there are various other similar problems that make the mapping from history to probability theory more difficult).
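The correlation point matters numerically, not just in principle. A toy illustration (all values hypothetical): if two reports are really copies of one another, they carry no more information than one, but a naive independence assumption counts the same likelihood ratio twice:

```python
def update(p, likelihood_ratio):
    """One Bayesian update in odds form."""
    odds = (p / (1 - p)) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.1   # hypothetical prior
lr = 5.0      # hypothetical likelihood ratio of the underlying report

correct = update(prior, lr)            # the evidence counted once
naive = update(update(prior, lr), lr)  # the same evidence double-counted
print(round(correct, 3), round(naive, 3))  # 0.357 vs 0.735
```

Real evidence sits somewhere between ‘independent’ and ‘identical’, which is exactly the joint probability nobody can calculate.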

So the question becomes, which is better:

a) a skilled historian sits down and uses decades of experience to weigh up the evidence and come up with a conclusion about which hypothesis is more likely, effectively using an intuitive form of probability + fuzzy logic + massively parallel evidence streams (intuitive, and therefore obviously wrong to some extent)

b) we get historians to independently estimate the required terms that Bayes’s Theorem requires, and turn the handle to get an answer.

I’ve yet to see any reason to think that a) gives us less reliable or useful results than b). Quite the opposite: as someone who’s used Bayesian probability in science, I know how attractive b) is, but also how easily it can lead you (me, as it happens) astray. I’ve read only a few uses of b) so far in the study of Christian origins, but so far they’ve come out with conclusions that differ from one another, and that recapitulate the pre-stated positions of the investigators. That is not a slam-dunk argument that they’re wrong, but it should lower one’s prior confidence in the approach.

As I mentioned in my previous post (which seems not to have been posted yet), I have some criticisms of Carrier’s example of the darkness during the day.

First of all, there’s a reference class problem: He defines H as the probability that there really was such a 3-hour darkness covering all of the (known) world, but then, as usual, takes the prior probability of H to be the probability that a claim of such a thing is normally true. What would be considered an equivalent claim? Are we only considering examples of claimed 3-hour global darknesses? If not, how would we know whether a claim was included in our reference class or not? Would we have to only include claims that are similarly extraordinary? That would imply that we would need to have an estimate of the probability of such a darkness, without conditioning on the claim, as he has been doing, and we would also need to compute the probability of other claimed rare events to see whether they are comparable. He picks a number, 1%, seemingly out of thin air – no attempt is made to argue for this value rather than one much smaller, under the guise of being conservative, but as we’ll see in my next paragraph this doesn’t always work.

The second problem is constructing an a fortiori argument for the case where the event is well documented (the 1983 case). In that situation, P(E|~H & B) and P(H|B) are both very small, so it becomes crucial to determine whether P(H|B) is really smaller or larger than P(E|~H & B), and just saying, “well, we’ll pick P(H|B) to be unrealistically large, say, 1%,” won’t work. Moreover, if one were to assign ranges for P(H|B) and P(E|~H & B), since these are both very small numbers, it would be hard to argue for a lower bound above 0 (and Carrier notably doesn’t try), but this would then yield a range of 0% to 1% (i.e., no information) and couldn’t be improved by lowering the upper bounds on the inputs.
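To put the last point in concrete terms (the specific numbers here are mine, chosen inside the stated ranges, not Carrier’s): once both inputs are merely “somewhere below 1%” with no lower bound, the output spans essentially the whole interval:

```python
# P(H|E,B) assuming P(E|H,B) = 1, with both P(H|B) and P(E|~H,B) below 1%.
def posterior(p_h, p_e_not_h, p_e_h=1.0):
    num = p_h * p_e_h
    return num / (num + (1 - p_h) * p_e_not_h)

print(posterior(1e-9, 0.01))  # ~0: prior at the bottom of its range
print(posterior(0.01, 1e-9))  # ~1: P(E|~H,B) at the bottom of its range
```

So tightening only the upper bounds, a fortiori style, tells us nothing here; everything hangs on the ratio of two quantities that are unbounded below.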

Ultimately, to get any kind of remotely useful result for this case, one must be able to say something about the ratio P(H|B)/P(E|~H & B). Which is more likely: that there was a supernatural darkness over the earth in 1983, or that there wasn’t but for some reason there’s a lot of photographic evidence, etc., that there was such a darkness? You would have to answer this question to solve the problem correctly, but Carrier doesn’t address it.

Finally, I have to ask, in case of the darkness in the Gospels, how could one say that P(E|~H & B) = 1? So if there was no darkness (but there was a claim of such), we’d expect, with certainty, evidence exactly like what we see in the Gospels? Not even a small chance that we’d get something different?!

And recall that E should not just be the verses that discuss the darkness but the totality of the evidence, including the rest of the Christian record. After all, a fundamentalist Christian would argue that P(E|~H & B) is very small, because to him (or her) the probability that the Bible could be in error is very low, and that conclusion is based on all the evidence for Christianity.

Thanks Malcolm, it puts comments into moderation if there’s any change in email or ip address. Thanks for the additional analysis of the claim about darkness.

So, can I ask you specifically where you stand on this? You’re obviously competent on the math, and less inclined to make bold claims. To what extent do you think that Bayes’s Theorem can support historical conclusions, and to what extent do you think all historical reasoning ought to be seen through its lens?

Malcolm, just one note about the darkness issue. If you drop methodological naturalism and admit that the darkness is supernatural, then I don’t see what numbers couldn’t be justified. We can say that, since it was supernatural, God ensured the record of it would only be in the bible. We’re then faced with trying to figure out if God acts like that, maybe on the basis of trying to figure out if that is a hypothesis justifiable by other things we know God did. At least I can see how you can end up with some kind of reference class without supernaturalism, even if it is a dubious one. Once we allow the supernatural, aren’t all bets off? And given that Carrier’s book is aimed at mainstream historical scholarship (which does normally assume methodological naturalism), isn’t that fair?

I have mixed feelings about that question. I think that BT is implicit in logical reasoning, when one has a new piece of evidence to integrate into previous opinions, but in practice putting in actual numbers is so difficult that I doubt it would change one’s conclusions. However, the main advantage I do see in using it is that it forces one to be explicit about one’s assumptions and consider alternatives. I’d be interested to see attempts to employ it, but I don’t have high expectations about the results.

To take these two examples of the worldwide darkness: first, I would be interested to see what sort of numbers someone who does believe in the darkness would plug into the equation, and more importantly what arguments would be used to justify them. OTOH, for the 1983 example, I’m still trying to decide what my judgment would be if I were faced with that situation. My first reaction, even after viewing documentary evidence, would be to reject it, both because P(H) is so low but also because my estimate of P(E|H) would be much lower than Carrier’s, since he hasn’t taken into consideration the fact that we had never heard about this event until now, and we are old enough to remember 1983. What would be the cutoff for P(E|~H) where I would start believing even despite the tiny numerator? I don’t know.

And as I alluded to in my previous post, reference class issues are a major headache. Even in that simple lightning strikes problem in his book (which he didn’t complete) they are already a major obstacle.

One qualification is that any attempt to use BT must be accompanied by explicit lists of the inputs used and why. If it is only going to be used the way Craig did in his debate with Ehrman, where he just wrote the formula down and then claimed a certain result without saying what numbers he plugged in, then it is less than useless.

So I’ll withhold judgment until Carrier’s next book, but I’m skeptical about how he’ll do it. How is he going to include all the information relevant to the question of the historical Jesus? Even tackling a much simpler problem, such as the Synoptic Problem, the authentic letters of Paul, or whether James was a biological sibling of Jesus, would be a real challenge.

One final comment: I don’t think that Carrier is using BT just to dazzle (or intimidate) people with mathematics. Yes, the conclusions he derives are what he thought to begin with, but that’s because he’s plugging in the same inputs as he did in forming his pre-BT opinion, and he was thinking logically before. I suspect he is using it to try to force his opponents to give his theory a fair hearing, by challenging them to point out the flaws in his calculation.

Ian,

As you can probably tell, my previous comment was on whether BT should be used more widely in historical research. As for whether the darkness was supernatural, Carrier explicitly included that possibility for P(H), saying that we couldn’t entirely rule it out. You’re right that this would then force him to consider the possibility in P(E|H) (which you just addressed) and P(E|~H) (which I discussed earlier).

I agree, though, that he’s really writing for mainstream NT scholars (liberal Christians, agnostics, or atheists), who would all concur that the Gospels’ darkness was extremely improbable. There’s no sense arguing the existence of Jesus with someone who believes in Biblical inerrancy.

Thanks Malcolm, much appreciated on all fronts.

“I suspect he is using it to try to force his opponents to give his theory a fair hearing, by challenging them to point out the flaws in his calculation.” I suspect it will be counter-productive, though. If you try to engage with the hegemony by suggesting that if they all learn probability theory it would be easier to make your point, I suspect it won’t go down well. But then, if your position has been continually rejected by the hegemony, where do you go? I sympathise with those opposing academic hegemony – I think it is very hard to see how to change things from outside, however.

On making inputs explicit: it is important, I think, to be clear that if one reasons in this way, then it isn’t just the numbers that are the inputs but at least

a) The choice and definition of what constitutes the hypothesis. What set of historical possibilities does one count as Jesus existing? This is a massive problem with some sectors of mythicism, I think, because it seems to address whether Jesus is thoroughly mythological (which most scholars agree he is) rather than whether he existed (again most scholars think he did).

b) The choice and definition of the pieces of evidence one uses. If one isn’t going to attempt E = the new testament, then one has to be clear. One can’t be exhaustive, so selection bias is important.

c) The choice of reference class for each term.

Only then can one address whether one’s estimate is valid.

So I’d love to see assumptions made explicit, but we should see them all.

Thanks again for the full response. I’ve enjoyed engaging with you on here, and I hope there might be other things on the blog that tickle your fancy. Always great to have folks around who’ll keep me honest.

This is a superb refutation, completely accurate, true and incisive. The problem with the Bayes’ nonsense is that its connection to the historical Jesus is a tissue of improbabilities derived from false assumptions supported by an inapplicable method. What Bayes’ does prove is that yokels will buy it if you wrap it in pretty pink science paper with a bow on top. Can we please have your permission to reprint it as a complete post attributed to you, on The New Oxonian? http://rjosephhoffmann.wordpress.com/2012/05/29/proving-what/

Sure, no problem.

Pingback: An Introduction to Probability Theory and Why Bayes’s Theorem is Unhelpful in History « The New Oxonian

You may also want to look at my latest two posts as well, though. Particularly the latest one, for a note of caution I’d want to voice on the whole affair!

Thanks – I posted the links for wider appreciation in the comments. Excellent, superbly accurately savage. ;)

Thanks, I saw the post and corrected ‘could of’ accordingly. My wife’s an English teacher, I’m glad she didn’t catch me writing that.

Atrocious! But you weren’t consistent in your wickedness. You had written ‘could have’ elsewhere in your post – even preceding the error. The yokels must have got in… :-)

Isn’t the New Testament far, far more likely to be true if there really was an Historical Jesus? If there was no Jesus at all, the probability of the NT being right is zero, since the NT, most think, founded itself on the claim that Jesus was real.

What is the probability of the New Testament being right? How do we calculate it? By comparing its factual claims, as evidence, with what science and experience confirm as facts.

What is the problem exactly?

Who’s trying to determine if the New Testament is ‘true’? Even the most maximalist scholars would agree the NT isn’t true, in the sense of being entirely historically accurate. That boat sailed centuries ago.

The trick is to find out which bits are historical and which aren’t. If you aren’t familiar with the scholarship, it can be easy to think that scholars claim historicity of things they do not, and so think this argument is positioned somewhere it is not. Just about everyone agrees some things in the NT are historical, and just about everyone agrees many things aren’t. On what basis can we decide on the other things?

Very much enjoyed your current blog post, btw.

There is a problem with Carrier’s example of the sun going out for three hours in the first century. He emphasizes the point that the darkness must be worldwide and not just local. He then tries to think of a physical cause for this and comes up with the idea of an interstellar cloud. In fact, an interstellar cloud would be far too tenuous to block the sun. He doesn’t suggest any other physical cause but there is one possibility: all the photons heading towards the earth during those three hours might simply have missed the target. Let’s say that the chances of that happening are one in ten to the power of a trillion trillion. That would then be the prior probability. That is rather different from his one in a hundred. Since he hasn’t suggested any other mechanism to cause the darkness that would have to be the prior.

The problem is that Carrier then argues that we could have enough evidence to show that the sun really did go out. If we had numerous independent reports from different countries that would be sufficient evidence. I think he says that the chances of having that evidence if there wasn’t actually such a darkness would be one in a trillion. But a trillion to one would be utterly insignificant compared to the prior.

You seem to be implying no amount of evidence could make you think the hypothetical 3-hour darkness of the sun in 1983 really happened (because you will always be uncertain). But is that reasonable? We might well find it more improbable that all the evidence we have is wrong or faked than that the 3-hour darkness happened, for reasons we may not understand. For example, we might know pretty well that for the evidence to be wrong, 100 independent people (out of all the thousands of witnesses) had to be lying, and we conservatively assess the chance that any one of them is lying as no more than 25%. Hence we estimate that the chance of them all lying is less than one in 10^60.
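The arithmetic behind that last figure, for anyone who wants to check it (100 witnesses, each lying with probability at most 0.25):

```python
import math

# Chance that 100 independent witnesses are all lying, each with
# probability at most 0.25 of lying.
p_all_lying = 0.25 ** 100
print(math.log10(p_all_lying))  # about -60.2, i.e. less than one in 10^60
```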

Can we really say that a new type of interstellar gas cloud (“it’s a cloud, Jim, but not as we know it”) couldn’t have done it? Or that aliens didn’t do it? Or some warp in space-time? Can you really say the chances of all the unknown possible causes add up to less than one in 10^60? I don’t think so, and that is why, given all the evidence we have, we believe that the darkness did happen (knowing that we may later be proved wrong).

It is true we can’t use BT to get the exact chance that it didn’t happen, but we can use BT to check our intuitive argument. It is up to anybody who doubts the darkness, or thinks the odds disfavour it, to supply reasonable values that make P(D)/P(~D) < P(E|~D)/P(E|D) [which, assuming P(E|D) = 1, simplifies to P(D)/P(~D) < P(E|~D); that the prior odds are not less than P(E|~D) is our informal argument in a nutshell].
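That odds-form bookkeeping can be checked mechanically (the inputs below are hypothetical, chosen just to exercise the inequality):

```python
# Posterior odds on the darkness D, given evidence E:
# P(D|E)/P(~D|E) = [P(D)/P(~D)] * [P(E|D)/P(E|~D)]
def posterior_odds(prior_odds, p_e_given_d, p_e_given_not_d):
    return prior_odds * (p_e_given_d / p_e_given_not_d)

# Hypothetical prior odds of one in a billion, with P(E|D) = 1:
print(posterior_odds(1e-9, 1.0, 1e-60) > 1)  # True: the evidence overwhelms the prior
print(posterior_odds(1e-9, 1.0, 1e-6) > 1)   # False: the prior wins
```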

Watch Sharif Ari debate WL Craig on youtube. Sharif says, concerning the resurrection I think, something like “That evidence isn’t nearly good enough; it would require something like 1000 reliable eyewitnesses” – an example of this sort of reasoning: there is some amount of evidence that makes it reasonable to believe extraordinary things.

Pingback: Ian's Review of Carrier's Proving History and Carrier's Reply « Living Without Faith Living Without Faith

This is related to what I said on his blog, namely, that in this case (such a claimed darkness in 1983), both the prior and the probability of the evidence given that the hypothesis isn’t true are so low (and P(E|H) isn’t that large either) that it is very hard to say anything with BT. A small change in your estimate of any of these variables could have a huge impact on your final result.

You could call it the problem of mindboggling numbers. If you use Bayes’ theorem in history do you use mindboggling numbers or tame numbers? Suppose, for example, that you want to know the chances of Adolf Hitler existing. If the question is what the chances are of someone like Adolf Hitler existing then the answer is easy. There were lots of people like Hitler around in Germany at the time. But what if the course of history depends on there being someone exactly like Hitler? To make the question simpler let’s ask what the chances are of someone with Hitler’s exact genetic makeup existing. People differ genetically by about one base pair in a thousand, giving a total of 3 million base pair differences. So the chances of one particular person existing are one in four to the power of 3 million.
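Just to give a sense of how mindboggling that number is:

```python
import math

# Number of decimal digits in 4 ** 3_000_000, the denominator of the
# one-particular-person estimate above.
digits = math.floor(3_000_000 * math.log10(4)) + 1
print(digits)  # roughly 1.8 million digits
```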

I suspect that if you wanted to understand history mathematically those are the sorts of numbers you would have to use.

Michael, I think you’re right, but this illustrates part of the problem with figuring this out from very poorly defined definitions. What exactly are we asking? Whether there was some form of darkness that could reasonably be described as ‘worldwide’, or whether there was a gas cloud that blocked the sun? Are we limited to potentially naturalistic explanations, or could it have been supernatural? You’ve also added a different criterion: not whether it happened, but whether the purported witnesses are *lying*, which is an altogether different thing, with a whole bunch of other causes. A virus that causes diminished light perception, for example, would make them honest but mistaken. While we talk about evidence, are we able to incorporate the information that we all lived through that time and didn’t see the darkness as evidence? Or is the evidence set just the eyewitnesses we’re now supposed to be considering?

We can do this all day. And any “no, my reasoning is better” is going to be, as your comment above, just full of new assumptions and potential re-definitions.

That we can’t even figure out how to define this toy problem nearly specifically enough to get any reasonable numbers out of it is really my point about this whole business. Sure each one of us can define it in the way we choose and get results that please us. But that doesn’t mean much.

According to Richard Swinburne, the prior probability of the resurrection is 0.25. Perhaps the same applies to the three hour darkness. In that case the prior probability of the darkness is between one in 10 to the power of a trillion trillion and one in four.

How about a Bayesian analysis of the claim that Adolf Hitler didn’t really die at the end of the Second World War? Consider the evidence: there was a mystery over what happened to the body. The only body that was found was burnt beyond recognition. Also, there were numerous claims that people had seen Hitler after his death. And in this case the prior could genuinely be quite high. It’s plausible that Hitler might try to fake his death.

William Lane Craig could have a field day debating this.

Carrier would claim that it is good and proper that Swinburne put his argument in a Bayesian form, because we can easily see where we disagree with it.

You mean we can’t see where Swinburne is full of crap without putting his arguments into Bayesian form? The thing is, I’ve not yet seen an example of where Bayes’s Theorem suddenly makes a disagreement obvious where previously it wasn’t. Carrier uses it to show his disagreements with criteria used in Jesus Questing, but it’s not like his views on that subject weren’t long declared.

What I mean about Swinburne is that by framing his argument with BT we have a clearer picture of what he means. That could be useful for everybody. If I remember correctly he got his prior of 0.25 from a 50% chance that God exists followed by a 50% chance that God would want to save us via a Resurrection. A Deist might accept, for the sake of argument, a 50% chance that God exists (while really thinking it is 100%), but doubt the 50% chance that God would act in the world that way, citing lots of evidence that God rarely (or never) acts like that. At the least it stops pointless arguments when the issues of difference are clearly shown.

In answer to your question would I know Swinburne was full of crap without BT, I would say the issues are clearer with BT. If I don’t fully understand someone’s argument then I might have to reserve judgement on their crap content.

“According to Richard Swinburne, the prior probability of the resurrection is 0.25” Really? That’s quite something.

The point was made elsewhere in the comments (by Malcolm or Michael, I don’t recall) about the prior of a biblical claim being wildly different depending on whether you were pre-disposed to thinking bible claims were generally truthful. So, let’s say that, in my experience, 99% of the bible claims are trustworthy, the other 1% are not determined yet (i.e. they aren’t false, but we may not have found the evidence to prove them yet). Therefore the prior of the resurrection is 0.99, because it is a biblical claim. So I think you can probably widen your range from 0.99 to one in a heptillion.

If I remember correctly Swinburne judged that P(E|~R) was 0.001. He thought that P(E|R) was high but I can’t remember the exact figure. I think Richard Carrier had it the other way round.

What do you think about P(E|~R)? Swinburne thinks it’s very low, Carrier thinks it’s very high. Craig has claimed that three quarters of biblical scholars believe that the empty tomb story is true. Does that mean that P(E|~R) can’t be higher than 0.25?
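The update being argued over can be sketched in a few lines. Only the prior of 0.25 and P(E|~R) = 0.001 come from the comments above; the P(E|R) = 0.9 figure, and the "swapped" Carrier-style figures, are illustrative guesses, since the exact values aren't remembered.

```python
# Bayes's Theorem for P(R|E), the probability of the Resurrection given
# the evidence. Only the 0.25 prior and P(E|~R) = 0.001 are from the
# discussion above; 0.9 and the swapped figures are illustrative guesses.

def posterior(prior, p_e_given_r, p_e_given_not_r):
    """P(R|E) = P(E|R)P(R) / [P(E|R)P(R) + P(E|~R)P(~R)]."""
    numerator = p_e_given_r * prior
    return numerator / (numerator + p_e_given_not_r * (1 - prior))

# Swinburne-style inputs: the evidence is very unlikely without a Resurrection.
print(posterior(0.25, 0.9, 0.001))   # ≈ 0.9967

# Likelihoods swapped round (Carrier-style): the posterior collapses.
print(posterior(0.25, 0.001, 0.9))   # ≈ 0.00037
```

Even without settling the inputs, writing it out this way shows exactly which number each side disputes, which is the point being made about BT clarifying disagreements.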

“Craig has claimed that three quarters of biblical scholars believe that the empty tomb story is true.”

This is very unlikely, but even if the true proportion of biblical scholars who thought there was some truth in the empty tomb story were, say, 25%, it is very unlikely that they mean by it what he then goes on to use them as meaning. This is part of the issue of meaning and definition.

I’d be happy to have a Bayesian confidence of around 25% in the empty tomb. But what I mean by that cannot then be used in the way Craig wants to use it.

It is a subtle but important point: you arrive at probability estimates by thinking about the problem one way, then fail to check whether those estimates still apply when you look at the problem a slightly different way. The example of ‘lying’ about the darkness from earlier in this comment thread is a prime example.

“In answer to your question would I know Swinburne was full of crap without BT, I would say the issues are clearer with BT”

Thanks Michael. Can you give an example of something that you came to understand more clearly when you considered Swinburne’s arguments in Bayesian terms?

It’s easy to see how there could be some truth in the empty tomb story. One thing we know for certain is that if Jesus (assuming he existed) had been buried in a tomb, the tomb would have contained other bodies. This means that if the women had gone to the tomb to see the body they would have had to start poking around amongst a lot of other corpses. The women could easily have lost their nerve and decided to leave without ever seeing the body, and, of course, no significance would have been attached to this at the time.

If people then started having visions of Jesus the visit to the tomb could have acquired a new significance. A story about the women going to the tomb and not seeing the body could evolve into a story about them discovering that the body had gone. By that time the body would have decomposed to the point where the story couldn’t be refuted.

Stuart, exactly. So ’empty tomb’, like ‘worldwide darkness’, is an absolute pain to define. Your definition, for example, includes a tomb that is very far from being empty! And using one definition to derive a number for it does not mean that you are using the same definition elsewhere, in whatever you wanted the number for.


Hey Ian,

I was just going to send this as a personal e-mail to you but I couldn’t find where that is located (I’m sure I just missed it.) Anyway, just wondering if you had seen this:

http://www.davegentile.com/synoptics/main.html

It’s a statistical approach to the Synoptic problem. Have you come across anything like this before?

Thanks Grizel.

I’m going to write a post on this now, so check the front page in an hour or so!