Monthly Archives: October 2012

Flash Photo History

An interesting interpretation of what is significant about the history of the universe.

I noticed the creation museum’s “Adam and Eve” are there, standing in for early Homo sapiens. Is that more insulting to science or to creationism?

And Zelda gets equal billing with the emergence of multicellular life. And why not?

US-centric in the second half, but fun nonetheless. What would be in your two-minute ‘story of us’ video?

3 Comments

Filed under Uncategorized

What the Meaning of Is, Is.

I am working on a post about modern religions that don’t think of themselves as such. To do that I needed to talk about what the meaning of ‘religion’ is. But that grew too large, so I think it is worth trying to break the idea down and be clearer about how I think about the meaning of words.

This post is an invitation to be critical about my ideas, so please weigh in.

Words have a meaning by virtue of being used in similar situations by more than one person. Roughly speaking, the meaning of a word is the set of circumstances in which a group of people would consider it suitable to use that word.

In the case of concrete nouns, we could say that the meaning of a word is the set of things that a group of people would label with that word. For example: what is a chair? If I got my friends and family together, and showed them a whole series of objects, we’d surely agree that some of them were chairs and others weren’t (I’ll return to the disagreements below). So, to us, the meaning of chair includes those objects we all agreed were chairs.

At least four issues are important here.

  1. In our parade of chairs we didn’t see every object, let alone every conceivable object. So this idea about meaning isn’t simply extensional (i.e. I’m not defining chair as the set of all possible chairs). It recognizes that people are inherently good at generalizing, pattern matching, and interpreting. The definition is inherently fuzzy around the edges, because, even if you and I agree on 100% of the examples we’ve seen so far, there may be some object we’d disagree about.
  2. A group of people may not agree on all uses of ‘chair’. Some things may split the jury. This is fine: definitions are not precise, and their edges are not cleanly delineated. I’m happy to say that certain things are more of a chair than others, or more clearly a chair than others.
  3. Different groups may have different patterns of what they determine a chair to be. If I require the consensus of all English language speakers, I may get a very narrow definition of a chair. If the group is the furniture design class at RISD, the definition of ‘chair’ would include all kinds of objects that I might not choose to call a chair.
  4. A dictionary definition simply primes us to use a word in a way that corresponds to its use by a broad range of other users of that word. It doesn’t specify what a word should mean, or what it really means.

I am saying all this because, in online discussions about religion, meanings become offensive weapons.

When someone insists on what a word should mean, it is an attempt to exert control over the use of the word. This is a political act, and can be a deliberate act to disenfranchise certain people’s use of it, or identification with it.

It is valid for me to suggest that your use of a term suggests to me meanings that you didn’t intend. It is valid for me to suggest that it is likely to do the same for many other people (though presumably if we disagree on that, we’d have to resolve the disagreement empirically).

It is not valid for me to say that therefore you are wrong to use the word, or should not use it in the way you wish — unless I also want to claim that I am the arbiter of linguistic morality.

Once we have established how we are using a word, I should be happy to converse using it — unless I want to suggest I am such an idiot that I can’t accommodate your intent.

If, however, it becomes clear that I keep on misunderstanding you because of that term, then you should help the conversation by suggesting a different one, free of the problematic connotations. Doing so is not a concession of the term on your part, nor a rejection of it on mine, just a recognition that it is not helpful.

Philosophical notes:

When I talk about the meaning of words, I am referring to a descriptive definition. I believe such definitions are never extensionally adequate (we can never give a definition without someone finding a counter-example), much less intensionally adequate (a definition that can have no possible counter-example). The classic example of an intensionally adequate definition, that water is H2O, seems obviously wrong to me: I call various things water that aren’t pure H2O (tap water or sea water, for example), yet other things that are more purely water (dilute aqueous acid, for example) I would not call water, and there are forms of H2O that I do not typically call water.

I recognize that there are certain philosophical uses of meaning and definition that are not descriptive. But those senses are not useful in this context; most are not even applicable.

I’ve talked here primarily about nouns, but the same idea works for other words, though we rapidly stop being able to point at objects. That’s why I started talking about the contexts in which a word is used. At its core this is post-Tractatus Wittgenstein: language is a performance, and meaning is a by-product of the situations in which a particular word may be performed. I think this explanation of the meaning of language is also sufficient to build notions of language acquisition upon. But I’d be interested in rehearsing criticisms of it, if anyone finds it objectionable.

9 Comments

Filed under Uncategorized

Vague Knowledge

[This isn't another post on Richard Carrier's Proving History. It is, however, a post I suggested I might write as a follow-on to my review, that explores another way to think mathematically about knowledge, since there are quite a few folks who've arrived here because of my posts on probability theory.]

So in recent posts we’ve been looking at probabilities and their use to talk about knowledge. Probability, as we saw, relies only on some general notion of ‘likelihood’, and in one of the comments I mentioned that probabilities can be used for a whole range of related ideas. The most common are the odds of something happening (a so-called frequentist interpretation) and the confidence one can have in something being true (the Bayesian interpretation). It might be obvious that in many cases these two interpretations are rather similar: the odds of rolling a 6 on a die may seem obviously the same as your confidence that a hidden die-roll came up 6. The two interpretations do diverge, however; they aren’t always the same (but that’s not the point of this post, so I won’t go into the Frequentist-Bayesian problem).

But there are other ways we can interpret probability, and one of those will let us jump into some territory that is less well trodden, but even more fruitful.

We can use probability to represent how true something is. This is quite different from a Bayesian interpretation, which is how confident we can be that something is black-and-white true. We’ll call our ‘how true’ interpretation the fuzzy interpretation, because it treats truth as somewhat fuzzy.

Let’s take an example (and to avoid the criticism that this is about Carrier or mythicism, let’s talk sport). If I described an NBA player, and asked “What is the probability that he is tall?”, a Bayesian might say: “Let’s define tall as over 6’6″; how confident am I that an NBA player is at least that tall?” A fuzzy interpretation might say: “Someone who is 5’6″ is not at all ‘tall’, someone who is 7′ is entirely ‘tall’, and people in between are somewhat tall; how tall is my player?”

So let’s imagine that the average height of NBA players is 6’9″, and only 30% are shorter than 6’6″. The Bayesian then says the probability of your NBA player being tall is 70% (because only 30% aren’t); the ‘fuzzyist’ says the probability is 83%, because all we know about NBA players on average is that they are 83% tall[1].
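To make the fuzzy side concrete, here’s a minimal Python sketch of the membership function this example assumes: a simple linear ramp between the two agreed extremes, which is just one possible shape for it.

def tall(height_inches):
    # Fuzzy truth of 'tall': 0 at 5'6" (66 in), 1 at 7' (84 in), linear in between.
    lo, hi = 66.0, 84.0
    return min(1.0, max(0.0, (height_inches - lo) / (hi - lo)))

print(tall(81))  # the average NBA player at 6'9" -> 0.833..., i.e. '83% tall'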

There are some bits of knowledge that are black-and-white, but perhaps not many. The philosophical problem of vagueness is important (see my post on the Sorites paradox for some more background), and it crops up in many places. When you ask questions about knowledge, it turns out to be very difficult to be precise enough to avoid having vague criteria. Even in our Bayesian NBA example there is a small degree of vagueness: how do we measure? Should the person be standing as straight as they can? How firmly do we press down their hair? Do we round borderline values up or down? How confident are we in our measuring stick? In this case the vagueness probably isn’t important, but it is still there.

We can get away with ignoring vagueness if we can be specific enough, but in many cases it would be impossible to agree standards that are specific enough that no degree of interpretation is needed. So fuzzy values turn out to be a very natural way of representing knowledge in a large number of domains. It turns out to be much easier to agree the extremes than to find an exact boundary in the middle where something flips from being false to being true.

So far I’ve said that ‘fuzzy values’ are another way of interpreting probability. They certainly can be: probability theory is a perfectly valid way to model and manipulate them. But it isn’t a very common one. In fact, it turns out that the math of probability theory isn’t very good for representing non-trivial reasoning.

Logic has a series of logical connectives (also called ‘truth functions’) which combine bits of knowledge into bigger wholes. So ‘and’ is such a connective: if we have claims A and B, then we can form a new claim “A and B”; other connectives are “not” and “or”, and most significantly, “therefore”. Probability theory can model these connectives, but doing so requires us to figure out how the probability of each claim depends on all the other claims before we can combine them. This is difficult at best and often impossible.

Here’s a sample of the difference in math. If I have two independent claims, where A has a Bayesian probability of P(A)=0.5 and B of P(B)=0.2, then the probability of both claims, “A and B”, is:

P(A and B) = P(A)*P(B) = 0.1.

if I’m interested in P(A or B) I get:

P(A or B) = P(A)+P(B)-P(A and B) = 0.6

In fuzzy logic if A has a value e(A)=0.5 and B is e(B)=0.2, then

e(A and B) = min(e(A), e(B)) = 0.2[2]

and

e(A or B) = max(e(A), e(B)) = 0.5

Both have the same definition of “not”:

P(not A) = 1-P(A)

e(not A) = 1-e(A)

And, you can confirm that the numbers work for logical identities such as

A or B = not (not A and not B)

Note that, in the probability case, we have to make sure that the two claims are totally independent, otherwise the calculation is wrong; in the fuzzy case, this is not so. The fuzzy approach is more general.
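Here’s the worked example above as a minimal Python sketch, side by side (the fuzzy operators are the Zadeh min/max discussed in footnote [2]):

# Probabilistic vs. Zadeh fuzzy connectives for the example above.
pa, pb = 0.5, 0.2

p_and = pa * pb              # 0.1 (valid only because A and B are independent)
p_or  = pa + pb - p_and      # 0.6
e_and = min(pa, pb)          # 0.2 (no independence assumption needed)
e_or  = max(pa, pb)          # 0.5

# Both satisfy the identity: A or B = not (not A and not B)
assert e_or == 1 - min(1 - pa, 1 - pb)
assert abs(p_or - (1 - (1 - pa) * (1 - pb))) < 1e-12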

When probabilities are used with operations commonly found in logic, it is called ‘probabilistic logic’, whereas one of the alternate mathematical formulations is normally called a ‘fuzzy logic’ (note that we can still use our ‘fuzzy interpretation’ with a probabilistic logic; the fuzzy interpretation is just how we interpret the number, not what we do with it afterwards).

Unless you have good reason not to, I’d suggest that any logical reasoning with knowledge should probably be done with fuzzy logic, not probability theory.

Aside from the ability to easily do math on common concepts such as ‘and’ and ‘therefore’, fuzzy logics also provide sets of tools for modelling other ideas about knowledge. There is a set of mathematical tools to represent intensifiers, known as hedges. So “X is *very* Y” has a natural mathematical form in fuzzy logic. We can put all this together in questions such as “if very tall players make successful centers, is a not-very-tall player better as a forward?”, and it will have a natural mathematical expression that can be manipulated, and a truth value will result (i.e. the result will be “from zero to one, how true is it that a not-very-tall player is better as a forward?”).
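For illustration, in Zadeh’s classic formulation ‘very’ concentrates a truth value by squaring it, and ‘somewhat’ dilates it with a square root (one common choice among several):

import math

def very(truth):      # concentration hedge
    return truth ** 2

def somewhat(truth):  # dilation hedge
    return math.sqrt(truth)

is_tall = 0.83                # the fuzzy truth value from earlier
print(very(is_tall))          # ~0.69: 'very tall' is less true of our player
print(somewhat(is_tall))      # ~0.91: 'somewhat tall' is more true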

Fuzzy logic has advantages over probabilistic systems.

The advantage of not having to account for dependence is one. A second is that the output of fuzzy reasoning is generally more robust to inaccuracies in input data. Fuzzy logic systems can cope with hundreds of thousands of different pieces of input data, some dependent, some independent. Probabilistic logics often struggle beyond a few inputs (here I mean different sources of data; probabilistic logics tend to be fine when you have masses of data that all mean the same thing, like when you’re analysing SAT scores for a state).

The results or intermediate values in probabilistic calculations often get very small, making errors an absolute nightmare to handle in practice. As I showed in the last post on probability theory, Bayes’s Theorem isn’t nicely behaved with small inputs.

But the biggest win, by far, with fuzzy logics is that there is a way to turn expert knowledge and expert reasoning into math. So, in our NBA case, we can interview expert talent scouts, and they can come up with rules like “If a player is fast, but is easily put off his dribble by a larger opponent, then I try him with strength drills; if he shows the potential to keep his ground, even if his skills don’t stay as precise, he’s worth a second look.”
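To show the flavour of how such a rule becomes math, here’s a sketch: the membership values are entirely made up (a real system would elicit or learn them), but the structure mirrors the scout’s rule.

# Hypothetical fuzzy truth values for one player.
fast           = 0.8   # 'the player is fast'
loses_dribble  = 0.7   # 'easily put off his dribble by a larger opponent'
holds_ground   = 0.6   # 'shows the potential to keep his ground' after drills

# IF fast AND loses_dribble THEN try strength drills.
try_drills = min(fast, loses_dribble)        # 0.7

# IF tried drills AND holds ground THEN worth a second look
# ('even if his skills don't stay as precise', so precision drops out).
second_look = min(try_drills, holds_ground)  # 0.6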

This is part of my skepticism about how useful probability theory is in the humanities. Because it has so few rules, and they are so universal, reasoning is reduced to estimating inputs. There is no place for process, for higher order reasoning. It encourages atomization of complex problems, without due consideration of whether the pieces can be reassembled into anything meaningful.

Many automated financial trading systems work on fuzzy logic for this reason: it is relatively easy to add sophisticated rules, new sources of evidence, and new evaluation criteria. You can easily bring the results of probabilistic calculations into a fuzzy logic core[3], but not the other way around.

If anyone is serious about putting reasoning in the humanities on a mathematical footing, fuzzy logic might be a good place to start, because it works well in domains where things aren’t clean: the kind of tricky judgement-call reasoning, in the presence of huge bodies of contradictory evidence, where it isn’t clear what should be an exception and what should be a rule. That’s where it gets used every day.

[1] Some ‘fuzzyists’ might object to basing an estimate off the average NBA player height at all, and would insist they can only give you the result if you tell them the player in question’s exact height. Different probability theorists can be quite finicky about what statistics they allow in their calculations, and reasoning with fuzzy values isn’t normally done with averages in this way. I intend this example to be illustrative of the meaning of truth values, not an example of actually doing statistics with these quantities.

[2] I alluded to the fact that there are different ways to do the math in fuzzy logic; this definition of the AND operator is the most common, and is called the Zadeh operator. Fuzzy logic math is actually determined by the ‘therefore’ operator and the modus ponens rule of classical logic, through a mathematical relation called the ‘T-norm’. The details of this are irrelevant here, but I mention it because probability theory defines one such T-norm, and this is how we can show that probability theory forms a perfectly valid basis for reasoning about fuzziness.

[3] At the simplest, you can simply interpret your Bayesian probability for some claim X as the fuzzy truth of the statement “I am confident that X”. The latter statement is one with graduated truth.

4 Comments

Filed under Uncategorized

Say What I Want to Hear!

I think that, although Jesus Christ is clearly a mythical figure, it is quite likely the myth coalesced around a real historical figure, with whom there is a direct chain of familiarity to key figures in the early church. Thus I am not a Mythicist, but would lie on the minimalist end of the academic consensus on Jesus. I am, for example, rather skeptical of the applicability of many historical ‘criteria’ when applied to individual features of the Jesus evidence. In this I would have been outside the mainstream a few years ago, but am now somewhat more in line with the changing wind of NT scholarship (cf. this recent conference).

I recently reviewed a book by Richard Carrier, a figure often associated with Mythicism (though his specific beliefs are a little mercurial, I find). Because my review was critical, I got linked to from various sources who defend the academic consensus against Mythicism, who presumably agree that Carrier is wrong. Similarly, Carrier’s response was linked to by those supportive of Mythicism, and he received favourable comments for dismissing my criticisms.

I am not a professional NT scholar (I make my living doing algorithmic research and development mainly for big software companies), though I have a degree in theology (specialising in original languages) and have been studying the NT for 20 years. So really my views on the historical Jesus should probably not have much weight. But I suspect it is those views, rather than any mathematical expertise I may have, that determine whether someone accepts my analysis.

The tribal linking patterns are even more amusing since I made clear in my review that Carrier voices many of my own skepticisms on the utility of criteria (he is more skeptical than I think is warranted, but it is only a difference in degree). So perhaps, if I’d written a review that wasn’t concerned primarily with the math, then the linking patterns would have been reversed.

It is interesting just how biased we all are towards people in our tribe. I am just the same.

I wonder just how much crap I’ve assimilated over the years, because it has been spouted by folks who I have filed under the ‘I agree’ tag.

3 Comments

Filed under Uncategorized

Error in Bayes’s Theorem

[Edit: 23:11 UTC - if you got this by email, this version is rather different, I edited and expanded it to make it clearer.]

This is another follow-on post from my criticism of the use of Bayes’s Theorem in Richard Carrier’s book Proving History. (Apologies if you’re bored of this topic.) In the review, and in my follow-up introduction to Bayes’s Theorem, I did a bit of ‘vague handwaving’ about errors, and was asked to be more specific. This is an attempt, hopefully still accessible without a lot of mathematical knowledge.

The Effect of Error

So what do we mean by error? Any time we give a number, we have to recognize that the number we give is only an approximation. There is some underlying value, but whatever we do, we can only generate an approximate version of it. There are different ways to deal with the approximation, but one of the most intuitive is as an error range. If we estimate something as 0.2, for example, we could instead say “it is between 0.1 and 0.3, but 0.2 is the most likely”.

When you have an equation that involves inputting some approximate value, you can move the error range through the equation, and come out with a range of possible results. You can say, for example, that if our input X is between 0.1 and 0.3, and the formula we are working is 2X, then the range of outputs is 0.2 to 0.6.

When we have multiple inputs, then the situation is more complex. We might have two inputs: X is between 0.1 and 0.3, while Y is between 0.5 and 1.0, so X times Y is between 0.05 and 0.3 [1]. We have to try all combinations of low and high for both values (and in the case of some formulae, but not Bayes’s Theorem, intermediate values too), and find the minimum and maximum result[2].
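Here’s a minimal Python sketch of that procedure: push every corner combination of the input ranges through the formula and take the extremes (sufficient for formulae like these, which are monotone in each input).

from itertools import product

def interval_through(f, *ranges):
    # Push (lo, hi) input ranges through f by trying all corner combinations.
    results = [f(*corner) for corner in product(*ranges)]
    return min(results), max(results)

print(interval_through(lambda x: 2 * x, (0.1, 0.3)))                 # ~(0.2, 0.6)
print(interval_through(lambda x, y: x * y, (0.1, 0.3), (0.5, 1.0)))  # ~(0.05, 0.3)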

So let’s think about the errors in Bayes’s Theorem. I’ll use Carrier’s preferred version:

P(H|E) = P(H)*P(E|H) / [ P(H)*P(E|H) + P(~H)*P(E|~H) ]

Let’s look at the errors in this. There are three inputs to this equation: P(H) [note that P(~H) is just 1-P(H)], P(E|H), and P(E|~H).

Think of the equation as a surface. Because there are three inputs and one output, the full graph of the function would be four-dimensional, so to draw it we need to lock one value. Lock P(H), and the surface shows how the result changes as we change the two P(E|X) terms. Now graph the same thing with a much lower P(H), say 1%.

You can see that, with a much lower prior, the output probability is almost always very low, except near the extremely low values of P(E|~H) and the high values of P(E|H). While the center of the surface has got flatter, the back has got steeper. This steepness will cause problems for us below.
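If you want to reproduce this kind of surface, here’s a minimal sketch (assuming numpy and matplotlib are available; the 1% prior is the second case above):

import numpy as np
import matplotlib.pyplot as plt

def posterior(p_h, p_e_h, p_e_nh):
    return (p_h * p_e_h) / (p_h * p_e_h + (1 - p_h) * p_e_nh)

# Surface over the two likelihood terms, with the prior P(H) locked at 1%.
p_e_h, p_e_nh = np.meshgrid(np.linspace(0.001, 1, 200), np.linspace(0.001, 1, 200))
ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(p_e_h, p_e_nh, posterior(0.01, p_e_h, p_e_nh))
ax.set_xlabel('P(E|H)'); ax.set_ylabel('P(E|~H)'); ax.set_zlabel('P(H|E)')
plt.show()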

Any point on this surface consists of a single set of values moving through Bayes’s Theorem. To look at errors, we instead consider a patch of the surface, some range for P(E|H) and P(E|~H), and look at the range of outputs for that patch. On the second surface, if P(E|H) and P(E|~H) are both about 1/5, then the vertical range in that area is pretty small.

If, however, we say that P(E|~H) is near zero, or P(E|H) is near one, then suddenly the vertical range is huge, because the surface is steep at that point. Even small errors in the input (i.e. a small patch) can give a range of outputs from nearly zero to nearly one (i.e. it could be anywhere from almost impossible to almost certain).

So in this case I’ve locked P(H), as if P(H) were certain, but of course that is also a range, and so you have to imagine two of these surfaces stacked: one for the minimum P(H), and one for the maximum. The output range is given by the minimum and maximum points across both upper and lower surfaces.

We can draw the same kind of surface with a different value locked, and see how the output varies along the new axes. Letting P(H) vary, we see that where P(H) is small we get the same steep vertical range, and the corresponding problems with errors.

Look back at the first two surfaces: they show that, if P(E|H) and P(E|~H) are both small, the range will be very badly behaved. In other words, if the evidence is genuinely unusual, under both hypotheses, then we’ve got a problem. So it isn’t just as trivial as saying “let’s use a conservative value of X”, because behind that value may be a big change in the output.

This is a problem because, almost by definition, when dealing with events such as the founding of major religions, or the possibility of a human being having a divine parent, or the likelihood of a resurrection, we’re dealing with insanely small probabilities. Exactly the times when Bayes’s Theorem isn’t well behaved.

If P(E|~H) is high, and P(E|H) is low, then things behave quite well. But if we want to be conservative (Carrier’s ‘a fortiori’ method) about P(E|~H), say, and allow the possibility that it is small, then the errors can swamp any useful conclusions.
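To make the swamping concrete, here’s a sketch with hypothetical ranges: a small prior, and small likelihoods, each known only roughly (the corner trick from earlier does the work).

from itertools import product

def posterior(p_h, p_e_h, p_e_nh):
    return (p_h * p_e_h) / (p_h * p_e_h + (1 - p_h) * p_e_nh)

p_h_range    = (0.005, 0.02)   # hypothetical: prior known only to 'about 1%'
p_e_h_range  = (0.05, 0.15)    # hypothetical: rare evidence under H
p_e_nh_range = (0.001, 0.01)   # hypothetical: rarer still under ~H

results = [posterior(*c) for c in product(p_h_range, p_e_h_range, p_e_nh_range)]
print(min(results), max(results))  # roughly 0.02 to 0.75: no usable conclusion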

Sources of Error

It is important to think about the sources of error to be able to make reasonable estimates of how much error we have.

Here are some sources of error (there may be others):

Incomplete Data — Say we’re trying to figure out P(H) for Julius Caesar being in Alexandria on a particular date. We decide that P(H) (the prior) should be the rough proportion of time that he spent in the city in the years of his reign. We go through the documents and come up with a figure; let’s say it is 2%. This figure will have some error. It is possible we don’t have records of some of his visits. It is possible that some of the recorded visits are mistaken, misleading or falsified. There will be error in the value we give. With data from ancient history this error can be large. It is particularly important not to assume that the lack of a piece of evidence for something means it didn’t happen. Counts of events are almost always going to be wrong when our documentary record is so fickle.

Choice of Reference Class — When we figure out values for probabilities, we take a set of similar events, and we compare how often something is true among them. For Caesar in Alexandria, we choose the set of days when Caesar was anywhere, and see how often he was in Alexandria. This set of events is called the reference class. But there are several reference classes we can choose. To determine the prior of Caesar being in Alexandria, we might note that, on the day in question, local rulers from around the Mediterranean were invited to Alexandria. We might say the prior, therefore, is given by the proportion of significant rulers we know attended. Perhaps 80% of rulers who were invited came, so the prior is 80%. The choice of reference class affects values hugely. Now it might be obvious that one reference class is a better choice than another (we could calculate the value based on what proportion of the whole world lived in Alexandria on that day, for example, which would obviously be a poor choice). But no matter how well we choose, there will be some degree of error. Carrier’s approach (not unreasonably) is to try to pick the best reference class we can, and then assume it is correct. In reality the choice, no matter how good, is a source of additional error.

Choice of Definition — When we ask whether something is true, we are asking a black and white question: it is, or it is not. But most questions we could ask could have intermediate answers. Was Caesar in Alexandria on that date? Well, he left on that date; does that count? Even if he left at 1am? Does it count if he was at his camp just outside the walls? Or if he was 20 miles away, but communicating by messenger? Once we get into questions like “was there a worldwide darkness as reported in the gospels”, the vagueness is rather significant: how much land area do we need to qualify as worldwide? How dark is enough to qualify? Does the darkness have to be uniform over the affected area? How long must it last? Should we exclude rather obvious natural possibilities? So whatever figure we give, there are errors based on how we interpret the question. We can phrase the question more tightly to reduce the error, but we may be in danger of answering the wrong question. We might insist on a purely supernatural total darkness from sun and moon covering the whole globe, only to find we end up disproving a claim that nobody wants to make. Or else we might find that our tight definition gives us no obvious reference class, or a reference class whose data is hopelessly incomplete. Instead we might allow a wider definition, and allow that ‘world-wide darkness’ could refer to a huge storm complex over the Mediterranean (the whole of the Roman Orbis Terrarum), but either get a huge range of possible outputs, or else show something we all agree on. It is hard to give definitions that are tight enough to avoid error, while loose enough to be interesting. I’ve posted before about the fact that the definition of “Jesus was a myth” is so vague as to basically include both mainstream scholars and mythicists.

So each term in our Bayes’s formula acquires errors from all three of these factors. And each factor compounds the errors in the others. As a result, for questions that are potentially vague, with a range of possible reference classes, each with poor quality or incomplete data, we should expect to have large errors.

Bias Error

So far I’ve assumed that errors are just random: that we are as likely to estimate too high as too low. But this isn’t true.

Carrier, for example, seems to recognize this, and decides to use ‘a fortiori’ reasoning, which is a way of saying “I’m going to bias the error in a way that doesn’t support my case, so I avoid the criticism that I may have accidentally biased it towards my conclusions.” This is admirable, and (barring the caveats around small values above) reasonable. But it only addresses bias from one source: the available data. In reality Carrier (and anyone else doing this) will also be choosing the definitions and the reference classes, and there is no similar a fortiori process for determining which definitions are least favourable to one’s cause, and which reference classes are most troubling, and adopting those[3].

Conclusion

So, what can we learn?

Well, for one, the inputs to Bayes’s Theorem matter. Particularly small inputs. When we’re dealing with rare evidence for rare events, then small errors in the inputs can end up giving a huge range of outputs, enough of a range that there is no usable information to be had.

And those errors come from many sources, and are difficult to quantify. It is tempting to think of errors only in terms of the data acquisition error, and to ignore errors of choice and errors of reference class.

These issues combine to make it very difficult to make any sensible conclusions from Bayes’s Theorem in areas where probabilities are small, data is low quality, possible reference classes abound, and statements are vague. In areas like history, for example.

[1] This is simple to calculate, but may not be true. When we estimate X and Y, we might rely on the same underlying data, which means errors in one value might be related to errors in the second. In that case, we say the errors are correlated, and the way they are correlated changes the way the errors flow through the equation. For the purpose of this post, I’ll assume that errors are independent. I don’t think this is a valid assumption for using Bayes’s Theorem in history, because the person doing the estimates is using their biases to do so, so errors will be quite correlated. The effect of this is to make the analysis of error even more complex.

[2] I can hear my internal math tutor weeping at some of the generalizations I’m making here. The ranges are really just an approximation of something called the probability density function, which is a way of saying how likely any possible value might be (the values close to our guess are hopefully the most likely). And when you put the p.d.f.s through a formula you use a process called convolution, which generates another p.d.f. telling you the likelihood that your output is any particular value. General convolution is hard, though, and involves calculus. So you might have to trust me that the approximations I’m giving in terms of ranges do reflect what would happen if you ran the full math.

[3] There is also another source of bias at this point that isn’t easily mapped onto errors: choice of evidence. The person considering Bayes’s Theorem on a historical event gets to choose what features of the evidence they consider. Are the gospels “stories of a Divine Human”? If one uses “stories of a Divine Human” as the definition of one’s evidence, then one will end up with a different conclusion than “stories of a Jewish Messiah”, for example. Perhaps you say we should have both, “stories of a Divine Jewish Messiah”; that would be better, but we’ll hit reference class problems with that (to what other figures do you look for similar evidence?). One can inadvertently introduce bias by considering different pieces of evidence. You might object by saying that Bayesian probability allows us to accumulate any number of pieces of evidence: we can keep adding extra claims and updating our probabilities. This is true, but it is rarely done (never in Carrier’s book), and even if done, some of these ways of defining or separating out the evidence are not independent (the messiah and divine claims above, for example), so cannot be accumulated by Bayes’s Theorem in the normal way.

10 Comments

Filed under Uncategorized

Code Sculpture: Two Adjustments

(define (updated-belief current opinion)
  (let* ((hegemony current)
         (novelty (- opinion hegemony))
         (impact (significance novelty hegemony)))
    (if (and (not (supports novelty hegemony))
             (> impact minor-significance))
        (+ current opinion)
        current)))
class Belief {
public:
  void update(Opinion& opinion) {
    Opinion novelty = opinion - current;
    Significance impact = novelty.significance(current);
    if (novelty.supports(current) || impact < MINOR) {
      current += opinion;
    }
  }
protected:
  Opinion current;
};

If this makes no sense, then feel assured this is a one-off. If it does, I’d love feedback.

11 Comments

Filed under Uncategorized