In the comments of the last post, I was asked where I thought my moral sense comes from. This is an attempt to untangle the threads.
Human beings aren’t alone in displaying ethical behavior. Other animals with social groups larger than a family display behavior we’d consider ethically motivated if performed by a person. For example: helping unrelated suffering individuals (even of other species), ostracising those who steal food, reciprocating kindness, putting oneself in danger to draw predators away from others.
Human beings, like many other creatures, use social strategies for survival, and so are typical in having evolved systems of ethics to support those strategies.
So my morality is partly innate, a function of our evolutionary history.
A lot of the process of growing up is the process of learning morality: learning right ethical behavior, learning to ignore one’s selfish desires, learning to function in society.
I notice how we do this with our son. We teach him a mix of cultural ethics and genuine morality. The former are things that are wrong to do only because we have made a rather arbitrary decision as a culture not to do them – such as belching loudly at the dinner table. The latter are things that would be wrong under rational scrutiny – such as hitting someone when they don’t do what you want.
So my morality is partly taught, a function of the society and family I grew up in.
But both previous sources could be wrong. I could have grown up in an evil family. And I do not think evolved behaviors are a reliable determiner of morality. So I need a rational way of determining what is right and wrong.
The core of morality, in both categories above, is the idea that I am a member of a community of other individuals. And I recognize that those other individuals are like me. From this follows something like the golden rule: that I should act to others, as I would have them act to me.
This idea is common to cultures and philosophies going right back to our earliest written artefacts. The form I use we could call arbitrary substitution: I make moral judgements about choices, and each option in a choice affects a set of people. The moral choice is the one whose affected parties I would least fear to be.
I also think this is the objectively correct basis, because it seems to me to be a logical consequence of objectivity, which requires that my own consciousness is a non-privileged vantage point. Similarly, for there to be any objective morality, its conclusions cannot depend on which affected party’s consciousness happens to be mine; therefore the moral choice is the one all affected parties would agree upon, if they faced being arbitrarily assigned to each other’s consciousnesses. There’s much more to say here, but I’ll gloss over it all and assume the conclusion for the purpose of this post.
This leads to a moral calculus and therefore, I think, to a form of utilitarianism (again, skipping several intermediate steps of reasoning). Morality is not about rules, but about consequences. There is no ethical rule that should not be broken in some situation, though that situation might have to be extraordinarily contrived to the point of fantasy.
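The calculus described above can be sketched in code. This is purely my own toy illustration, under strong simplifying assumptions the post does not make explicit: that welfare can be represented as a number per affected party, and that under arbitrary substitution every party position is equally likely, so my expected welfare under an option is the mean of its welfare values – which ranks options exactly as a utilitarian total would.

```python
# Toy sketch of "arbitrary substitution" (my illustration, not the
# author's code). Each option maps to a list of welfare values, one
# per affected party. If I could wake up as any affected party with
# equal probability, my expected welfare under an option is the mean
# of its values, so ranking by expected welfare matches ranking by
# total welfare -- a simple utilitarian calculus.

def moral_choice(options):
    """Return the option maximizing mean welfare across affected parties.

    options: dict mapping option name -> list of welfare values.
    """
    def expected_welfare(name):
        welfares = options[name]
        return sum(welfares) / len(welfares)
    return max(options, key=expected_welfare)

# Hypothetical example: two options, three affected parties each.
options = {
    "lie":   [5, -4, -4],   # I benefit; two others are harmed
    "truth": [1, 2, 2],     # modest benefit to everyone
}
print(moral_choice(options))  # -> truth
```

The numbers are of course the hard part; the sketch only shows how "least fear to be any affected party" collapses into an averaging calculation once welfare is made comparable.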
So it is not absolutely immoral to kill someone, for example. When I say that murder is immoral I am saying that, in the vast vast majority of cases, killing someone is the immoral option. In the same way, war is immoral, yet in some cases I believe it can be the morally correct choice.
There are moral decisions which are finely balanced, or carried out with incomplete knowledge of the consequences, or morally neutral. I think it is only utilitarianism that can rationally make sense of how we make difficult ethical decisions in real life.
So this is a rational basis, but choosing every action by explicit moral calculation would be far too time-consuming, and unnecessarily complex in most situations. So in most cases I rely on heuristics that short-cut the moral calculation and (hopefully) give the correct answer.
But it is important to fall back on a more complete basis when a situation is not clear, or when someone calls a rule into question. My cultural and innate baggage may often go unchallenged, since it is incorporated into my moral heuristics. But when a heuristic is brought to my attention, I at least have a basis on which to reconsider it. And if on reconsideration I find it not to be useful for making the right moral choice, I am obliged to stop using it.
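The two-tier process above – cheap cached rules first, full calculation as the fallback – can also be sketched. Again this is only my illustration, with invented rules and a stand-in for the real weighing of consequences: consult a rule table, and only recompute when no rule applies or a rule has been challenged.

```python
# Toy sketch (my illustration) of moral heuristics as cached
# shortcuts over a full utilitarian calculation.

HEURISTICS = {
    "hit someone": "wrong",       # cached rule of thumb
    "return a favour": "right",
}

def full_calculation(consequences):
    """Stand-in for the slow weighing of consequences for all parties."""
    return "right" if sum(consequences) > 0 else "wrong"

def judge(action, consequences, challenged=frozenset()):
    # Use the cheap heuristic unless this rule has been challenged;
    # a challenged rule forces a fall back to the full calculation.
    if action in HEURISTICS and action not in challenged:
        return HEURISTICS[action]
    return full_calculation(consequences)

print(judge("hit someone", [-5, 3]))                              # heuristic
print(judge("hit someone", [-5, 3], challenged={"hit someone"}))  # recalculated
```

When the recalculation disagrees with the cached rule, the post’s conclusion applies: the heuristic should be revised or discarded.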
Where utilitarianism falters is in its account of ethical or moral judgements that aren’t weighed up rationally. As I said above, I operate on a rule (heuristic) basis most of the time, and deontological accounts of morality map onto this better. But ultimately I think rule-based ethics are adequately accounted for as convenient approximations of utilitarian calculations, whereas deontological approaches strain more when faced with moral judgements under conflicting rules.