From Whence Cometh the Mind?

I wanted to return to the topic I broached in my series on “God Does Exist, After All” last month. That mini-series didn’t really go down well (so much so that I never completed part 4; I felt the wheels came off in the debate about uniqueness). And I’ve been trying to work out why. One very good possibility is that the whole idea sucks. Possible, but I find that very difficult to accept, though I’ve been trying. Then I read Dawkins again this weekend, and found some solace…

Dawkins wrote two great books on evolutionary biology: The Selfish Gene, and The Extended Phenotype. The Selfish Gene has become phenomenally successful. The Extended Phenotype not so much. On the back of my copy of The Extended Phenotype, Dawkins encourages bookstore browsers: “it doesn’t matter if you never read anything else of mine, please at least read this”.

In high-school biology I was taught that natural selection operates on the level of the individual. Individuals compete for resources, individuals get to reproduce or not. Individuals are the units of evolution.

The Selfish Gene challenges that. It breaks apart the individual to find mini-individuals within. Different, of course, from the whole. The units of replication and evolution are the genes. The individual is significant primarily because it locks a bunch of genes together in a firm way (at least for one generation).

The Extended Phenotype also challenges the view that individuals are the units of evolution. It looks beyond the individual to find meta-individuals without. Different, again, from their constituent creatures. If the genes are the units of replication, then it makes sense to look at larger groupings where genes are locked together. Not locked as tightly as an individual, but connected nonetheless.

Why is The Selfish Gene a bestselling book, while The Extended Phenotype has sold a fraction of its units? Why, when Dawkins himself seems more proud of the conclusions and implications of the latter model?

Let’s talk about the mind. It is simplest to associate one mind with one brain (I’m ignoring the fruitless suggestion that minds aren’t just what the brain does – if you want the spiritual debate, hit the comments). That’s basic biology, psychology and common sense.

But we’re also used to thinking about minds as smaller units: the unconscious mind; the mind of the left-brain and the mind of the right; our work-mind and our family-mind. From Freud to Marvin Minsky, a serious amount of ink has been spilled on the minds within our mind. Sabio, for example, at Triangulations, has done an excellent job of developing a mini-mind model of religious belief.

To me it seems clear that the smallest units capable of thinking are smaller than our brain. The mind we perceive is made up of lots of these thinkers bolted together.

The analogy should be clear. If a group of thinking units grouped together becomes a mind, then why stop at those thinkers that are most tightly bound? Thinkers distributed over multiple people have as much claim to be a mind as genes distributed over multiple individuals have of giving rise to a phenotype. Those thinking units may have slower communication links (via language rather than direct electrical stimulation, say), but does that matter? I don’t think so.

I’m not terribly excited about the mini-minds. But I do think the meta-minds are fascinating. Fascinating and under-considered. Like their evolutionary analogues, meta-minds suffer from our natural tendency to reductionism: to looking for smaller components, rather than larger scale effects.

James Lovelock put forward the idea that one can think of the whole earth as a living organism, which he named Gaia: it is autopoietic, homeostatic (within bounds), and potentially self-reproducing. A certain group of people, with a particular predisposition, decided that the meta-organism Gaia was worthy of worship. Lovelock’s academic reputation has largely been muddied by the credulous spiritualization of this group.

It’s a salutary lesson, I think. To be clear: I don’t have any desire to worship the meta-mind, or to have anyone else worship it.




41 responses to “From Whence Cometh the Mind?”

  1. Is it the “meme” of which you speak? If so, I get a bit confused. Are you saying:
    “Genes are the basic unit of bodies (meta-genes), and Memes are the basic unit of ideas (meta-meme, or meta-mind).”

    “META” has always been a troublesome concept for me.

  2. Ian

    Meta is a horrible word to use. It doesn’t even mean (in Greek) what it seems to be used for in English. I wish there was a better word. But I can’t think of one. For better or worse, meta has come to be used for phenomena at higher levels of abstraction, compared to some starting point.

    I’m not talking about memes, no. Memes are a bit of a red herring here.

    I’m talking about minds.

    I have one brain. I have a mind. That mind is made up of smaller units (let’s call them ‘thinkers’), sometimes in tension, sometimes collaborative. Together the dynamics of those thinkers produce what I perceive to be my mind.

    Why stop there? If I have 100 thinkers in my brain making up my mind, could there be one thinker in each of 100 people’s brains that together make up a mind? (Numbers for illustration only, of course.)

    If not, is there something ontologically different about having the thinkers connected by dendrites, rather than by other kinds of signals (e.g. linguistic, observational, pheromonal)? Sure, dendrites are *fast*, but that isn’t an ontological difference.

  3. “I have a mind.”

    “Mind” is a rather abstract fuzzy notion that seems all ripe and ready for philosophical flights. I think we would have to be very careful about definitions here before we create a “META-mind”! And you just confessed that “meta” is a bit slippery.

    I have a brain.
    I have perceptions and notions (including the perception/notion of having perceptions and notions).
    As you stated, I have lots of little calculator circuits in my brain responsible for these perception/notions. But I am not sure I have a mind, unless you can tell me what that is.

    This is hackneyed philosophical stuff which I do not know at all, but my antennae go way up when abstractions are layered into speculative areas.

    I guess I have never been enamoured of the notion of mind and perhaps am a bit of a phenomenologist (if I am using that term correctly). I see no need to abstract much farther than I need to.

    I am not stating myself well here, but I guess I have the same allergic reaction as I had to your last posts. You want to claim this thing called “mind” and then take it and run.

    I view abstractions as
    (1) convenient tools for fast communications between people who agree on usage
    (2) tools of manipulation that play on cognitive illusions.

    But perhaps other readers like where you are going with this.

  4. Ian

    Well if you don’t consider yourself to have a mind, then that’s fine. I can imagine why you wouldn’t like this. I’ve never got that impression from your blog though. You tend to use the word ‘self’ more than ‘mind’, iirc.

    But I don’t think any of this relies on a particular definition of mind. If you can enumerate some features common to the outworkings of a brain, and those features you think are contingent on smaller units of brain-processing, then the conclusion works fine.

    You can call that bundle of features a mind, cognition, a self, a bob, or a wendy.

  5. Ian

    “You want to claim this thing called “mind” and then take it and run.”

    That’s the worrying part of your comment I think. I think you perceive a threat in thinking about this kind of model, because it is painfully close to a lot of woo. And in fact, I’ve done a disservice there by intentionally conflating terms like ‘God’ with it. Fair enough if that is true, mea culpa. But I think it is irrelevant to the question of a) whether such processes exist, and b) what questions they answer.

  6. Remember, Ian, I am a MANY-self kind of guy — not one self, not one mind. I don’t have a mind, I have many minds. (yes, yes, then, someone would say, “What does ‘I’ mean in that sentence?” But that is a long discussion.)

    Luke (at Common Sense Atheism) has a video today that challenges your possessive statement that you have “A Mind”. And I think it is evidence toward the Many-Minds (modularity) theory I like (but about which you are “not very excited”).

    Questions come to mind for your word “mind”.
    Does a chimp have one? — a dog, a cockroach, a flower?
    It is this ONE thing that you want to have that seems odd. Both being ONE THING and “having”. I don’t think the earlier “God” label is bugging me; it is actually the philosophical moves. But I am probably wrong, I have been more wrong than right in most of my past.

  7. I think my last comment is again in your spam filter, or held by a new-commenter filter.

    I forgot to add:
    Do you feel some of our present day computers have a mind?
    You know where all this goes, of course.

  8. Ian

    Fair enough; as I said before, if you don’t conceive of yourself as anything other than a disparate set of separable pieces, then I can understand why you don’t want to talk about larger scale phenomena. It helps to see where you’re coming from.

    Personally I’m quite comfortable about thinking of myself as an I. I’m quite comfortable at assigning some identity to the minds running on my particular pound of grey matter. And I’m quite comfortable about asking questions about the kinds of behavior that such a hardware bound collection of minds give rise to.

    I’m not very excited about ‘many minds’ because I’ve kinda been there done that, to be honest. I read Society of Mind 20 years ago as a teenager. It was awesome and inspiring, but awe is rarely permanent. I’ve used that, thought about it, used it in my work. I’m very happy with the reductionist idea that my cognition is made up of smaller things. I’m happy to call them minds, as Minsky does, or selves, as you do. I prefer ‘thinkers’ because both minds and selves are conventionally associated with individual people, whereas ‘thinkers’ in this context isn’t a common term. In AI, one might call them ‘agents’, which is also fine by me.

    I think trying to force me back into a position I agree with doesn’t get us very far. I agree mostly with your inference from Luke’s video. But I think you’re fixating on the word ‘mind’ as if you think I have some predetermined view of what that is that I’m imposing on cognitive phenomena. The opposite is true, and as I said above, the definition of mind is entirely irrelevant for this argument.

    To anyone who wants to argue that significant phenomena arise from the conjunction of the thinkers (selves, minds, whatever) running on one brain (if you’re not such a person – if you think that brain-level phenomena are irrelevant and thinkers are as large scale as you need to consider – so be it), I still come back to the question: what is it about that hardware that is significant? Couldn’t the same kinds of brain-scale phenomena arise from the conjunction of minds processing in multiple lumps of grey matter?

    I don’t want to have ONE THING at all. You got sidetracked in the God question about what I wanted to have, too. I don’t recognize my desires in your characterisation of them. In fact I perceive your position to be more fixated on ONE THING: there are just these selves, there is nothing else, no other phenomena are interesting, no other levels of abstraction are worth considering.

    As for the other questions – do other things have a mind? Well that is irrelevant to this discussion. I do have opinions, but to voice them here would be to completely sidetrack things unnecessarily.

  9. Ian

    Okay, how about abstracting this to get rid of any kind of terminology whatsoever.

    Let’s say running on your brain are 2 things (selves, minds, agents, bobs, wendys – doesn’t matter what they are or how you defined them, as long as you’re happy to say there are 2 of them and they are separate). Call them a and b.

    a has some observable effects A.
    b has some observable effects B.

    When a and b are both running at the same time, it is conceivable that we see some *new* effects, C. I’d say that’s pretty uncontroversial. Surely your reductionism isn’t so strong as to deny any effects above the level of your ‘selves’.

    So my point is really just this: do a and b have to run on the same physical brain in order to give rise to the effects C? I don’t see why. It may be that C is less noticeable to us if they aren’t (particularly because we maybe notice effects C because both of its components are occurring in our brains). But I don’t see any reason to say that ontologically a and b have to share the same grey matter to give rise to the same effects.

    It doesn’t matter what a, b, A, B or C are (though as we’ve seen we can pick lots of words for them, and model them in lots of ways), the basic structure of my argument still works, doesn’t it?

    I just happened to call a and b ‘thinkers’ and C a ‘mind’. Not because I have an agenda with those terms, just because they are the nearest terms I recognize. If you want to call them something else, fine. Call them ‘selves’ and ‘inter-self dynamics’ if you like.
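    The a/b/C structure above can be sketched as a toy simulation. Everything here is invented purely for illustration – the two ‘thinkers’ are trivial rules, and the numbers are arbitrary – but it shows an effect C (a repeating cycle) that neither a nor b produces on its own:

```python
# Toy sketch: two hypothetical 'thinkers' a and b acting on shared state.
# Each alone has a simple effect; together they produce a new pattern, C.

def a(pressure):
    """Effect A: always pushes the pressure up."""
    return pressure + 2

def b(pressure):
    """Effect B: only intervenes when pressure gets high, resetting it."""
    return 0 if pressure >= 6 else pressure

def run(steps=9):
    """Run a and b together and record the resulting pattern."""
    pressure, history = 0, []
    for _ in range(steps):
        pressure = b(a(pressure))  # a and b interacting on the same state
        history.append(pressure)
    return history

# Alone, a climbs forever and b does nothing; together they cycle.
print(run())  # -> [2, 4, 0, 2, 4, 0, 2, 4, 0]
```

    Nothing in the sketch requires a and b to run in the same process – or, by analogy, the same brain. The cycle C arises from their interaction, not from the hardware they happen to share.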

  10. Scenario 1:
    brain-module A: Dislikes a movie
    brain-module B: once studied philosophy
    Though Sabio did not like the movie at the start, late in the movie his memory of the philosophical issues came on-line and suddenly he enjoyed the movie deeply.

    Scenario 2:
    sub-module A: Dislikes a movie

    sub-module B: Loved that movie because of all the philosophical issues subtly explored.

    Person-Ian informs Person-Sabio about the subtle aspects and enlightens him about the movie. Suddenly Sabio sees the movie in a different light and is very glad he watched it.

    Sure, that works!
    Is that what you mean?

  11. PS: Just as long as “Liked Movie” is the feeling in an individual’s head and not floating out there in some Universal Mind.
    Which I am pretty sure you would agree with.

  12. Ian

    Yes, that’s an instantiation of it certainly.

    But the question has to be more general. Do you think there are any kinds of higher-level behaviours that can *only* exist when two thinkers are in the same brain?

  13. I am not sure I understand. “higher-level behavior” ?

  14. Ian

    @Your PS – you appear to have answered my question.

    For you ‘liked movie’ belongs to which self, A or B? Or is it just ‘floating around’ in the brain, not belonging to any self?

  15. Ian

    A higher level behavior is any phenomenon of interaction between selves that isn’t part of the behavior of any particular self. The process of appreciating a difficult movie, where no particular self fully gets it, for example.

  16. Both Sabio and Ian would describe something that they could come to agreement on using the English “I liked the movie”, but we all know how flimsy the statement is. For “liking” is a fluid, fluxing, temporary, fleeting, complex event loaded with several emotions, thoughts, past experiences and such.

    Thus, no one self in Sabio owns it, and it has no such substance that speaking of “belongs to” makes real sense at a deep level – though maybe at a conversational level. This is perhaps my being sloppy with my notion of “level of truth” – a Buddhist notion.

  17. Ian

    You are very slippery in this. Let me suggest a different instantiation then.

    How does a relatively good person take advantage of someone they love? Here’s one *possible* example (note – no pretence at universality, no ulterior motive, just a scenario to talk about).

    There is one self that is focussed on a selfish goal. We’re being concrete, so it is the career-focussed work-self.

    There is another that is sensitive and caring to the loved one, the spouse-self.

    These two can get into a nasty interaction in this way:

    The work-self makes demands on the loved one. The loved one follows along until they are pissed with the demands, and they voice that.

    Then the caring-self wakes up and takes priority. It apologizes genuinely, makes expressions of love and brings the loved one back into relationship. Then it goes dormant and the work-self wakes up.

    The overall effect (“higher-level behavior”) is that the person is manipulating their loved one, toying with their emotions to extract the maximum benefit with minimum consideration.

    So, in the interest of trying to pin you down onto anything we can discuss. Do you agree that:

    this manipulation is caused by the interaction of different ‘selves’ and isn’t reducible to the behaviour of any single particular self?

  18. 😆 First, I was the one who first applied the word “slippery” above to describe your rhetorical maneuvers. So I must declare a foul when you try to hijack my slimy pejorative accusation. Please be more original, and let’s not fall into the despicable Tu Quoque fallacy! 😆

    OK, now to business. You describe an excellent systems issue. I could think of two similar scenarios (“iterations”):

    A) Morning Sickness:
    as a result of a parasite (fetus) in a pregnant female. One is trying to rob the other but not too much. But in some situations the result is the death of the fetus. Here, when both survive, we see your quotation come true again:

    “The overall effect (“higher-level behavior”) is that the [fetus] is manipulating their [gene-donator], toying with their [physiology] to extract the maximum benefit with [maximally tolerated] consideration”.

    But do we really want to call this “higher-level behavior”?
    Or take the next example, stripped of personhood, love and other such slippery, nuanced Trojan horses.

    B) Orbit:
    An object with a given velocity approaches another object with a much larger mass. In most cases a deflected bypass will result, and in a few, an orbit.

    Does “orbit” exist? Who does it “belong” to? Is “orbit” a “higher-level behavior”?

    Seriously, I am not trying to be slimy or slippery or evasive or disagreeable. I am just cautious.

    Sure, our thoughts are influenced by others. I get that. Planets all influence each other too. Organisms all influence each other. I just sense an upcoming effort to make something more of mind than I am comfortable with. And I am not just bracing for it, I actually sense its embryo in the arguments you are already presenting. But I could be totally wrong. But in these last two dialogues, it is easy for me to make your iterations much more bland.

    You are right, we can call it “God, Mind, cognition, self, bob or wendy” and it still feels the same to me. You seem like a magician who is about to make something out of nothing. But I could be wrong in my reductionist narrow ways. BTW, I didn’t read Society of Mind until just recently and then only 2 chapters because I already knew the stuff from other readings and ponderings. I too have had a lot of “been there done its”. One of them is making something out of nothing. I do it time and again.

    😆 BTW, I did notice that bob and wendy did not come from your approved generic names list. 😆

    Trying to keep it light ! (I smiled while writing this) Sorry, I posted it in the wrong place the first time.

  19. Ian

    Fair enough. I appreciate the lightness of tone. I also understand your caution. But we can’t actually get past that caution to talk about a specific, even very simple, example. So I’m a bit stumped how to proceed.

    Your other examples are interesting, sure, and they are worth considering. I don’t think “mind” is anything special. It is just chemistry, which is just physics. So sure you can have physically or chemically, or even musically analogous situations. But so what? If I want to talk about chordal harmony it doesn’t help to try to have the conversation with someone who wants to insist that birds make sounds too.

    In the spirit of describing tones (you’re right, it is very difficult to tell): I’m a little frustrated at what I perceive as your fear on this, and annoyed at myself for giving you grounds to think I might be trying a crappy dunderhead magic trick, but I’m not pissed or anything.

  20. Smiling. Good. Seems well taken. Yeah, it seems we are stumped. You can just proceed for the other readers. Maybe I will catch on and understand.

    But like your last essay, you seem to want to take us somewhere and I am having trouble from the get-go even if I do assume you are not creating a god or an unembodied mind — which I really wasn’t imagining too much. But something triggered my caution and I couldn’t buy into the examples. But maybe others are and I am being a bit dense. So please do proceed for their sakes.

  21. Ian


    I think everyone else gave up reading this thread a *long* time ago 🙂

    I don’t understand what is difficult about ‘buying into’ the manipulation example. Okay, you can be cautious about where I’ll take it, but I struggle to see how it is in any way controversial. The only controversial bit I can see is the notion of the multiple ‘selves’ at all. But I assume that’s the bit you’re okay with. So I really am a bit baffled.

  22. You mean we are just talking alone in this cave — and throwing our thoughts in a bottle out to sea?

    When doing analogy argument — your manipulation example — it is best to choose analogies stripped of unnecessary abstractions, nuances and such that may later offer difficulty. So I offered counters — given that arguments from analogy are cumbersome at best.

    Your example came loaded with all of these:
    makes demands

    Too many possible unfruitful tangents. I tried to keep it to a simple systems analogy. Does that make sense?

  23. Ian

    I wasn’t doing an argument from analogy. I was trying to be concrete with a specific example because I am talking about mental phenomena, not planets or reproductive biology. Replacing them with non-mental phenomena because talking about thought is too difficult doesn’t get us anywhere.

    Why stop there? What definition of gravity were you using – quantum, relativistic or Newtonian? Too much room for misunderstanding. Why don’t we just talk about integers? If I add 2 and 3, does the resulting 5 belong to the 2 or the 3? Is it a meta-level phenomenon?

    That game never ends. At some point you have to talk about what you’re talking about. If terminology proves a problem, then you can address it. That may be painful, sure, it may be complex and difficult. But refusing to start a conversation before all the terminology is perfectly agreed just means you never start talking. I contend that there is nothing in my use of the terms you outlined above that should prevent anyone understanding my scenario. If you think I’ve used some term that could completely undermine the prima facie sense of the story, then say so. It seems like you’re playing linguistic games now…


  24. Now you appear to be getting testy or frustrated again.

    When you do a mental discussion, I try to bring the example back to the non-mental so you can show me how you think mental and non-mental differ. If they don’t differ, then we should be able to do the discussion in simpler analogies, no? Am I mistaken?

    I am not playing a game at all. But I can’t obviously explain my objections in any way that seems more than nit-picky, reductionist, quibbling or some other image to you. I don’t get what you are talking about when you say “mind” — and I reread your main post 4 times now.

    I am not sophisticated in the philosophical or even neurological means to discuss mental events.

    You said:

    The mind we perceive is made up of lots of these thinkers bolted together.

    I can let that go as a definition but it seriously does not mean something to me. I don’t perceive a mind. I understand how to use the word in normal conversation, of course.

    The analogy should be clear.
    — Ian (main post)

    I wasn’t doing an argument from analogy.
    –Ian last comment

    You said:

    If a group of thinking units grouped together becomes a mind

    Let’s say these modules (“thinkers”) link together — when they do it, they produce an effect or many effects in part of the brain. I don’t call that mind.

    But let me see, let me fumble with a definition of “Mind”. I guess I use the word “mind” in three ways:
    (1) my brain
    (2) the thoughts, feelings, perceptions … I am aware of at any moment
    (3) all the activity created by this brain at any moment.

    So it is not one thing, it is not stable and it is not owned. In fact, I don’t really imagine it as an it. I think I am weird though. Probably most people would buy into your story right away. But hopefully this explains that since I don’t seem to share fundamentals with you, I can’t even begin to share thoughts about:

    “thinkers distributed over multiple people having as much a claim to be a mind …”

    Seriously, just post your next post in your line of thought. I will let others respond who perhaps are sympathetic and not shut down the thread.

  25. Ian

    Oh frustrated certainly, but not testy. Sometimes it is worth driving outside one’s comfort zone, I think.

    My comment about analogy in the original post was referring to biology. The situation between genes and phenotypes is analogous to the one I’m posing about minds. My comment about not making an analogy referred specifically to the example I gave that you wouldn’t respond to: the example of manipulation as a higher level behavior caused by the interaction of ‘selves’.

    My frustration is that you’re still trying to head as fast as possible to the most contentious thing you can find (back to trying to find some definition for ‘mind’ when I’ve already said multiple times that the particular definition is irrelevant).

    I still fail to understand how you can not understand what is meant by the manipulation example. If we can start there we can build up, and at any point we can work out where we disagree. You don’t need fancy philosophical or neurological anything. I’m trying to talk about terms that are every day. That I would imagine you use every day, without an existential crisis about what they mean.

    My problem is, I think, that I simply don’t believe you can read that manipulation example and genuinely doubt what I meant by it, without first convincing yourself that you need some kind of esoteric code to unlock what I mean. My 11 year old nephew would understand what I meant, and I think if you asked 100 people to describe the story back to me, at least 99 would describe it well enough for it to be useful in this discussion. If I’m just deluded and you don’t get what I was saying, then I really am stumped.

    Unless we can start somewhere, we can get nowhere.

    Maybe you don’t want to think about this. Fair enough, but you are tenacious in responding, so I was hoping!

  26. I think my above conversation about “self” is critical to our understanding. Hell, I spend tons of time talking about it on my site because I think it puts a unique spin on how I address religion.

    Indeed, you are right, 99% of people would probably buy into your examples. But I think that is because most people buy into a normal sense of “mind”, “self”, “me” etc. I am not *trying* to run head-long into “contention”, it is just that your statements seem to take me there naturally. But maybe this is due to my perverse understanding of mind. And I am pretty sure 99% of folks have very different feelings about this from me. But then, you are chatting with me, aren’t you? You really aren’t saying, “Sabio, just be normal, would you?”

    You are reading too much into my attitude. I think you are mistaken in your evaluation of my stance toward this conversation.

    This conversation thread has gotten too long — too much scrolling. But to help, let me state:

    In your LOVERS iteration you conclude asking:
    “this manipulation is caused by the interaction of different ’selves’ and isn’t reducible to the behaviour of any single particular self.”

    Answer: Sure, an “interaction” is by definition not reducible to a single agent.

    If that does not suffice, I need you to re-state your claim, hypothesis, definition or whatever. I don’t know what you are trying to say. But then maybe that is what you planned to do in Part II, as you did with your God post. I must say, however, I find it odd that your post title is “From Whence Cometh the Mind?” and yet in your last comment you say:

    [you are] trying to find some definition for ‘mind’ when I’ve already said multiple times that the particular definition is irrelevant

    So, like Avalos, either the title is just intentionally exaggerated so as to make some other points and one must read carefully for all the caveats, or you changed your mind. (smile)

  27. Ian

    Okay cool. I’m glad I’m not going completely mad.

    Indeed, you are right, 99% of people would probably buy into your examples.

    I wasn’t going for that, so much. I meant that 99% understood what I was saying, without thinking I was using technical jargon or had used terms with the kinds of ambiguity that could lead them to understand something other than I expected. I think a much smaller percentage would buy into it, or even be bothered with trying to understand. I was just worried that you felt that terminology was so difficult that we couldn’t talk about *any* mental phenomena sensibly. I flicked back through your blog this afternoon and wondered how you could say almost everything you wrote there if you really were that skeptical about terminology. I’m glad we resolved that.

    I must say, however, I find it odd that your post title is “From Whence Cometh the Mind?” and yet in your last comment you say: [you are] trying to find some definition for ‘mind’ when I’ve already said multiple times that the particular definition is irrelevant

    There are stages, I think. We’ve come a long way in this discussion, and we’re discussing more foundational things now than when I started the post. That often happens. You’ve experienced it in teaching, I’m sure: you say “let’s talk about topic A”, but you spend the whole time talking about topic B on the way to A, and then someone says “I thought we were going to talk about A”.

    I think the road from here can lead back to discussions of mind. But, given that I don’t have any strong opinions on what phenomena are or aren’t a ‘mind’, it really doesn’t matter to me if we call it that, or if we talk just about systems. It does matter to me that we’re talking about brains, though. I think most folks have a naive view of ‘mind’ and I think the average naive view is also fine for this discussion (though it doesn’t depend on its naivety), which is the purpose of the title.

    My thesis (which we haven’t discussed in these comments, and I absolutely understand you haven’t conceded) is that, if we can find something that one brain can do, there is no reason in principle why the same thing can’t be distributed over multiple brains.

    So let’s adjourn. There isn’t really a part 2, but I’m sure I’ll post more on the topic in the future.

    Can I also say that I find this kind of difficult discussion *really* helpful. I have learned far more about describing this stuff in the two sets of unconvincing run-ins we’ve had than in the previous year of talking to myself and jotting random notes. You bring up challenges that I just never in my wildest dreams anticipated, and it is good that I’m forced to answer them. Particularly as neither directly confronts the core of what I’m thinking, but both completely challenge the ways I have of explaining it – they help me immeasurably to understand what is obvious and where I’m being cocksure and naive.

  28. Glad the dialogue is more than just frustrating.
    Two Questions on your last comment:

    I flicked back through your blog this afternoon and wondered how you could say almost everything you wrote there if you really were that skeptical about terminology. I’m glad we resolved that.

    Did you see how important my view of mind is to much of what I write? Doesn’t it seem rather different from the “naive” view? (I prefer to call it the “common sense” view)

    My thesis … is that, if we can find something that one brain can do, there is no reason in principle why the same thing can’t be distributed over multiple brains.

    I think you have said this before, and I am pretty sure I disagree. But it is still vague enough that it is hard to know where to address it. I will try.

    Here is something one brain can do: feel.
    But that “feeling” is particular to the brain. I can’t understand what you mean when you say “feeling can be distributed over many brains.”

    There, simple as that.

  29. Ian

    Did you see how important my view of mind is

    Well it looked before like you were objecting to the use of any mental term because of possible ambiguity. I understand (I think) what you write about the self. And I mostly agree with it. But I was worried because, applying your seemingly nihilistic position on words about the mind, I wondered how you could have written any of it. I concluded that you were quite happy to use words describing mental phenomena, but maybe just not when discussing this point.

    Here is something one brain can do: feel.
    But that “feeling” is particular to the brain.

    Is it? I don’t think so. For example: I’ve been involved in small companies that have definitely had feelings distributed over multiple employees. I once worked at a company of a handful of very dedicated engineers. They worked their asses off, and were super-enthusiastic about the tech. The company as a whole got into a depressive slump, but each of us, in candid conversations, was still optimistic. In ‘multiple selves’ terms, part of all of us saw the writing on the wall, and those parts were interacting badly, perhaps subconsciously, to palpably change the atmosphere in the office. I didn’t feel depressed about the company, but the company did start to seem depressed.

    In every job I’ve done, I’ve always had a part of me that thinks the work isn’t very useful, so that’s not new. I think that experience is best understood as the interaction of everybody’s ‘it’s futile’ self becoming something more than each individual.

    That’s not a mystical process to me. It is just the interaction of lots of individual mental processes across multiple people.

    Now I struggle to see how that is different to the days when I’m just under a cloud, and I can’t pin down why. When parts of my mind are feeding off things subconsciously and bringing my mood down.

    I can’t understand what you mean when you say “feeling can be distributed over many brains.”

    Well what do you mean by ‘feel’ in that case? Can you describe what it means to feel something, such that feelings can’t be distributed over multiple people?

  30. man you guys really going at it.

    just found the blog, love it. i did a psych degree and am a liberal christian standing in solidarity with the palestinians.

    what about language? language is mental, but is always from one person. if you had lots of people somebody would still have to do the talking.

  31. Ian

    Welcome to the blog! Thanks for stopping in. And thanks for reading to the bottom of such a tortuous thread (I was dead certain it was only Sabio and me down here!).

    Language is a great example, I think. You’re right someone has to do the talking. To a certain extent that means one brain is fundamentally responsible for the utterance. But there is good evidence that *what* you say is in fact a consequence of very complicated interactions between many people.

    Informally, here is a fun example with two ad execs.

    We pick up words, senses, inferences, contexts, and so on, which we then use. We do this internally, of course. When you are planning a discourse event, the various bits of your brain contribute, concepts are refined, phrases and words are suggested/remembered, and emotional tugs are felt. Eventually we speak. In what I’m suggesting, it’s no different. It’s just that the ‘other bits of your brain’ can be ‘bits of other people’s brains’.

    So you may say: this is completely obvious. Everyone *knows* we’re influenced by other people, no need to come up with some weird theory. Well, I agree. All I’m saying is that the very same kinds of processes that go on inside the brain also go on across brains. They aren’t mystical processes; they are natural and somewhat obvious. Like Dawkins extending the phenotype out beyond the individual, I’m suggesting that psychologically there is no strict dividing line between mental processes that run completely in our brain and those that run across brains.

    So I think language is a great example of what I’m talking about.

    PS: This isn’t meant to be funny, but I don’t think I’ve ever seen someone who punctuates their sentences so well but who doesn’t bother with capital letters! 😉 Hey – all shades are welcome here!

  32. Ian

    My thesis … is that, if we can find something that one brain can do, there is no reason in principle why the same thing can’t be distributed over multiple brains.

    There is a philosophical dimension to this. Saying that there are things that only a brain can do, that can’t be distributed, seems to smack of dualism to me. It suggests there is something special, non-material or magical about a person’s brain that isn’t just the firing of neurons. If it were just the firing of neurons then there’d be no reason why those neurons couldn’t be in different people’s brains. It is just a matter of the inter-connects between brains in that case.

    We know how neurons in my brain fire as a result of neurons in your brain firing. You communicate with me (in any way), that causes some response in me. So the question becomes: if unit A in my brain causes unit B to respond in a particular pattern, is communication sufficient to get unit A in your brain to make my unit B fire in a similar kind of way. I think so, because I think mental processes are relatively modular in that respect. It wouldn’t be entirely the same, but it would be similar.

    Here’s an example: I came up with a strap line for the church in our village. I’m friends with the minister. He asked me, and the bit of my brain that does marketing kicked in and refined this idea. Now I don’t have any real desire to market the church. But the same process does occur when I’m communicating my business to people. In the latter case the motivation, creativity and refinement was in my head. In the former case there were two brains at work.

  33. Juan Pablo de la Torre

    Hi there, another new guy here. Found the blog yesterday via some other atheist blog. Natural born atheist, no degree, self-learning web geek, native Spanish speaker and, virgin Mary Immaculate, I love to classify myself and others.

    Isn’t it amazing how much a guy can read just to put his ideas in a blog post?

    Well, to the point.

    Ian, there’s something called Cloud Computing that, I think, serves to visualize your thesis.

    Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and other devices on-demand, like the electricity grid.

    Or, in simpler terms, it’s a bunch of computers working as if they were one, connected only through an Internet connection. They can do whatever a single computer can do, only faster at the cost of more hassle on data transfer.
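    To make the analogy concrete, here is a toy sketch (every function name in it is invented for illustration, not any real cloud API): the same sum computed by one “machine” and by a pool of workers that share nothing but the inputs and results passed between them.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def square(n):
        return n * n

    def single_machine(data):
        # One computer does all the work itself.
        return sum(square(n) for n in data)

    def cloud(data, workers=4):
        # The same work farmed out to a pool of workers that only
        # communicate by sending inputs out and getting results back.
        with ThreadPoolExecutor(workers) as pool:
            return sum(pool.map(square, data))

    data = list(range(10))
    print(single_machine(data), cloud(data))  # same answer either way
    ```

    The result is identical; only the cost and hassle of moving the data around differ.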

    This works only because computers can actually communicate pretty much like brain functional areas do; humans can’t. Collective Intelligence is often achieved through common language, but more complex processes are impossible because of the characteristics of human interactions.

    If, and only if, humans were able to communicate at a lower level (that is, chemical or electrical signals being sent from one brain to another), a real “mind” could be achieved.

    Meanwhile, you can have social networks and the Internet in general.

    Refer to Wikipedia for sources.
    Someone wrote about the Internet being a brain, if you are interested.

  34. Ian

    Welcome to the blog!! I like the metaphor. Cloud computing does seem to be the same kind of idea – pre ‘cloud’ computing it was also called ‘federation’.

    Thanks for the link – interesting idea that the internet is a form of brain.

    So I want to ask you one thing. In your comment you say:

    If, and only if, humans were able to communicate at lower level (this is, chemical or electrical signs being send from one brain to another) a real “mind” could be achieved.

    I have to ask why you think that.

    You see, the communication between people is much slower, and it isn’t direct. But if I say something to you, I get neurons to fire in your brain. So how is that different to if I had dendrites directly wired into your skull?

    Let me clarify that I don’t think that the minds that emerge from these processes are exactly like a single human mind. But I don’t see any reason why some form of processing can’t be distributed.

    “Minds in the cloud”, hmmm – catchy, but unfortunately it might be more descriptive of this train of thought than I’d like 🙂

  35. Ian

    Oh, and Collective Intelligence is one of a whole range of phenomena by which I think this process is already partially understood. Thinking of this kind of distributed processing as being similar to the internal workings of one brain has the potential to unify a bunch of currently unrelated ideas, I think.

  36. Juan Pablo de la Torre

    I have to ask why you think that.

    Quite simply: you can’t stimulate someone’s taste or smell by talking to them. As long as you can’t share that kind of stimulus (and many others) you can’t have an analogy of a human brain.

    Thus, this meta-mind can’t have the same processing capabilities of a real mind.

  37. Ian

    So there are two answers to that objection that come to mind:

    1. Can someone without a sense of smell or taste (or any other sense) have a mind? If so the relevance of sense data is dubious.

    2. If you break down the function of your brain, only a very small part of it actually processes sense data. By the time sensory stimuli propagate out to other bits of the brain, they do so in a form that is already interpreted. This can be shown with fMRI scans: other than the small areas directly receiving sense data, most of the brain activity for, say, feeling a cold breeze is the same as *thinking* about feeling a cold breeze. If that is the case, then what is to stop the meta-mind having its low-level sensing circuits in one person, and whatever responds to those in someone else? The communication between the two is surely no worse than telling someone to imagine a cold breeze in an fMRI scanner.

  38. Juan Pablo de la Torre

    Now I think you are comparing the cognitive capabilities of a meta-mind with those of a human mind, rather than making a complete analogy with a human brain.

    Well, a meta-mind would certainly be able to process information in a similar fashion to a human brain but, to be comparable to one, its components would need to be able to communicate using a language more sophisticated than any I’ve known of. A language that grants them the ability to communicate unambiguous feelings and sensations.

    Answers, sort of.

    1. Can something without senses at all have a mind?

    2. Humans have some capabilities of emulation that drive them to believe their senses are being stimulated, but I’m sure those capabilities are not required for an intelligence to be considered a mind.

  39. Ian

    I think you’re right. A meta-mind is somewhat different to a human mind. That’s certainly where I started. But the more I think about it, the more I struggle to see any absolute dividing lines.

    A language that grants them the ability to communicate unambiguous feelings and sensations.

    Now this is going well off topic – but what makes you think that the different parts of our brain communicate unambiguous feelings and sensations? I think the evidence is to the contrary: the brain is messy, prone to error and misinterpretation of itself, and various functions in the brain can often be poorly integrated.

  40. (1) Here is a review of a book discussing how Religion is Like Language. I think it is a wonderful analogy in many ways.

    (2) Ian: When you phrase things certain ways I agree with you 100%. But other times, it seems to go in another direction and I can’t nail it down. And other stuff I just don’t understand – like your minister and marketing story. Sometimes I waver between thinking: (a) Is he just saying something obvious? (b) Is he sneaking something in? (c) Am I not understanding him?

    (3) I don’t get your reference to Collective consciousness

    (4) You ask:

    Can someone without a sense of smell or taste (or any other sense) have a mind?

    And yet you don’t care about a definition of mind?

    I must say, Juan Pablo’s questions make sense to me. He seems to worry about similar things that I do. Maybe we are both confused. Maybe you can spot both of our false assumptions.

  41. Ian

    1) Thanks.

    2b) That suggests you think I have something to sneak in. Some ulterior motive 🙂

    3) I don’t think I mentioned collective consciousness, did I? I do think about consciousness. But that’s a chunk of stages on, and would derail this discussion. Whether or not groups of brains are conscious is a bit more fanciful than I want to go right now. But as I said before, I don’t want to overload this argument with my view of ‘mind’. So if you want to run with consciousness, that’s fine too.

    4) The response you quoted was in response to Juan Pablo, so don’t take it out of context. If Juan Pablo thinks that you can’t have a meta-mind without sensory perception, does he think you can have a regular mind without those same senses? Again I’m not trying to put forward any definition of mind. It is a basic method of arguing against “A is not the same as B”: when someone says “Ah, but A is an X” you say “Isn’t B an X too?”, or if they say “but B isn’t a Y” you say “Are you sure A is a Y?”.

    Juan Pablo’s questions make perfect sense to me too. I’m encouraging him (and you, then) to be clear about what, specifically, under *your* understanding of mind, makes these particular properties significant.

    I think what Juan Pablo is arguing is that tight neural connection is essential for his definition of mind. My job, if I’m right, is to find out why he thinks that, and to encourage him to be explicit about what it is about tight neural connections that makes it so. My hypothesis is that focusing on tight neural connections for the definition of mind is our natural, gestalt position. But it isn’t correct. It might modify things, but doesn’t form a black and white dividing line.

    If part A of my brain enters a pattern of stimulation a, and thereby influences part B of my brain to enter pattern b, is there any reason to believe that part A of my brain entering pattern a couldn’t indirectly cause part B of your brain to enter pattern b? Are there any such combinations that wouldn’t be possible, in principle?

    So JP suggests this: Part A (my taste processing centers) of my brain enters a pattern of stimulation a (tasting a delicious ice-cream), and thereby influences part B (my visual memory) to enter pattern b (remembering the colors of the old ice cream shop at the seaside where I holidayed as a child). Could you tasting an ice-cream indirectly trigger my visual memory? Of course, that happens all the time. Indirectly through you telling me about it, or maybe me seeing you enjoy the ice-cream.

    To me this part of the argument is that simple. We can build up to the things you can do with the argument later. But at this stage, JP (and gaza) are right in trying to test the basic premise that mental phenomena are not bounded by a single person’s brain. Any more than phenotypic expression in Dawkins is bounded by a single individual’s body.
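    That premise is simple enough to put in code. Here is a toy sketch (every name in it is made up for illustration): unit B responds only to the signal it receives, so it ends up in the same state whether A drives it through a direct internal link or through an encode/decode step standing in for ordinary communication.

    ```python
    def unit_b(signal):
        # B's response depends only on the signal it receives,
        # not on where that signal came from.
        return f"b:{signal}"

    def within_one_brain(pattern_a):
        # A drives B through a direct internal connection.
        return unit_b(pattern_a)

    def across_two_brains(pattern_a):
        # A's pattern is encoded into a message, sent, and decoded
        # by the other brain before it reaches B.
        message = f"msg({pattern_a})"   # speaking / writing
        decoded = message[4:-1]         # hearing / reading
        return unit_b(decoded)

    print(within_one_brain("a"))    # b:a
    print(across_two_brains("a"))   # b:a  -- same pattern in B
    ```

    The channel is slower and indirect, but nothing in principle stops pattern a from producing pattern b across the gap.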
