Minds, Brains and Computers
Lavelle (Suilin)
Source: Ward (Dave), Pritchard (Duncan), Massimi (Michela), Lavelle (Suilin), Chrisman (Matthew), Hazlett (Allan) & Richmond (Alasdair) - Introduction to Philosophy
Paper - Abstract



Author’s Abstract

  1. What is it to have a mind? We deal with this question during the third week of the course. We are certain that anyone reading this text has a mind. But what are the special properties that beings with minds have? What sorts of things have those properties: animals? Infants? Computers? In this lecture, we will discuss some of the approaches contemporary philosophers have taken to the question of what it is to have a mind. In the first part of the lecture, we begin our discussion with Cartesian dualism, which claims that the mind is immaterial, continue to identity theory, the view that the mind is identifiable with physical matter, and finish with functionalism, according to which a mental state is essentially identifiable with what it does. In the second part, we concentrate on the problems that the thought experiments of Alan Turing and John Searle pose for the functionalist account of mind.
  2. Author: Dr. Suilin Lavelle joined the Edinburgh department in Spring 2011, having completed a PhD at the University of Sheffield. Her primary research interest is in the field of social cognition and, more specifically, in the various answers given to the question ‘How do we understand other people’s psychological states?’.

Contents
    Introduction
  1. Theories of Mind
    • i. Cartesian dualism
    • ii. Identity theories
    • iii. Functionalism
  2. Mind as Computer
    • iv. Turing machines
    • v. Searle’s Chinese Room argument
    Conclusions

Comments
  1. There’s a full transcript available (Link), with an appendix on Descartes’ Argument from Doubt.
  2. Works recommended in the transcript are:-
  3. Other references given for the week are:-
  4. Finally, there’s a very brief YouTube clip of Putnam saying how the computer metaphor in the philosophy of mind moves us away from physicalism and towards functionalism, given the software / hardware distinction and the fact that we don’t care about the hardware, provided it’s up to the job. Link.

Correspondence

Correspondent 1
  1. We did week 3 on Wednesday – philosophy of mind. We found it mind-boggling in places, and Mike was all for giving up in the middle!!!
  2. Maybe I found it a bit easier, having read that book you lent me a while ago, by a Christian philosopher who expounded dualism4.
  3. Part of the problem is the terminology used – it’s like learning a whole new vocabulary for concepts.
  4. So – identity theory – we found the explanation of token identity versus type identity very difficult to follow. Also we had never heard of “C-fibres firing”, and replayed it several times to try to work out what she was saying! Unfortunately we’ve found the quality of the videos bad (for us), as our connection is rather slow, so they often pixelate and jump, which doesn’t help our understanding of difficult subjects! Having said that, we thought she was a very good lecturer, as she managed to make most of it understandable despite covering very complex thoughts (we thought).
  5. I found it difficult to understand what she meant about the “about-ness of thoughts”. Presumably she just meant knowing what thoughts “mean” rather than just understanding what the words are? Presumably that was the point of John Searle’s Chinese Room?
  6. Am I correct in saying that she thought it was difficult to say where we get our understanding of the “about-ness of thoughts” from?
  7. Also, was she saying it was difficult to say why we have self-consciousness and self-awareness?
  8. Is it still a philosophical puzzle that providing a functional analysis of something doesn’t explain why it has a conscious experience?
  9. Is any of the material presented new to you, or have you looked at all of this content before?

Response
  1. I think this lecture is too difficult for the intended audience. It covers almost everything that would be mentioned in first- and second-level undergraduate courses, but too fast and too sketchily for the material to be fully understood, and the follow-up reading is vast. While it does give a flavour of a lot of the field, it might have been more prudent just to take one or two self-contained topics (as the previous lecturer did), and deal with these in more detail with less haste. As you noticed, there are a few things that can only really be understood if you've already done the more detailed course! None of this was new to me, though it's still not easy. I agree that the video quality was poor - indeed, I couldn't see the point of watching a young lady waving her hands about, so didn't - but the sound stuttered from time to time, especially if I was using the PC for something else while she rambled on.
  2. To answer your questions properly would mean I'd need to go on at very considerable length - and you might not want to read all that, even if I could write it before we'd moved on to the next topic. I'll just jot down a few pointers (though not to further reading!), not necessarily in the order asked:-
    • C-Fibres firing: Well, this is the standard physicalist example, and was originally given in the 1960s (maybe). It doesn't really matter what C-Fibres are (though you can find out all about them at Wikipedia: Group C nerve fiber). The point is that a physicalist wants to argue that pains just are what goes on when the relevant neurological events happen. It's pointed out that C-fibres aren't the whole story - it looks like these are just the peripheral nerve endings (I've not read the article!) - so wherever the sensation of pain takes place, it's probably not there but further along the neural pathway. But anyway, "C-Fibres Firing" is just shorthand for "whatever physical events happen in humans when a sensation of pain occurs" - and the physicalist claim is that this is all there is to pain - the pain is experienced by the physical structure that gives rise to it, not by something else - a mind or soul that's not physical. These physical events are all there is to it. Indeed, the identity theorist says that the sensation of pain is identical to these physical goings on (whatever they are). Philosophers argue about whether "identity" is the correct term (rather than merely that pain is "nothing more than" this).
    • Type-Type vs Token-Token identity theories: physicalism has it that mental events are identical to physical events. But at what level? Token-token identity says that a particular pain in a particular person at a particular time is identical to whatever neural goings-on went on at that time. This is fine, but not very enlightening. As Dr. Lavelle said, Type-Type identities would generate a research programme - they would provoke us into finding out what pains - any pains in any sentient being anywhere - actually are. Just what is it (physically speaking) that makes all pains painful? So, you have a type of mental event (pain) and try to find the type of physical event (C-Fibres firing) to which it is allegedly identical. The trouble with this is that it seems to rule out non-humans from having pains if they don't have C-Fibres, or whatever. Personally I'd be happy with different sorts of pain - human-pain and octopus-pain - but philosophers say (in a loud voice) BUT WHAT MAKES THEM BOTH PAIN? That's where the Functionalists come in. They seem to have a different sort of identity - and it seems to hark back to the Behaviourists - that to be in pain just is to be disposed to behave in a certain way (wincing, aversive behaviours and all that). This might be fine for mental states such as beliefs - but it's not half the story for sensations, especially pains. The most important thing about pains isn't that they make us hop about, or that they report bodily damage - but that they hurt! And who knows what it feels like to an octopus, whether it feels anything like it feels for us (or even whether it feels anything at all). So, while functionalism may have something useful to say about the externals, it has nothing to say about inner feelings - and there's no reason why these feelings should be the same in different species. I like the Wittgensteinian idea of "family resemblance" in this context. Wittgenstein raised the idea with things like games - just what do all games have in common? Well, nothing, probably, but any two games share some similar characteristics, or they wouldn't both be games, yet they don't share the same characteristics as another pair of games (just as some - but not all - family members share blue eyes or a big nose). So, all human pains may share some things with other human pains, though not everything, and human pains share something, but much less, with octopus pains, but still enough for them both to be pains. Something like that.
    • Aboutness of thoughts: this is very difficult. The idea is that computers just manipulate (for them) meaningless symbols. So, a symbol (or set of symbols - like a question in Chinese) - which has meaning for a human being, but not for a computer - is fed in; the computer jiggles about with it, not knowing what it's doing other than following some algorithm (and it doesn't even know it's doing that); and out pops another set of symbols (the answer to the question, again in Chinese) that has meaning for the human being reading the output, but not for the computer that generated it. In the trade, this is referred to as "original intentionality" - the source of the "aboutness". It is said that digital computers have all their concepts programmed into them, and don't derive them in the way that makes them meaningful for them. Well, maybe you could make a learning computer, so that it learnt that the word (symbol) "octopus" stands for an octopus in the same way that you or I learnt the concept. I then get a bit confused - there are words, concepts and things. Octopuses aren't concepts - because concepts are mental things, and octopuses are squishy marine animals with eight legs - though we have a concept of an octopus, for which the word "octopus" stands. So, I suppose it's alleged that computers have symbols (words) but no concepts. But what is a concept? A sort of aggregate of the properties a thing (in this case) generally has? I can't see why a computer can't possess concepts in this sense.
    • The Chinese Room: the point is (allegedly) to show that computers only operate at the syntactic level - fiddling around with symbols - rather than at the semantic level - understanding meanings. In the thought experiment, nothing and no-one within the room knows Chinese, even though the room as a whole acts as though it does (there's a minimal code sketch of this purely syntactic rule-following after this list). There are a lot of answers to Searle. Usually, it's alleged that he's looking at the wrong level - obviously no component understands Chinese, but the whole assembly - if it really was of sufficient power to answer questions put to it in Chinese - would have that understanding. It's no use asking which brain-cell knows what - it's the whole brain - or large swathes of it - that does (assuming physicalism - Searle is a physicalist, he just doesn't think the digital computer is a good model of the human mind).
    • Consciousness: the hard problem of consciousness isn't explaining which bits of the brain give rise to particular conscious thoughts or feelings - you can poke about and find this out, and while hardly trivial, neuroscientists know how to go about the research. The really hard problem is explaining why all this jiggling about of neurons feels like anything at all, never mind the particular feeling it has on the occasion in question. I think hard-line physicalists just shrug off the question and say "it just does" - it's a brute fact that when matter is structured in this way, it feels to that matter the way it does - and that asking further questions just reveals the questioner to be a closet dualist.
    • Functionalism: as I've said, functionalism works fine for some mental events but not for others, and this is another argument levelled against digital computers being an appropriate model of the human mind. As the lecturer pointed out, a computer can be made of anything you like provided it runs the algorithms correctly - maybe out of baked bean tins and string (there's a small sketch of this "multiple realisability" idea below). But it's difficult to envisage anything made out of baked bean tins and string having sensations. Then again, it's difficult to imagine an assembly of baked-bean tins and string being complex enough to model what the human brain does; and, come to think of it, it's difficult to imagine how the lump of goo that is the human brain does what it does. So, failure of imagination may not tell us much. Thankfully, the lecturer didn't launch into consciousness studies itself, where a favourite line is that consciousness is a quantum-mechanical phenomenon arising in the human brain itself (the current favourite involves the microtubules within the brain's neurons; Wikipedia: Microtubules - Postulated role in consciousness). If so, functionalism is wrong.
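    • Sketch - syntax without semantics: the following is my own toy illustration (a bare lookup table in Python, far cruder than the rule book Searle imagines), intended only to make vivid the point in the "Aboutness" and "Chinese Room" bullets above: the program produces plausible-looking Chinese replies while nothing in it grasps what any of the symbols mean - it merely matches shapes against rules.

        # Toy "Chinese Room": a rule book pairing input strings with output strings.
        # Nothing here understands Chinese; the program only matches symbol shapes.
        RULE_BOOK = {
            "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
            "你会说中文吗？": "会，一点点。",    # "Do you speak Chinese?" -> "Yes, a little."
        }

        def chinese_room(symbols: str) -> str:
            # Return whatever the rule book dictates; no semantics anywhere.
            return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

        for question in RULE_BOOK:
            print(question, "->", chinese_room(question))

      Of course, a lookup table couldn't really pass for a Chinese speaker; the philosophical question is whether any amount of extra rule-following machinery would add understanding rather than just better mimicry.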
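    • Sketch - multiple realisability: again a home-made illustration (the names and behaviour are invented by me, not taken from the lecture): two physically different "realisers" occupy the same input/output role, so for the functionalist both count as being in the same mental state - which is exactly where the worry about sensations bites, since nothing in the role-description says what the state feels like.

        # Two different substrates realising the same functional role.
        class BrainRealiser:
            """Stands in for biological neurons (C-fibres and all that)."""
            def respond(self, stimulus: str) -> str:
                return "wince and withdraw" if stimulus == "tissue damage" else "carry on"

        class TinCanRealiser:
            """Stands in for the baked-bean-tins-and-string machine."""
            def respond(self, stimulus: str) -> str:
                return "wince and withdraw" if stimulus == "tissue damage" else "carry on"

        # Functionally identical, physically quite different.
        for realiser in (BrainRealiser(), TinCanRealiser()):
            print(type(realiser).__name__, "->", realiser.respond("tissue damage"))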

Correspondent 2
    It's interesting that you are wrestling with the paper on Animal Minds. You know I have always thought that humans are very arrogant in their dismissal of animals as almost automatons just because they cannot speak. I am sure they are much more intelligent than we suspect. Now you have a dog of your own I am sure you realise they are more devious and cunning than you would have thought before you owned one. Their sense of pain seems on a par with ours, in the way they pull away if they have a thorn stuck in their paw. Descartes’ un-anaesthetized vivisection fills me with loathing and disgust. But what really amazes me is the brain of an ant. Can you remember when, as boys, we used to keep a colony of ants in a big sweet jar? Well, I was sure that they must communicate with one another, as everything in the nest seemed to be so organised. If I was to disturb the nest they would all rush around trying to repair the damage, but they all seemed to do just their bit; they didn't all try to move the same piece of dirt (for instance). They also have a good sense of direction and self-preservation etc., and support all these life functions with a brain less than the size of a pin-head. Incredible.

Response
  1. I agree with you on animals. It's always struck me (maybe unreflectively) that it's likely that mammals at least - being of the same general structure as ourselves - will perceive things much as we do, only scaled down a bit (given their less-complex brains). I'm sure they feel pain, though maybe not as intensely as we do (it would prevent them going about their business in the wild). My experience with Henry is that when he has to have a smack, it really does hurt me a lot more than it hurts him. I think it's the arrogance of Descartes and company that I find so alarming. They were willing to give more credit to their reasoning ability than the manifest evidence of their senses. Mind you, they didn't restrict themselves to animals on that score - surgeons would routinely operate on babies without anaesthetics until a few decades ago "because babies can't feel pain". Well, they were wrong there too. See Link (Defunct).
  2. Yes, ants are extraordinary - but bees are the favourites for insect language. You have to be a bit careful about imputing too much to lower animals. Very complex "organismic" behaviour can arise from very simple local behaviour. There have been successful computer simulations of "starling flocking" or "fish shoaling" where very complex coordinated movements involving thousands of individuals can arise without the need for any intelligence at all - just the ability to follow automatically some simple rules (a rough sketch of the idea follows below).
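  3. For what it's worth, here is a rough sketch of that "simple local rules" point (my own toy illustration in Python, with made-up constants, in the spirit of Reynolds-style "boids" rather than taken from any particular simulation): each agent reacts only to its nearby neighbours - steering towards their average position, matching their average velocity, and keeping a little distance - yet coordinated, flock-like motion emerges with no central intelligence anywhere.

        import random

        NUM_AGENTS, RADIUS = 50, 10.0
        agents = [{"pos": [random.uniform(0, 100) for _ in range(2)],
                   "vel": [random.uniform(-1, 1) for _ in range(2)]}
                  for _ in range(NUM_AGENTS)]

        def step(agents):
            for a in agents:
                # Each agent sees only neighbours within RADIUS of itself.
                near = [b for b in agents if b is not a and
                        sum((b["pos"][i] - a["pos"][i]) ** 2 for i in range(2)) < RADIUS ** 2]
                if not near:
                    continue
                for i in range(2):
                    cohesion = sum(b["pos"][i] for b in near) / len(near) - a["pos"][i]
                    alignment = sum(b["vel"][i] for b in near) / len(near) - a["vel"][i]
                    separation = sum(a["pos"][i] - b["pos"][i] for b in near) / len(near)
                    a["vel"][i] += 0.01 * cohesion + 0.05 * alignment + 0.03 * separation
            for a in agents:
                for i in range(2):
                    a["pos"][i] += a["vel"][i]

        for _ in range(200):
            step(agents)
        print("flock centre after 200 steps:",
              [round(sum(a["pos"][i] for a in agents) / NUM_AGENTS, 1) for i in range(2)])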




In-Page Footnotes

Footnote 2: For instance:-

Footnote 3: I don’t have this, but I have his later "Crane (Tim) - Elements of Mind - An Introduction to the Philosophy of Mind", which I have analysed to death!

Footnote 4: I think this was "Cooper (John) - Body, Soul and Life Everlasting: Biblical Anthropology and the Monism-dualism Debate".




