Comments
- What is Consciousness?: Examples – dental work with or without anaesthetic; eyes shut or open; sleep with or without dreams.
- The Indefinability of Consciousness:
- There seems to be no scientific, objective definition that captures the essence of consciousness.
- Definitions in terms of the psychological role conscious states play, or in terms of the physiology, seem to miss out the essential feel.
- We can imagine robots with electronic brains that satisfy a scientific definition, but have no feelings. Lights on but no-one at home.
- Similarly for androids made of defined chemical and physical stuff.
- Analogy with jazz: “Man, if you gotta ask, you’re never gonna know”.
- What is it like to be a Bat?:
- "Nagel (Thomas) - What is it Like to Be a Bat?" raises the distinction between the subjective and the objective – which we often run together as humans, though they might come apart in intelligent robots.
- The point is that while we can imagine what it would be like for a human to act like a bat, we cannot imagine what it is like for the bat itself, despite knowing all there is to know about bat physiology.
- For instance – no doubt – echo-location would seem to the bat more like vision than hearing does to us, but we can’t fill in any of the details.
- Experience and Scientific Description: “… no amount of scientific description will convey a subjective grasp of conscious experience.”
- How Does Consciousness Fit In?: The central problem is to explain the “what it’s like” of phenomenal consciousness – how this fits into the objective world and in particular how it relates to the scientific goings-on in the brain. There are three options, which will be examined in due course:-
- Dualist: conscious experience is genuinely distinct from brain activity. But, if the world contains subjective elements, how do they interact with normal space-time entities, and what principles govern their emergence?
- Materialist: despite the appearances, there is a unity between the subjective and objective. The problem is to explain how the mind and brain can possibly be identical if they appear so different.
- Mysterian: the understanding of phenomenal consciousness is beyond human beings at present, and maybe forever.
- Hard and Easy Problems: David Chalmers distinguishes between the “hard” and “easy” problems of consciousness.
- The “easy” problems – while far from simple – are those that psychologists and physiologists know how to address using standard scientific methods without raising insurmountable philosophical obstacles.
- Easy problems deal with the objective study of the brain – the causal roles played by psychological states, and how these roles are implemented in the brains of different creatures.
- For example, pain is a state typically caused by bodily damage which typically causes a desire to avoid further damage. In humans it is implemented by A- and C-fibre transmissions. Similar analyses are available for vision, hearing, memory and so on.
- But such analyses tell us nothing about the feelings involved. Analyses involving causal roles and physical implementation apply just as much to unfeeling robots.
- The “hard” problem is to explain where these feelings come from.
- The Explanatory Gap: This is the expression used by Joseph Levine. Objective science can only take us so far – there seems to be a gap between what science tells us and what we most want to know – why does pain hurt?
- Creature Consciousness: Animals are said to be conscious if they have some conscious states.
- The Hard Problem is New: It’s only since the 20th century that the physical world view has tended to squeeze out consciousness. In earlier times it was taken for granted that there was a world of conscious minds independent of the physical world. The mental was supposed to be at least as basic, with matter a second-rate citizen.
- Rene Descartes’ Dualism: Despite his contributions to the foundations of modern physical science, Descartes never doubted that conscious minds exist at a distinct non-physical level. As a dualist, he thought that the mental and physical were two separate but interacting realms.
- Matter in Motion: Descartes’ account of the physical world was austere and differed from most accounts before and since. He thought the physical world was just matter in motion, with all action by contact. Secondary qualities are not in the things themselves, but colours – say – are impressions produced by the action of material particles on our sense organs.
- Mind Separate from Matter: Outside the physical world is the mental world of unextended mental / conscious elements – thoughts, emotions, pleasures. The only property these elements share with matter is location in time, though the mental and physical interact – eg. sitting on a pin causes pain, and pain causes your body to jump.
- The Pineal Gland: a “whacky idea” but an honest answer – the function of the PG is still not fully understood – to a serious problem. The causal interaction of mind and matter remains the Achilles heel of contemporary dualism.
- Berkeley’s World of Ideas: sceptics argued that Descartes’s dualism left us ignorant of the world of matter. Berkeley “solved” the problem by denying that there was a world of matter for us to be ignorant of; mental experiences are all there is. “Esse est percipi”. While Idealism is an affront to common sense, Dr. Johnson’s “refutation” by kicking a stone fails: Berkeley didn’t deny the existence of sense impressions – he simply claimed that their causes were mental rather than physical.
- The Idealist Tradition: Idealism’s philosophical advantages and impregnability to disproof attracted many philosophers, including those listed below.
- Idealism in Britain: Despite British philosophy being renowned for adherence to common sense, many of its leaders have been idealists:-
- John Stuart Mill: physical objects are “permanent possibilities of sensation” with no separate existence apart from our sensory awareness of them.
- Bertrand Russell: Regarded the physical world as a figment of our mental perspective, a logical construction out of the sense-data we are aware of in perception.
- A.J. Ayer: The material world has no reality apart from its reflection in the deliverances of our sense organs.
- The Scientific Reaction to Idealism:
- Idealism has no problem with consciousness, out of which it builds reality.
- But this just shifts the problem – how are material things part of reality?
- The tide moved against idealism firstly because of a problem of privacy: if mental events are private, how can third parties know anything about them – and hence – according to idealism – about reality?
- Behaviourist Psychology:
- The behaviourist response – from John B. Watson and B.F. Skinner – was to claim that a scientific psychology should be based on an experimental study of behaviour, not on individuals’ judgements of their feelings.
- They learned a lot about the training of rats and pigeons using rewards & punishments.
- The Skinner Box:
- “Operant Conditioning Apparatus”: initial random lever-press releases food, which reinforces the pressing, the frequency of which increases with Positive Reinforcement.
- An operantly conditioned rat will continue pressing the lever even when the food reward ceases.
- Both Watson and Skinner applied their views to human beings.
- Watson was an extreme environmentalist: claiming that the human mind is formed entirely by nurture – rewards and punishments – rather than by genetic nature.
- Skinner wrote Walden Two – as a “sequel” to the American idyll "Thoreau (Henry David) - Walden"; therein, Skinner proposed that child-rearing should be rigorously based on a system of rewards.
- The Ghost in the Machine:
- While psychologists – “methodological behaviourists” - rejected subjective experience as bad methodology, philosophers – “logical behaviourists” – made out that subjective experience was incoherent.
- They claimed that all that was meant by “mental states” were publicly observable inclinations to behave in certain ways.
- Gilbert Ryle believed this and described the traditional view of the mind as a subjective realm controlling the body as “the ghost in the machine”.
- The Beetle in the Box:
- Ludwig Wittgenstein’s Private Language Argument associates him with Logical Behaviourism. Wittgenstein’s argument is that public verification is required for the working of language, and that a language that can only be checked by one person makes no sense.
- Similarly, we wouldn’t know what we were talking about if talk of mental states referred to private inner episodes.
- We might each be referring to different things if we referred to “the beetle (hidden) in our box”, or to nothing at all (if the box were empty).
- For mental talk to have objective content, we must regard the mental realm as intrinsically connected to the behaviour that makes it publicly observable.
- Psychological Functionalists:
- Both logical and methodological behaviourism are now seen as over-reactions to the subjectivist view of the mind.
- There’s craziness in supposing that mental states can never be known introspectively, only by observation.
- The “behaviourist joke” – “you’re feeling OK, how am I?”
- Behaviourism has now been superseded by Functionalism, which upholds behaviourism’s resistance to treating mental states as essentially subjective, yet allows them to be internal while not necessarily being observable in behaviour.
- The example given is someone being exhorted not to display pain-behaviour to avoid discovery.
- So, mental states are to be thought of as internal states identifiable by their typical causes and effects – pain typically arises from bodily damage and typically causes avoidance-behaviour, though actual behaviour depends on interaction with other beliefs and desires.
- Functionalists consequently think of mental states as causal intermediaries. While internal, they are not to be identified purely according to their feel. While unobservable, they are still objective parts of the causal-scientific world.
- Mental states are consequently thought of as being real – analogous to scientific unobservables such as atoms, genes or quarks.
- Structure versus Physiology:
- Functionalism has no commitment to what mental states are made of.
- While functionalists turned inwards from behaviour to the brain, they didn’t involve themselves with neurons and functional areas of the brain, but instead drew flowcharts.
- So, they hypothesised mental structure – in terms of functional roles – in abstraction from physical structure.
- The Mind as the Brain’s Software:
- The analogy is with digital computers – with the brain as the hardware and the mind as the software.
- The analogy is stretched to allow that the mind might run on different hardware platforms, just as computer programs can run on different models of computer.
- What matters with software is its causal structure. On any particular hardware, running the software causes some internal states – it doesn’t matter quite what these are as long as they satisfy the structural requirement.
- Variable Realisation:
- According to functionalists, mental states are software rather than hardware / “wetware”.
- Example: Human and Octopus Pain
- Humans and Octopuses both show pain-behaviour – which typically arises from bodily damage and typically causes avoidance behaviour.
- This is in spite of the fact that their “circuitry” differs.
- So – the functionalist claims – because the function of pain is the same in both cases, they have the same software but different wetware.
- A Physical Basis for Mind:
- Because functionalism deals only with structure rather than what mental states are made of, it is consistent with dualism or even idealism.
- Maybe some special “mind-stuff” arises in the brains of conscious creatures to provide the infrastructure needed to fulfil the functional roles.
- But most functionalists are materialists, arguing that the analogous computers are entirely made of matter.
- A Modern Dualist Revival:
- Functionalism linked to materialism leads to the Hard Problem of consciousness – the mental “feels” that make life worth living.
- One response to this problem is to insist that the mind must occupy a separate non-physical realm after all. If modern orthodoxy implies that human beings are unfeeling automata, so much the worse for orthodoxy.
- David Chalmers is cited as a modern-day dualist, though one less extreme than Descartes. Descartes thought of mind and matter as separate substances. However, …
- A Dualism of Properties:
- Chalmers and the like think of human beings as a single substance with two kinds of property – they are property dualists.
- They consider both behaviourism and functionalism as over-reactions to subjectivist idealism.
- Rather than being just an intuition, modern dualists have two arguments:-
- The Argument from Possibility: derives from Rene Descartes.
- The Argument from Knowledge: derives from Gottfried Leibniz.
- Descartes’ Argument from Possibility:
- Descartes argued that it is “perfectly possible” for mind and body to exist separately.
- For, there’s no apparent contradiction in the ideas of ghosts or immortal souls.
- While – in point of fact – there might not be any ghosts, it still makes sense to think we might continue to exist as conscious beings without a body, as millions have comfortably hoped.
- Even if in reality mind and body are always found together, the possibility of posthumous survival implies their real distinction.
- If they were really the same thing, what sense could be made of their coming apart?
- A modern variant of this argument – attributed to Saul Kripke – is the Zombie argument.
- A Zombie Duplicate:
- Kripke imagines a molecule-by-molecule duplicate of himself – such as might be produced by a Star Trek holocopier – but which has no conscious experience.
- The philosophical term for such individuals is “zombie”, not to be confused with the living dead of B-movies.
- Kripke’s zombies don’t bang into the furniture but act just like normal human beings. They have the same brain structure, but just lack the inner feelings.
- Even if – as seems likely – there are no such zombies in the physical universe, it is (as with Descartes’ argument) the mere possibility that is important. There doesn’t seem to be anything logically contradictory about the notion of a zombie.
- If we admit that zombies are possible, then conscious properties must differ from physical or structural properties.
- Papineau has an interesting diagram of two doppelgangers screwing together a couple of androids. One asks the other “By the way, which one of us is the zombie?”. The answer is “How should I know? You’re the one with the feelings.” This is presumably ironic, but raises interesting questions.
- Leibniz’s Argument from Knowledge:
- This argument is attributed to a passage in Gottfried Leibniz’s Monadology.
- The argument is that if we walked about in a (super-sized) machine that “produces thinking, feeling and perceiving” we would see nothing but moving parts – and “never anything that could explain perception”.
- While the text goes on to investigate the “Red Mary” thought-experiment that Papineau considers to be the modern variant of Leibniz’s idea, it strikes me that this argument is similar to John Searle’s “Chinese Room” argument.
- Leibniz’s point – says Papineau – is that even if we knew all there was to know about the workings of the brain, we still wouldn’t know about consciousness.
- The Modern Argument from Knowledge:
- This is a TE proposed by Frank Jackson.
- Mary’s a research psychologist – supposed to have grown up in a black-and-white environment – who knows all there is to know about colour – the properties of light and of eyes and the visual centres in the brain – but has never seen red. Then one day she sees a red rose and so – it is said – comes to learn something new, namely, what it is like to see red.
- If this is right, not all mental properties are physical or structural properties – because (ex hypothesi) Mary knew all these before she experienced the colour red.
- Hence, this further property must be distinct from the physical and structural properties she already knew about, since she learned something new – the phenomenally conscious “what it is like”.
- A Dualist Science of Consciousness:
- We’re referred to David Chalmers who – Papineau claims – while not anti-science is persuaded by these dualist arguments that the scope of science should be expanded to include consciousness, just as physics was expanded to include electromagnetism rather than hope to reduce it to mechanics. Chalmers thinks that “there is a separate phenomenal realm where conscious awareness can be found”.
- We need a theory that accounts for the emergence of conscious states in the same way that Maxwell’s laws govern electromagnetic fields.
- Arguments against Dualism:
- Attempts to revive dualism have to face the old problems of mind-body interaction originating with Descartes and his pineal gland.
- However, the modern dualism is a theory of properties rather than substances, so doesn’t have to explain how two different substances communicate causally.
- But we’re still left with the most difficult question – how can mind influence matter without violating the laws of physics?
- Causal Completeness:
- The physical world appears to be causally complete.
- An example is given of a goalie’s save: retinal registration of ball’s motion → neuronal activity in sensory cortex → physical activity in motor cortex → electrical messages travel through nerves → physical contraction of muscles.
- The Demise of Mental Forces:
- Hence, we never seem to leave the realm of the physical, and there’s no room for non-physical properties like conscious experience to make a difference to behaviour.
- Conscious experiences seem to be causal danglers, with as little impact on events as a child’s toy steering wheel has on the direction of her mother’s car.
- Squaring dualism with the causal completeness of physics was recognised as a problem in the 17th century. While Descartes wasn’t concerned, his successors pointed out that deterministic physics ruled out mind influencing matter.
- Newtonian Physics:
- This problem receded a bit in the 18th and 19th centuries as Newtonian physics allowed action at a distance, rather than insisting that change in motion requires contact between bodies.
- Not just gravity, but chemical forces and “forces of adhesion”.
- Also, vital and mental forces that allow living creatures to direct their bodies.
- It is only recently that such forces have begun to seem cranky. Newtonians thought of them as no more mysterious than gravity or magnetism. According to Papineau, C.D. Broad in his 1923 book Mind and its Place in Nature defended an “emergentist” philosophy whereby special “configurational” forces arise in matter of sufficient organisational complexity, like living bodies and intelligent brains.
- Back to Descartes:
- The current consensus:-
- Still have action at a distance rather than cause requiring contact.
- Quantum mechanics means we no longer have physical determinism.
- Material effects always have material causes.
- There are three fundamental forces:-
→ Strong nuclear
→ Electroweak
→ Gravity
- All non-random influence on the motion of material bodies results from these forces.
- So, there’s no room for an independent mind to make a material difference. The mind cannot move the body.
- Materialist Physiology:
- The major reason special mental forces have been discredited is advances in neuroscience.
- While it seems contrary to common sense that a physical system can display human behaviour, brain research suggests the opposite.
- Particularly, this is demonstrated by advances in understanding the body’s neuronal network, and the chemistry of the neurotransmitters that enable inter-cellular communication.
- No Separate Mental Causes:
- While the picture is far from complete, we’ve learnt enough in the last 100 years to be sure there are no “special mental forces” lurking in intelligent brains.
- Two of the last to hold out against the causal completeness of physics were John Eccles and Roger W. Sperry, both Nobel prize-winners. But the consensus is against them, even though theories of physics as we know it may not be correct or complete.
- Modern physical science would be very surprised indeed were matter to move other than from physical causes, though this idea is not incoherent.
- What about Quantum Indeterminism?:
- Does quantum indeterminism create the loophole to allow mind to make a difference?
- No, because QM fixes the probabilities of the outcomes of events. If the mind – by way of independent conscious decisions – could affect the probabilities of neurotransmitter movements, then their prior probabilities wouldn’t be fixed by QM after all.
- Again, loading of the quantum dice isn’t incoherent, but it would greatly surprise modern science to find it to be true.
- Causal Impotence:
- According to Papineau, most contemporary dualists – who accept the completeness of physics – consider it to be an illusion that the mental has any effect on the material world. We appear to be in control, but we are not – no more than a child with a toy steering wheel.
- Consequently, the conscious mind is “causally impotent”.
- Pre-established Harmony:
- This is the (crazy) idea of Leibniz – who was a dualist who accepted the causal completeness of physics – and therefore that mind cannot really influence matter. God has set up the initial conditions so that mind and matter keep in step, and so that pains follow bodily insult and action follows the willing thereof.
- Modern Epiphenomenalism:
- Epiphenomenalism doesn’t require divine action or advance planning, but simply allows influence from brain to mind, while denying influence in the other direction.
- This makes the conscious mind an epiphenomenon of the brain – a “dangler” with no causal powers of its own.
- While it so happens that the brain gives rise to conscious experience, it might have been otherwise, and everything would work the same.
- Using the analogy of a train – it all works fine at the physical level. The mental experience is just like the smoke – a by-product that makes no difference to the motion.
- The Oddity of Epiphenomenalism:
- This is rather counter-intuitive, implying (at least) two things:-
- Firstly – say – the sensation of thirst doesn’t cause us to go and get a drink.
- Secondly, it also implies that phenomenological Zombies would act the same as us.
- Papineau gives a longish quotation from "Chalmers (David) - The Conscious Mind: In Search of a Fundamental Theory", where Chalmers says that his Zombie-simulacrum would carry on writing volumes about consciousness – including about his experiences – just like he does, without experiencing a thing.
- I think these are two completely different situations: they are not equally plausible, and aren’t equally necessary as consequences of epiphenomenalism.
- The Materialist Alternative:
- Papineau agrees with the absurdity of the zombie implication, especially of our verbal accounts of our sensations. While it looks as though physicalism forces epiphenomenalism upon us, there are alternatives.
- The materialist option is to question whether conscious states really are different from physical states. If they are not, then they can cause physical events, and we can ignore zombies, because if mental states are necessarily brain states, then zombies can’t exist.
- On the materialist view, physical duplicates are necessarily conscious duplicates.
- So, materialism avoids the drawbacks of epiphenomenalism. But is it an option? What about the objections of Saul Kripke (Zombies, Section 26) and Frank Jackson (Section 28), who purported to prove that conscious states must differ from brain states?
- Materialism is not Elimination:
- We must note that materialism agrees that conscious experiences are real. If you’re in a particular brain state, that’s just “what it’s like” for you.
- In contrast with David Chalmers’s analogy with electricity (see Section 29), here we have an analogy with temperature.
- The Example from Temperature:
- Rather than adding temperature to the ontology of physics, it has been reduced to mean kinetic energy.
- Temperature hasn’t been eliminated – unlike “animal spirits”. Similarly, Consciousness really exists – it just isn’t anything over and above brain activity.
- Functionalist Materialism:
- Philosophers like Jerry Fodor – a Functional Materialist – want to associate conscious experience with structures rather than with physiology.
- Just like different hardware can run the same software, so – it is claimed – different physiologies can have the same kind of conscious experience.
- So, humans and octopuses both feel pain, despite their different physiologies, because they share a structural property of being in some physical state arising from bodily damage that causes a desire not to incur further damage.
- Similarly – it is claimed – with silicon-based aliens, provided they had the same structural property.
- However, many theorists – including myself – find this implausible as “it seems odd that your material construction should be irrelevant to how you feel” and it makes computer consciousness too easy to attain.
- Making a Computer Conscious?:
- We can program a computer to realise any causal structure whatever, and to role-play actions associated with pain or the emotions based on its internal states.
- But it’s hard to believe that such a computer would share our rich mental life – and our fear of death – even if structured in the right way.
- But functionalism also applies to – correctly configured – heaps of junk, not just to slick 2001-HAL-style computers. Would a scrap-metal machine really “feel anything”?
- The Turing Test:
- Standard description of the Turing Test
- Standard objection: how can a computer feel anything? One that passes the Turing Test is only simulating a conscious mind, and is no more conscious than a weather program is wet.
- Papineau thinks this naturally leads on to John Searle’s Chinese Room argument.
- The Chinese Room:
- The usual exposition of the argument is given, where it’s clear that the Homunculus in the Room, carefully following the manual, doesn’t understand Chinese. So, the appearance of understanding doesn’t guarantee the real thing – and no more does the appearance of consciousness. Therefore, the Turing test doesn’t guarantee consciousness.
- This is much disputed, of course.
- Language and Consciousness:
- The Chinese Room argument is – strictly speaking – only an argument against the functionalist account of linguistic understanding, rather than against the functionalist account of consciousness.
- But, claims Papineau:-
- Language is an intentional notion, and
- Intentionality and consciousness are closely related, a claim to be followed up later.
- Searle believes that linguistic understanding requires conscious experience, so his argument also applies to conscious experience.
- But functionalists can counter that it’s the whole Room that understands Chinese – not a part of it (the homunculus). Similarly, it’s the whole computer that might be conscious, not a part of it.
- Also, isn’t the CR TE somewhat under-specified? If the room really could answer questions addressed to it, it would need to be sensorially attached to its environment to update its understanding thereof. Given this, it would no longer be so obvious that it doesn’t know – say – what the Chinese symbol for “rain” is or – in general – what it’s talking about.
- Functionalist Epiphobia:
- There’s a more basic reason than the CR TE for not wanting to follow functionalism in assigning conscious states to functional ones.
- The attraction of materialism was its promise to restore causal power to conscious states – in contrast to epiphenomenalism – in particular by equating conscious properties with brain properties.
- It’s presumably the passage of specifically human neurotransmitters across my synapses that causes my muscles to contract, rather than some abstract structural property shared with octopuses.
- Given that it’s the physiological properties – that differ between human and octopuses – rather than their functional properties that cause muscles to contract, this leaves the functional properties as epiphenomena: they cause nothing.
- If pain is a purely functional property, it would – on this account – be inefficacious in itself, though emitted by a train of real causation.
- Mental States are “Wetware”:
- Functionalism implies a software-approach to mental states, but the above epiphobia inclines us towards a hardware approach (or “wetware”), whereby pains – for instance – are identified with physiological states.
- Papineau says this “blocks” Chinese-room-type anti-software arguments, but I’d have thought it just makes them irrelevant. If feelings depend on wetware, then we already know they are no longer reducible to software without the need for any clever TEs.
- Human Chauvinism:
- Papineau suggests that abandoning functionalism means that we can no longer be confident that beings with physiologies different to ours share our feelings.
- The text “rules out” octopuses having the same pains as us. But it goes on to admit that materialism doesn’t deny that octopuses have pains of some sort, just that they’re not (necessarily) the same as ours. Seems reasonable: why would anyone think they would be the same?
- Facing up to the Dualist Arguments:
- So, what about the dualist arguments of Saul Kripke and Frank Jackson, which seem effective whether we equate mental properties with structural or physiological ones?
- Kripke’s zombies share structural and physiological properties with us – yet (allegedly) lack consciousness.
- Jackson’s Mary knew everything about colour vision, yet lacked the conscious experience of it.
- The materialist answer is that these are objections to concepts, not to properties. There’s one property thought of in two ways – mental properties can be thought of as conscious or material, just as a person can have two names – eg. Judy Garland / Frances Gumm. Similarly, heat and mean kinetic energy.
- So, Mary acquires a new concept of “seeing red”, a new way of thinking about the experience: now she’s seen red, she can imagine it; before, she couldn’t. But, she could still think about the experience before she had it, and her imaginative thoughts refer to the same experience she previously thought of only scientifically.
- Similarly, the existence of two types of concepts for thinking about experience deceives us into thinking zombies are possible when they aren’t. Since conscious properties just are material properties, our imaginative concept of a zombie is self-contradictory.
- Zombies are Impossible:
- So – according to the materialist – the believer in zombies is like a person who believes that someone with two names can be in two places at once, or that a gas can have two different temperatures yet the same mean kinetic energy. All these things – zombies included – might seem possible, but are not.
- According to dualists, when God had made our physical bodies, he still had more work to do – namely putting the feelings in. He could have left us as zombies. But according to materialists, God had no more work to do, and zombies are beyond even an omnipotent God.
- Mysteries of Consciousness:
- Not everyone is convinced by the examples of names and temperature: these identities seem certain, but the identity of consciousness with brain processes less so. Maybe the one accompanies the other without being identical to it.
- Colin McGinn thinks it beggars belief that phenomenal experience can be the same thing as neurons firing.
- Similarly, Thomas Nagel - while appreciating the reasons for equating mind and brain – argues that we lack and conception of how they could be identical.
- However, such philosophers don’t want to return to dualism, and agree that epiphenomenalism is absurd.
- The Mysterian Position:
- The Mysterian position is that consciousness is beyond human comprehension.
- But while we can’t live with the identity of consciousness and the physical, we can’t live without it either – on pain of ‘mental impotence’ (epiphenomenalism).
- Mysterian philosophers claim we lack the concepts to understand the issue.
- While science may some day resolve the issue, it may be that the structure of our minds means we can never comprehend the truth.
- These concepts may be as far beyond us as calculus is beyond monkeys.
- A Mysterian Speculation:
- Colin McGinn is not beyond some – to me wild – speculation, suggesting that consciousness is a resurgence of the non-spatial reality that existed before the Big Bang, when spacetime came into being, re-emerging once complex-enough brains arose in their turn.
- Special Concepts of Consciousness:
- Papineau asks whether such flights of fancy76 are necessary.
- Materialists claim that the Mysterians have given up too quickly and that their objections to mind-brain identity rest on nothing other than blank incredulity that “soggy grey matter might constitute technicolour phenomenology”.
- Materialists agree that this identity is more difficult to believe than other identities, but maybe they can offer an account of just why it is so difficult to accept, even if true, by appealing to the special kinds of imaginative concepts we use when referring to mental items as conscious.
- So, ‘Red Mary’ gets the ability – which she lacked before seeing red – to re-create the experience in her imagination, which provides a particularly vivid way of thinking about consciousness. But, thinking about ‘soggy grey matter’ is still thinking about the same thing77 as is her re-imaginative experience. Re-enactment is just a special way of thinking about conscious experience.
- Everybody Wants a Theory:
- In discussing the three options to date – materialism, dualism and mysterianism – we’ve not asked which parts of the brain might be responsible for conscious experience. Not all parts do, because lots of the brain’s activities are sub-conscious (or unconscious).
- So, we need a theory of consciousness, which would tell us:-
→ What is required for consciousness
→ Which brain activities yield consciousness, and
→ Which animals are conscious
- Once we’ve identified the brain processes responsible for consciousness, we’ll be able to check whether these are present in various animals, though this is far from straightforward.
- While materialists will seek to identify consciousness with the physical processes, the dualist will think of consciousness as something extra that accompanies them78.
- Papineau seems to think this project is independent of the metaphysical position adopted79. In particular, he thinks that dualists will be happy with a physicalist account of the underlying processes, provided it is acknowledged that something extra is needed for consciousness.
- He says that theorists are often unclear as to their metaphysical stance – but often use terminology that – Papineau claims80 – only makes sense in a dualist context, such as saying that physical processes ‘generate’, ‘cause’ or ‘give rise to’ consciousness.
- So, Papineau thinks we can proceed in pursuit of our theory irrespective of our metaphysical commitments.
- Neural Oscillations:
- Francis Crick & Christof Koch have developed the theory that the key to consciousness – at least in the visual sphere – is neural oscillations in the 35-75 Hz range.
- These oscillations are invoked to solve the binding problem81: the brain processes the location, shape, colour and category of a perceived object in different areas of the visual cortex, yet perceives a unified object. So, how are the properties of different objects differentiated, and how are those of a particular object bound together so that a unified object is perceived?
- The supposed answer is that the neurons representing features of the same object oscillate at the same frequency and in phase. Additionally, this neural correlate of consciousness is supposed to82 account for our conscious visual awareness.
- Neural Darwinism:
- This theory – nothing to do with Darwinian evolution, except by analogy – is down to Nobel Prize-winner Gerald Edelman, late in his career83.
- The basic idea is that we are born with more neurons and interconnections than we need, and that 70% of our neurons are culled by age 8 months.
- Neural interconnections not encouraged by stimulation ‘wither and die’. Those that remain form a set of interconnected ‘neural maps’ used for visual and other perception.
- On receipt of some stimulus by the brain, these maps become activated and send signals to one another.
- Re-entrant Loops:
- Edelman calls these activations of the neural maps ‘re-entrant loops’. They – and new structures that are laid down – continue to evolve in response to further stimulation.
- Edelman’s theory84 is that it is this evolving structure of re-entrant loops that is responsible for conscious awareness. It is responsible for a form of memory that enables the categorisation of information, and also plays a part in thinking, reasoning and the control of behaviour.
- Evolution85 and Consciousness:
- Papineau now asks whether Darwinian evolution can help us understand consciousness.
- Papineau says that knowing that the ‘evolutionary purposes’ of the heart and saliva are – respectively – to pump blood and digest food helps us to understand these ‘traits86’.
- But, both87 materialists and epiphenomenalist dualists agree that conscious properties produce no bodily effects other than those produced by the brain.
- Yet, the evolutionary purpose of a trait is the benefit it has for survival. We have hearts because hearts benefitted our ancestors, but no such obvious benefit accrues to conscious experience.
- Our ancestors would have survived even had they been zombies, since88 their brains would have had the same physical effects.
- The Purpose of Consciousness:
- Papineau does admit that materialist philosophers who identify consciousness with brain processes can say that consciousness does have physical effects89, namely those of these very brain processes.
- Even so, such materialists won’t know which of the brain processes resulting from natural selection constitute consciousness.
- So, to know the evolutionary purposes of consciousness, materialists need to know which brain processes constitute consciousness and which don’t.
- Since they need a theory of consciousness before evolution can explain its purpose, evolution cannot – without circularity90 – be used to explain the purpose of consciousness.
- Quantum Collapses:
- A rather91 speculative view ties consciousness to the collapse of the quantum wave function.
- Papineau describes QM as a very odd theory, with the indeterminism – ‘God playing dice’ – only a very small part of it. He claims much of QM isn’t indeterministic at all, with the wave function evolving deterministically. Schrodinger’s equation is given. QM is in this sense similar to classical mechanics, following equations of motion.
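For reference – my addition in standard textbook form, not a quotation from the book – the time-dependent Schrodinger equation governing the deterministic evolution of the wave function is:

```latex
% Time-dependent Schrodinger equation: the wave function \Psi
% evolves deterministically under the Hamiltonian operator \hat{H}.
i\hbar \, \frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H}\,\Psi(\mathbf{r},t)
```

Between measurements this evolution is as smooth and deterministic as classical motion; it is only measurement, and the associated ‘collapse’, that introduces chance.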
- How Quantum Physics Differs:
- The wave function only specifies probabilities for measurements of velocity and position.
- Measurements seem to make the wave function ‘collapse’ to a particular set of values.
- This type of change isn’t predicted by Schrodinger’s equation and its correct understanding is extremely controversial.
- Schrodinger’s Cat:
- We’re given a potted account of the standard TE: an electron gun has a 50% chance of hitting the top half of a detector – when poison gas is emitted – and a 50% chance of hitting the bottom, when nothing happens. The cat’s fate isn’t sealed until the wave function collapses and it’s decided which half of the detector is hit. When does this collapse occur? Schrodinger’s equation doesn’t tell us. It’s happy for the electron to exist as a superposition of upward and downward paths, and the cat therefore to exist as a superposition of alive and dead states.
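In standard notation – my sketch, not Papineau’s – the unobserved system is in an equal superposition of the two outcomes:

```latex
% Equal-amplitude superposition of the two measurement outcomes;
% each branch has probability |1/\sqrt{2}|^2 = 1/2.
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(\,|\text{up}\rangle\,|\text{alive}\rangle
             + |\text{down}\rangle\,|\text{dead}\rangle\,\bigr)
```

On collapse, one branch is selected at random (each with probability 1/2), and only then does the cat have a definite fate.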
- Quantum Consciousness:
- A ‘bold view’ is that collapse only occurs when a conscious observation is made. So, the cat is neither alive nor dead until someone92 looks in the box to check, unless the cat itself is conscious – when things become definite when they register on the cat’s consciousness: when the cat smells93 the poison or not. This all sounds a bit like94 George Berkeley’s ‘to be is to be perceived’.
- The American physicist Henry P. Stapp claims that quantum waves collapse when intelligent brains select one of the quantum possibilities as a basis for future action. This is also a theory of consciousness – it’s those parts of the brain that are implicated in quantum collapse that constitute consciousness.
- On this view, consciousness does have physical effects – it causes quantum collapse (though it’s still down to chance what outcome becomes actual). A conscious observer ensures the cat has a definite fate, but it’s down to God’s dice what this fate is.
- This allegedly means that consciousness serves a biological purpose – eliminating alternative realities allows us to better plan our actions95.
- Another Link to Quantum Physics:
- Roger Penrose thinks consciousness is tied to activity in cytoskeletal microtubules, which provide the scaffolding in all cells – including neurons. Their dimensions are appropriate for orchestrating quantum collapse. Penrose’s idea differs from Stapp’s, as he thinks gravitational effects are responsible. He argues that microtubules focus quantum waves until they reach the gravitational threshold96 for collapse.
- Quantum Collapses and Gödel’s Theorem:
- So, for Penrose, consciousness doesn’t cause quantum collapse, it’s just the way that collapse manifests itself in our minds.
- Gödel’s Incompleteness Theorem demonstrates that no consistent axiomatic system is sufficient to prove all the truths of arithmetic. Penrose thinks that this shows that the human mind must have some non-algorithmic element that goes beyond axioms and rules.
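More precisely – my gloss on the standard statement, not the book’s wording – for any consistent, recursively axiomatised theory T strong enough for arithmetic, there is a sentence G_T (true, on the standard reading) that T can neither prove nor refute:

```latex
% First Incompleteness Theorem, schematically:
% T proves neither the Godel sentence G_T nor its negation.
T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
```

Penrose’s controversial step is to infer that, since we can ‘see’ that G_T is true, human insight outruns any algorithm.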
- Not all logicians agree with this inference, but Penrose goes ahead to claim that the non-algorithmicity of consciousness derives from QM.
- However, there seems to be no virtue in ‘explaining’ one thing we don’t understand – consciousness – in terms of another thing we don’t understand – QM.
- The Global Workspace Theory:
- Bernard Baars argues for the theory that states that the specialised cognitive information-processing sub-systems for perception, attention, imagery, language (and the like) are largely unconscious, but that consciousness arises when they are brought together in a ‘global workspace’, which makes their individual contributions available to the brain as a whole.
- The global workspace is a bit like a blackboard (or other outdated analogies) from which other sub-systems can analyse and interpret information. This processing is conscious, whereas that of the subsystems is unconscious.
- Papineau seems fairly supportive of this, saying it ‘happily explains the interplay of conscious and unconscious processes …”. I have my doubts97.
- CAS Information Processing (CAS = Conscious Awareness System):
- Other psychologists have tried to explain consciousness by its central role in information-processing and decision making.
- For instance, Daniel L. Schacter has it that phenomenal consciousness consists in98 a cognitive system that mediates between specialised knowledge modules – like vision and hearing – and the executive system that controls reasoning and action.
- The CAS can also receive information from the Episodic Memory Store when we consciously recall previous experiences and from the Executive System when we mull over our plans.
- There’s a useful diagram of Schacter’s Model, which also includes the Response Systems and the Procedural / Habit System, which is identical to this one: Schacter's Model of Consciousness. Papineau notes that the CAS is responsible for integrating information, which must all be routed through it. In particular, there’s no direct contact from the specialised knowledge modules, or from Episodic Memory, to the Executive System.
- Equal Rights for Extra-Terrestrials:
- All discussion of the proposed mechanisms of consciousness has so far focused on human physiology and psychology. This is absurdly chauvinistic.
- It’s OK to argue that the consciousness of other creatures – such as octopuses – differs from ours because of their different physiology; but this is far from saying that non-humans cannot have consciousness at all.
- Some philosophers99 – but not Papineau – argue that all non-human animals (including cats, dogs & chimpanzees) lack consciousness.
- But surely intelligent aliens – with none of the physiology that humans (or animals generally) have – should be accorded consciousness, and an all-embracing theory of consciousness should include them.
- Intentionality and Consciousness:
- A mental state is intentional if it is about something. Papineau lists various examples from a consideration of Sydney in Australia. Intentionality is independent of any particular implementation, so avoids ‘terrestrial chauvinism’.
- Papineau mentions the development of intentionalist theories by Franz Brentano and Edmund Husserl, who argued for Phenomenology – that philosophy should be grounded in how consciousness presents objects to us.
- Consciousness and Representation:
- Contemporary non-phenomenologists have also espoused intentionality. These include Michael Tye, Fred Dretske and David Chalmers (described as a ‘dualist’).
- Tye and Dretske want to identify Consciousness with Representation, while Chalmers hopes to show that Consciousness and Representation are separate but related features of the Mind.
- Chalmers speculates that his science of consciousness will explain how consciousness arises in the presence of representation (for which he prefers the technical term ‘information100’). Apparently ‘information’ is present even in meaningless syntactical structure.
- Explaining Intentionality:
- Given that intentionality is itself difficult, does it take us any further in explaining consciousness?
- How can words – written or spoken – stand for something else, especially something we’ve never encountered? If they represent because we understand their meaning, doesn’t this just kick the can down the road?
- Thus, intentionality seems as hard a problem as101 consciousness, so may not take us any further.
- Can We Crack Intentionality?:
- But, if we could show that consciousness involved nothing over and above intentionality, we’d only have one riddle where we’d had two, and that would be an advance.
- There are a few proposed solutions to the ‘hard problem’ of intentionality – which try to explain how intentionality fits into the world of cause and effect – but none with universal acceptance, though it’s too early to say that none ever will be.
- Non-representational Consciousness:
- The trouble is that – so it seems – not all conscious states are representational and not all representational states are conscious.
- Examples of intentional conscious states: thoughts, perceptions, images, memories.
- Examples of prima facie non-representational conscious states: pains, itches, headaches, emotions, moods, orgasms.
- In Defence of Representation:
- In defence, it may be argued that such allegedly non-representational states do – on closer consideration – have representational content.
- Pains and itches represent trauma102 in particular parts of the body. Emotions represent the general state of things. Orgasms represent physical changes.
- Non-conscious Representation:
- Plenty of representations don’t seem to be conscious: sentences, unconscious beliefs.
- For example103, if you consciously think your wife’s faithful, but are always checking up on her, then you have an unconscious belief that she isn’t.
- In response, we can argue that these are second-order representations that depend on states that are conscious. Sentences represent because they are consciously understood by those that use them104. Maybe unconscious beliefs represent because they are similar to conscious beliefs with the same content. But there are harder cases.
- Many brain states appear to represent unconsciously without assistance from conscious states. For example the early stages of visual processing105 involve brain states that represent light wavelength and intensity, yet these are unconscious. We don’t see these properties of light waves even though our brain knows about them (in a sense).
- The above kind of representation can’t plausibly be labelled ‘second-order’. People consciously interpret the sentences they speak, but no-one consciously interprets the brain-states involved in visual processing. Nor are such unconscious states counterparts of conscious ones, since most people have no conscious beliefs about the properties of light waves.
- Other examples of non-conscious representation can be found outside the human brain – for instance in primitive animals such as bacteria, and in machines such as thermostats, neither of which is normally supposed to be conscious.
- Panpsychist Representation:
- So, a representational approach to consciousness has two options.
- The first is to stick with the theory and resist the temptation to deny consciousness to bacteria, thermostats and early visual processing. This seems to be David Chalmers’s approach, as he concludes that bacteria have a limited form of consciousness as they embody intentional states. Indeed, almost any physical system is conscious for Chalmers as his definition of Information106 is satisfied by almost any causal process. Chalmers ends up as a panpsychist107.
- The other option is to modify the representational approach to say that only representations of a certain kind yield consciousness.
- Behaviour without Consciousness:
- A natural suggestion – made by Fred Dretske and Michael Tye – is that consciousness arises when representations play a role in controlling behaviour. The key point is that there has to be a range of behaviours to control, which allegedly eliminates108 bacteria, thermostats and early visual processing.
- Unfortunately, behaviour-control seems insufficient to ensure109 consciousness. Recent evidence suggests that much human behaviour is directed by subconscious processes. As an example, Benjamin Libet asked subjects to decide to move their hands and simultaneously note the time110 using a wall-clock. Libet used scalp electrodes to monitor the motor-cortex activity that initiated the hand movement. He found that this was 1/5 of a second before the subjects were aware of making any conscious decision.
- The precise interpretation111 of this experiment is still up for debate, but it certainly suggests that some processes governing human behaviour do not involve consciousness.
- What versus Where:
- Similar implications flow from visual illusions. When a disk is placed in the middle of larger disks and then an identical disk is placed amongst smaller disks, everyone sees the disk in the second situation as larger than the one in the first. However, when they try to pick up the disks, their fingers are equally- and appropriately-spaced.
- So, here again it seems that behaviour is controlled by unconscious rather than conscious representations. Many neuropsychologists now think there are two pathways through the visual system. The ‘low’ or ‘what’ path leads to conscious recognition of objects. The ‘high’ or ‘where’ path controls bodily movement; despite being unconscious, it still controls behaviour.
- The Problem of Blindsight:
- Some brain-damaged individuals report no conscious vision at all, but when asked to guess are quite good at112 guessing lines, flashes of light and even colours. Their performance must be controlled by information at the unconscious level.
- So, in summary, all these cases threaten the idea that representation is conscious whenever it plays a role in controlling behaviour. It may be possible to tweak what we mean by “controlling behaviour”, but it’s not obvious how to do this, especially ‘if we want to avoid chauvinistic appeals113 to the details of human cognition’.
- HOT Theories (HOT = Higher-Order Thought):
- These theories suggest that consciousness arises when we’re introspectively aware of the representations. The representation meta-represents itself. We think about those experiences at the same time as having them.
- The HOT acronym is due to David Rosenthal.
- While this theory might be true of human consciousness, can it be generally applicable?
- Criticism of HOT Theories:
- It seems odd that I don’t become conscious of x before I think about my experience of x.
- If a visual experience – say – is not conscious in and of itself, it’s difficult to see how it can become conscious by being thought about.
- Additionally, HOT theories seem to demand a lot of sophistication of conscious creatures. Those that can’t think about their mental states wouldn’t be conscious at all. While this would (rightly) deny consciousness to thermostats and bacteria it would also seem to deny it to114 ‘rats, bats and cats’.
- Self-consciousness and Theory of Mind:
- Creatures that can think about mental states are said to have a ‘Theory of Mind’ (TOM). So, they not only have vision, emotion and belief, but are capable of having thoughts about vision, emotion and belief.
- While humans have this ability to think about mental states, including their own, it’s not clear whether any other animals do.
- The classic test is the ‘False Belief Test’: human children can pass this test when they are 3 or 4, but not before.
- The False-Belief Test:
- Papineau describes one version of the test (FBT): a child puts her sweets in a basket; while she’s out of the room, another child puts the sweets in a drawer. When the first child returns, our test candidate is asked where she will look for her sweets. Young children will say ‘in the drawer’ because they know that’s where the sweets are, but they don’t have the understanding that the first child doesn’t know this. All mature humans, and most children over the age of 4 correctly say ‘in the basket’.
- It’s not clear whether any other animals can pass the FBT; chimpanzees and some other apes may scrape through115.
- Conscious or Not?:
- The jury seems to be out on whether apes can pass the FBT. Most experiments have been performed on chimpanzees, and the lack of language causes methodological difficulties. Also, chimps get bored with experiments and start messing around.
- Anyway, even if apes do have a TOM, other mammals certainly don’t – cats and dogs116, for instance. If they can’t think about minds, then they can’t think about their own minds. By the lights of HOT theories, this would deny them consciousness.
- Cultural Training:
- Some philosophers are happy to accept117 the counter-intuitive conclusion that cats and dogs are not conscious.
- Daniel Dennett is – Papineau claims – not only willing to argue that consciousness requires something like HOT but that ‘such thinking118’ requires acculturation and not just biology.
- This has the consequence that none of our ancestors would have been conscious before the advent of human culture119.
- Sentience and Self-consciousness:
- Well, most theorists reject the idea that consciousness requires HOT and accept the common-sense idea that ‘dumb’ animals are conscious.
- But, we must distinguish sentience from self-consciousness. The latter – defined as thinking about one’s experiences – requires HOT. It is natural to think of many animals as sentient but not self-conscious.
- So, for example, cats and dogs are conscious of sights, sounds, pains and so forth, even though they don’t think about them. There is ‘something it is like’ for them to have these experiences.
- Future Scientific Prospects:
- Future scientific research – in particular brain-scanning technologies – will tell us more about human consciousness.
- These will supplement older techniques like behavioural experimentation, studies of brain damage and electro-encephalography (EEG, which detects brain waves using electrodes placed on the skull).
- PET and MRI:
- Positron Emission Tomography120 (PET Scans): use a radioactive marker in the blood to measure brain activity.
- Magnetic Resonance Imaging121 (MRI Scans): achieve the same effect by placing the brain in a powerful magnetic field.
- With computer-assistance, these techniques provide striking images of which brain areas are activated by which mental tasks. They will give an increasing understanding of the cerebral underpinnings of consciousness.
- However, whether this will lead to a general theory of consciousness is another matter. These – and any other imaginable techniques – will only tell us about consciousness in humans, who are the only beings capable of telling us about their states of consciousness. Human beings can report their experiences, which allows us to pinpoint the brain processes involved – say – in distinguishing the two senses of an optical conundrum (like the ‘faces or candelabrum’ test or filling in missing triangles). You can’t do this with animals.
- Nor can we use the fact – deduced from their behaviour – that animals are sensitive to visual stimuli. Even if we knew what was going on in their brains, blindsight and similar phenomena show that it is possible to behave sensitively in the absence of consciousness.
- A Signature of Consciousness:
- If we’re lucky, we may find some key feature that applies to all human brain122 states that yield consciousness. This may involve representation – as is claimed by those supporting intentional theories of consciousness – or it might be something yet unimagined.
- If such a ‘Signature of Consciousness’ does come to light, it might yield a general theory via comparative anatomy (in the case of animals) or other similarities123 in the case of extra-terrestrials or AIs.
- But if – as seems equally likely – there is no common signature other than being reported as conscious – ie. having introspective accessibility and reportability – then we’d be stuck for non-human creatures124.
- Also, introspective reportability is a form of self-consciousness, so we don’t want to make that the essential condition of consciousness, as it would arbitrarily deny consciousness to cats and dogs, who don’t stop to think about125 their own minds.
- So, then, how do we decide which animals qualify for unselfconscious sentience? Cats and dogs seem clearly positive cases, but what about fish, crabs126 or snails? Or extra-terrestrials or AIs? Without a clear signature, there seems nowhere to go.
- The Fly and the Fly-bottle:
- Wittgenstein127 thought that philosophical problems arise from muddles and need therapy rather than solutions – maybe this is how we should approach Consciousness.
- If we can’t make any progress head on maybe we can move sideways and examine our philosophical assumptions.
- Let’s return to the two options considered earlier – dualism and materialism – rejecting mysterianism as lacking in ambition!
- The Dualist Option:
- Dualists have little room for manoeuvre as by their lights consciousness will depend on the presence of some sort of “mind-stuff” – so snails and supercomputers will be conscious just in case they have it.
- But this mind-stuff must be epiphenomenal and causally impotent128, so we won’t be able to detect its presence by its effects.
- So, dualism sheds no light on129 the conscious life of non-human creatures.
- The Materialist130 Option:
- Materialists deny the existence of “mind stuff”. There are only brain processes, some of which are “like something” for the creature to have them.
- While dualists consider consciousness as binary – you either have it or you don’t, depending on whether or not you have the extra “mind stuff” – materialists allow for a continuum of “what it is likeness”.
- Papineau claims that some cases are clear: humans, chimps and cats are conscious while stones, seaweed and bacteria are not. But in between there need be no fact of the matter131. There may be no definite point at which inner life shuts off into nothingness.
- A Question of Moral Concern:
- Daniel Dennett claims that attributions of consciousness are grounded in moral concern: it’s because we care about our cats that we attribute consciousness to them.
- Similarly, if we ever encounter aliens it’ll be our mode of interaction with them that decides the issue of their consciousness. If we consider them mere machines132, we’ll not deem them conscious. But if we discuss their and our hopes and fears with them, we’ll come to regard them as conscious.
- Some philosophical sceptics might still ask whether they are really conscious, but – if we were to make alien friends133 – this might come to seem as silly as asking whether other human beings are really conscious.
- Is There a Final Answer?:
- Papineau thinks that – at first sight – Dennett’s idea is very odd. How can a being become conscious merely based on our attitude towards it?
- But – maybe – this has to do with expanding our concepts. It’s not that our concern would change what it’s like to be an alien (say). Rather, it would give us a reason to refine our vague concept of consciousness to include them.
- Papineau says that – of course – our concern won’t change aliens’ inner lives but it might make it rational for us to extend the term ‘consciousness’ to include that inner life134, which we’d have reason to think of as akin to our own.
- We may be disappointed that there’s no ultimate answer to the riddle of consciousness. But – says Papineau – in the end it all comes down to definitions.
- Others may be satisfied with no answer, and will make their own way out of the fly bottle.
References
- Papineau’s Reading list consists of the following, all of which I now possess:-
- "Baars (Bernard) - In the Theater of Consciousness: The Workspace of the Mind",
- "Block (Ned), Flanagan (Owen) & Guzeldere (Guven) - The Nature of Consciousness",
- "Chalmers (David) - The Conscious Mind: In Search of a Fundamental Theory",
- "Crick (Francis) - The Astonishing Hypothesis - The Scientific Search for the Soul",
- "Dennett (Daniel) - Consciousness Explained",
- "Edelman (Gerald M.) - Bright Air, Brilliant Fire: On the Matter of the Mind",
- "Hofstadter (Douglas) & Dennett (Daniel), Eds. - The Mind's I - Fantasies and Reflections on Self and Soul",
- "McGinn (Colin) - The Problem of Consciousness: Essays Towards a Resolution",
- "Metzinger (Thomas), Ed. - Conscious Experience",
- "Nagel (Thomas) - The View from Nowhere",
- "Penrose (Roger) - Shadows of the Mind",
- "Shear (Jonathan), Ed. - Explaining Consciousness: The Hard Problem",
- "Tye (Michael) - Ten Problems of Consciousness".
- Web Refs: Papineau gives two:-
- Psyche:
- This was an open-access journal with discussion lists, published by the Association for the Scientific Study of Consciousness, and which was hosted by Monash University, Australia, but it seems to have moved on as the URL no longer works.
- The old site is now at Psyche, and was active from 1994 – 2010. It looks worth investigating as there are a lot of articles by well-known authors therein.
- For the ASSC, see ASSC: Association for the Scientific Study of Consciousness.
- The journal of the ASSC is Neuroscience of Consciousness, published from 2015 onwards. It is neither a straight replacement for Psyche, nor is it open access (or even on JSTOR).
- David Chalmers
- Papineau gives a defunct reference at the University of Arizona, saying that – as well as Chalmers’ own works – it contains
→ a substantial bibliography of works on consciousness,
→ excellent links to other sites and
→ a section devoted entirely to zombies.
- Chalmers’ current site is: David Chalmers: Home Page.
- This site has links to earlier versions of his site at David Chalmers: Website History.
- Chalmers notes that much of the value of the old sites – which are no longer updated – has been taken over by PhilPapers.
- However, the most likely current equivalents of the above links are:-
→ Bibliographies: David Chalmers: Contemporary Philosophy of Mind: An Annotated Bibliography.
→ Links: David Chalmers: People with online papers in philosophy.
→ Zombies: David Chalmers: Zombies on the Web.
In-Page Footnotes
Footnote 1:
- The book’s discussion is a bit like a stream of consciousness; my intention is to reproduce the main points and add comments.
- Where I’ve not yet analysed the text, I’ve just extracted the headings.
- The book is (very heavily) illustrated – later editions are explicitly called “Graphic Guides”. My notes remove that part of the appeal of the book. In general, however, the illustrations are motivational rather than essential to the exposition.
Footnote 2: Footnote 3:
- I don’t think the two cases are similar.
- The consciousness of digital computers depends on functionalism, but – as we’ll see – there might be QM-style physical constraints on what can be conscious.
- Also, I don’t think Papineau has the right understanding of androids.
- Finally, conceivability is no guide to possibility, as we’ll find when we discuss zombies.
Footnote 5:
- Rather a poor one. Jazz can doubtless be defined, as in the dictionary, though I’m not going to try.
Footnote 6:
- Are these the only three possibilities?
Footnote 9:
- This seems a rather early commitment to the identity theory.
- Is this Papineau’s view, or are there other materialist options?
Footnote 10:
- I don’t think Descartes distinguished consciousness from the mind generally, assuming all thought to be conscious.
Footnote 12: Footnote 13:
- I’ve added this comparator – the (highly compressed) “arguments” – really just claims – are run together in Papineau’s text.
Footnote 14:
- This is a sanitised version of the usual one – “it was great for you, how was it for me?”
Footnotes 15, 59, 65: Footnote 16:
- These are rather a mixed bunch.
- Quarks are theoretically as well as practically unobservable.
- Some atoms can now be observed (I think, in lattices) via electron microscopes.
- Genes may not be observable as they are rather obscure theoretical entities – unless equated with specific strings of DNA.
- Also, it is possible to hold an anti-realist, or instrumentalist view of all these items – treating them as convenient fictions.
Footnote 17:
- I’ll note a couple of slight misgivings about all this.
- Firstly, software is a universal rather than a particular – an instantiation is the particular. Compare “Little Dorrit” with a copy of the book. I am hardware, not software; though – like a computer – pretty useless without an operating system and some programs to run.
- Secondly – while the physical structure of computer memory is changed as a program runs, the CPU is not and nor are the various interconnections. However, in the brain, while new neurons – as far as I know – are not created as the result of mental activity, the connections between them are. The physical structure of the brain is changed as the result of learning and experiencing. And much else besides.
Footnote 18:
- Functionalism is not much of an improvement on behaviourism in this regard.
- The most important thing about pain to those suffering from it is its “feel”, and this is left out of the picture completely.
- Worms and people may display analogous pain-functionality, but people certainly feel something, though we doubt that worms do, or at least not to an analogous degree.
Footnote 21:
- This is the “Real Distinction” argument, which relies on the doubtful premise that “conceivability implies possibility”: what I clearly and distinctly conceive must be possible.
- There are two objections to this:-
- Descartes relies on the goodness of God in not allowing me to be deceived over those things (I think) I most clearly and distinctly perceive.
- It’s not always clear that I can in fact clearly and distinctly conceive of what I think I can. I may be muddled, or ill-informed.
- I wrote a finals pre-submission essay on this. See “What is Descartes’s argument for the ‘real distinction’ between mind and body? Is it a good one?”.
Footnote 24: Footnote 26:
- There’s a lot that could be said about this, but …
- This seems to beg the question against materialism. If feelings “just do” arise from matter appropriately organised, then zombies are impossible. We can’t imagine what we think we can.
Footnote 28: Footnote 29: Footnote 31: Footnote 32:
- But it looks to me that Leibniz’s argument applies to all mentation, even unconscious thought or perception.
- He treats the brain as a machine and asks “where’s the thought?” – just as Searle asks – of a digital computer – where’s the language faculty?
Footnote 33: Footnote 34:
- Does he really?
- How would this theory be quantified, or is this a very loose analogy?
Footnote 35:
- “→” = “causes”
- Papineau’s actual example is in reverse, with “caused by”.
Footnote 36:
- If mind is a separate substance.
- Leibniz held that if all changes in motion are caused by collisions between material particles, there’s no room for mind (“immaterial souls”) to influence matter via the pineal gland.
Footnote 37:
- I’d thought such emergentism was still a popular idea.
- Papineau implies that it ought to have gone out with the ark, and that it’s surprising that Broad was still a professor of philosophy at Cambridge in 1953.
- My reading, of course.
Footnote 38:
- I didn’t understand this reference.
- Apparently, we’ve left “Newtonian liberality” behind and reverted to “Cartesian austerity”.
Footnote 41:
- I presume this doesn’t include Richard Swinburne, who’s a more robust dualist.
- Papineau doesn’t seem to mention Swinburne anywhere.
Footnote 42: Footnote 43:
- To me – at any rate – this idea is a non-starter, but no philosophical idea is so silly as to not offer some insights, I dare say.
Footnote 45: Footnote 47:
- The epiphenomenalist claim about thirst is that the same brain events that cause me (unconsciously) to get a drink also cause me to feel thirsty. This claim doesn’t deny that I feel thirsty, only that it’s not this feeling that causes my behaviour. This is implausible, but not absurd.
- The “zombie” claim is – to me – much less plausible. It claims firstly that I would still get a drink even if I didn’t have a sensation of thirst, which is fine as far as it goes, but it also claims that I would complain that I was thirsty even if I had no sensations at all. I find this much less plausible. I can see how a race of zombies might arise within a race of sentient beings – because displaying sensations is adaptive for social beings – but displaying them would confer no advantage if no-one had sensations. You – the zombie – wouldn’t then be able to free-ride on others’ feelings of empathy, as they wouldn’t have any feelings either.
Footnote 51: Footnote 56:
- The argument was that just as physics was expanded to include electromagnetism, so it should be further expanded to include consciousness.
Footnote 57:
- Temperature is Papineau’s favourite example – I can’t remember another.
- Note that we’re here talking about thermodynamic temperature, not the “sensation of heat”.
Footnote 61:
- I agree that it’s right to say that the octopus is engaging in “pain behaviour”, but we can only infer that it feels as we do to the extent that its physiology is similar to ours.
- But we should assume it’s unpleasant, and the degree of unpleasantness will depend on the complexity of its nervous system.
- We ought also to consider what use the sensation of pain would be to the organism or entity in question, given that natural selection has selected for it.
Footnote 62:
- Unfortunately, this intuition is hard to justify. It’s similar to the one that denies that “mere matter” could be conscious.
- Also, the intuition doesn’t really take account of the immensely complex software that would be needed to mirror our “rich mental life”.
Footnote 63: Footnote 64: Footnote 66: Footnote 67:
- Well – trivially – consciousness of linguistic understanding requires consciousness, but linguistic performance doesn’t.
- Most people have only tacit knowledge of grammar.
- Google Translate now works pretty well and has consciousness of nothing.
- I suspect the success of associative learning will up-end all theories of linguistic knowledge.
Footnote 68:
- Quite how does the argument carry over?
Footnote 69:
- Does this mean we’re back with the mind-brain identity theory, pains being identical with C-fibre firings, and all that? Presumably we need some middle way.
- But at least identity theories get us away from any conceivable uploading.
Footnote 70:
- He actually overstates the case, saying “they cannot share our feelings”.
- That said, no-one really thinks that non-human animals’ feelings are the same as ours, just analogous in some way. The important claim is that they have them at all; we can never really know what it is like to be them, but we can be confident – for the higher animals at least – that there is something it is like to be them.
- The epistemological difficulties arise for those with very different physiologies.
Footnote 71:
- It seems to me that philosophers care more about retaining the purity of our concepts than about the well-being of other animals.
Footnote 74:
- Given Kripke’s theory of rigid designation – Hesperus and Phosphorus are both Venus – I’m suspicious of just what he means – or Papineau thinks he means – by allowing for Zombies. I need to look into this again.
Footnote 75: Two points here:-
- This is just what Descartes thought God had done with the (non-human) animals – they are zombies, feeling nothing, though acting for all the world as though they do. A damnable doctrine!
- Genesis seems to suppose that what extra had to be added was “the breath of life” rather than “consciousness”.
Footnote 76:
- While I agree with this evaluation, it’d only be fair to have the motivation behind McGinn’s idea revealed, and how it would explain anything even if correct.
- There is also the question how we might ever know whether there was a “reality” before the big bang, and of what sort it was if so.
- Presumably we should read "McGinn (Colin) - The Problem of Consciousness: Essays Towards a Resolution", which is in Papineau’s rather brief reading-list.
Footnote 77:
- This is a critique of the ‘Knowledge Argument’, which requires careful consideration.
- But, there might be other reasons for resisting the Identity theory.
Footnote 78:
- I have my doubts, as it’s the conscious feeling we’re trying to explain in physical terms, which non-materialists claim is impossible.
Footnote 79:
- To me, all this seems to be predicated on non-dualist theories.
- Of course, I am a physicalist, so reject dualism; but, it’s the rigor of the argument that’s important rather than just trotting out your prejudices.
- Doesn’t dualism suppose that it’s only the soul that’s conscious?
Footnote 80:
- This is an important point, which I had not appreciated, as such terminology implies that ‘consciousness’ is some ‘stuff’ that is distinct from the physical processes themselves.
- Presumably the orthodox physicalist terminology should be that such physical processes ‘constitute’ conscious experience.
Footnote 81: Footnote 82: Footnote 83: Footnote 84:
- Papineau doesn’t go into any detail, so how the theory works is a mystery.
- It’s all normal science as far as unconscious information processing and decision-making is concerned. Computers do it all the time. But, the phenomenal consciousness part is again so much hand-waving.
Footnote 86:
- This seems a complete muddle. I’m not sure the heart and saliva are ‘traits’, but – even if we allow this terminology – what have they got to do with evolution? They would be just as useful if directly created.
- Of course, we might consider their evolution, and see how their development benefited the organism, by increasingly improving their performance of these functions. This is – presumably – what Papineau means.
Footnote 87:
- Well, epiphenomenalists might take this view, but why need materialists?
- If conscious properties just are brain processes, then noticing a stimulus – in the heightened sense of being acutely aware of it – does have physical effects. If pain wasn’t so painful, its avoidance wouldn’t be such a priority, and the organism would suffer in worse ways as a result – as lepers – who feel no pain – know.
Footnote 88:
- Well, this depends on the conceivability of zombies. But they are only apparently conceivable.
- Additionally, it’s the immediacy of conscious experience that’s important. The brain can ignore unconscious or subconscious experience, but not conscious experience – or not as it gets more and more intense. This has evolutionary advantage.
- The brain becomes aware of its own processes by internal perception.
Footnote 89:
- I’m not sure what these might be … unless consciousness is itself taken to be a physical effect.
Footnote 90:
- I didn’t get the circularity, nor see why a theory of consciousness is required before evolution might explain its purpose.
Footnote 91:
- This is Papineau’s choice. I would say ‘very’, at least.
Footnote 92:
- This still strikes me as absurd.
- The detector is the ‘observer’, and it is not conscious.
- Quantum Interference patterns on plates appear before someone looks at them, don’t they?
Footnote 93:
- This sounds circular to me. The poison isn’t released – or not – until the quantum wave has collapsed.
Footnote 94:
- Maybe, but it’s not really the same thing. QM doesn’t require metaphysical idealism.
Footnote 95:
- This is all rather obscure. Random events allow us to make choices on their basis, but our decisions themselves aren’t random?
Footnote 96:
- It’ll be necessary to read Penrose’s books to get any idea what this is all about.
- Papineau cites "Penrose (Roger) - Shadows of the Mind" in his bibliography.
- Which gravitational effects are these – quantum gravity? Is a massive nearby gravitating body needed, or is it the brain’s own gravity?
- If the former, presumably consciousness would disappear in deep space, which doesn’t bode well for interstellar space travel!
- This might explain the lack of alien visitations, assuming there have been none, though there are better explanations.
- And I suppose the notion that interstellar travel involves long periods of suspended animation might get round this problem.
Footnote 97:
- It does nicely explain why we might need to be aware of some things and not others, but some of the ‘raw feels’ – the qualia – would seem to be straight out of the perceptual sub-systems – though doubtless this is debatable (e.g. ‘a red patch’ might be from a sub-system, but the perception of a rose requires integration; as does pain, if it is perception of bodily damage).
- Also, we have to beware that language and concepts don’t become central to any theory of consciousness if this were to deny consciousness to higher animals.
Footnote 98:
- Papineau doesn’t reference any works by Schacter.
- Again, it doesn’t seem to explain the ‘raw feels’ which might be taken to reside in the perceptual modules themselves.
Footnote 99:
- Presumably Papineau has contemporary philosophers – he uses the term ‘thinkers’ – in mind, but doesn’t say who.
- I’ve a feeling that "Carruthers (Peter) - The Animals Issue: Moral Theory in Practice", which denies that animals have rights, also denies that they are conscious, but the Chapter that discusses the issue – Chapter 8 – has been removed from the electronic version that I’ve got on the grounds that it ‘contains errors’. We’re referred to a couple of other papers:-
- "Carruthers (Peter) - Sympathy and subjectivity" has a conclusion that ‘The mental states of non-human animals lack phenomenal feels (unless those animals are apes, perhaps) ’.
- But "Carruthers (Peter) - Suffering without subjectivity" says that it is possible for animals to suffer – and therefore to be objects of our concern – ‘in the absence of phenomenal consciousness’. I’ve no idea what this means.
Footnote 101:
- Is this really the case? Intentionality doesn’t seem to have the same feeling of a priori intractability within a materialist framework that consciousness has.
- After all, aren’t representations stored in computers – especially associative learners?
Footnote 102:
- I wrote three essays on this general topic early in my undergraduate studies in Philosophy of Mind.
- See Sensations as Perceptions and links therefrom.
Footnote 103:
- I’m surprised there’s no mention of the Freudian unconscious here. Maybe it’s now too controversial.
- Also, the sort of things we come out with when drunk, which are often taken to show our real thoughts and beliefs, that we often suppress: Wikipedia: In vino veritas.
Footnote 104:
- I think this used to be an argument against computers understanding anything – on the presumption that they are not conscious.
- While I agree that they are not conscious, I think autonomous machine learning – such as that employed by Google to train Google Translate (see Wikipedia: Google Translate) – does demonstrate that computers do ‘understand’ in the sense that they are not just parroting what’s fed into them or following human-devised rules. Similarly with Chess engines – what they now say is Gospel. Yet they still aren’t conscious.
Footnote 105: Footnote 107:
- As it’s absurd to suggest that thermostats are conscious – even though they are intentional systems – is this a pejorative account of what Chalmers says, a supposed logical conclusion that he’d refuse to accept, or does he admit straight out that ‘thermostats are conscious – well sort-of’?
Footnote 108:
- Presumably because these are ‘one-trick ponies’.
Footnote 109:
- This is the other horn of the dilemma; we’ve eliminated some obviously unconscious actors, but are left with some that – by the lights of this modified representational theory – ought to be conscious, but aren’t.
Footnote 110:
- This experiment isn’t very well described. See Wikipedia: Benjamin Libet - Volitional acts and readiness potential for a better description. The clock is a modified oscilloscope with a dot that goes round in 45ms hops. The subject is asked to note the location of the dot when they decide to click (‘moving the hand’ would be too vague; ‘clicking’ is easier to register precisely).
- Basically, it’s 300ms from the start of the motor-cortex ‘action potential’ to noting the will to click, and a further 200ms until the clicking. So, Papineau has it slightly wrong.
Footnote 111: Footnote 112:
- I think ‘statistically significant’ is what’s usually said. This is often ‘bigged up’ and the phenomenon grows in the telling.
Footnote 113:
- I wasn’t quite sure what this meant, so have quoted it in full.
- It seems to have popped up without any introduction. I suspect it’s more important in the next section on HOT Theories.
- Presumably theorists have to be careful not to deny consciousness to animals in their efforts to get a theory to work for humans. Descartes, for instance.
- This is very important, as it guides how some of us treat animals. Given the consequences for them of us arrogantly following our theories for which there is little evidence, we should not take these theories too seriously if they imply something that could have catastrophic consequences for our fellow-animals.
Footnote 114:
- I’m sure that HOT theories are more sophisticated – and less patently absurd – than is briefly outlined here.
- But I seem to remember that some philosophers – in particular Peter Carruthers – do follow the argument where it leads and accept the obnoxious conclusion that non-human animals aren’t phenomenally conscious. As I’ve no doubt said elsewhere, I think promulgating such views is ethically and philosophically irresponsible, for two reasons:-
- Theories with abominable consequences if taken seriously should be subjected (by their promulgators) to double scrutiny before being made public.
- In this case, it is more likely that any HOT theory that implies that the higher animals are not phenomenally conscious is false than that its implication to that effect is true.
- Information I’ve got on HOT theories – and especially on their implications for animal minds – includes:-
→ "Carruthers (Peter) - Consciousness: Explaining the Phenomena",
→ "Carruthers (Peter) - Dispositionalist higher-order thought theory (1): function",
→ "Carruthers (Peter) - Dispositionalist higher-order thought theory (2): feel",
→ "Carruthers (Peter) - Language, Thought and Consciousness",
→ "Carruthers (Peter) - Natural theories of consciousness",
→ "Carruthers (Peter) - Replies to Critics: Explaining Subjectivity",
→ "Carruthers (Peter) - Why the question of animal consciousness might not matter very much",
→ "Gennaro (Rocco) - Animals, consciousness, and I-thoughts",
→ "Guzeldere (Guven) - Is Consciousness the Perception of What Passes in One's Own Mind?",
→ "Lurz (Robert) - Animal Minds",
→ "Millar (Alan) - Rationality and Higher-Order Intentionality",
→ "Nelson (Thomas O.) - Consciousness, Self-Consciousness, and Metacognition",
→ "Raymont (Paul) - From HOTs to Self-Representing States",
→ "Rosenthal (David) - Being Conscious Of Ourselves",
→ "Rosenthal (David) - Consciousness, Content, and Metacognitive Judgments", and
→ "Zahavi (Dan) & Parnas (Josef) - Phenomenal Consciousness and Self-awareness: A Phenomenological Critique of Representational Theory".
Footnote 115:
- As non-human animals don’t have language, this must be difficult to set up. Maybe using sign-language with acculturated chimpanzees is a possibility.
- I couldn’t find much in my database, other than:-
→ "Tschudin (Alain J.-P. C.) - Belief attribution tasks with dolphins: What social minds can reveal about animal rationality".
- This looks to be an interesting paper as its focus is on the non-verbal false belief test as applied to dolphins (though the author thinks it may apply to apes as well). There seem to be interesting methodological discussions.
- Note that this is here used as a test for animal rationality rather than as a test for TOM. We might expect social animals to have a TOM even if they aren’t rational in the human sense.
Footnote 116:
- Is this definitely known?
Footnote 117:
- Unlike cats and dogs, who would be very unhappy to be treated as unconscious beings – as it was once assumed they were.
- It’s amazing that some philosophers are ‘happy to accept’ that plants and even stones are conscious, while others have difficulty believing that the great apes are conscious.
- While I think it’s important to follow arguments where they lead, if they lead to absurdities you should back-track on your reasoning somewhat before you cause anyone any harm.
Footnote 118:
- So, we learn to think about thinking – it doesn’t just come naturally.
- By which, I suppose, he doesn’t mean that we’re taught to think that way, but that it comes up naturally in social life.
Footnote 119: Footnote 120:
- The book has ‘Topography’ rather than ‘Tomography’!
- See Wikipedia: Positron Emission Tomography.
- It seems that a CT-Scanner (Computerised Tomography: see Wikipedia: CT Scan) is an optional extra, though I’ve not studied it enough to understand what use PET is without CT. It looks like CT uses X-Rays, while PET effectively uses Gamma rays created by the annihilation of positrons and electrons.
- It seems Wikipedia: Brain Positron Emission Tomography may be a more relevant link.
- For ‘tomography’ itself see Wikipedia: Tomography; it seems the word comes from Greek τόμος: "slice, section". Scanners produce a series of 2D sections, combined by a computer into a 3D model.
Footnote 121: Footnote 123:
- Papineau says ‘the consciousness of such creatures would depend on their brains showing the right signature’, but it’s not clear that extra-terrestrials or AIs would have the same brain-anatomy, even if they had a brain at all.
- But, presumably, there would be some analogue that might be invoked.
Footnote 124:
- Because they can’t report what they are feeling – but this only applies to animals – extra-terrestrials or AIs would presumably be able to tell us what they were feeling.
- Our problem would be whether or not to believe them.
Footnote 125:
- How do we know they don’t?
- At the moment (August 2022) my dog Henry has some as yet unknown but probably dread illness. He’s flopped down not doing much, but not sleeping all the time.
- No doubt he’s not thinking about his impending death and the possibility of post-mortem survival.
- But he might be thinking he feels right rotten, if he does. How do we know he’s not?
Footnote 126: Footnote 128:
- This was discussed earlier under the head of the causal closure of physics. I doubt dualists need to be committed to this.
Footnote 129:
- Traditional dualists have denied that non-human animals – lacking souls – were conscious. This is light of a sort, though most likely false.
Footnote 131:
- I’d expected this to be an epistemological question, and the ‘continuum’ to be one of reducing degrees of consciousness.
- So:-
- We might not know whether oysters are conscious, but there is a fact of the matter, known only to oysters.
- But, if oysters are conscious, they are less conscious than ‘higher’ organisms. There is less of ‘what it is like to be’ them than ‘what it is like to be’ a dog, or one of us.
- There might be a vague metaphysical boundary, where – even if we knew the facts – we’d not know what to say.
- Maybe: but this is a problem for any attribute that comes in degrees. It’s often a conceptual matter, to do with language. Just when does our concept of ‘consciousness’ apply?
- But – according to Papineau – there’s ‘no fact of the matter’, not just that we don’t know all the facts.
Footnote 132:
- Papineau says ‘mere physical objects’; but, we’re all physical. So, the key word – other than ‘mere’ – is ‘objects’. That is, we don’t think of them as ‘subjects’.
Footnote 133:
- But there still is a fact of the matter. AIs can be used in therapeutic situations and – especially as they become more sophisticated – they might come to seem like friends. But – one presumes – they are not conscious, and we’re projecting on to them our own hopes and desires.
- And – picking up on Dennett’s ethical point – there’s a real matter of importance. If AIs are conscious, then we can’t deal with them as we would an unconscious being. We’d not merely be being sentimental in having ethical concerns.
Footnote 134:
- But there might be no inner life.
- That we might not know what that inner life is like – as with bats – is another matter entirely, though even then we can’t expand our concept based on something that’s inconceivable for us.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2026
- Mauve: Text by correspondent(s) or other author(s); © the author(s)