Introducing Consciousness
Papineau (David)
Source: Papineau - Introducing Consciousness
Paper - Abstract


  1. What is Consciousness?: Examples – dental work with or without anaesthetic; eyes shut or open; sleep with or without dreams.
  2. The Indefinability of Consciousness:
    • There seems no scientific, objective definition to capture the essence of consciousness.
    • Definitions in terms of the psychological role conscious states play, or in terms of the physiology, seem to miss out the essential feel.
    • We can imagine robots with electronic brains that satisfy a scientific definition, but have no feelings. Lights on but no-one at home.
    • Similarly for androids made of defined chemical and physical stuff.
    • Analogy with jazz: “Man, if you gotta ask, you’re never gonna know”.
  3. What is it like to be a Bat?:
    • "Nagel (Thomas) - What is it Like to Be a Bat?" raises the distinction between the subjective and the objective – which we often run together as humans, though they might come apart in intelligent robots.
    • The point is that while we can imagine what it would be like for a human to act like a bat, we cannot imagine what it is like for the bat, despite knowing all there is to know about bat physiology.
    • For instance – no doubt – echo-location would seem to the bat more like vision than hearing does to us, but we can’t fill in any of the details.
  4. Experience and Scientific Description: “… no amount of scientific description will convey a subjective grasp of conscious experience.”
  5. How Does Consciousness Fit In?: The central problem is to explain the “what it’s like” of phenomenal consciousness – how this fits into the objective world and in particular how it relates to the scientific goings-on in the brain. There are three options, which will be examined in due course:-
    1. Dualist: conscious experience is genuinely distinct from brain activity. But, if the world contains subjective elements, how do they interact with normal space-time entities, and what principles govern their emergence?
    2. Materialist: despite the appearances, there is a unity between the subjective and objective. The problem is to explain how the mind and brain can possibly be identical if they appear so different.
    3. Mysterian: the understanding of phenomenal consciousness is beyond human beings at present, and maybe forever.
  6. Hard and Easy Problems: David Chalmers distinguishes between the “hard” and “easy” problems of consciousness.
    1. The “easy” problems – while far from simple – are those that psychologists and physiologists know how to address using standard scientific methods without raising insurmountable philosophical obstacles.
    2. Easy problems deal with the objective study of the brain – the causal roles played by psychological states, and how these roles are implemented in the brains of different creatures.
    3. For example, pain is a state typically caused by bodily damage which typically causes a desire to avoid further damage. In humans it is implemented by A- and C-fibre transmissions. Similar analyses are available for vision, hearing, memory and so on.
    4. But such analyses tell us nothing about the feelings involved. Analyses involving causal roles and physical implementation apply just as much to unfeeling robots.
    5. The “hard” problem is to explain where these feelings come from.
    6. The Explanatory Gap: This is the expression used by Joseph Levine. Objective science can only take us so far – there seems to be a gap between what science tells us and what we most want to know – why does pain hurt?
  7. Creature Consciousness: Animals are said to be conscious if they have some conscious states.
  8. The Hard Problem is New: It’s only since the 20th century that the physical world view has tended to squeeze out consciousness. In earlier times it was taken for granted that there was a world of conscious minds independent of the physical world. The mental was supposed to be at least as basic, with matter a second-rate citizen.
  9. Rene Descartes’ Dualism: Despite his contributions to the foundations of modern physical science, Descartes never doubted that conscious minds exist at a distinct non-physical level. As a dualist, he thought that the mental and physical were two separate but interacting realms.
    1. Matter in Motion: Descartes’ account of the physical world was austere and differed from most accounts before and since. He thought the physical world was just matter in motion, with all action by contact. Secondary qualities are not in the things themselves, but colours – say – are impressions produced by the action of material particles on our sense organs.
    2. Mind Separate from Matter: Outside the physical world is the mental world of unextended mental / conscious elements – thoughts, emotions, pleasures. The only property these elements share with matter is location in time, though the mental and physical interact – eg. sitting on a pin causes pain, and pain causes your body to jump.
    3. The Pineal Gland: a “whacky idea” but an honest answer – the function of the PG is still not fully understood – to a serious problem. The causal interaction of mind and matter remains the Achilles heel of contemporary dualism.
  10. Berkeley’s World of Ideas: sceptics argued that Descartes’s dualism left us ignorant of the world of matter. Berkeley “solved” the problem by denying that there was a world of matter for us to be ignorant of; mental experiences are all there is. “Esse est percipi”. While Idealism is an affront to common sense, Dr. Johnson’s “refutation” by kicking a stone fails: Berkeley didn’t deny the existence of sense impressions – he simply claimed that their causes were mental rather than physical.
  11. The Idealist Tradition: Idealism’s philosophical advantages and impregnability to disproof attracted many philosophers, including the British figures listed next.
  12. Idealism in Britain: Despite British philosophy being renowned for adherence to common sense, many of its leaders have been idealists:-
    • John Stuart Mill: physical objects are “permanent possibilities of sensation” with no separate existence apart from our sensory awareness of them.
    • Bertrand Russell: Regarded the physical world as a figment of our mental perspective, a logical construction out of the sense-data we are aware of in perception.
    • A.J. Ayer: The material world has no reality apart from its reflection in the deliverances of our sense organs.
  13. The Scientific Reaction to Idealism:
    1. Idealism has no problem with consciousness, out of which it builds reality.
    2. But this just shifts the problem – how are material things part of reality?
    3. The tide moved against idealism primarily because, if mental events are private, third parties cannot know anything about them – and hence, according to idealism, cannot know anything about reality.
  14. Behaviourist Psychology:
    1. The behaviourist response – from John B. Watson and B.F. Skinner – was to claim that a scientific psychology should be based on an experimental study of behaviour, not on individuals’ judgements of their feelings.
    2. They learned a lot about the training of rats and pigeons using rewards & punishments.
  15. The Skinner Box:
    1. “Operant Conditioning Apparatus”: initial random lever-press releases food, which reinforces the pressing, the frequency of which increases with Positive Reinforcement.
    2. An operantly conditioned rat will continue pressing the lever even when the food reward ceases.
    3. Both Watson and Skinner applied their views to human beings.
    4. Watson was an extreme environmentalist: claiming that the human mind is formed entirely by nurture – rewards and punishments – rather than by genetic nature.
    5. Skinner wrote Walden Two - as a “sequel” to the American idyll "Thoreau (Henry David) - Walden"; therein, Skinner proposed child-rearing should be rigorously based on a system of rewards.
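The positive-reinforcement loop of the Skinner box can be caricatured in a toy simulation (my own illustrative sketch; the parameters are made up, not from the text):

```python
import random

def simulate_skinner_box(trials=1000, base_rate=0.05, reward_boost=0.02):
    """Toy operant conditioning: each rewarded lever-press nudges up
    the probability of pressing again (positive reinforcement)."""
    press_prob = base_rate
    presses = 0
    for _ in range(trials):
        if random.random() < press_prob:   # rat happens to press the lever
            presses += 1                   # ...and food is released
            press_prob = min(1.0, press_prob + reward_boost)
    return presses, press_prob

random.seed(0)
presses, final_prob = simulate_skinner_box()
print(presses, round(final_prob, 2))
```

The pressing probability snowballs: early accidental presses are rewarded, which makes further presses more likely, mirroring the rising response frequency under reinforcement.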
  16. The Ghost in the Machine:
    • While psychologists – “methodological behaviourists” - rejected subjective experience as bad methodology, philosophers – “logical behaviourists” – made out that subjective experience was incoherent.
    • They claimed that all that was meant by “mental states” were publicly observable inclinations to behave in certain ways.
    • Gilbert Ryle believed this and described the traditional view of the mind as a subjective realm controlling the body as “the ghost in the machine”.
  17. The Beetle in the Box:
    • Ludwig Wittgenstein’s Private Language Argument associates him with Logical Behaviourism. Wittgenstein’s argument is that public verification is required for the working of language, and that a language that can only be checked by one person makes no sense.
    • Similarly, we wouldn’t know what we were talking about if talk of mental states referred to private inner episodes.
    • We might each be referring to different things if we referred to “the beetle (hidden) in our box”, or to nothing at all (if the box were empty).
    • For mental talk to have objective content, we must regard the mental realm as intrinsically connected to the behaviour that makes it publicly observable.
  18. Psychological Functionalists:
    • Both logical and methodological behaviourism are now seen as over-reactions to the subjectivist view of the mind.
    • There’s craziness in supposing that mental states can never be known introspectively, only by observation.
    • The “behaviourist joke” – “you’re feeling OK, how am I?”
    • Behaviourism has now been superseded by Functionalism, which upholds behaviourism’s resistance to mental states as essentially subjective, yet allows them to be internal while not necessarily being observable in behaviour.
    • The example given is someone being exhorted not to display pain-behaviour to avoid discovery.
    • So, mental states are to be thought of as internal states identifiable by their typical causes and effects – pain typically arises from bodily damage and typically causes avoidance-behaviour, though actual behaviour depends on interaction with other beliefs and desires.
    • Functionalists consequently think of mental states as causal intermediaries. While internal, they are not to be identified purely according to their feel. While unobservable, they are still objective parts of the causal-scientific world.
    • Mental states are consequently thought of as being real – analogous to scientific unobservables such as atoms, genes or quarks.
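The functionalist idea that an internal state counts as “pain” purely by its causal role can be caricatured in a few lines of code (an illustrative sketch of my own, not anything Papineau gives):

```python
class FunctionalAgent:
    """An internal state counts as 'pain' purely by its causal role:
    typically caused by bodily damage, typically causing avoidance."""
    def __init__(self):
        self.internal_state = None

    def receive(self, stimulus):
        if stimulus == "bodily damage":    # typical cause of pain
            self.internal_state = "pain"
        return self.act()

    def act(self):
        if self.internal_state == "pain":  # typical effect of pain
            return "avoidance behaviour"
        return "carry on"

agent = FunctionalAgent()
print(agent.receive("bodily damage"))      # prints "avoidance behaviour"
```

Note that nothing in the definition says what the state is made of, or how it feels – only what causes it and what it causes. That is exactly the feature the “hard problem” objections fasten on.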
  19. Structure versus Physiology:
    • Functionalism has no commitment to what mental states are made of.
    • While functionalists turned inwards from behaviour to the brain, they didn’t involve themselves with neurons and functional areas of the brain, but instead drew flowcharts.
    • So, they hypothesised mental structure – in terms of functional roles – in abstraction from physical structure.
  20. The Mind as the Brain’s Software:
    • The analogy is with digital computers – with the brain as the hardware and the mind as the software.
    • The analogy is stretched to allow that the mind might run on different hardware platforms, just as computer programs can run on different models of computer.
    • What matters with software is its causal structure. On any particular hardware, running the software causes some internal states – it doesn’t matter quite what these are as long as they satisfy the structural requirement.
  21. Variable Realisation:
    • According to functionalists, mental states are software rather than hardware / “wetware”.
    • Example: Human and Octopus Pain
      1. Humans and Octopuses both show pain-behaviour – which typically arises from bodily damage and typically causes avoidance behaviour.
      2. This is in spite of the fact that their “circuitry” differs.
      3. So – the functionalist claims – because the function of pain is the same in both cases, they have the same software but different wetware.
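The human/octopus point – same functional role, different realisation – is just the software analogy again, and can be sketched as follows (hypothetical classes, purely illustrative):

```python
class Human:
    """Pain realised in human wetware (toy model)."""
    def on_damage(self):
        return self._fire_c_fibres()
    def _fire_c_fibres(self):        # human-specific realisation
        return "avoidance behaviour"

class Octopus:
    """The same functional role realised in different circuitry."""
    def on_damage(self):
        return self._octopus_circuit()
    def _octopus_circuit(self):      # octopus-specific realisation
        return "avoidance behaviour"

# Same "software" (causal role), different "wetware" (implementation):
for creature in (Human(), Octopus()):
    print(type(creature).__name__, creature.on_damage())
```

The shared interface (`on_damage` leading to avoidance) is the functional role; the private methods stand in for the differing physiologies.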
  22. A Physical Basis for Mind:
    • Because functionalism deals only with structure rather than what mental states are made of, it is consistent with dualism or even idealism.
    • Maybe some special “mind-stuff” arises in the brains of conscious creatures to provide the infrastructure needed to fulfil the functional roles.
    • But most functionalists are materialists, arguing that the analogous computers are entirely made of matter.
  23. A Modern Dualist Revival:
    • Functionalism linked to materialism leads to the Hard Problem of consciousness – the mental “feels” that make life worth living.
    • One response to this problem is to insist that the mind must occupy a separate non-physical realm after all. If modern orthodoxy implies that human beings are unfeeling automata, so much the worse for orthodoxy.
    • David Chalmers is cited as a modern-day dualist, though one less extreme than Descartes. Descartes thought of mind and matter as separate substances. However, …
  24. A Dualism of Properties:
    • Chalmers and the like think of human beings as a single substance with two kinds of property – they are property dualists.
    • They consider both behaviourism and functionalism as over-reactions to subjectivist idealism.
    • Rather than relying on bare intuition, modern dualists have two arguments:-
      1. The Argument from Possibility: derives from Rene Descartes.
      2. The Argument from Knowledge: derives from Gottfried Leibniz.
  25. Descartes’ Argument from Possibility:
    • Descartes argued that it is “perfectly possible” for mind and body to exist separately.
    • For, there’s no apparent contradiction in the ideas of ghosts or immortal souls.
    • While – in point of fact – there might not be any ghosts, it still makes sense to think we might continue to exist as conscious beings without a body, as millions have comfortably hoped.
    • Even if in reality mind and body are always found together, the possibility of posthumous survival implies their real distinction.
    • If they were really the same thing, what sense could be made of their coming apart?
    • A modern variant of this argument – attributed to Saul Kripke – is the Zombie argument.
  26. A Zombie Duplicate:
    • Kripke imagines a molecule-by-molecule duplicate of himself – such as might be produced by a Star Trek holocopier – which has no conscious experience.
    • The philosophical term for such individuals is “zombie”, not to be confused with the living dead of B-movies.
    • Kripke’s zombies don’t bang into the furniture but act just like normal human beings. They have the same brain structure, but just lack the inner feelings.
    • Even if – as seems likely – there are no such zombies in the physical universe, it is (as with Descartes’ argument) the mere possibility that is important. There doesn’t seem to be anything logically contradictory about the notion of a zombie.
    • If we admit that zombies are possible, then conscious properties must differ from physical or structural properties.
    • Papineau has an interesting diagram of two doppelgangers screwing together a couple of androids. One asks the other “By the way, which one of us is the zombie?”. The answer is “How should I know? You’re the one with the feelings.” This is presumably ironic, but raises interesting questions.
  27. Leibniz’s Argument from Knowledge:
    • This argument is attributed to a passage in Gottfried Leibniz’s Monadology.
    • The argument is that if we walked about in a (super-sized) machine that “produces thinking, feeling and perceiving” we would see nothing but moving parts – and “never anything that could explain perception”.
    • While the text goes on to investigate the “Red Mary” thought-experiment that Papineau considers to be the modern variant of Leibniz’s idea, it strikes me that this argument is similar to John Searle’s “Chinese Room” argument.
    • Leibniz’s point – says Papineau – is that even if we knew all there was to know about the workings of the brain, we still wouldn’t know about consciousness.
  28. The Modern Argument from Knowledge:
    • This is a TE proposed by Frank Jackson.
    • Mary’s a research psychologist – supposed to have grown up in a black-and-white environment – who knows all there is to know about colour – the properties of light and of eyes and the visual centres in the brain – but has never seen red. Then one day she sees a red rose and so – it is said – comes to learn something new, namely, what it is like to see red.
    • If this is right, not all mental properties are physical or structural properties – because (ex hypothesi) Mary knew all these before she experienced the colour red.
    • Hence, this further property must be distinct from the physical and structural properties she already knew about, since she learned something new – the phenomenally conscious “what it is like”.
  29. A Dualist Science of Consciousness:
    • We’re referred to David Chalmers who – Papineau claims – while not anti-science is persuaded by these dualist arguments that the scope of science should be expanded to include consciousness, just as physics was expanded to include electromagnetism rather than hope to reduce it to mechanics. Chalmers thinks that “there is a separate phenomenal realm where conscious awareness can be found”.
    • We need a theory that accounts for the emergence of conscious states in the same way that Maxwell’s laws govern electromagnetic fields.
  30. Arguments against Dualism:
    • Attempts to revive dualism have to face the old problems of mind-body interaction originating with Descartes and his pineal gland.
    • However, modern dualism is a theory of properties rather than substances, so doesn’t have to explain how two different substances communicate causally.
    • But we’re still left with the most difficult question – how can mind influence matter without violating the laws of physics?
  31. Causal Completeness:
    • The physical world appears to be causally complete.
    • An example is given of a goalie’s save: retinal registration of ball’s motion → neuronal activity in sensory cortex → physical activity in motor cortex → electrical messages travel through nerves → physical contraction of muscles.
  32. The Demise of Mental Forces:
    • Hence, we never seem to leave the realm of the physical, and there’s no room for non-physical properties like conscious experience to make a difference to behaviour.
    • Conscious experiences seem to be causal danglers, with as little impact on events as a child’s toy steering wheel has on the direction of her mother’s car.
    • Squaring dualism with the causal completeness of physics was recognised as a problem in the 17th century. While Descartes wasn’t concerned, his successors pointed out that deterministic physics ruled out mind influencing matter.
  33. Newtonian Physics:
    • This problem receded a bit in the 18th and 19th centuries as Newtonian physics allowed action at a distance, rather than insisting that change in motion requires contact between bodies.
    • Not just gravity, but chemical forces and “forces of adhesion”.
    • Also, vital and mental forces that allow living creatures to direct their bodies.
    • It is only recently that such forces have begun to seem cranky. Newtonians thought of them as no more mysterious than gravity or magnetism. According to Papineau, C.D. Broad in his 1923 book Mind and its Place in Nature defended an “emergentist” philosophy whereby special “configurational” forces arise in matter of sufficient organisational complexity, like living bodies and intelligent brains.
  34. Back to Descartes:
    • The current consensus:-
      • Still have action at a distance rather than cause requiring contact.
      • Quantum mechanics means we no longer have physical determinism.
      • Material effects always have material causes.
      • There are three fundamental forces
        → Strong nuclear
        → Electroweak
        → Gravity
      • All non-random influence on the motion of material bodies results from these forces.
    • So, there’s no room for an independent mind to make a material difference. The mind cannot move the body.
  35. Materialist Physiology:
    • The major reason special mental forces have been discredited is advances in neuroscience.
    • While it seems contrary to common sense that a physical system can display human behaviour, brain research suggests the opposite.
    • Particularly, this is demonstrated by advances in understanding the body’s neuronal network, and the chemistry of the neurotransmitters that enable inter-cellular communication.
  36. No Separate Mental Causes:
    • While the picture is far from complete, we’ve learnt enough in the last 100 years to be sure there are no “special mental forces” lurking in intelligent brains.
    • Two of the last to hold out against the causal completeness of physics were John Eccles and Roger W. Sperry, both Nobel prize-winners. But the consensus is against them, even though theories of physics as we know it may not be correct or complete.
    • Modern physical science would be very surprised indeed were matter to move other than from physical causes, though this idea is not incoherent.
  37. What about Quantum Indeterminism?:
    • Does quantum indeterminism create the loophole to allow mind to make a difference?
    • No, because QM fixes the probabilities of the outcomes of events. If the mind – by way of independent conscious decisions – could affect the probabilities of neurotransmitter movements, then their prior probabilities wouldn’t be fixed by QM after all.
    • Again, loading of the quantum dice isn’t incoherent, but it would greatly surprise modern science to find it to be true.
  38. Causal Impotence:
    • According to Papineau, most contemporary dualists – who accept the completeness of physics – consider it to be an illusion that the mental has any effect on the material world. We appear to be in control, but we are not – no more than a child with a toy steering wheel.
    • Consequently, the conscious mind is “causally impotent”.
  39. Pre-established Harmony:
    • This is the (crazy) idea of Leibniz – who was a dualist who accepted the causal completeness of physics, and therefore that mind cannot really influence matter. God has set up the initial conditions so that mind and matter keep in step, and so that pains follow bodily insult and action follows the willing thereof.
  40. Modern Epiphenomenalism:
    • Epiphenomenalism doesn’t require divine action or advance planning, but simply allows influence from brain to mind, while denying influence in the other direction.
    • This makes the conscious mind an epiphenomenon of the brain – a “dangler” with no causal powers of its own.
    • While it so happens that the brain gives rise to conscious experience, it might have been otherwise, and everything would work the same.
    • Using the analogy of a train – it all works fine at the physical level. The mental experience is just like the smoke – a by-product that makes no difference to the motion.
  41. The Oddity of Epiphenomenalism:
    • This is rather counter-intuitive, implying (at least) two things:-
      1. Firstly – say – the sensation of thirst doesn’t cause us to go and get a drink.
      2. Secondly, it also implies that phenomenological Zombies would act the same as us.
    • Papineau gives a longish quotation from "Chalmers (David) - The Conscious Mind: In Search of a Fundamental Theory", where Chalmers says that his Zombie-simulacrum would carry on writing volumes about consciousness – including about his experiences – just like he does, without experiencing a thing.
    • I think these are two completely different situations: they are not equally plausible, and aren’t equally necessary as consequences of epiphenomenalism.
  42. The Materialist Alternative:
    • Papineau agrees with the absurdity of the zombie implication, especially of our verbal accounts of our sensations. While it looks as though physicalism forces epiphenomenalism upon us, there are alternatives.
    • The materialist option is to question whether conscious states really are different from physical states. If they are not, then they can cause physical events, and we can ignore zombies, because if mental states are necessarily brain states, then zombies can’t exist.
    • On the materialist view, physical duplicates are necessarily conscious duplicates.
    • So, materialism avoids the drawbacks of epiphenomenalism. But is it an option? What about the objections of Saul Kripke (Zombies, Section 26) and Frank Jackson (Section 28), who purported to prove that conscious states must differ from brain states?
  43. Materialism is not Elimination:
    • We must note that materialism agrees that conscious experiences are real. If you’re in a particular brain state, that’s just “what it’s like” for you.
    • In contrast with David Chalmers’s analogy with electricity (see Section 29), here we have an analogy with temperature.
  44. The Example from Temperature:
    • Rather than adding temperature to the ontology of physics, it has been reduced to mean kinetic energy.
    • Temperature hasn’t been eliminated – unlike “animal spirits”. Similarly, consciousness really exists – it just isn’t anything over and above brain activity.
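The temperature reduction can be made concrete with the standard kinetic-theory identity for a monatomic ideal gas, T = 2⟨KE⟩ / 3k_B (a textbook physics result, not from Papineau’s text):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def temperature_from_mean_ke(mean_ke_joules):
    """Temperature 'just is' mean molecular kinetic energy, rescaled:
    T = 2<KE> / (3 * k_B) for a monatomic ideal gas."""
    return 2 * mean_ke_joules / (3 * K_B)

# Mean translational KE of ~6.21e-21 J corresponds to roughly room temperature:
print(round(temperature_from_mean_ke(6.21e-21)))  # prints 300
```

The point of the analogy: nothing here eliminates temperature; the identity simply tells us what temperature is at the micro-level, just as the materialist claims brain activity tells us what consciousness is.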
  45. Functionalist Materialism:
    • Philosophers like Jerry Fodor – a Functionalist Materialist – want to associate conscious experience with structures rather than with physiology.
    • Just like different hardware can run the same software, so – it is claimed – different physiologies can have the same kind of conscious experience.
    • So, humans and octopuses both feel pain, despite their different physiologies, because they share a structural property of being in some physical state arising from bodily damage that causes a desire not to incur further damage.
    • Similarly – it is claimed – with silicon-based aliens, provided they had the same structural property.
    • However, many theorists – including myself – find this implausible as “it seems odd that your material construction should be irrelevant to how you feel” and it makes computer consciousness too easy to attain.
  46. Making a Computer Conscious?:
    • We can program a computer to realise any causal structure whatever, and to role-play actions associated with pain or the emotions based on its internal states.
    • But it’s hard to believe that such a computer would share our rich mental life – and our fear of death – even if structured in the right way.
    • But functionalism also applies to – correctly configured – heaps of junk, not just to slick 2001-HAL-style computers. Would a scrap-metal machine really “feel anything”?
  47. The Turing Test:
    • Standard description of the Turing Test.
    • Standard objection: how can a computer feel anything? One that passes the Turing Test is only simulating a conscious mind, and is no more conscious than a weather program is wet.
    • Papineau thinks this naturally leads on to John Searle’s Chinese Room argument.
  48. The Chinese Room:
    • The usual exposition of the argument is given, where it’s clear that the Homunculus in the Room, carefully following the manual, doesn’t understand Chinese. So, the appearance of understanding doesn’t guarantee the real thing – and no more does the appearance of consciousness. Therefore, the Turing test doesn’t guarantee consciousness.
    • This is much disputed, of course.
  49. Language and Consciousness:
    • The Chinese Room argument is – strictly speaking – only an argument against the functionalist account of linguistic understanding, rather than against the functionalist account of consciousness.
    • But, claims Papineau:-
      1. Language is an intentional notion, and
      2. Intentionality and consciousness are closely related, a claim to be followed up later.
    • Searle believes that linguistic understanding requires conscious experience, so his argument also applies to conscious experience.
    • But functionalists can counter that it’s the whole Room that understands Chinese – not a part of it (the homunculus). Similarly, it’s the whole computer that might be conscious, not a part of it.
    • Also, isn’t the CR TE somewhat under-specified? If the room really could answer questions addressed to it, it would need to be sensorially attached to its environment to update its understanding thereof. Given this, it would no longer be so obvious that it doesn’t know – say –what the Chinese symbol for “rain” is or – in general – what it’s talking about.
  50. Functionalist Epiphobia:
    • There’s a more basic reason than the CR TE for not wanting to follow functionalism in assigning conscious states to functional ones.
    • The attraction of materialism was its promise to restore causal power to conscious states – in contrast to epiphenomenalism – in particular by equating conscious properties with brain properties.
    • It’s presumably the passage of specifically human neurotransmitters across my synapses that causes my muscles to contract, rather than some abstract structural property shared with octopuses.
    • Given that it’s the physiological properties – which differ between humans and octopuses – rather than their functional properties that cause muscles to contract, this leaves the functional properties as epiphenomena: they cause nothing.
    • If pain is a purely functional property, it would – on this account – be inefficacious in itself, merely thrown off by a train of real causation.
  51. Mental States are “Wetware”:
    • Functionalism implies a software-approach to mental states, but the above epiphobia inclines us towards a hardware approach (or “wetware”), whereby pains – for instance – are identified with physiological states.
    • Papineau says this “blocks” Chinese-room-type anti-software arguments, but I’d have thought it just makes them irrelevant. If feelings depend on wetware, then we already know they are no longer reducible to software without the need for any clever TEs.
  52. Human Chauvinism:
    • Papineau suggests that abandoning functionalism means that we can no longer be confident that beings with physiologies different to ours share our feelings.
    • The text “rules out” octopuses having the same pains as us. But it goes on to admit that materialism doesn’t deny that octopuses have pains of some sort, just that they’re not (necessarily) the same as ours. Seems reasonable: why would anyone think they would be the same?
  53. Facing up to the Dualist Arguments:
    • So, what about the dualist arguments of Saul Kripke and Frank Jackson, which seem effective whether we equate mental properties with structural or physiological ones?
      1. Kripke’s zombies share structural and physiological properties with us – yet (allegedly) lack consciousness.
      2. Jackson’s Mary knew everything about colour vision, yet lacked the conscious experience of it.
    • The materialist answer is that these are objections to concepts, not to properties. There’s one property thought of in two ways – mental properties can be thought of as conscious or material, just as a person can have two names – eg. Judy Garland / Frances Gumm. Similarly, heat and mean kinetic energy.
    • So, Mary acquires a new concept of “seeing red”, a new way of thinking about the experience: now she’s seen red, she can imagine it; before, she couldn’t. But, she could still think about the experience before she had it, and her imaginative thoughts refer to the same experience she previously thought of only scientifically.
    • Similarly, the existence of two types of concepts for thinking about experience deceives us into thinking zombies are possible when they aren’t. Since conscious properties just are material properties, our imaginative concept of a zombie is self-contradictory.
  54. Zombies are Impossible:
    • So – according to the materialist – the believer in zombies is like a person who believes that a person with two names74 can be in two places at once, or that a gas can have two different temperatures yet the same mean kinetic energy. All these things – zombies included – might seem possible, but are not.
    • According to dualists, when God had made our physical bodies, he still had more work to do – namely putting the feelings in. He could have left us as zombies75. But according to materialists, God had no more work to do, and zombies are beyond even an omnipotent God.
  55. Mysteries of Consciousness:
    • Not everyone is convinced by the examples of names and temperature: these identities seem certain, but the identity of consciousness with brain processes less so. Maybe the one merely accompanies the other without being identical to it.
    • Colin McGinn thinks it beggars belief that phenomenal experience can be the same thing as neurons firing.
    • Similarly, Thomas Nagel – while appreciating the reasons for equating mind and brain – argues that we lack any conception of how they could be identical.
    • However, such philosophers don’t wish to return to dualism, and agree that epiphenomenalism is absurd.
  56. The Mysterian Position:
    • The Mysterian position is that consciousness is beyond human comprehension.
    • But while we can’t live with the identity of consciousness and the physical, we can’t live without it either – on pain of ‘mental impotence’ (epiphenomenalism).
    • Mysterian philosophers claim we lack the concepts to understand the issue.
    • While science may some day resolve the issue, it may be that the structure of our minds means we can never comprehend the truth.
    • These concepts may be as far beyond us as calculus is beyond monkeys.
  57. A Mysterian Speculation:
    • Colin McGinn is not beyond some – to me wild – speculation, suggesting that consciousness is a re-emergence of the non-spatial reality that existed before the big bang brought spacetime into being, manifesting itself once sufficiently complex brains had evolved in their turn.
  58. Special Concepts of Consciousness:
    • Papineau asks whether such flights of fancy76 are necessary.
    • Materialists claim that the Mysterians have given up too quickly and that their objections to mind-brain identity rest on nothing other than blank incredulity that “soggy grey matter might constitute technicolour phenomenology”.
    • Materialists agree that this identity is more difficult to believe than other identities, but maybe they can offer an account of just why it is so difficult to accept, even if true, by appealing to the special kinds of imaginative concepts we use when referring to mental items as conscious.
    • So, ‘Red Mary’ gets the ability – which she lacked before seeing red – to re-create the experience in her imagination, which provides a particularly vivid way of thinking about consciousness. But, thinking about ‘soggy grey matter’ is still thinking about the same thing77 as is her re-imaginative experience. Re-enactment is just a special way of thinking about conscious experience.
  59. Everybody Wants a Theory:
    • In discussing the three options to date – materialism, dualism and mysterianism – we’ve not asked which parts of the brain might be responsible for conscious experience. Not all parts do, because lots of the brain’s activities are sub-conscious (or unconscious).
    • So, we need a theory of consciousness, which would tell us:-
      → What is required for consciousness
      → Which brain activities yield consciousness, and
      → Which animals are conscious
    • Once we’ve identified the brain processes responsible for consciousness, we’ll be able to check whether these are present in various animals, though this is far from straightforward.
    • While materialists will seek to identify consciousness with the physical processes, the dualist will think of consciousness as something extra that accompanies them78.
    • Papineau seems to think this project is independent of the metaphysical position adopted79. In particular, he thinks that dualists will be happy with a physicalist account provided something extra is needed.
    • He says that theorists are often unclear as to their metaphysical stance – but often use terminology that – Papineau claims80 – only makes sense in a dualist context: expressions such as (physical processes) ‘generate’, ‘cause’ or ‘give rise to’ consciousness.
    • So, Papineau thinks we can proceed in pursuit of our theory irrespective of our metaphysical commitments.
  60. Neural Oscillations:
    • Francis Crick & Christof Koch have developed the theory that the key to consciousness – at least in the visual sphere – is neural oscillations in the 35-75 Hz range.
    • These oscillations are invoked to solve the binding problem81: the brain processes the location, shape, colour and category of a perceived object in different areas of the visual cortex, yet perceives a unified object. So, how are the properties of different objects differentiated, and how are those of a particular object bound together so that a unified object is perceived?
    • The supposed answer is synchrony: neurons representing features of the same object oscillate at the same frequency and phase, while those representing different objects do not. Additionally, this neural correlate of consciousness is supposed to82 account for our conscious visual awareness.
  61. Neural Darwinism:
    • This theory – nothing to do with Darwinian evolution, except by analogy – is down to Nobel Prize-winner Gerald Edelman, late in his career83.
    • The basic idea is that we are born with more neurons and interconnections than we need, and that 70% of our neurons are culled by age 8 months.
    • Neural interconnections not encouraged by stimulation ‘wither and die’. Those that remain form a set of interconnected ‘neural maps’ used for visual and other perception.
    • On receipt of some stimulus by the brain, these maps become activated and send signals to one another.
  62. Re-entrant Loops:
    • Edelman calls these activations of the neural maps ‘re-entrant loops’. They – and new structures that are laid down – continue to evolve in response to further stimulation.
    • Edelman’s theory84 is that it is this evolving structure of re-entrant loops that is responsible for conscious awareness. It is responsible for a form of memory that enables the categorisation of information, and also plays a part in thinking, reasoning and the control of behaviour.
  63. Evolution85 and Consciousness:
    • Papineau now asks whether Darwinian evolution can help us understand consciousness.
    • Papineau says that knowing that the ‘evolutionary purposes’ of the heart and saliva are – respectively – to pump blood and to digest food helps us to understand these ‘traits86’.
    • But, both87 materialists and epiphenomenalist dualists agree that conscious properties produce no bodily effects other than those produced by the brain.
    • Yet, the evolutionary purpose of a trait is the benefit it has for survival. We have hearts because hearts benefitted our ancestors, but no such obvious benefit accrues to conscious experience.
    • Our ancestors would have survived even had they been zombies, since88 their brains would have had the same physical effects.
  64. The Purpose of Consciousness:
    • Papineau does admit that materialist philosophers who identify consciousness with brain processes can say that consciousness does have physical effects89, namely those of these very brain processes.
    • Even so, such materialists won’t know which of the brain processes resulting from natural selection cause consciousness.
    • So, to know the evolutionary purposes of consciousness, materialists need to know which brain processes constitute consciousness and which don’t.
    • Since they need a theory of consciousness before evolution can explain its purpose, evolution cannot – without circularity90 – be used to explain the purpose of consciousness.
  65. Quantum Collapses:
    • A rather91 speculative view ties consciousness to the collapse of the quantum wave function.
    • Papineau describes QM as a very odd theory, with the indeterminism – ‘God playing dice’ – only a very small part of it. Much of the theory isn’t indeterministic at all: the wave function evolves deterministically, according to Schrodinger’s equation (which is given). In this sense QM is similar to classical mechanics, which also follows deterministic equations of motion.
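The equation referred to is presumably the time-dependent Schrödinger equation; in standard notation (my reconstruction – the notes don’t reproduce Papineau’s exact form) it reads:

```latex
% Time-dependent Schrödinger equation: the wave function \Psi
% evolves deterministically under the Hamiltonian operator \hat{H}.
i\hbar \, \frac{\partial \Psi(x,t)}{\partial t} = \hat{H}\, \Psi(x,t)
```

Given \(\Psi\) at one time, the equation fixes it at all later times – this is the deterministic evolution referred to; indeterminism enters only at ‘collapse’.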
  66. How Quantum Physics Differs:
    • The wave function only specifies probabilities for measurements of velocity and position.
    • Measurements seem to make the wave function ‘collapse’ to a particular set of values.
    • This type of change isn’t predicted by Schrodinger’s equation, and its correct understanding is extremely controversial.
  67. Schrodinger’s Cat:
    • We’re given a potted account of the standard TE: an electron gun has a 50% chance of hitting the top half of a detector – when poison gas is emitted – and a 50% chance of hitting the bottom, when nothing happens. The cat’s fate isn’t sealed until the wave function collapses and it’s decided which half of the detector is hit. When does this collapse occur? Schrodinger’s equation doesn’t tell us. It’s happy for the electron to exist as a superposition of upward and downward paths, and the cat therefore to exist as a superposition of alive and dead states.
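Schematically (my notation, not the book’s), the uncollapsed state of the electron-plus-cat system is an equal superposition of the two outcomes described above:

```latex
% Equal superposition: 'up' path triggers the poison (cat dead),
% 'down' path leaves the cat alive.
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\Big( |\text{up}\rangle\,|\text{dead}\rangle
             + |\text{down}\rangle\,|\text{alive}\rangle \Big)
```

Schrodinger’s equation happily preserves this superposition; collapse selects one term, each with probability \(\left|1/\sqrt{2}\right|^2 = 1/2\).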
  68. Quantum Consciousness:
    • A ‘bold view’ is that collapse only occurs when a conscious observation is made. So, the cat is neither alive nor dead until someone92 looks in the box to check, unless the cat itself is conscious – when things become definite when they register on the cat’s consciousness: when the cat smells93 the poison or not. This all sounds a bit like94 George Berkeley’s ‘to be is to be perceived’.
    • The American physicist Henry P. Stapp claims that quantum waves collapse when intelligent brains select one of the quantum possibilities as a basis for future action. This is also a theory of consciousness – it’s those parts of the brain that are implicated in quantum collapse that constitute consciousness.
    • On this view, consciousness does have physical effects – it causes quantum collapse (though it’s still down to chance what outcome becomes actual). A conscious observer ensures the cat has a definite fate, but it’s down to God’s dice what this fate is.
    • This allegedly means that consciousness serves a biological purpose – eliminating alternative realities allows us to better plan our actions95.
  69. Another Link to Quantum Physics:
    • Roger Penrose thinks consciousness is tied to activity in cytoskeletal microtubules, which provide the scaffolding in all cells – including neurons. Their dimensions are appropriate for orchestrating quantum collapse. Penrose’s idea differs from Stapp’s, as he thinks gravitational effects are responsible. He argues that microtubules focus quantum waves until they reach the gravitational threshold96 for collapse.
  70. Quantum Collapses and Gödel’s Theorem:
    • So, for Penrose, consciousness doesn’t cause quantum collapse, it’s just the way that collapse manifests itself in our minds.
    • Gödel’s Incompleteness Theorem demonstrates that no consistent, effectively axiomatised system can prove all the truths of arithmetic. Penrose thinks that this shows that the human mind must have some non-algorithmic element that goes beyond axioms and rules.
    • Not all logicians agree with this inference, but Penrose goes ahead to claim that the non-algorithmicity of consciousness derives from QM.
    • However, there seems to be no virtue in ‘explaining’ one thing we don’t understand – consciousness – in terms of another thing we don’t understand – QM.
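For reference, the theorem Penrose appeals to can be stated in its standard modern form (more precise than anything in the text):

```latex
% First incompleteness theorem: for any consistent, effectively
% axiomatised theory T that interprets elementary arithmetic,
% there is a sentence G_T that T can neither prove nor refute
% (\nvdash requires the amssymb package).
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T
```

Penrose’s contested further step is the claim that we can nonetheless ‘see’ that \(G_T\) is true, which no algorithm following \(T\)’s rules could establish.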
  71. The Global Workspace Theory:
    • Bernard Baars argues for the theory that states that the specialised cognitive information-processing sub-systems for perception, attention, imagery, language (and the like) are largely unconscious, but that consciousness arises when they are brought together in a ‘global workspace’, which makes their individual contributions available to the brain as a whole.
    • The global workspace is a bit like a blackboard (or other outdated analogies) from which other sub-systems can analyse and interpret information. This processing is conscious, whereas that of the subsystems is unconscious.
    • Papineau seems fairly supportive of this, saying it ‘happily explains the interplay of conscious and unconscious processes …”. I have my doubts97.
  72. CAS Information Processing (CAS = Conscious Awareness System):
    • Other psychologists have tried to explain consciousness by its central role in information-processing and decision making.
    • For instance, Daniel L. Schacter has it that phenomenal consciousness consists in98 a cognitive system that mediates between specialised knowledge modules – like vision and hearing – and the executive system that controls reasoning and action.
    • The CAS can also receive information from the Episodic Memory Store when we consciously recall previous experiences and from the Executive System when we mull over our plans.
    • There’s a useful diagram of Schacter’s Model, which also includes the Response Systems and the Procedural / Habit System, and which is identical to this one: Schacter's Model of Consciousness. Papineau notes that the CAS is responsible for integrating information, which must all be routed through it. In particular, there’s no direct contact from the specialised knowledge modules, or from Episodic Memory, to the Executive System.
  73. Equal Rights for Extra-Terrestrials:
    • All discussion of the proposed mechanisms of consciousness has so far focused on human physiology and psychology. This is absurdly chauvinistic.
    • It’s OK to argue that the consciousness of other creatures – such as octopuses – differs from ours because of their different physiology; but this is far from saying that non-humans cannot have consciousness at all.
    • Some philosophers99 – but not Papineau – argue that all non-human animals (including cats, dogs & chimpanzees) lack consciousness.
    • But surely intelligent aliens – with none of the physiology that humans (or animals generally) have – should be accorded consciousness, and an all-embracing theory of consciousness should include them.
  74. Intentionality and Consciousness:
    • A mental state is intentional if it is about something. Papineau lists various examples from a consideration of Sydney in Australia. Intentionality is independent of any particular implementation, so avoids ‘terrestrial chauvinism’.
    • Papineau mentions the development of intentionalist theories by Franz Brentano and Edmund Husserl, who argued for Phenomenology – that philosophy should be grounded in how consciousness presents objects to us.
  75. Consciousness and Representation:
    • Contemporary non-phenomenologists have also espoused intentionality. These include Michael Tye, Fred Dretske and David Chalmers (described as a ‘dualist’).
    • Tye and Dretske want to identify Consciousness with Representation, while Chalmers hopes to show that Consciousness and Representation are separate but related features of the Mind.
    • Chalmers speculates that his science of consciousness will explain how consciousness arises in the presence of representation (for which he prefers the technical term ‘information100’). Apparently ‘information’ is present even in meaningless syntactical structure.
  76. Explaining Intentionality:
    • Given that intentionality is itself difficult, does it take us any further in explaining consciousness?
    • How can words – written or spoken – stand for something else, especially something we’ve never encountered? If they represent because we understand their meaning, doesn’t this just kick the can down the road?
    • Thus, intentionality seems as hard a problem as101 consciousness, so may not take us any further.
  77. Can We Crack Intentionality?:
    • But, if we could show that consciousness involved nothing over and above intentionality, we’d only have one riddle where we’d had two, and that would be an advance.
    • There are a few proposed solutions to the ‘hard problem’ of intentionality – which try to explain how intentionality fits into the world of cause and effect – but none has universal acceptance, though it’s too early to say none will.
  78. Non-representational Consciousness:
    • The trouble is that – so it seems – not all conscious states are representational and not all representational states are conscious.
    • Examples of intentional conscious states: thoughts, perceptions, images, memories.
    • Examples of prima facie non-representational conscious states: pains, itches, headaches, emotions, moods, orgasms.
  79. In Defence of Representation:
    • In defence, it may be argued that such allegedly non-representational states do – on closer consideration – have representational content.
    • Pains and itches represent trauma102 in particular parts of the body. Emotions represent the general state of things. Orgasms represent physical changes.
  80. Non-conscious Representation:
    • Plenty of representations don’t seem to be conscious: sentences, unconscious beliefs.
    • For example103, if you consciously think your wife’s faithful, but are always checking up on her, then you have an unconscious belief that she isn’t.
    • In response, we can argue that these are second-order representations that depend on states that are conscious. Sentences represent because they are consciously understood by those that use them104. Maybe unconscious beliefs represent because they are similar to conscious beliefs with the same content. But there are harder cases.
    • Many brain states appear to represent unconsciously without assistance from conscious states. For example the early stages of visual processing105 involve brain states that represent light wavelength and intensity, yet these are unconscious. We don’t see these properties of light waves even though our brain knows about them (in a sense).
    • The above kind of representation can’t plausibly be labelled ‘second-hand’. People consciously interpret the sentences they speak, but no-one consciously interprets the brain-states involved in visual processing. Nor are such unconscious states counterparts of conscious ones since most people have no conscious beliefs about the properties of light waves.
    • Other examples of non-conscious representation can be found outside the human brain – for instance in primitive organisms such as bacteria, and in machines such as thermostats, neither of which is normally supposed to be conscious.
  81. Panpsychist Representation:
    • So, a representational approach to consciousness has two options.
    • The first is to stick with the theory and resist the temptation to deny consciousness to bacteria, thermostats and early visual processing. This seems to be David Chalmers’s approach, as he concludes that bacteria have a limited form of consciousness as they embody intentional states. Indeed, almost any physical system is conscious for Chalmers as his definition of Information106 is satisfied by almost any causal process. Chalmers ends up as a panpsychist107.
    • The other option is to modify the representational approach to say that only representations of a certain kind yield consciousness.
  82. Behaviour without Consciousness:
    • A natural suggestion – made by Fred Dretske and Michael Tye – is that consciousness arises when representations play a role in controlling behaviour. The key point is that there has to be a range of behaviours to control, which allegedly eliminates108 bacteria, thermostats and early visual processing.
    • Unfortunately, behaviour-control seems insufficient to ensure109 consciousness. Recent evidence suggests that much human behaviour is directed by subconscious processes. As an example, Benjamin Libet asked subjects to decide to move their hands and simultaneously note the time110 using a wall-clock. Libet used scalp electrodes to monitor the motor-cortex activity that initiated the hand movement. He found that this was 1/5 of a second before the subjects were aware of making any conscious decision.
    • The precise interpretation111 of this experiment is still up for debate, but it certainly suggests that some processes governing human behaviour do not involve consciousness.
  83. What versus Where:
    • Similar implications flow from visual illusions. When a circular disk is placed in the middle of larger disks, and an identical disk is placed amongst smaller disks, everyone sees the disk in the second situation as larger than the one in the first. However, when they try to pick up the disks, their fingers are equally – and appropriately – spaced.
    • So, here again it seems that behaviour is controlled by unconscious rather than conscious representations. Many neuropsychologists now think there are two pathways through the visual system. The ‘low’ or ‘what’ path (the ventral stream) leads to conscious recognition of objects. The ‘high’ or ‘where’ path (the dorsal stream) controls bodily movement; despite being unconscious, it still controls behaviour.
  84. The Problem of Blindsight:
    • Some brain-damaged individuals report no conscious vision at all, but when asked to guess are quite good at112 guessing lines, flashes of light and even colours. Their performance must be controlled by information at the unconscious level.
    • So, in summary, all these cases threaten the idea that representation is conscious whenever it plays a role in controlling behaviour. It may be possible to tweak what we mean by “controlling behaviour”, but it’s not obvious how to do this, especially ‘if we want to avoid chauvinistic appeals113 to the details of human cognition’.
  85. HOT Theories (HOT = Higher-Order Thought):
    • These theories suggest that consciousness arises when we’re introspectively aware of our representations: a first-order representation becomes conscious when it is the target of a higher-order thought about it. We think about our experiences at the same time as having them.
    • The HOT acronym is due to David Rosenthal.
    • While this theory might be true of human consciousness, can it be generally applicable?
  86. Criticism of HOT Theories:
    • It seems odd that I become conscious of x only once I think about my experience of x.
    • If a visual experience – say – is not conscious in and of itself, it’s difficult to see how it can become conscious by being thought about.
    • Additionally, HOT theories seem to demand a lot of sophistication of conscious creatures. Those that can’t think about their mental states wouldn’t be conscious at all. While this would (rightly) deny consciousness to thermostats and bacteria it would also seem to deny it to114 ‘rats, bats and cats’.
  87. Self-consciousness and Theory of Mind:
    • Creatures that can think about mental states are said to have a ‘Theory of Mind’ (TOM). So, they not only have vision, emotion and belief, but are capable of having thoughts about vision, emotion and belief.
    • While humans have this ability to think about mental states, including their own, it’s not clear whether any other animals do.
    • The classic test is the ‘False Belief Test’: human children can pass this test when they are 3 or 4, but not before.
  88. The False-Belief Test:
    • Papineau describes one version of the test (FBT): a child puts her sweets in a basket; while she’s out of the room, another child puts the sweets in a drawer. When the first child returns, our test candidate is asked where she will look for her sweets. Young children will say ‘in the drawer’ because they know that’s where the sweets are, but they don’t have the understanding that the first child doesn’t know this. All mature humans, and most children over the age of 4 correctly say ‘in the basket’.
    • It’s not clear whether any other animals can pass the FBT; maybe chimpanzees and some other apes may scrape through115.
  89. Conscious or Not?:
    • The jury seems to be out on whether apes can pass the FBT. Most experiments have been performed on chimpanzees, and the lack of language causes methodological difficulties. Also, chimps get bored with experiments and start messing around.
    • Anyway, even if apes do have a TOM, other mammals certainly don’t – cats and dogs116, for instance. If they can’t think about minds, then they can’t think about their own minds. By the lights of HOT theories, this would deny them consciousness.
  90. Cultural Training:
    • Some philosophers are happy to accept117 the counter-intuitive conclusion that cats and dogs are not conscious.
    • Daniel Dennett is – Papineau claims – not only willing to argue that consciousness requires something like HOT but that ‘such thinking118’ requires acculturation and not just biology.
    • This has the consequence that none of our ancestors would have been conscious before the advent of human culture119.
  91. Sentience and Self-consciousness:
    • Well, most theorists reject the idea that consciousness requires HOT and accept the common-sense idea that ‘dumb’ animals are conscious.
    • But, we must distinguish sentience from self-consciousness. The latter – defined as thinking about one’s experiences – requires HOT. It is natural to think of many animals as sentient but not self-conscious.
    • So, for example, cats and dogs are conscious of sights, sounds, pains and so forth, even though they don’t think about them. There is ‘something it is like’ for them to have these experiences.
  92. Future Scientific Prospects:
    • Future scientific research – in particular brain-scanning technologies – will tell us more about human consciousness.
    • These will supplement older techniques like behavioural experimentation, studies of brain damage and electro-encephalography (EEG, which detects brain waves using electrodes placed on the scalp).
  93. PET and MRI:
    • Positron Emission Tomography120 (PET Scans): use a radioactive marker in the blood to measure brain activity.
    • Magnetic Resonance Imaging121 (MRI Scans): achieve the same effect by placing the brain in a powerful magnetic field.
    • With computer-assistance, these techniques provide striking images of which brain areas are activated by which mental tasks. They will give an increasing understanding of the cerebral underpinnings of consciousness.
    • However, whether this will lead to a general theory of consciousness is another matter. These – and any other imaginable techniques – will only tell us about consciousness in humans, who are the only beings capable of telling us about their states of consciousness. Human beings can report their experiences, which allows us to pinpoint the brain processes involved – say – in distinguishing the two senses of an optical conundrum (like the ‘faces or candelabrum’ test or filling in missing triangles). You can’t do this with animals.
    • Nor can we use the fact – deduced from their behaviour – that animals are sensitive to visual stimuli. Even if we knew what was going on in their brains, blindsight and similar phenomena show that is possible to behave sensitively in the absence of consciousness.
  94. A Signature of Consciousness:
    • If we’re lucky, we may find some key feature that applies to all human Brain122 states that yield consciousness. This may involve representation – as is claimed by those supporting intentional theories of consciousness – or it might be something yet unimagined.
    • If such a ‘Signature of Consciousness’ does come to light, it might yield a general theory via comparative anatomy (in the case of animals) or other similarities123 (in the case of extra-terrestrials or AIs).
    • But if – as seems equally likely – there is no common signature other than being reported as conscious (ie. having introspective accessibility and reportability), then we’d be stuck for non-human creatures124.
    • Also, introspective reportability is a form of self-consciousness, so we don’t want to make that the essential condition of consciousness, as it would arbitrarily deny consciousness to cats and dogs, who don’t stop to think about125 their own minds.
    • So, then, how do we decide which animals qualify for unselfconscious sentience? Cats and dogs seem clearly positive cases, but what about fish, crabs126 or snails? Or extra-terrestrials or AIs? Without a clear signature, there seems nowhere to go.
  95. The Fly and the Fly-bottle:
    • Wittgenstein127 thought that philosophical problems arise from muddles and need therapy rather than solutions – maybe this is how we should approach Consciousness.
    • If we can’t make any progress head on maybe we can move sideways and examine our philosophical assumptions.
    • Let’s return to the two options considered earlier – dualism and materialism – rejecting mysterianism as lacking in ambition!
  96. The Dualist Option:
    • Dualists have little room for manoeuvre as by their lights consciousness will depend on the presence of some sort of “mind-stuff” – so snails and supercomputers will be conscious just in case they have it.
    • But this mind-stuff must be epiphenomenal and causally impotent128, so we won’t be able to determine its effects.
    • So, dualism sheds no light on129 the conscious life of non-human creatures.
  97. The Materialist130 Option:
    • Materialists deny the existence of “mind stuff”. There are only brain processes, some of which are “like something” for the creature to have them.
    • While dualists consider consciousness as binary – you either have it or you don’t, depending on whether or not you have the extra “mind stuff” – materialists allow for a continuum of “what it is likeness”.
    • Papineau claims that some cases are clear: humans, chimps and cats are conscious while stones, seaweed and bacteria are not. But in between there need be no fact of the matter131. There may be no definite point at which inner life shuts off into nothingness.
  98. A Question of Moral Concern:
    • Daniel Dennett claims that attributions of consciousness are grounded in moral concern: it’s because we care about our cats that we attribute consciousness to them.
    • Similarly, if we ever encounter aliens it’ll be our mode of interaction with them that decides the issue of their consciousness. If we consider them mere machines132, we’ll not deem them conscious. But if we discuss their and our hopes and fears with them, we’ll come to regard them as conscious.
    • Some philosophical sceptics might still ask whether they are really conscious, but – if we were to make alien friends133 – this might come to seem as silly as asking whether other human beings are really conscious.
  99. Is There a Final Answer?:
    • Papineau thinks that – at first sight – Dennett’s idea is very odd. How can a being become conscious merely on the basis of our attitude towards it?
    • But – maybe – this has to do with expanding our concepts. It’s not that our concern would change what it’s like to be an alien (say). Rather, it would give us a reason to refine our vague concept of consciousness to include them.
    • Papineau says that – of course – our concern won’t change aliens’ inner lives but it might make it rational for us to extend the term ‘consciousness’ to include that inner life134, which we’d have reason to think of as akin to our own.
    • We may be disappointed that there’s no ultimate answer to the riddle of consciousness. But – says Papineau – in the end it all comes down to definitions.
    • Others may be satisfied with no answer, and will make their own way out of the fly bottle.

References
  1. Papineau’s Reading list consists of the following, all of which I now possess:-
  2. Web Refs: Papineau gives two:-



In-Page Footnotes

Footnote 75: Two points here:-
  1. This is just what Descartes thought God had done with the (non-human) animals – they are zombies, feeling nothing, though acting for all the world as though they do. A damnable doctrine!
  2. Genesis seems to suppose that what extra had to be added was “the breath of life” rather than “consciousness”.

Text Colour Conventions (see disclaimer)

  1. Blue: Text by me; © Theo Todman, 2026
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)



© Theo Todman, June 2007 - March 2026. Please address any comments on this page to theo@theotodman.com.