<!DOCTYPE html><HTML lang="en"> <head><meta charset="utf-8"> <title>Thomas (Janice L.) - Functionalism (Theo Todman's Book Collection - Paper Abstracts) </title> <link href="../../TheosStyle.css" rel="stylesheet" type="text/css"><link rel="shortcut icon" href="../../TT_ICO.png" /></head> <BODY> <CENTER> <div id="header"><HR><h1>Theo Todman's Web Page - Paper Abstracts</h1><HR></div><A name="Top"></A> <TABLE class = "Bridge" WIDTH=950> <tr><th><A HREF = "../../PaperSummaries/PaperSummary_16/PaperSummary_16537.htm">Functionalism</A></th></tr> <tr><th><A HREF = "../../Authors/T/Author_Thomas (Janice L.).htm">Thomas (Janice L.)</a></th></tr> <tr><th>Source: Thomas (Janice L.) - Mind and Person in the Philosophy of Religion</th></tr> <tr><th>Paper - Abstract</th></tr> </TABLE> </CENTER> <P><CENTER><TABLE class = "Bridge" WIDTH=400><tr><td><A HREF = "../../PaperSummaries/PaperSummary_16/PaperSummary_16537.htm">Paper Summary</A></td><td><A HREF="#ColourConventions">Text Colour-Conventions</a></td></tr></TABLE></CENTER></P> <hr><P><FONT COLOR = "0000FF"><U>Contents</U><FONT COLOR = "800080"><ol type="1"><li>Essential reading  61</li><li>Further reading  61</li><li>Introduction  61</li><li>Response to the chauvinism objection - multiple realisability  62</li><li>The second avenue to functionalism - the computer analogy  63</li><li>Arguments for functionalism  64</li><li>Objections to functionalism  64</li><li>Learning outcomes  68</li><li>Sample examination questions  68</li><li>Tips for answering sample questions  68</li></ol></FONT><U>Full Text</U><FONT COLOR = "800080"><ol type="1"><li><B>Essential Reading</B><ul type="disc"><li>Campbell, Keith - <I>Body and Mind</I>, 1984. </li><li><a name="12"></a>"<A HREF = "../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_76.htm">Churchland (Paul) - Matter & Consciousness</A>", 1984, pp. 
38-41.</li><li>Crane, Tim - <I>The Mechanical Mind: a philosophical introduction to minds, machines and mental representation</I>, 1995, Chapter 3.</li><li><a name="13"></a>"<A HREF = "../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4096.htm">Graham (George) - Philosophy of Mind: An Introduction</A>", 1998, Chapter 5, especially pp. 89-93.</li><li><a name="14"></a>"<A HREF = "../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_123.htm">McGinn (Colin) - The Character of Mind - An Introduction to the Philosophy of Mind</A>", pp. 33-36.</li><li><a name="15"></a>"<A HREF = "../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4095.htm">Priest (Stephen) - Theories of the Mind</A>", 1991, Chapter 5, pp. 133-49.</li><li><a name="16"></a>"<A HREF = "../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_151.htm">Smith (Peter) & Jones (O.R.) - The Philosophy of Mind - An Introduction</A>", 1986, pp. 159-62. </li></ul></li><li><B>Further reading</B> <ul type="disc"><li><a name="3"></a>"<A HREF = "../../Abstracts/Abstract_00/Abstract_815.htm">Armstrong (David) - The Nature of Mind</A>", 1981.</li><li>Heil, John - <I>Philosophy of Mind: a contemporary introduction</I>, 1998, Chapter 4, sections I and II, pp. 
87-104.</li><li><a name="17"></a>"<A HREF = "../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_111.htm">Kim (Jaegwon) - Philosophy of Mind</A>", 1996, Chapters 4 & 5.</li><li><a name="4"></a>"<A HREF = "../../Abstracts/Abstract_00/Abstract_115.htm">Lewis (David) - An Argument for the Identity Theory</A>".</li><li><a name="5"></a>"<A HREF = "../../Abstracts/Abstract_00/Abstract_117.htm">Lewis (David) - Mad Pain and Martian Pain</A>".</li><li><a name="6"></a>"<A HREF = "../../Abstracts/Abstract_00/Abstract_649.htm">Lewis (David) - Psychophysical and Theoretical Identifications</A>"</li><li><a name="7"></a>"<A HREF = "../../PaperSummaries/PaperSummary_01/PaperSummary_1872.htm">Lycan (William) - Functionalism</A>"</li><li><a name="8"></a>"<A HREF = "../../PaperSummaries/PaperSummary_01/PaperSummary_1873.htm">Block (Ned) - Functionalism</A>"</li></ul></li><li><B>Two avenues to functionalism</B> <ul type="disc"><li>Functionalism in one form or another is one of the most widely held theories among present day philosophers of mind. Two different routes brought thinkers to functionalism. On the one hand, some reacted against the identity theory because of the chauvinism discussed in the previous chapter, but thought that the <U><A HREF="#On-Page_Link_P16537_1">type-type</A></U><SUB>1</SUB><a name="On-Page_Return_P16537_1"></A> identity theory or at least some closely allied form of materialism was basically on the right track. On the other hand, some were inspired by striking similarities they noticed between mental states and the functional or logical states of computers. Functionalism seemed to offer both a solution to the chauvinism problem and a satisfying account of mind that put the computer analogy to fruitful use. In what follows, we will look at each approach in turn and try to see how they draw together. 
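The functional-role idea at work in both routes can be previewed with a toy illustration (my own sketch, not from the subject guide; all names are invented): in the Python below, two objects with quite different internals count as the same functional kind because each fills the same causal role, mapping the same inputs to the same outputs.

```python
# Two physically different "realisers" of one functional role.
# What makes each a trap is not its internals but the role it
# fills: a mouse as input, a caught mouse as output.

class SpringTrap:
    """Realisation 1: a stateful, mechanical-style implementation."""
    def __init__(self):
        self.sprung = False

    def respond(self, stimulus):
        if stimulus == "mouse" and not self.sprung:
            self.sprung = True
            return "caught"
        return "nothing"

class GlueTrap:
    """Realisation 2: different internals, same functional role."""
    def respond(self, stimulus):
        return "caught" if stimulus == "mouse" else "nothing"

def is_mousetrap(thing):
    """A purely functional test: only the causal role matters."""
    return thing.respond("mouse") == "caught"
```

Nothing in `is_mousetrap` mentions springs or glue: the test is blind to physical realisation, which is just the point the functionalist wants to make about mental states.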
</li></ul></li><li><B>Response to the chauvinism objection - multiple realisability</B><ul type="disc"><li><B>Reading</B><BR>& <a name="18"></a>"<A HREF = "../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_123.htm">McGinn (Colin) - The Character of Mind - An Introduction to the Philosophy of Mind</A>", p. 34.</li><li>The chauvinism objection to the identity theory says 'mental states like pain cannot simply be identified with certain sorts of human brain states (like c-fibre activation) because humans are not the only beings that feel or could possibly feel pain'. Non-humans obviously lack human brains and therefore cannot have human brain states, but most people are inclined to think that many of them can nonetheless suffer pains. If we want to know what pain (or any mental state) is for human, animal, angel and alien alike, we must look for what all sufferers could have in common when they are all suffering.</li><li>Functionalism suggests that we look on mental states as functional states of their subjects. A functional state is one dependent upon a functional property, and a functional property is one a thing has in virtue of some job, task or function it is designed or apt to perform. So mousetraps all have the functional property 'being a mousetrap'. The important thing to notice is that mousetraps can differ greatly from each other. There is no single physical specification which every mousetrap must meet. So a mousetrap can be made of metal and wood and work by a spring mechanism, or be made of clear plastic with no working parts. A mousetrap can be savage or humane, baited or unbaited, depending on design. As long as it is able to perform the same task of capturing a mouse, any contrivance will do. 
There need be no similarity at the level of physical description - no shared physical properties - even though, of course, no one would suggest that any mousetrap is anything other than a purely physical object.</li><li>This is the phenomenon of multiple realisability. Mousetraps can be realised in any one of a wide range of physical bodies, differing from one another in structure and constituent materials. What they all share is not any single physical property or properties - rather it is the functional property of 'being a mousetrap'.</li><li>Functionalists believe that the best account of mental states is to view them as functional states of their subject. Pain is the state you are in which is caused by injury, tissue damage or severe pressure and which in turn produces other mental states (say, fear and a desire to escape/avoid the injurious thing) and behaviour (crying out, pulling away). Thirst is the state you are in which is caused by dehydration and in turn produces a desire for, and purposeful movement to acquire, drink.</li><li>If mental states are whatever states fill a causal role, rather than specific types of physical states, then they could be realised in any material. They might be realised by different human brain states in us and different dog brain states in the family dog. In the Martians invented by David Lewis (<a name="9"></a>"<A HREF = "../../Abstracts/Abstract_00/Abstract_117.htm">Lewis (David) - Mad Pain and Martian Pain</A>"), psychological states are realised by hydraulic activity in the Martians' feet. If there are any nonmaterial supernatural beings, their mental states might be realised in various types of immaterial substance states. You should be aware that functionalism, although it is the theory of mind favoured by many materialists, is not necessarily a materialist theory. As Priest explains (p. 134), functionalism only entails materialism if you add the premise that 'all causes and effects are physical causes and effects'. 
Since functionalism gives an account of mental states as 'constituted by their causal relations to one another and to sensory inputs and behavioural outputs' (<a name="10"></a>"<A HREF = "../../PaperSummaries/PaperSummary_01/PaperSummary_1873.htm">Block (Ned) - Functionalism</A>"), if all causes are physical then mental states, being both the effects of sensory episodes and the causes of behaviour, are bound to be physical - but only if the added premise is true. </li></ul></li><li><B>Activity</B><ul type="disc"><li>Test your grasp of the notion of a functional property by thinking about 'being an antibiotic', 'being a heart' and 'being unlocked' (as one possible state of a bicycle lock). In the last case, you could verify your thoughts about what a functional property is and how mental states might be viewed as such by reading David Lewis's <a name="11"></a>"<A HREF = "../../Abstracts/Abstract_00/Abstract_115.htm">Lewis (David) - An Argument for the Identity Theory</A>"; the bicycle lock analogy plays a key role in his argument. </li></ul></li><li><B>The second avenue to functionalism - the computer analogy</B><ul type="disc"><li><B>Reading</B><BR>& <a name="19"></a>"<A HREF = "../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4095.htm">Priest (Stephen) - Theories of the Mind</A>", 1991, Chapter 5, pp. 145-47.</li><li>Functionalism deals with the chauvinism objection by arguing that mental states should be viewed as functional states and thus multiply realisable. One and the same functional state can be realised (its causal role performed) by different physical states or events in quite disparate physical structures.</li><li>One sort of object defined by a functional concept and having a functional property which makes a striking analogue (if nothing more) for mental states and the mind is the computer. Computers come in many different shapes and sizes and can be made from many different materials and components. 
One of the earliest computers (designed by Charles Babbage) called for, among other things, a thousand axle rods and fifty thousand geared wheels: when eventually built it weighed three tons. In contrast, recent laptops are made of silicon, plastic and liquid crystal and weigh only a few pounds.</li><li><a name="1"></a><A HREF="../../Notes/Notes_1/Notes_108.htm">What matters</A><SUP>2</SUP> for being a computer is what it does, its causal role. If it takes in data (information), performs operations on those data and puts out data (which we interpret as answers to questions we have posed to the machine), then it is a computer whatever hardware it is made of and whatever software it is running or instructions it is following. We can understand it completely because its outputs are a joint function of its inputs and its current states and internal structure. Moreover, two computers can be functionally equivalent (sometimes writers use 'functionally isomorphic' as a synonym for this phrase) even if they are built of very different materials, to very different designs.</li><li>Of course, what is particularly interesting about computers where philosophers of mind are concerned is that (unlike mousetraps) what computers all do (their behaviour or output) is something that is remarkably like the things we do with our minds. We say that computers solve problems, process information, make diagnoses. We even credit them with various mental states: we say the computer is 'searching its memory', 'looking for' this or that, 'recognising' a code word, even that it 'understands' instructions of one sort or another, 'wants' to do this or that and 'believes' such and such is the right answer or procedure in certain circumstances. The thought is bound to occur - 'maybe the way computers do these things is the way we do them too!'. 
Perhaps the brain is a computer, performing operations on symbols that represent items in the world outside our heads.</li><li>Indeed, the whole subject of artificial intelligence is built on the general principle that what computers do and what we creatures with minds do is sufficiently similar that we can learn about human understanding by studying computers and designing and building computers whose performance matches human intelligent performance of various kinds in different respects. The conclusion many have reached is that minds when thinking are exactly analogous to (or perhaps simply are) computers running programs: the brain and central nervous system (CNS) are the hardware and the mind is the software. </li></ul></li><li><B>Arguments for functionalism</B><ul type="disc"><li>Functionalism's main defence is that it has all the advantages already claimed for the identity theory (in fact it is often said to be at least a token-token identity theory - if all causes are physical, every token mental state is a token physical state) but without the chauvinism that undermines central state materialism's type identity theory. Much of the weight of the functionalist's case rests on the intuitive persuasiveness of the computer analogy as an account of the nature of mind. </li></ul></li><li><B>Objections to functionalism</B>: The most important objections to functionalism are: <ul type="disc"><li>It leaves something crucial out of the definitions it gives us of the various mental states - namely, their distinctive, qualitative character or feel.</li><li>It cannot co-exist with the truth of the inverted spectrum hypothesis (or the absent qualia hypothesis).</li><li>Computers running programs do not thereby have genuine mental states or understanding, so analysing the mental states of persons as functional states of a kind of computer is a dead end. It will not give us a plausible account of the nature of mind. 
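A minimal sketch (again my own, with an invented transition table) may help make concrete what 'running a program' amounts to in these discussions: the machine's output is a joint function of its input and its current internal state, exactly as described above, and the same table could equally well be realised in Babbage's gears or in silicon.

```python
# A minimal (Mealy-style) state machine: output depends jointly on
# the current internal state and the input received. The table, not
# the material that stores it, fixes the machine's behaviour.

# (state, input) -> (next_state, output)
TABLE = {
    ("idle",   "coin"): ("primed", "ready"),
    ("idle",   "push"): ("idle",   "nothing"),
    ("primed", "push"): ("idle",   "dispense"),
    ("primed", "coin"): ("primed", "refund"),
}

def run(inputs, state="idle"):
    """Feed a sequence of inputs through the machine; collect outputs."""
    outputs = []
    for symbol in inputs:
        state, out = TABLE[(state, symbol)]
        outputs.append(out)
    return outputs
```

Two machines with the same table are functionally equivalent ('functionally isomorphic') however differently they are built - which is the analogy the functionalist exploits.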
</li></ul></li><li><B>Functionalism leaves out the distinctive character of mental states</B><ul type="disc"><li>Suppose that neurophysiologists find a way to identify a specific inner state of yours (a state of your brain/CNS) which occupies the right causal role for pain. It is caused by a burn to your finger (for example) and in turn causes you to cry out and pull your finger away. Would you regard it as appropriate to call this inner state of yours 'pain' if you felt nothing when your brain was in that state? In other words, does pain have to hurt? Can you have a mental state without its characteristic 'feel'?</li><li>Supporters of functionalism, thinking as they do that specifying a complete causal role is all there is to specifying any particular type of psychological state, will argue as follows: although, of course, most pains are felt, there is no barrier of principle to the existence of unfelt pains. Indeed, examples could be given of actual unfelt pain - like that of the soldier who, in the heat of battle, does not notice his grievous wound. Surely, the argument goes, there is pain there, but the victim is simply not conscious of it.</li><li>The inevitable reply is that for as long as the soldier remains completely distracted so that he does not notice his wound he is feeling no pain - which is just to say there is no pain to feel. That is why we take painkillers and employ anaesthetics - in order to eliminate or prevent pain. With pain, this reply continues, its existence lies in its being perceived. If it is not perceived it does not exist.</li><li>The functionalist may respond again by claiming that his opponent is naïve. Change the example from pain to, say, fear or perception of blue and it becomes quite clear that it makes perfect sense to talk of unconscious fears or unconscious (for example, subliminal) sense perceptions. 
Modern psychology has made appeal to subconscious or unconscious mental states not just perfectly respectable and orthodox but, in some cases, mandatory.</li><li>The opponent will reply again by saying that there is more to his point. Even if he granted the existence of unconscious mental states, fears, pains, colour sensations and so forth which are consciously experienced each have their own characteristic, specific, definitive phenomenal feel or quality. (These are sometimes called 'qualia' - the singular of this word is 'a quale'.) Pains must, when consciously experienced, hurt. Conscious fears must feel fearful and anxious. Colour sensations of which we are aware have specific, unique qualitative characters: sensing red just has a different felt quality from having a sensation of blue. The functionalist definition ignores what to many thinkers are essential or definitive qualitative aspects of mental states. </li></ul></li><li><B>Activity</B><ul type="disc"><li>Think about the question raised above. If you were told you were in pain by the scientist who had detected a particular brain state occupying the right causal role in you, would you agree or disagree, supposing you felt nothing, not even a twinge of discomfort? If you are inclined to judge that you do have authority here, then this means you think that functionalism has left something vital out. </li></ul></li><li><B>Functionalism rules out the inverted spectrum hypothesis</B><ul type="disc"><li><B>Reading</B><BR>& <a name="20"></a>"<A HREF = "../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4096.htm">Graham (George) - Philosophy of Mind: An Introduction</A>", 1998, pp. 8-15.<BR>& Campbell, Keith - <I>Body and Mind</I>, 1984, pp. 102ff. <BR>& <a name="21"></a>"<A HREF = "../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_76.htm">Churchland (Paul) - Matter & Consciousness</A>", 1984, pp. 
38-39.</li><li>It is now time to discuss the <B>inverted spectrum</B> hypothesis, which was earlier said to threaten behaviourism (p. 52 above) and now furnishes the same objection to functionalism. The strategy of this objection can be schematically outlined as follows:</li><li><B>1</B>. Functionalism, if true, would rule out as impossible a possibility which most people would regard as a genuine possibility (namely the possibility of a genuine case of spectrum inversion).</li><li><B>2</B>. If a theory says a particular hypothesis is false when we have independent grounds for accepting that hypothesis as true, we should reject the theory. (Something has to give - they cannot both be true.)</li><li>First we need to know what a genuine case of spectrum inversion would consist in. When you and I both look at a ripe tomato and say 'that is red', you are having a private, inner sensory experience which you identify as (what it feels like when) sensing red. I identify my experience in exactly the same terms. But is it not possible that the sensation you are having is qualitatively unlike the sensation I am having? If - what is admittedly impossible - I were to have your sensation, perhaps I would identify it as a sensation of green.</li><li>It even seems possible that, wholly unknown to us, our colour sensations are all systematically different in such a way that what creates a sensation of orange in me creates a sensation which, if I could have it, I would describe as a sensation of blue, and so on round the spectrum. Of course we actually identify exactly the same things in the world as red, orange, blue etc.</li><li>If you judge that the possibility of spectrum inversion which I have just described is a genuine possibility (it makes sense, it could happen, though of course no one would or could know), then functionalism as already described in this chapter is refuted. 
For functionalism says that all there is to having a particular mental state (sensation of colour or pain or fear etc.) is being in, or having, a particular state with the right causal role. If you and I are both sensing red, then according to functionalism there is and can be no difference between our mental states. To sense red is nothing more nor less than to be in that inner state caused by a certain, specific kind of irradiation of the eyes which in turn causes the subject to identify the source of the irradiation as red in colour. According to functionalism there is nothing more to either of our sensations of red which could make a point of difference between them.</li><li>The functionalist might reply that, in fact, any difference between the qualitative character of your mental state and that of mine would have a knock-on effect on the functional character of your state; it would differentiate your state's functional character from that of mine as well. If your state did not hurt, it would not make you cry out and pull away. </li><li>But, of course, in that case it would no longer be a pain even on the functionalist's definition. Perhaps the objector needs to say that it is not really conceivable, for example, that a perfectly set-up causal intermediary between irradiation and judgement of colour could lack the characteristic qualitative feel of a sensation of the colour in question. Red things just do always evoke a warm sensation; blue things, a cool one. Characteristic phenomenal feel (qualitative character) may not be definitive of mental states, but it is an invariable accompaniment. The objector's response will be that the essence of mental states is their feel or character. The customary accompaniment is the causal role - being typically caused by this, typically causing that. But should phenomenal feel and causal role come apart, the mental state is the phenomenal feel, not the causal role. 
Pain is pain if it hurts, whether it is caused by tissue damage and leads to avoidance or not. </li></ul></li><li><B>Activities</B><ul type="disc"><li>Think about the inverted spectrum hypothesis - do you think it represents a genuine possibility? (Could my sensation of red differ in qualitative character from yours?) Doesn't this show functionalism is wrong?</li><li>Read through the section on inverted spectrum again and ask yourself whether the anti-functionalist argument would work equally well if the hypothesis opposed to functionalism were the hypothesis of the possibility of philosophers' zombies. In philosophical parlance, a zombie is a being who is physically, molecule-for-molecule exactly like a human subject and behaves exactly like a human subject, but has no conscious states, no sensations with any phenomenal character, no feelings.</li><li>Could there exist such a being? (Notice that this is a question about logical possibility or conceivability, not about practical possibility, still less actuality.) If zombies are possible, they would be functionally equivalent to (isomorphic with) humans - that is, they would have inner states which were defined by their causal role, caused by distinct types of contact with the world, in turn causing certain behaviour, but which had no qualia (no phenomenal feel).</li><li>If such a zombie could exist (if the concept makes sense, has no internal incoherence) then, again, functionalism must be wrong. Functionalism says that there could be no difference between you and your zombie twin in respect of your mental states. Having a particular mental state simply is being in the right functional state.</li><li>Now go back to behaviourism. Ask yourself whether behaviourism is in the same awkward position as functionalism with respect to the inverted spectrum hypothesis or the zombie hypothesis (which is sometimes called 'the problem of absent qualia'). 
If the inverted spectrum hypothesis is a genuine one, then any theory which says it is not must itself be mistaken. By this test, behaviourism stands or falls with functionalism. </li></ul></li><li><B>Is understanding nothing more than computing?</B><ul type="disc"><li><B>Reading</B><BR>& Crane, Tim - <I>The Mechanical Mind</I>, 1995, Chapter 3.</li><li>Pioneers of computing theory like Alan Turing were very optimistic about the view that human intelligence or mindedness would turn out to be nothing additional to computing or computation and thus that a machine capable of sufficiently complex computing would count among the intelligent <U><A HREF="#On-Page_Link_P16537_3">subjects</A></U><SUB>3</SUB><a name="On-Page_Return_P16537_3"></A>. </li><li>There is a famous <a name="2"></a><A HREF="../../Notes/Notes_0/Notes_32.htm">thought experiment</A><SUP>4</SUP> devised by John Searle which aims to call into question the view that a crucial mental state, that of understanding, might be exhaustively accounted for or defined in terms of occupying a causal role - more specifically, the causal role of running a program: in other words, computing. He had the inspired idea of casting himself as the device running the program. He would 'run a program' by performing certain operations on some symbols - operations determined by following rules having to do only with the physical shape of the symbols, not their meanings.</li><li>Searle's experiment is called 'the <B>Chinese Room</B>' for the following reason. The symbols Searle chose for his example were the symbols used in written Chinese. Searle imagined himself locked in a room with a slot through which Chinese characters could be handed in to him and another slot through which he could feed Chinese symbols out. In the room was a supply of Chinese characters and a rule book full of rules of the form 'if the received symbols are shaped thus and so, feed out symbols shaped like this'. 
Searle, being completely ignorant of the Chinese language, would not know the meaning of the symbols handed in, but, since the rule book is in English, he would be able to select appropriate symbols to hand out. In Searle's story, people outside the room who feed in the symbols take themselves to be dispatching questions in Chinese into the room and receiving appropriate answers in return.</li><li>The question is whether the man in the room would (or would eventually) understand Chinese. Searle's view is that he would never come to understand the language simply by repeatedly rearranging symbols according to rules for doing so. And this is all a computer running a program ever does - follow rules for symbol manipulation. For one thing, endless repetitions of the procedure could never teach the man in the Chinese room what the symbols are about, what they refer to.</li><li>When you understand something, say a sentence said to you in your native language, you do not just recognise symbols by their physical properties and correlate them with other symbols recognised by their physical properties. You take note of the meaning of the symbols heard, and that determines what (appropriate) response you make.</li><li>Searle's conclusion is that understanding is more than just running a program: applied more generally - having mental states is more than simply being in certain functional states. </li></ul></li><li><B>Activity</B><ul type="disc"><li>Ask yourself what is missing from the computer running a program that makes it unlike you when you are answering questions in your native language. It has already been said that you, unlike the computer, know what the words used refer to and are about. What else have you got that the computer lacks? (Think about conscious awareness, self-consciousness. Can you think of any others?) 
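Searle's rule book can itself be written down as a short program (an illustrative toy of my own; the two Chinese strings here function purely as uninterpreted shapes, just as they do for the man in the room): the lookup matches symbols by form alone and has no access to anything resembling their meanings.

```python
# A toy "rule book", as in Searle's Chinese Room: pure shape-matching.
# To the lookup, keys and values are just uninterpreted strings -
# nothing in the program represents, or needs, their meanings.

RULE_BOOK = {
    "你好": "你好",        # a greeting in, a greeting out
    "你饿吗": "我不饿",    # a question in, a fitting-looking reply out
}

def room(symbols_in):
    """Follow the rules by shape alone; no understanding involved."""
    return RULE_BOOK.get(symbols_in, "请再说一遍")  # default: 'please say that again'
```

To outsiders feeding in questions, the replies look intelligent; inside, there is only rule-governed symbol shuffling - which is Searle's point.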
</li></ul></li><li><B>Learning outcomes</B><ul type="disc"><li>When you have worked through the reading recommended for this chapter, you should be able to:<BR>& Explain what functionalism is.<BR>& List the main arguments in favour of functionalism.</li><li>Remember that, like all theories of the mind, functionalism defends its position by: </li><li><B>1</B>. Trying to present its stance as a plausible account which covers all the relevant phenomena.</li><li><B>2</B>. Arguing that it has all the advantages claimed by its rivals (a brief outline of these is useful in an exam answer.)</li><li><B>3</B>. Claiming that it can resolve difficulties that undermine its rivals' positions (again, these need to be referred to in an exam answer and enough said to show that you have mastered the detail and will present as much of it as there is time for.)</li><li><B>4</B>. Trying to rebut objections raised against it and show that they are less powerful than they appear for various reasons - in this case, the objections to be overcome concern: <BR>& phenomenal qualities and how mental states should be defined<BR>& the inverted spectrum hypothesis<BR>& the Chinese Room. </li></ul></li><li><B>Sample examination questions</B><ul type="disc"><li>1. How is functionalism to be distinguished from behaviourism? Does either give the right account of the mind or person?</li><li>2. Does the inverted spectrum hypothesis show that functionalism about the mind is mistaken?</li><li>3. Does the contention that pains must hurt pose a difficulty for functionalism?</li><li>4. Could a machine have religious beliefs or experiences? </li></ul></li><li><B>Tips for answering sample questions</B>: For each of these questions a satisfactory answer would certainly contain a brief, clear account of what functionalism is - that is, what its supporters believe about the nature of mental states and properties. 
<ul type="disc"><li>In question 1, you would also have to give a brief definition and account of the main tenets of behaviourism and what distinguishes the two theories from each other as well as what they share.</li><li>For question 2, you would need to give a brief but clear account of what the inverted spectrum hypothesis is and then explain how its very possibility is held to refute functionalism.</li><li>Question 3 asks you to explain the difference between defining a type of mental state by its causal relations and defining it by its intrinsic qualitative character. The examiners would be looking for some argument to defend either the view that the defining features of mental states are causal-relational or that they are intrinsic.</li><li>Question 4 appears more adventurous than the other three, but if you look closely you will see that what is really at issue is the possibility of a machine's having mental states (of whatever kind - 'religious experiences' are mentioned in the question, but the same points you would make about the possibility of a machine's having understanding, sensing colour or feeling pain should be made here). Searle's Chinese Room should also be discussed, since the only sorts of machines that have been seriously thought to be potential possessors of mental states are computers.</li></ul></li></ol><hr><FONT COLOR = "0000FF"><B>Comment: </B><BR><BR>Part 1 (The Metaphysics of Mind and Body); Section 2 (Varieties of anti-dualism and materialism); Chapter 7. 
Hard Copy filed in <a name="22"></a>"<A HREF = "../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4080.htm">Various - Papers on Religion Boxes (Heythrop)</A>".<BR><BR><HR><BR><U><B>In-Page Footnotes</B></U><a name="On-Page_Link_P16537_1"></A><BR><BR><U><A HREF="#On-Page_Return_P16537_1"><B>Footnote 1</B></A></U>: See also Chapter 6 of this subject guide - 'Type and token'.<a name="On-Page_Link_P16537_3"></A><BR><BR><U><A HREF="#On-Page_Return_P16537_3"><B>Footnote 3</B></A></U>: You can read about Turing's famous test for mindedness in Priest 135-37 and Graham 94.<BR><BR><FONT COLOR = "0000FF"><HR></P><a name="ColourConventions"></a><p><b>Text Colour Conventions (see <A HREF="../../Notes/Notes_10/Notes_1025.htm">disclaimer</a>)</b></p><OL TYPE="1"><LI><FONT COLOR = "0000FF">Blue</FONT>: Text by me; &copy; Theo Todman, 2018</li><LI><FONT COLOR = "800080">Mauve</FONT>: Text by correspondent(s) or other author(s); &copy; the author(s)</li></OL> <BR><HR><BR><CENTER> <TABLE class = "Bridge" WIDTH=950> <TR><TD WIDTH="30%">&copy; Theo Todman, June 2007 - August 2018.</TD> <TD WIDTH="40%">Please address any comments on this page to <A HREF="mailto:theo@theotodman.com">theo@theotodman.com</A>.</TD> <TD WIDTH="30%">File output: <time datetime="2018-08-02T08:32" pubdate>02/08/2018 08:32:24</time> <br><A HREF="../../Notes/Notes_10/Notes_1010.htm">Website Maintenance Dashboard</A></TD></TR> <TR><TD WIDTH="30%"><A HREF="#Top">Return to Top of this Page</A></TD> <TD WIDTH="40%"><A HREF="../../Notes/Notes_11/Notes_1140.htm">Return to Theo Todman's Philosophy Page</A></TD> <TD WIDTH="30%"><A HREF="../../index.htm">Return to Theo Todman's Home Page</A></TD> </TR></TABLE></CENTER><HR> </BODY> </HTML>