The Nature of Mind
Rosenthal (David), Ed.
This Page provides (where held) the Abstract of the above Book and those of all the Papers contained in it.

BOOK ABSTRACT: None.


BOOK COMMENT:

Oxford University Press, 1991



"Anscombe (G.E.M.) - The First Person"

Source: Rosenthal - The Nature of Mind



"Armstrong (David) - Is Introspective Knowledge Possible?"

Source: Rosenthal - The Nature of Mind



"Armstrong (David) - The Causal Theory of Mind"

Source: Rosenthal - The Nature of Mind
COMMENT: Also in "Lycan (William) - Mind and Cognition - An Anthology"



"Block (Ned) - Troubles with Functionalism"

Source: Block - Readings in Philosophy of Psychology - Vol 1



"Burge (Tyler) - Individualism and the Mental"

Source: Rosenthal - The Nature of Mind


Abstract1
  • This paper is regularly cited as extending Putnam's twin earth critique of Fregean theories of reference to the social realm.
  • Just as Putnam argues that traditional meaning theory leaves out the contribution of the physical world, Burge has been taken as arguing that traditional meaning theories have left out the contribution of the social world; the linguistic community plays a role in determining the objective content of thoughts ascribed in the language of that community.
  • Burge's interpretation of his thought experiment is controversial, but the influence of this paper has been profound.






In-Page Footnotes ("Burge (Tyler) - Individualism and the Mental")

Footnote 1: Taken from "Harnish (Robert M.) - Basic Topics in the Philosophy of Language: Introduction".



"Campbell (Keith) - Central State Materialism"

Source: Rosenthal - The Nature of Mind



"Chihara (Charles S.) & Fodor (Jerry) - Operationalism and Ordinary Language: A Critique of Wittgenstein"

Source: Fodor - Representations - Philosophical Essays on the Foundations of Cognitive Science


Philosophers Index Abstract
  1. This paper explores some lines of argument in Wittgenstein's post-Tractatus writings in order to indicate the relations between Wittgenstein's philosophical psychology, on the one hand, and his philosophy of language, his epistemology, and his doctrines about the nature of philosophical analysis on the other.
  2. The authors maintain that the later writings of Wittgenstein express a coherent doctrine in which an operationalistic analysis of confirmation and language supports a philosophical psychology of a type the authors call "logical behaviorism."
  3. They also maintain that there are good grounds for rejecting the philosophical theory implicit in Wittgenstein's later works. In particular,
    1. they first argue that Wittgenstein's position leads to some implausible conclusions concerning the nature of language and psychology;
    2. second, they maintain that the arguments Wittgenstein provides are inconclusive; and
    3. third, they sketch an alternative position which they believe avoids many of the difficulties implicit in Wittgenstein's philosophy.


COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind"



"Chisholm (Roderick) - Intentional Inexistence"

Source: Rosenthal - The Nature of Mind



"Chisholm (Roderick) - The First Person"

Source: Rosenthal - The Nature of Mind


Contents
  1. Chapter 3: The Problem of First-Person Sentences
    • Belief De Re
    • A Difficulty with the Propositional Theory
    • The Problem with the ‘He, Himself’ Locution
    • Some Ways of Dealing with the Problem
    • An Approach to the Problem
    • Notes
  2. Chapter 4: Indirect Attribution
    • A Re-examination of Intentional Attitudes
    • Direct Attribution
    • Indirect Attribution
    • ’Under a Description’
    • Solution to the Problem of the ‘He, Himself’ Locution
    • Content and Object
    • Eternal Objects and Indirect Attribution
    • De Dicto Belief
    • Notes


COMMENT: Excerpts (Chapters 3 & 4) from the book of the same name.



"Chisholm (Roderick) - The Status of Appearances"

Source: Van Inwagen & Zimmerman - Metaphysics: The Big Questions

COMMENT: Part of Chap. 6 of "Theory of Knowledge (1st Edition)"; Also (excerpted) in "Rosenthal (David), Ed. - The Nature of Mind"



"Churchland (Paul) - Eliminative Materialism and the Propositional Attitudes"

Source: Rosenthal - The Nature of Mind




"Davidson (Donald) - Mental Events"

Source: Davidson - Essays on Actions and Events, Chapter 11




"Davidson (Donald) - Thought and Talk"

Source: Davidson - Inquiries into Truth & Interpretation, Chapter 11

COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind"



"Dennett (Daniel) - Brain Writing and Mind Reading"

Source: Dennett - Brainstorms - Philosophical Essays on Mind and Psychology, Chapter 3




"Dennett (Daniel) - Reflections: Instrumentalism Reconsidered"

Source: Rosenthal - The Nature of Mind



"Dennett (Daniel) - Three Kinds of Intentional Psychology"

Source: Dennett - The Intentional Stance, Chapter 3

COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind"



"Dennett (Daniel) - True Believers: The Intentional Strategy and Why it Works"

Source: Dennett - The Intentional Stance, Chapter 2
Write-up Note (Full Text reproduced below).


COMMENT:

Write-up4 (as at 04/04/2015 00:17:17): Dennett - True Believers

This note provides my detailed review of "Dennett (Daniel) - True Believers: The Intentional Strategy and Why it Works".

Currently, this write-up is only available as a PDF. Click File Note (PDF). It is my intention to convert this to Note format shortly.

… Further details to be supplied




In-Page Footnotes ("Dennett (Daniel) - True Believers: The Intentional Strategy and Why it Works")

Footnote 4:
  • This is the write-up as it was when this Abstract was last output, with text as at the timestamp indicated (04/04/2015 00:17:17).



"Dretske (Fred) - The Intentionality of Cognitive States"

Source: Rosenthal - The Nature of Mind


Philosophers Index Abstract
    Our cognitive states exhibit intentional characteristics. It is argued that since statements of natural law also exhibit intentionality (if a is lawfully dependent on b, and "a" and "c" are co-extensional, c may not be lawfully dependent on b), information, understood as a measure of this lawful dependency, is a useful notion to explain the source of the mind's intentionality. Our cognitive states are information structures and, hence, exhibit the same (or a similar) kind of content as does the information on which they depend.



"Feyerabend (Paul) - Mental Events and the Brain"

Source: Rosenthal - The Nature of Mind

COMMENT: Also in "Rosenthal (David), Ed. - Materialism and the Mind-Body Problem".



"Fodor (Jerry) - After-thoughts: Yin and Yang in the Chinese Room"

Source: Rosenthal - The Nature of Mind



"Fodor (Jerry) - Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology"

Source: Fodor - Representations - Philosophical Essays on the Foundations of Cognitive Science

COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind"



"Fodor (Jerry) - Propositional Attitudes"

Source: Fodor - Representations - Philosophical Essays on the Foundations of Cognitive Science




"Fodor (Jerry) - Searle on What Only Brains Can Do"

Source: Behavioral and Brain Sciences, Volume 3 - Issue 3 - September 1980, pp. 431-432


Full Text
  1. Searle is certainly right that instantiating the same program that the brain does is not, in and of itself, a sufficient condition for having those propositional attitudes characteristic of the organism that has the brain. If some people in AI think that it is, they're wrong. As for the Turing test, it has all the usual difficulties with predictions of "no difference"; you can't distinguish the truth of the prediction from the insensitivity of the test instrument1.
  2. However, Searle's treatment of the "robot reply" is quite unconvincing. Given that there are the right kinds of causal linkages between the symbols that the device manipulates and things in the world - including the afferent and efferent transducers of the device - it is quite unclear that intuition rejects ascribing propositional attitudes to it. All that Searle's example shows is that the kind of causal linkage he imagines - one that is, in effect, mediated by a man sitting in the head of a robot - is, unsurprisingly, not the right kind.
  3. We don't know how to say what the right kinds of causal linkage are. This, also, is unsurprising since we don't know how to answer the closely related question as to what kinds of connection between a formula and the world determine the interpretation under which the formula is employed. We don't have an answer to this question for any symbolic system; a fortiori, not for mental representations. These questions are closely related because, given the mental representation view, it is natural to assume that what makes mental states intentional is primarily that they involve relations to semantically interpreted mental objects; again, relations of the right kind.
  4. It seems to me that Searle has misunderstood the main point about the treatment of intentionality in representational theories of the mind; this is not surprising since proponents of the theory - especially in AI - have been notably unlucid in expounding it. For the record, then, the main point is this: intentional properties of propositional attitudes are viewed as inherited from semantic properties of mental representations (and not from the functional role of mental representations, unless "functional role" is construed broadly enough to include symbol-world relations). In effect, what is proposed is a reduction of the problem what makes mental states intentional to the problem what bestows semantic properties on (fixes the interpretation of) a symbol. This reduction looks promising because we're going to have to answer the latter question anyhow (for example, in constructing theories of natural languages); and we need the notion of mental representation anyhow (for example, to provide appropriate domains for mental processes). It may be worth adding that there is nothing new about this strategy. Locke, for example, thought (a) that the intentional properties of mental states are inherited from the semantic (referential) properties of mental representations; (b) that mental processes are formal (associative); and (c) that the objects from which mental states inherit their intentionality are the same ones over which mental processes are defined: namely ideas. It's my view that no serious alternative to this treatment of propositional attitudes has ever been proposed.
  5. To say that a computer (or a brain) performs formal operations on symbols is not the same thing as saying that it performs operations on formal (in the sense of "uninterpreted") symbols. This equivocation occurs repeatedly in Searle's paper, and causes considerable confusion. If there are mental representations they must, of course, be interpreted objects; it is because they are interpreted objects that mental states are intentional. But the brain might be a computer for all that.
  6. This situation - needing a notion of causal connection, but not knowing which notion of causal connection is the right one - is entirely familiar in philosophy. It is, for example, extremely plausible that "a perceives b" can be true only where there is the right kind of causal connection between a and b. And we don't know what the right kind of causal connection is here either. Demonstrating that some kinds of causal connection are the wrong kinds would not, of course, prejudice the claim. For example, suppose we interpolated a little man between a and b, whose function it is to report to a on the presence of b. We would then have (inter alia) a sort of causal link from a to b, but we wouldn't have the sort of causal link that is required for a to perceive b. It would, of course, be a fallacy to argue from the fact that this causal linkage fails to reconstruct perception to the conclusion that no causal linkage would succeed. Searle's argument against the "robot reply" is a fallacy of precisely that sort.
  7. It is entirely reasonable (indeed it must be true) that the right kind of causal relation is the kind that holds between our brains and our transducer mechanisms (on the one hand) and between our brains and distal objects (on the other). It would not begin to follow that only our brains can bear such relations to transducers and distal objects; and it would also not follow that being the same sort of thing our brain is (in any biochemical sense of "same sort") is a necessary condition for being in that relation; and it would also not follow that formal manipulations of symbols are not among the links in such causal chains. And, even if our brains are the only sorts of things that can be in that relation, the fact that they are might quite possibly be of no particular interest; that would depend on why it's true2. Searle gives no clue as to why he thinks the biochemistry is important for intentionality and, prima facie, the idea that what counts is how the organism is connected to the world seems far more plausible. After all, it's easy enough to imagine, in a rough and ready sort of way, how the fact that my thought is causally connected to a tree might bear on its being a thought about a tree. But it's hard to imagine how the fact that (to put it crudely) my thought is made out of hydrocarbons could matter, except on the unlikely hypothesis that only hydrocarbons can be causally connected to trees in the way that brains are.
  8. The empirical evidence for believing that "manipulation of symbols" is involved in mental processes derives largely from the considerable success of work in linguistics, psychology, and AI that has been grounded in that assumption. Little of the relevant data concerns the simulation of behavior or the passing of Turing tests, though Searle writes as though all of it does. Searle gives no indication at all of how the facts that this work accounts for are to be explained if not on the mental-processes-are-formal-processes view. To claim that there is no argument that symbol manipulation is necessary for mental processing while systematically ignoring all the evidence that has been alleged in favor of the claim strikes me as an extremely curious strategy on Searle's part.
  9. Some necessary conditions are more interesting than others. While connections to the world and symbol manipulations are both presumably necessary for intentional processes, there is no reason (so far) to believe that the former provide a theoretical domain for a science; whereas, there is considerable a posteriori reason to suppose that the latter do. If this is right, it provides some justification for AI practice, if not for AI rhetoric.
  10. Talking involves performing certain formal operations on symbols: stringing words together. Yet, not everything that can string words together can talk. It does not follow from these banal observations that what we utter are uninterpreted sounds, or that we don't understand what we say, or that whoever talks talks nonsense, or that only hydrocarbons can assert - similarly, mutatis mutandis, if you substitute "thinking" for "talking."






In-Page Footnotes ("Fodor (Jerry) - Searle on What Only Brains Can Do")

Footnote 1:
  • I assume, for simplicity, that there is only one program that the brain instantiates (which, of course, there isn't). Notice, by the way, that even passing the Turing test requires doing more than just manipulating symbols. A device that can't run a typewriter can't play the game.
Footnote 2:
  • For example, it might be that, in point of physical fact, only things that have the same simultaneous values of weight, density, and shade of gray that brains have can do the things that brains can. This would be surprising, but it's hard to see why a psychologist should care much. Not even if it turned out - still in point of physical fact - that brains are the only things that can have that weight, density, and color. If that's dualism, I imagine we can live with it.



"Frankfurt (Harry) - Freedom of the Will and the Concept of a Person"

Source: Rosenthal - The Nature of Mind
Write-up Note (Full Text reproduced below).


COMMENT:

Write-up4 (as at 14/03/2015 11:36:58): Frankfurt - Freedom of the Will and the Concept of a Person

This is a review of "Frankfurt (Harry) - Freedom of the Will and the Concept of a Person".
  1. Introduction5
    • Strawson and Ayer have misappropriated the term “person”. It is not just a mind/body union as this applies to the higher animals who aren’t persons6. This is a (possibly innocent) misuse of language.
    • No problem ought to be of greater interest to philosophers than who we essentially are.
    • The criteria for being a person (at least those of most philosophical interest) are not those of distinguishing humans from other species7.
    • Some humans may not be persons, so the sets of humans and persons are not coextensive. However, we do presuppose, maybe wrongly, that the characteristics of personhood are uniquely human.
    • It is characteristic of humans (and therefore a presumed characteristic of persons) that they have second-order desires (ie. desires for desires).
    • Second-order desires: wanting to be different with respect to our motivation. Ie. “reflective self-evaluation”.
  2. Section I - (Desires)
    • The statement “A wants X” is compatible with lots of statements to the effect that A doesn’t know that he does, or that he “really” doesn’t want X.
    • So, we don’t have a simple distinction between first and second level desires.
    • Frankfurt accepts a broad range of desires, including those of which we are unconscious or are deceived about.
    • However, he counts as willed only those desires that move the agent to action.
    • The “will” is not, therefore, co-extensive with first order desire – only with those desires that did, do or will motivate to action (but the will is identical with at least one first order desire).
    • Effective desires are those which move (or will, or would move) all the way to action.
    • The will is not co-extensive with what the agent intends to do, as intentions may be overridden by stronger desires.
    • There are two kinds of second order desire (wanting to want X).
      1. A “precious” situation where the agent univocally wants not to X (ie. not a desire that his will should be other than it actually is). He doesn’t want his first order desire to be effective, but merely wants the desire itself8.
      2. Where there is a desire on the agent’s part for a will that is effective. The agent wants more than an inclination – he wants an effective desire, one that moves him effectively to act. He wants “to X” to be his will.
    • Frankfurt gives an example that decides between these two cases. If someone wants to be motivated by the desire to concentrate on his work, then, if his second order desire is case (2) he must necessarily already have a first-order desire to do so. However, a first-order desire is insufficient for case (2), as that desire might not be effective – it may be trumped by another desire (even though it may remain amongst his desires). It is this “when the chips are down” situation that distinguishes between the two cases.
    • Frankfurt does recognise a genuine case (2) where a second order desire may not imply a present first order desire. If I want my will to conform to another’s (I want to want what someone else wants, eg. for reasons of hero-worship) then this may possibly be a genuine case (2) – because I want my desired desire to be effective - even though I don’t know what it is I desire to desire (I may not know what my hero’s desires are). Frankfurt doesn’t pursue the matter here9.
  3. Section II – (Persons and Wantons)
    • The above “case (2)” second order desires are dubbed “Second-order Volitions” by Frankfurt. They apply when an agent wants a desire to be his will (rather than case (1) situations where someone merely wants a desire).
    • According to Frankfurt, it is having Second-order Volitions that makes a person10.
    • A wanton is defined as an agent with no second-order volitions (even though they have first-order desires and even second-order desires of the “case 1” type). Wantons are consequently not persons11.
    • Wantons don’t care about their wills. All animals and very young children, and some adults, are wantons.
    • Wantons may have rational faculties of a high order, but aren’t concerned with the desirability of their desires, or with what their wills ought to be.
    • In asserting that personhood resides in the will rather than in reason, Frankfurt is far from suggesting that irrational creatures can be persons, because only a rational being can become aware of his will and have second-order volitions.
    • Frankfurt gives the examples of the drug addict who struggles but fails to beat his craving and the one who’s happy in his situation. The former is a person, the latter a wanton (presumably other things being equal), “in respect of his wanton lack of concern, no different from an animal”.
    • It’s important to note that a wanton may have conflicting first order desires. He just doesn’t care which one wins out.
    • Frankfurt suggests that a wanton has no identity other than his first-order desires. He appears to be using (non-)personal “identity” in a sense different from the individuating sense used normally (and by Locke).
    • According to Frankfurt, if your first-order desire says “do X” and your second-order desire says “don’t do X”, then if you do X you do it unwillingly, not of your own free will.
    • Frankfurt expatiates further on wantons. The wanton may not be satisfied, since one of two conflicting desires wins out and leaves the other unsatisfied12.
    • Frankfurt denies that second-order volitions are necessarily moral, and also allows persons to be capricious in their second-order volitions. The important thing about second order volitions is that they be preferences, not that they be well-founded.
    • The wanton is neither a winner nor a loser in the struggle of his desires, as he has no stake in the conflict.
  4. Section III – (Freedom of the Will)
    • A person has freedom of the will only insofar as he has second-order volitions.
    • Supra-human beings (if any) with necessarily free wills are not accounted persons.
    • Frankfurt asks what sort of freedom is freedom of the will? What problem is addressed?
    • Does freedom mean doing what you want? This captures some of what it is to act freely, but entirely misses what it is to will freely.
    • We suppose animals have freedom of action, but not freedom of the will. So, freedom of action isn’t sufficient13 for freedom of the will.
    • Nor is freedom of action necessary for freedom of the will, for, if one doesn’t know one’s freedom (to act) is curtailed, one’s will may be as free as ever.
    • Freedom of the will is not concerned with the relation between desires and actions, but with desires themselves.
    • But, Frankfurt thinks that comparing freedom of the will with freedom of action is both useful and natural. The analogy is “acts we want to perform” versus “the will14 we want to have”.
    • Freedom of the will is securing conformity of the will to second order volitions.
    • Unwilling addicts are not free, but are still persons. Wantons lack free will, since they have no second-order volitions at all.
    • Frankfurt admits that people are more complex than his simple sketch suggests.
    • For instance, second-order desires may conflict and, if unresolved, may leave us with no second-order volitions. Since this leaves the agent with no preference as to which first-order desire should be his will, this destroys the agent as a person. He either has no will at all, or his will operates without his participation. He’s like the unwilling addict, a helpless bystander to the forces that move him, but in a different way15.
    • We can have volitions of higher order than the second. Frankfurt thinks this “humanisation run wild” also leads to the “destruction of the person16”.
    • In this situation, decisive identification with some first-order desire means we no longer need worry about higher order desires. Just acknowledge the second-order volition (to want this desire) and don’t go to higher levels.
    • Conformity of the will to higher-order volitions comes more naturally / easily to some than to others. Some have to struggle to achieve freedom of the will.
  5. Section IV – (The Advantages of Frankfurt’s Theory)
    • Frankfurt claims his account shows why we’re reluctant to allow freedom of the will to inferior species.
    • His theory also accounts for why freedom of the will is desirable. Frankfurt’s explanation appeals to the satisfaction, rather than frustration, of second-order (or higher-order) volitions.
    • You have the satisfaction of having a will of your own, as against being a passive bystander merely observing the forces moving you.
    • Freedom both to do17 what we want and to want what we want to want is all the freedom that’s conceivable.
    • Frankfurt now considers whether other theories meet these basic conditions of acceptability (explaining our refusal of free will to animals and our treating free will as desirable). He thinks they don’t.
    • Chisholm claims that human freedom entails the absence of causal determinism, ie. a miracle. A free agent is an unmoved mover. But, this doesn’t distinguish between human and animal freedom.
    • There is no experiential difference between someone miraculously initiating a causal chain and someone in whom no such causal breach occurs.
    • Does freedom of the will (in part) explain moral responsibility? Frankfurt thinks that one can be morally responsible for what one has done when one’s will was not free at all18.
    • Chisholm’s account of free will might have no actual instances (eg. if it was found that there is always a sufficient cause for any brain event).
    • Free will implies that we are free to make any first-order desire our will. A person with a free will could have made his will other than he in fact did.
    • Frankfurt thinks this has no bearing on moral responsibility, which states that an agent performed the act freely, of his own free will. However, acting of your own free will does not imply that your will is free19.
    • As an example, which I find very confusing, Frankfurt considers a third kind of addict – the willing addict (as distinct from the unwilling and wanton addicts). Frankfurt claims that this addict’s will isn’t free, for his desire to take the drug will be effective regardless of whether he wants this desire to constitute his will20. Yet, says Frankfurt, our addict takes the drug freely, of his own free will.
    • Frankfurt thinks that the willing addict’s first-order desire is over-determined, both physiologically and because the addict wants to be addicted, and that this helps us to understand what’s going on in the situation21.
    • Frankfurt claims that his account of freedom of the will is neutral with respect to determinism. He thinks it conceivable that it’s causally determined that a person is free to want what he wants to want, and hence causally determined that he has free will. This is only apparently paradoxical, he says.
    • Frankfurt also allows multiple responsibility for actions. The agent and some other agency may be jointly (even morally) responsible – he sees a difference between full and sole responsibility. If another has calculatedly inveigled the willing addict into his addiction, both are fully responsible, says Frankfurt.
    • Finally, Frankfurt considers that there are various means whereby free will may arise: chance, natural causes or some unspecified third way.




In-Page Footnotes ("Frankfurt (Harry) - Freedom of the Will and the Concept of a Person")

Footnote 4:
  • This is the write-up as it was when this Abstract was last output, with text as at the timestamp indicated (14/03/2015 11:36:58).
Footnote 5: We could have done with an abstract! Why is Frankfurt writing this paper?

Footnote 6: But, isn’t Strawson really interested in identity and arguing against Locke’s idea that personal identity equates to memory?

Footnote 7: Indeed. We are interested in the most “humane” characteristics.

Footnote 8: Either for the thrill of it all, or (in Frankfurt’s example) where someone wants to experience (say) what it’s like to desire something, without actually wanting that desire to be effective. He doesn’t actually want to experience the thing desired (Frankfurt’s example is of a psychologist wanting to know what it’s like to be a drug addict, without ever wanting to take drugs himself).

Footnote 9: The situation doesn’t sound very plausible, in that my second order desire might change on learning what my hero’s desires actually are.

Footnote 10: This seems rather arbitrary to me, and Frankfurt admits that he has been arbitrary in excluding agents with second-order desires, but no second-order volitions, from the category of persons (he’s done this to make simpler the argument to follow). With respect to his definition of personhood, what about it being a forensic concept involving responsibility? What about “legal persons”?

Footnote 11: Are only self-confessed sinners persons? Ie. those who want their wills to differ? Is God a person? It appears not from a later comment (see next page).

Footnote 12: Hence, Schopenhauer’s “parallelogram of forces” approach to the will, being all first order, is wanton.

Footnote 13: So, Frankfurt sides with Bramhall and his spiders, against Hobbes.

Footnote 14: Need to review just what Frankfurt means by “the will”.

Footnote 15: Says Frankfurt. This isn’t terribly useful, as the unwilling addict is still a person.

Footnote 16: This almost looks like a reductio ad absurdum of Frankfurt’s view.

Footnote 17: But who’s free to do all that he wants? This is presumably not the point at issue.

Footnote 18: So, it’s only when one’s action was free that one is exculpated? Probably, though Frankfurt also argues that one can be morally responsible even when one couldn’t have acted otherwise than one did. See Alternate Possibilities and Moral Responsibility.

Footnote 19: It strikes me as a bit odd to say that you’re doing something of your own free will when your will isn’t free. Why not simply say “of your own will”.

Footnote 20: I don’t understand this – ex hypothesi, he does want this desire to constitute his will, otherwise he wouldn’t be so happy about his addiction. Can he not want desire for the drug to remain his will and still remain a willing addict? The willing addict can, however, turn from a willing to an unwilling addict?

Footnote 21: This all sounds confused. His will is outside his control, yet he has made “his will” (which will?) his own.



"Gordon (Robert M.) - Emotions and Knowledge"

Source: Rosenthal - The Nature of Mind



"Hampshire (Stuart) - The Analogy of Feeling"

Source: Rosenthal - The Nature of Mind


Philosophers Index Abstract
    In this article the author is concerned with whether our knowledge of other minds, in the form of statements about other people's feelings, can be justified by inductive arguments of an ordinary pattern, that is, by inferences from the observed to the unobserved of a familiar and accepted form. The author argues that such statements are not logically peculiar or invalid when considered as inductive arguments. The author also proposes that solipsism is a linguistically absurd thesis, while pausing to explain why it is a thesis which tempts those who confuse epistemological distinctions with logical distinctions. (Staff)


COMMENT: Other Minds



"Jackson (Frank) - The Existence of Mental objects"

Source: Rosenthal - The Nature of Mind


Philosophers Index Abstract
    There is a very widespread view that, while there may be things like the "having" of bodily sensations and the "experiencing" of after images, there are, strictly speaking, no such things as bodily sensations and after-images. In this paper I challenge some of the more usual grounds for this view.


COMMENT: Chap. 3 of "Perception"; printout filed in "Various - Papers on Philosophy of Mind Boxes: Vol 2 (C-O)".



"Jackson (Frank) - What Mary Didn't Know"

Source: Block, Flanagan & Guzeldere - The Nature of Consciousness


Author’s Introduction
  1. Mary is confined to a black-and-white room, is educated through black-and-white books and through lectures relayed on black-and-white television. In this way she learns everything there is to know about the physical nature of the world. She knows all the physical facts about us and our environment, in a wide sense of 'physical' which includes everything in completed physics, chemistry, and neurophysiology, and all there is to know about the causal and relational facts consequent upon all this, including of course functional roles. If physicalism is true, she knows all there is to know. For to suppose otherwise is to suppose that there is more to know than every physical fact, and that is just what physicalism denies.
  2. Physicalism is not the noncontroversial thesis that the actual world is largely physical, but the challenging thesis that it is entirely physical. This is why physicalists must hold that complete physical knowledge is complete knowledge simpliciter. For suppose it is not complete: then our world must differ from a world, W(P), for which it is complete, and the difference must be in nonphysical facts; for our world and W(P) agree in all matters physical. Hence, physicalism would be false at our world {though contingently so1, for it would be true at W(P)}.
  3. It seems, however, that Mary does not know all there is to know. For when she is let out of the black-and-white room or given a color television, she will learn what it is like to see something red, say. This is rightly described as learning – she will not say "ho, hum." Hence, physicalism is false. This is the knowledge argument against physicalism in one of its manifestations. This note is a reply to three objections to it mounted by Paul M. Churchland.






In-Page Footnotes ("Jackson (Frank) - What Mary Didn't Know")

Footnote 1:
  • The claim here is not that, if physicalism is true, only what is expressed in explicitly physical language is an item of knowledge. It is that, if physicalism is true, then if you know everything expressed or expressible in explicitly physical language, you know everything.
  • Pace "Horgan (Terence) - Jackson on Physical Information and Qualia" (April 1984).



"Kim (Jaegwon) - Epiphenomenal and Supervenient Causation"

Source: Kim - Supervenience and Mind


Philosophers Index Abstract
    Causal relations involving macro-events and processes can be understood as cases of "supervenient causation"--supervenient upon causal relations obtaining at the micro-level. It is argued that if the thesis of psychophysical supervenience is granted, causal relations involving mental events, too, can be understood on the model of supervenient causation, and that this resolves many of the puzzles surrounding psychophysical causal relations.


COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind"



"Lewis (David) - Mad Pain and Martian Pain"

Source: Lewis - Philosophical Papers Volume I, Part 2: Philosophy of Mind, Chapter 9


  1. Lewis invites us to consider two ostensible challenges to any materialist theory of the mind.
    • The madman feels pain just as we do, but his pain differs greatly from ours in its characteristic causes and effects;
    • the Martian also feels pain just as we do, but his pain differs greatly from ours in its physical realization.
  2. Lewis argues that his functionalist theory is adequate to meet the challenges presented by both cases.
  3. In the postscript, Lewis considers how advocates of phenomenal qualia respond to the functionalist account he defends; in particular, he responds to Frank Jackson's 'knowledge argument'.


COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind".



"Lewis (David) - Psychophysical and Theoretical Identifications"

Source: Lewis - Papers in Metaphysics and Epistemology




"Loar (Brian) - Social Content and Psychological Content"

Source: Rosenthal - The Nature of Mind



"Malcolm (Norman) - Knowledge of Other Minds"

Source: Rosenthal - The Nature of Mind

COMMENT: Also in "Chappell (Vere), Ed. - The Philosophy of Mind"



"Malcolm (Norman) - Thoughtless Brutes"

Source: Rosenthal - The Nature of Mind



"Matthews (Gareth B.) - Consciousness and Life"

Source: Rosenthal - The Nature of Mind


Philosophers Index Abstract
    Descartes rejected the traditional connection between thinking, or being conscious, and living. He also rejected the traditional separation between living things and mechanisms. In rejecting both the traditional connection and the traditional separation Descartes rejected the traditional concept of soul and gave us the modern concept of mind. Suppose the problems of a Cartesian philosophy of mind are as intractable as many people now suppose. We should then ask whether we, too, ought to reject the traditional concept of soul and accept Descartes's concept of mind.



"Nagel (Thomas) - Brain Bisection and the Unity of Consciousness"

Source: Nagel (Thomas) - Mortal Questions


Introduction (Full Text)
  1. There has been considerable optimism recently, among philosophers and neuroscientists, concerning the prospect for major discoveries about the neurophysiological basis of mind. The support for this optimism has been extremely abstract and general. I wish to present some grounds for pessimism. That type of self-understanding may encounter limits which have not been generally foreseen: the personal, mentalist idea of human beings may resist the sort of coordination with an understanding of humans as physical systems, that would be necessary to yield anything describable as an understanding of the physical basis of mind. I shall not consider what alternatives will be open to us if we should encounter such limits. I shall try to present grounds for believing that the limits may exist - grounds derived from extensive data now available about the interaction between the two halves of the cerebral cortex, and about what happens when they are disconnected. The feature of the mentalist conception of persons which may be recalcitrant to integration with these data is not a trivial or peripheral one, that might easily be abandoned. It is the idea of a single person, a single subject of experience and action, that is in difficulties. The difficulties may be surmountable in ways I have not foreseen. On the other hand, this may be only the first of many dead ends that will emerge as we seek a physiological understanding of the mind.
  2. To seek the physical basis or realization of features of the phenomenal world is in many areas a profitable first line of inquiry, and it is the line encouraged, for the case of mental phenomena, by those who look forward to some variety of empirical reduction of mind to brain, through an identity theory, a functionalist theory, or some other device. When physical reductionism is attempted for a phenomenal feature of the external world, the results are sometimes very successful, and can be pushed to deeper and deeper levels. If, on the other hand, they are not entirely successful, and certain features of the phenomenal picture remain unexplained by a physical reduction, then we can set those features aside as purely phenomenal, and postpone our understanding of them to the time when our knowledge of the physical basis of mind and perception will have advanced sufficiently to supply it. (An example of this might be the moon illusion, or other sensory illusions which have no discoverable basis in the objects perceived.) However, if we encounter the same kind of difficulty in exploring the physical basis of the phenomena of the mind itself, we cannot adopt the same line of retreat. That is, if a phenomenal feature of mind is left unaccounted for by the physical theory, we cannot postpone the understanding of it to the time when we study the mind itself - for that is exactly what we are supposed to be doing. To defer to an understanding of the basis of mind which lies beyond the study of the physical realization of certain aspects of it is to admit the irreducibility of the mental to the physical. A clearcut version of this admission would be some kind of dualism. But if one is reluctant to take such a route, then it is not clear what one should do about central features of the mentalistic idea of persons which resist assimilation to an understanding of human beings as physical systems. It may be true of some of these features that we can neither find an objective basis for them, nor give them up. It may be impossible for us to abandon certain ways of conceiving and representing ourselves, no matter how little support they get from scientific research. This, I suspect, is true of the idea of the unity of a person: an idea whose validity may be called into question with the help of recent discoveries about the functional duality of the cerebral cortex. It will be useful to present those results here in outline.





"Nagel (Thomas) - What is it Like to Be a Bat?"

Source: Block, Flanagan & Guzeldere - The Nature of Consciousness


Author’s Introduction
  1. Consciousness is what makes the mind-body problem really intractable. Perhaps that is why current discussions of the problem give it little attention or get it obviously wrong. The recent wave of reductionist euphoria has produced several analyses of mental phenomena and mental concepts designed to explain the possibility of some variety of materialism, psychophysical identification, or reduction.
  2. But the problems dealt with are those common to this type of reduction and other types, and what makes the mind-body problem unique, and unlike the water-H2O problem or the Turing machine-IBM machine problem or the lightning-electrical discharge problem or the gene-DNA problem or the oak tree-hydrocarbon problem, is ignored.





"Peacocke (Christopher) - Colour Concepts and Colour Experiences"

Source: Rosenthal - The Nature of Mind



"Putnam (Hilary) - Brains and Behaviour"

Source: Putnam - Philosophical Papers 2 - Mind, Language and Reality

COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind"



"Putnam (Hilary) - Computational Psychology and Interpretive Theory"

Source: Putnam - Philosophical Papers 3 - Realism and Reason

COMMENT: Also in "Rosenthal (David), Ed. - The Nature of Mind"



"Putnam (Hilary) - The Nature of Mental States"

Source: Putnam - Philosophical Papers 2 - Mind, Language and Reality

COMMENT: Also in:-



"Quine (W.V.) - Quantifiers and Propositional Attitudes"

Source: Quine - The Ways of Paradox and Other Essays




"Quine (W.V.) - States of Mind"

Source: Rosenthal - The Nature of Mind



"Reid (Thomas) - Essays on the Intellectual Powers of Man (I & II)"

Source: Rosenthal - The Nature of Mind



"Rorty (Richard) - Mind-Body Identity, Privacy, and Categories"

Source: Borst - The Mind-Brain Identity Theory

COMMENT: Also in:-



"Rosenthal (David) - Mind and Body: Introduction"

Source: Rosenthal - The Nature of Mind, 1991



"Rosenthal (David) - Problems About Mind: Introduction"

Source: Rosenthal - The Nature of Mind, 1991



"Rosenthal (David) - Psychological Explanation: Introduction"

Source: Rosenthal - The Nature of Mind, 1991



"Rosenthal (David) - Self and Other: Introduction"

Source: Rosenthal - The Nature of Mind, 1991



"Rosenthal (David) - The Nature of Mind: General Introduction"

Source: Rosenthal - The Nature of Mind, 1991



"Rosenthal (David) - The Nature of Mind: Introduction"

Source: Rosenthal - The Nature of Mind, 1991



"Rosenthal (David) - Two Concepts of Consciousness"

Source: Rosenthal - The Nature of Mind


Philosophers Index Abstract
    There are two conceptions of what it is for mental states to be conscious, i.e., to be in our stream of consciousness. On the Cartesian conception, consciousness is essential to being mental. On the contrasting view I defend here, mental states are not inherently conscious. We can then explain both introspective and so-called simple consciousness of mental states by appeal to suitable thoughts that one is in those states. I argue that consciousness is inexplicable except on this conception; that this conception saves the phenomenological appearances--including subjectivity and sensory quality--better than the Cartesian conception; and that we can readily explain the apparent forcefulness of the Cartesian view.



"Russell (Bertrand) - Analogy"

Source: Rosenthal - The Nature of Mind

COMMENT: Other Minds



"Ryle (Gilbert) - Descartes' Myth"

Source: Rosenthal - The Nature of Mind

COMMENT: From Chap. 1 of "Ryle (Gilbert) - The Concept of Mind"



"Ryle (Gilbert) - Self-Knowledge (excerpts)"

Source: Rosenthal - The Nature of Mind

COMMENT: From Chap. 6 of "Ryle (Gilbert) - The Concept of Mind"



"Searle (John) - Minds, Brains, and Programs"

Source: Behavioral and Brain Sciences, Volume 3 - Issue 3 - September 1980, pp. 417-424
Write-up Note (Full Text reproduced below).

Philosophers Index Abstract
  1. I distinguish between strong and weak artificial intelligence (AI).
  2. According to strong AI, appropriately programmed computers literally have cognitive states, and therefore the programs are psychological theories.
  3. I argue that strong AI must be false, since a human agent could instantiate the program and still not have the appropriate mental states.
  4. I examine some arguments against this claim, and I explore some consequences of the fact that human and animal brains are the causal bases of existing mental phenomena.

BBS-Online
  • This article can be viewed as an attempt to explore the consequences of two propositions.
    1. Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality.
    2. Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim.
  • The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.
  • These two propositions have the following consequences:
    1. The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of propositions 1 and 2.
    2. Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of proposition 1.
    3. Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from proposition 2 and consequence 2.

Another Abstract
  1. "Could a machine think?"
  2. On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains.
  3. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.


COMMENT:

Write-up3 (as at 31/08/2017 19:35:02): Searle - Minds, Brains, and Programs

This note provides my detailed review of "Searle (John) - Minds, Brains, and Programs". As it was originally only available in PDF form, it was presumably written when I was an undergraduate in 2002 or thereabouts.

Abstract (Aims of Searle’s Paper)
  • The aim of Searle’s paper is to show that instantiating a computer program is never in itself sufficient for intentionality.
  • The form of his argument is to show by a thought-experiment that a human agent could instantiate (“run”) a program, yet still not have the relevant intentionality (knowledge of Chinese).
  • Searle thinks that the causal4 features of the brain are critical for intentionality (and other aspects of mentality such as consciousness). That is, the hardware (or wetware) is critical and has to be of an appropriate sort. The software isn’t enough, though Searle agrees that human beings do instantiate lots of programs.
  • Hence, attempts at AI need to concentrate on duplicating the causal powers of the brain, and not just on programming. While only a machine can think (programs can’t), it has to be a special machine, physically and not just functionally similar to a brain.
  • Note: Intentionality is what thoughts are about or directed on – the semantic rather than syntactic aspect of thought. Searle denies that digital computers have any intentionality or semantic aspect, operating merely at the syntactic level. The programmer supplies the meaning when (s)he encodes the input or interprets the output. Computers just manipulate meaningless symbols which, for them, signify nothing.

Introduction
  • Searle distinguishes between Strong and Weak AI. His argument is with Strong AI.
  • Weak AI: is fine for Searle, and simply claims that computers are powerful tools for running controlled psychological experiments to help explain the mind.
  • Strong AI: goes much further than this, claiming
    1. That an appropriately programmed computer really is a mind and does understand and
    2. That programs are the explanations for human cognition.

The Chinese Room Thought Experiment
  • I take it as read that we know the description of the Chinese Room (CR) thought-experiment.
  • Searle places himself in the room as a homunculus who really understands English, but only simulates an understanding of Chinese, yet he passes the Turing test for, and appears to understand, both. At least the room does, for the observers don’t know what’s in it. This is important for Searle, because he thinks we’re right to attribute intentionality on behaviourist grounds until we know that the relevant analogy doesn’t hold (ie. until we open the lid).
  • The existence of the homunculus is very important for Searle, because he thinks he knows in all situations what the homunculus understands. Even when, in response to objections, he has the homunculus internalise the contents of the room, it’s still the homunculus that has to do the understanding. His intuition is that it wouldn’t understand anything of Chinese. He says it’s a fact, because it’s him and he knows for a fact that he knows no Chinese and his operations in the CR don’t teach him any.

Searle’s Immediate Response
  • Searle thinks it’s obvious that he doesn’t understand a word of Chinese. Since (he says) he’s the computer in this case, computers never understand anything.
  • Since computers don’t understand anything, they provide no insight into human thought. The Searle-homunculus’s understanding of English and Chinese are not comparable because he only appears to understand Chinese, but does understand English.
  • Hence, the two claims of Strong AI are without foundation.
  • In particular, formal properties are insufficient – the homunculus follows all the formal rules, yet understands nothing.
  • Searle rejects “fancy footwork” about “understanding” – this comes up later as well. Understanding has to be a true mental property, not a figure of speech such as an adding machine “knowing” how to add up.
  • The crux of the matter is whether Searle’s thought-experiment is relevant and a true parallel to what Strong AI claims. So …

AI Responses to the CR Thought Experiment and Searle’s Replies
  • There are 6 of these, though Searle thinks the last two are hardly worth mentioning.
  1. The Systems Reply
    • AI Response: Intentionality is to be attributed to the whole room, not just to the homunculus.
    • Searle’s Reply: The homunculus can internalise the program and do its calculations in its head, but would still not understand Chinese, and there isn’t anything left over that could. This apart, he thinks only someone in the grip of an ideology could imagine that the conjunction of a person and bits of paper could understand Chinese if the person himself didn’t5. Searle denies that the CR is processing information (only symbols), so there is a parallel with stomachs and such-like which process food, but which aren’t minds. Searle claims that any theory that attributes intentionality to thermostats has fallen victim to a reductio ad absurdum. The whole purpose of the project was to find the difference between things that have minds (humans) and those that don’t (thermostats), so if we start attributing minds to thermostats, we’ve gone wrong somewhere.

  2. The Robot Reply
    • AI Response: we need to embed the CR in a robot that responds to its environment. This would have intentionality.
    • Searle’s Reply: firstly, Searle notes in this reply an admission that intentionality isn’t just formal symbol manipulation, but involves causal interaction with the world. But, in any case, he just re-runs his thought experiment. The homunculus still doesn’t need to know where his inputs are coming from nor what his outputs are doing, so we’re no better off.

  3. The Brain Simulator Reply
    • AI Response: forget the simplistic information-processing program and build one that simulates the brain at the level of synapses, including parallel processing. If this machine couldn’t be said to understand Chinese, nor could native Chinese speakers.
    • Searle’s Reply:
      1. Searle thinks this undermines the whole point of AI, which is that to understand the mind we don’t need to understand how the brain works, because – important slogan – the mind is to the brain as the program is to the hardware, and programs can be instantiated on any hardware we like provided it can run them.
      2. Even so, Searle can elaborate on his CR with his homunculus operating a hydraulic system that’s connected up like the brain. He still wouldn’t understand any Chinese.
      3. Our homunculus could even internalise all this in his imagination and be no better off – this counters the Systems Reply to this response. Again, formal properties aren’t enough for understanding.

  4. The Combination Reply
    • AI Response: this imagines a simulator at the synapse level crammed into a robot that looks or at least acts like a human being. We’d surely ascribe intentionality to such a system.
    • Searle’s Reply:
      1. Searle agrees we would, but denies that this helps the Strong AI cause. We’re attributing intentionality to the robot on the basis of the Turing test, which Searle denies is a sure sign of intentionality. If we knew that it was a robot – at least in the sense that there was a man inside fiddling with a hydraulic system – we’d no longer make this attribution but treat it as an ingenious mechanical dummy.
      2. Searle makes the important (but debatable) point that the reason we attribute intentionality to apes and dogs is not merely for behaviourist reasons but because they’re made of the same “causal stuff” as we are.

  5. The Other Minds Reply
    • AI Response: we only know other people have minds by their behaviour, so if computers pass the Turing test we have to attribute intentionality to them.
    • Searle’s Reply: Searle doesn’t give his “causal stuff” response, but claims that metaphysics is the issue not epistemology. We’re supposing the reality of the mental, and he thinks he’s shown that computational processing plus inputs and outputs can carry on in the absence of mental states (and hence is no mark of the mental).

  6. The Many Mansions Reply
    • AI Response: we’re not there with the right hardware yet, but eventually we will be. Such machines will have intentionality.
    • Searle’s Reply: fine, maybe so, but this has nothing to do with Strong AI.

Searle’s Conclusion
  • Searle agrees that our bodies + brains are machines, so he’s got no problem in principle with machines understanding Chinese. What he denies is that mere instantiations of computer programs have understanding. The organism & its biology is crucial.
  • He thinks it’s an empirical question whether aliens might have intentionality even though they have brains made of different stuff6.
  • No formal model is of itself sufficient for intentionality. Even if native Chinese speakers were running the very program the CR instantiates, instantiating it in the CR yields no understanding, so the program by itself isn’t enough.

Questions and Answers
  • The important negative answer is to the suggestion that a computer could be made to think solely on the basis of running the right program: syntax with no semantics isn’t enough (a toy sketch of this point follows this list).
  • He makes the important distinction between simulation and duplication. Computer simulations of storms don’t make us wet, so why should simulations of understanding understand?
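
To make the syntax-without-semantics point concrete, here is a minimal sketch of the kind of formal symbol manipulation the CR operator performs. It is an illustration of my own, not anything in Searle’s paper; the rule-book entries and all the names are invented placeholders.

    # A toy "rule book": pure symbol-to-symbol lookup, matched by shape alone.
    # The two entries are invented placeholders; Searle's imagined rule book is vast.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "你叫什么名字？": "我没有名字。",
    }

    def operate_room(input_symbols: str) -> str:
        # The operator pairs the input string with the output string the
        # rule book lists for it. Nothing in the lookup depends on what
        # either string means; arbitrary byte sequences would work as well.
        return RULE_BOOK.get(input_symbols, "请再说一遍。")

    print(operate_room("你好吗？"))  # a fluent-looking reply, with no understanding

However fluent the outputs look, the lookup never engages the meanings of the strings, which is exactly Searle’s complaint.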

Rationalisations for the deceptive attractiveness of Strong AI
  • Confusion about Information Processing: an AI response to the simulation-versus-duplication argument above is that the appropriately programmed computer stands in a special relation to the mind/brain because the information processed by each is the same; and since AI claims that information processing is the essence of the mental, the simulation of a mind is a mind, even though the simulation of a storm isn’t a storm. Searle claims that since computers operate at the syntactic rather than the semantic level, they don’t process information in the way human beings do. He sees a dilemma: either we treat information as fundamentally semantic, in which case computers don’t process it, or we treat it as syntactic, in which case thermostats do. He treats the attribution of mental states to thermostats as a reductio ad absurdum (see the sketch after this list).
  • Residual Behaviourism: Searle rehearses his rejection of the Turing test and the attribution of intentional states to adding machines.
  • Residual Dualism: what matters to Strong AI and Functionalism is the program, which could be realised by a computer, a Cartesian mental substance or a Hegelian world spirit7. The whole rationale behind strong AI is that the mind is separable from the brain, both conceptually and empirically. Searle admits that Strong AI is not substance-dualist, but is dualist in disconnecting mind from brain. The brain just happens to be one type of machine capable of instantiating the mind-program. Searle finds the AI literature’s fulminations against dualism amusing on this account8.
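
The thermostat horn of Searle’s dilemma is easy to make vivid. The sketch below is my own, purely illustrative addition (the function name and the 20-degree setpoint are invented): if “information processing” means no more than input-driven state transitions, then this fragment processes information.

    # A thermostat reduced to its syntactic essentials: an input value
    # drives a two-state transition. Nothing here is *about* heat or rooms.
    def thermostat(reading: float, setpoint: float = 20.0) -> str:
        return "heater_on" if reading < setpoint else "heater_off"

    print(thermostat(18.5))  # "heater_on"

If that counts as information processing, thermostats qualify too, which is the absurdity the dilemma’s second horn exploits.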

Could a Machine Think?
  • Only machines can think, and it’s the hardware rather than the software that’s important. Intentionality is a biological phenomenon.




In-Page Footnotes ("Searle (John) - Minds, Brains, and Programs")

Footnote 3:
  • This is the write-up as it was when this Abstract was last output, with text as at the timestamp indicated (31/08/2017 19:35:02).
Footnote 4: Searle mentions this a lot, but doesn’t really explain what he means (or, if he does, I missed it!).

Footnote 5:
  • We might ask whether a person who did operate in this way could do so without (thereby) knowing Chinese.
  • Segal claims that the program isn’t a “Chinese speaking” program but a “Chinese question answering” program.
Footnote 6: But how would Searle know they had intentionality?

Footnote 7:
  • Maybe so, but programs can’t run themselves, and the essence of the Cartesian thought experiment for mind’s being a substance separate from matter is that we can supposedly imagine disembodied minds. We can’t imagine programs running without hardware.
Footnote 8:
  • This doesn’t seem to rationalise the appeal of Strong AI, but rather to introduce an invalid “guilt by association” argument against proponents of Strong AI.



"Searle (John) - Yin and Yang Strike Out"

Source: Rosenthal - The Nature of Mind



"Sellars (Wilfrid) - Being and Being Known (excerpts)"

Source: Rosenthal - The Nature of Mind



"Sellars (Wilfrid) - Minds"

Source: Rosenthal - The Nature of Mind

COMMENT: Lecture II from "The Structure of Knowledge"



"Sellars (Wilfrid) - Phenomenalism (excerpts)"

Source: Rosenthal - The Nature of Mind



"Shaffer (Jerome) - Mental Events and the Brain"

Source: Rosenthal - The Nature of Mind


Philosophers Index Abstract
  1. It is first shown that J.J.C. Smart's account of the meaning of reports of sensations in terms of physical stimulus conditions is defective.
  2. It is then argued that no such materialistic manoeuvring can succeed, and hence that we cannot avoid admitting the existence of nonphysical properties. However, it is added that these nonphysical properties need not be irreducibly different from physical properties.
  3. The remainder of the paper is concerned, first, to defend the proposition that a convention could be adopted for locating mental events in the brain and, then, to describe conditions under which the identity theory is empirically refuted.


COMMENT:



"Shoemaker (Sydney) - Functionalism and Qualia"

Source: Shoemaker - Identity, Cause and Mind


Philosophers Index Abstract
    This paper replies to the claim of Block and Fodor, in "What Psychological States Are Not" ("Philosophical Review", 1972), that functionalist accounts of mental states cannot accommodate their "qualitative character." It argues that cases of "absent qualia," in which a state lacking qualitative character is "functionally identical" to one having it, are not logically possible, and that the possibility of cases of "inverted qualia," in which functionally identical states are qualitatively different, is compatible with functionalism. Central to the paper is the claim that the relation of qualitative similarity between mental states is itself functionally definable.


COMMENT:



"Shoemaker (Sydney) - How is Self-Knowledge Possible?"

Source: Shoemaker - Self-Knowledge and Self-Identity, Chapter 6

COMMENT: Also (selections) in "Rosenthal (David), Ed. - The Nature of Mind"



"Smart (J.C.C.) - Sensations and Brain Processes"

Source: Rosenthal - The Nature of Mind

COMMENT:



"Stalnaker (Robert) - On What's in the Head"

Source: Rosenthal - The Nature of Mind



"Stich (Stephen) - Autonomous Psychology and the Belief-Desire Thesis"

Source: Rosenthal - The Nature of Mind


Philosophers Index Abstract
    The "belief-desire thesis" is the claim that states invoked in an explanatory psychological theory will include beliefs and desires. The "principle of autonomous psychology" is the claim that states invoked in an explanatory psychological theory must supervene1 upon current, internal physical states. A more informal way of stating the autonomy principle is this: organisms which are physical replicas of each other will be indistinguishable from the point of view of explanatory psychology. The paper argues that there is an incompatibility between the belief-desire thesis and the principle of autonomous psychology.


COMMENT:



"Stich (Stephen) - Paying the Price for Methodological Solipsism"

Source: Rosenthal - The Nature of Mind



"Strawson (Peter) - Persons"

Source: Rosenthal - The Nature of Mind

COMMENT:



"Strawson (Peter) - Self, Mind and Body"

Source: Rosenthal - The Nature of Mind



Text Colour Conventions (see disclaimer)
  1. Blue: Text by me; © Theo Todman, 2019
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)



© Theo Todman, June 2007 - August 2019. Please address any comments on this page to theo@theotodman.com.