<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><title>Printable Note - Blog - The Singularity (Theo Todman's Web Page)</title><link href="../../../TheosStyle.css" rel="stylesheet" type="text/css"><link rel="shortcut icon" href="../../../TT_ICO.png" /></head> <P ALIGN="Center"><FONT Size = 3 FACE="Arial"><B><HR>Theo Todman's Web Page<HR><p>For Text Colour-conventions (at end of page): <A HREF="#ColourConventions">Click Here</a></p><U>Blog - The Singularity</U></B></P> <P ALIGN="Justify"><FONT Size = 2 FACE="Arial"> This Note discusses in detail (or begins to discuss in detail) the somewhat extravagant thoughts in "<A HREF = "../../../Abstracts/Abstract_16/Abstract_16893.htm">Grossman (Lev), Kurzweil (Ray) - 2045: The Year Man Becomes Immortal</A>". It ought to range more widely across the <a name="29"></a><U>Transhumanist</U><SUP>1</SUP> literature. The footnotes in the <a name="29"></a><U>Write-up for the paper</U><SUP>2</SUP> link to the sections in this <U><A HREF="#On-Page_Link_972_3">Note</A></U><SUB>3</SUB><a name="On-Page_Return_972_3"></A>. It is currently very much work in progress. <ol type="1"><li><a name="Off-Page_Link_Kurzweil"></a><B>Kurzweil</B>: <ul type="disc"><li>See <BR>&rarr; <A HREF = "https://en.wikipedia.org/wiki/Ray_Kurzweil" TARGET = "_top">Wikipedia: Ray Kurzweil</A> (https://en.wikipedia.org/wiki/Ray_Kurzweil), <BR>&rarr; <A HREF = "http://www.kurzweilai.net/" TARGET = "_top">Kurzweil: Accelerating Intelligence</A> (http://www.kurzweilai.net/), <BR>and much else besides. </li><li>I seem to have one of Kurzweil's books: "<A HREF = "../../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4136.htm">Kurzweil (Ray) - The Age of Spiritual Machines</A>". </li><li>This book has been criticised by Searle; see <A HREF = "https://www.nybooks.com/articles/1999/04/08/i-married-a-computer/" TARGET = "_top">NY Books: Searle - I Married a Computer</A> (https://www.nybooks.com/articles/1999/04/08/i-married-a-computer/). 
Unfortunately, only the opening section is available for free. But Kurzweil's site (<A HREF = "http://www.kurzweilai.net/chapter-2-i-married-a-computer" TARGET = "_top">Searle: I Married a Computer</A> - Defunct) <U><A HREF="#On-Page_Link_972_4">seems</A></U><SUB>4</SUB><a name="On-Page_Return_972_4"></A> to hold an updated version. </li><li>Moreover, there's an ensuing debate between Searle and Kurzweil, which is fully available on-line at the <em>New York Review of Books</em> (<A HREF = "https://www.nybooks.com/articles/1999/05/20/i-married-a-computer-an-exchange/" TARGET = "_top">NY Books: Kurzweil & Searle - I Married a Computer</A> (https://www.nybooks.com/articles/1999/05/20/i-married-a-computer-an-exchange/)). And see my transcripts:-<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_17/Abstract_17002.htm">Kurzweil (Ray) - 'I Married a Computer': An Exchange (between Ray Kurzweil and John Searle)</A>", and<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_17/Abstract_17003.htm">Searle (John) - 'I Married a Computer': An Exchange (between Ray Kurzweil and John Searle)</A>". </li><li>In fact, Kurzweil's site <U><A HREF="#On-Page_Link_972_5">has a bunch of free e-books</A></U><SUB>5</SUB><a name="On-Page_Return_972_5"></A>, ie:-<BR>&rarr; Ray Kurzweil (Editor), <em>Are We Spiritual Machines?</em> (<A HREF = "http://www.kurzweilai.net/ebooks/are-we-spiritual-machines" TARGET = "_top">Kurzweil: Are We Spiritual Machines?</A> (http://www.kurzweilai.net/ebooks/are-we-spiritual-machines)). This contains (as Chapter 2) the critique by Searle noted above. 
<BR>&rarr; Drexler (Eric), <em>Engines of Creation 2.0: The Coming Era of Nanotechnology</em> (<A HREF = "http://www.kurzweilai.net/ebooks/engines-of-creation-book-excerpts-features" TARGET = "_top">Drexler: Engines of Creation</A> - Defunct)<BR>&rarr; Ray Kurzweil, <A HREF = "http://www.kurzweilai.net/ebooks/the-age-of-intelligent-machines" TARGET = "_top">Kurzweil: Age of Intelligent Machines</A> (http://www.kurzweilai.net/ebooks/the-age-of-intelligent-machines)<BR>&rarr; <A HREF = "http://www.kurzweilai.net/ebooks/the-age-of-spiritual-machines" TARGET = "_top">Kurzweil: Age of Spiritual Machines</A> (http://www.kurzweilai.net/ebooks/the-age-of-spiritual-machines)<BR>&rarr; Neil <A HREF = "http://www.kurzweilai.net/ebooks/when-things-start-to-think" TARGET = "_top">Gershenfeld: When Things Start to Think</A> - Defunct</li><li>I dare say that the substance of the <em>Time</em> article is already well worked-over in <em>Are We Spiritual Machines?</em> </li></ul></li><li><a name="Off-Page_Link_Creativity"></a><B>Creativity</B>: <ul type="disc"><li>There's presumably a distinction between rules-based creativity, which is (presumably) what computers can do, and creativity of a less constrained sort, which we don't (yet) know how to get computers to do. </li></ul></li><li><a name="Off-Page_Link_Self"></a><B>Self</B>: <ul type="disc"><li>And is 'self-expression' a <em>façon de parler</em> in this context? Musical composition seems more a skill than a matter of self-expression (as would be a literary composition). I can't see why a sense of self would be necessary for creative composition in either music or the graphic arts. 
Certain <em>Idiot Savants</em> are no doubt adept in these areas, despite autistic tendencies that militate against a sense of self.</li><li>What I have to say on Selves should be under <BR>&rarr; <a name="29"></a><U>Self</U><SUP>6</SUP>, and <BR>&rarr; <a name="29"></a><U>Self-Consciousness</U><SUP>7</SUP>,<BR>though I don't seem to have said anything yet. </li></ul></li><li><a name="Off-Page_Link_Intelligence"></a><B>Intelligence and Consciousness</B>: <ul type="disc"><li>There's a sharp distinction between intelligence and consciousness. </li><li>As far as we know, consciousness is the preserve of organic intelligence. </li><li>We can presume that lots of rather dim animals are phenomenally conscious (even if not self-conscious &rarr; this distinction is important), so there's no link between getting smarter and smarter and then (as a result) becoming phenomenally conscious. </li><li>I'm not sure of the link between intelligence and self-consciousness. </li><li>There's an old <em>Time</em> article, 'Can Machines Think?', stimulated by the Kasparov vs Deep Blue chess match (at <A HREF = "http://content.time.com/time/magazine/article/0,9171,984304,00.html" TARGET = "_top">Time: Can Machines Think?</A> (http://content.time.com/time/magazine/article/0,9171,984304,00.html)). </li></ul></li><li><a name="Off-Page_Link_Imminence"></a><B>Imminence of the 'Singularity'</B>: <ul type="disc"><li>This is predicated on the assumption of continued exponential growth. It's a standard principle of scientific practice to be suspicious of exponentials, at least when they are unprincipled, ie. where there is no underlying theory that would lead us to expect them. </li><li>Also, as noted elsewhere in this discussion, the occurrence of the Singularity relies on the achievement of numerous conceptual and technological breakthroughs that we have no warrant for assuming will happen any time soon. 
</li></ul></li><li><a name="Off-Page_Link_Civilization"></a><B>Human Civilization</B>: <ul type="disc"><li>So far, computers have only enhanced human civilisation. </li><li>'Ending human civilisation' ('as we know it') depends on delivering (in an uncontrolled manner) the various promissory-notes of the <em>Time</em> article. </li></ul></li><li><a name="Off-Page_Link_Faster"></a><B>Faster <em>Faster</em></B>: <ul type="disc"><li>Is it really the case that the rate of improvement in computing power is accelerating, and will it really continue to accelerate indefinitely, if it is currently doing so? </li><li>Note that Kurzweil's graph muddles together speed and cost. See the comments below. </li></ul></li><li><a name="Off-Page_Link_Emulation"></a><B>Emulation</B>: Two points here. <ul type="disc"><li>Firstly, emulation isn't the real thing. Models of hurricanes aren't wet and windy, so why should emulations of consciousness be conscious? </li><li>Secondly, digital computers are serial devices whose components are (now) very quick, while brains are massively parallel devices whose components are very slow. Why should simulating one by the other produce the same (phenomenal) effect, or even be possible at all? </li></ul></li><li><a name="Off-Page_Link_Intelligent_Actions"></a><B>Intelligent Actions</B>: <ul type="disc"><li>The items on the list ('driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties') can all (presumably) be rules-based and situation-driven. No doubt this is true of human intelligence as well (ultimately), but modelling it is not straightforward, as we don't know how the brain does it. The issue isn't really (in this case) to do with 'whether', but 'when', as there are lots of major breakthroughs required before the promissory note can be delivered on. Also, all these functions can be delivered unconsciously (if they can be delivered at all). 
</li></ul></li><li><a name="Off-Page_Link_Smart_people"></a><B>Smart people</B>: <ul type="disc"><li>Does it matter how smart they are? Lots of equally smart people don't share the optimism of the futurologists. </li></ul></li><li><a name="Off-Page_Link_Computer_Power"></a><B>Increasingly Powerful Computers</B>: <ul type="disc"><li>Are there really no reasons to doubt that their exponential growth will continue indefinitely? Miniaturisation of components has to stop soon due to QM effects. So, a radically-new technology is needed. Some ideas are there, but we might get 'stuck' on their delivery, as has been the case for controlled nuclear fusion (<A HREF = "https://en.wikipedia.org/wiki/Fusion_power#Current_status" TARGET = "_top">Wikipedia: Fusion Power</A> (https://en.wikipedia.org/wiki/Fusion_power#Current_status)), which in the 1950s was expected soon, in the 1970s by 2000, and in 2006 'not within 100 years'. </li><li>There's no doubt that computers will continue to get more powerful as hardware and software continue to improve, as they always will. The issue is really over the rate of change (can exponential growth continue indefinitely?) and whether certain conceptual breakthroughs can be made.</li></ul></li><li><a name="Off-Page_Link_Bootstrapping"></a><B>Bootstrapped Development</B>: <ul type="disc"><li>This is an important point, as we certainly use computers to help manufacture computers. But the extrapolation to development may involve the solution of the real 'machine creativity' problem. </li></ul></li><li><a name="Off-Page_Link_Prediction"></a><B>Prediction</B>: <ul type="disc"><li>Is this true? It would be true if machines became 'smarter' than humans in every dimension of 'smartness'. But 'unpredictability' (ie. being non-rules-based) is one of the aspects of machine-intelligence yet to be delivered by AI. </li><li>Also, this argument sounds a bit like the 'you can't know the mind of God (at all)' arguments, which may or may not be sound. 
</li></ul></li><li><a name="Off-Page_Link_Cyborgs"></a><B>Cyborgs</B>: <ul type="disc"><li>This sounds a more promising approach than simulation, and it'd relieve computers of having to realise consciousness. But any cognitive interlinking would still require a fuller understanding of how the brain works than is currently on the horizon. </li><li>See <a name="29"></a><U>Cyborgs</U><SUP>8</SUP> for my thoughts on the matter. </li></ul></li><li><a name="Off-Page_Link_Integration"></a><B>Integration</B>: <ul type="disc"><li>We don't 'integrate' with cars and planes any more than we integrate with computers. They are just tools. Prosthetics are the nearest analogues, but there's a long way from that to true integration. </li></ul></li><li><a name="Off-Page_Link_Nanotechnology"></a><B>Nanotechnology</B>: <ul type="disc"><li>At this stage of the argument, it's not clear how intelligent machines will help repair our bodies and brains (especially 'indefinitely'). Usually nanotechnology is invoked at this stage (see <A HREF = "https://en.wikipedia.org/wiki/Nanotechnology" TARGET = "_top">Wikipedia: Nanotechnology</A> (https://en.wikipedia.org/wiki/Nanotechnology) for an overview). Now, it's true that intelligent machines would be needed to manufacture, and probably program, these myriads of tiny, very specialised machines, but the possibilities are very schematic. There's no evidence that anything workable is around the corner. </li><li>It looks like the free eBook by Eric Drexler, <em>Engines of Creation 2.0: The Coming Era of Nanotechnology</em> (<A HREF = "http://www.kurzweilai.net/ebooks/engines-of-creation-book-excerpts-features" TARGET = "_top">Drexler: Engines of Creation</A> - Defunct), might prove useful. </li></ul></li><li><a name="Off-Page_Link_Consciousness"></a><B>Consciousnesses</B>: <ul type="disc"><li>Just what is meant here? Is this just loose speaking? A thing (an animal) is conscious, and the animal can't be scanned and downloaded anywhere. 
No-one really knows (at the theoretical level) what phenomenal consciousness is, though there are many theories. What's probably intended here is that 'the contents of our brains' would be read and uploaded to some device that can simulate our brains. This, of course, assumes that mind-body substance dualism is false (as it probably is); but even so, and admitting that whatever runs the downloaded software is at best a copy of the original, there's a long way to go before this sort of thing becomes even a worked-out theoretical possibility. </li></ul></li><li><a name="Off-Page_Link_Software"></a><B>Software</B>: <ul type="disc"><li>Well, philosophically-speaking, this is an outrageous idea. It depends on <a name="29"></a><U>what we are</U><SUP>9</SUP>, and we're almost certainly not software, though software is important to us. And there are issues of identity: since software is easy to copy, and copies aren't identical, what reason would an individual have for thinking any particular installed copy was (identical to) him? </li></ul></li><li><a name="Off-Page_Link_Annihilation"></a><B>Annihilation</B>: <ul type="disc"><li>Well, this is certainly something to watch out for, but I dare say it's a way off. It's more of a worry in genetic engineering or (if it gets going in the futurist mini-robot sense) nanotechnology. </li></ul></li><li><a name="Off-Page_Link_Singularity"></a><B>The Singularity</B>: <ul type="disc"><li>This term is defined later, but see <BR>&rarr; <A HREF = "https://en.wikipedia.org/wiki/Technological_singularity" TARGET = "_top">Wikipedia: Technological Singularity</A> (https://en.wikipedia.org/wiki/Technological_singularity) and <BR>&rarr; <A HREF = "https://en.wikipedia.org/wiki/The_Singularity_Is_Near" TARGET = "_top">Wikipedia: The Singularity Is Near</A> (https://en.wikipedia.org/wiki/The_Singularity_Is_Near) <BR>(amongst much else). 
</li></ul></li><li><a name="Off-Page_Link_Moore's_Law"></a><B>Moore's Law</B>: <ul type="disc"><li>See <A HREF = "https://en.wikipedia.org/wiki/Moore%27s_law" TARGET = "_top">Wikipedia: Moore's Law</A> (https://en.wikipedia.org/wiki/Moore%27s_law). </li><li>The Wikipedia article mentions Kurzweil and other futurologists, and the possible breakdown of Moore's Law within the next 5 years or so (ie. well before 2045). It also notes that Moore's Law is a self-fulfilling prophecy, in that the industry has taken it as a paradigm for R&D aims, and that the R&D costs of keeping up with Moore's Law are themselves increasing exponentially. </li></ul><IMG ALIGN=RIGHT ALT="Kurzweil's Graph" WIDTH=412 HEIGHT=293 SRC="../../../Photos/Notes/Kurzweil_Graph.jpg"></li><li><a name="Off-Page_Link_Kurzweil's_Graph"></a><B>Kurzweil's Graph</B>: <ul type="disc"><li>This graph intentionally muddles together speed and cost, but so doing can lead others to draw the wrong conclusions from it. </li><li>Currently, while there continue to be improvements in computing power, the driver behind the continuing exponential growth of Kurzweil's graph is economic, ie. computer hardware is being delivered <U>cheaper</U> faster, not <U>faster</U> faster. </li><li>Even if Kurzweil's graph did continue for ever, it might still not lead to the Singularity, in that the (infinitely cheap) computer hardware might still not deliver what Kurzweil needs. It might still be too slow. </li></ul> </li><li><a name="Off-Page_Link_xxx"></a><B>Dummy Section</B>: <ul type="disc"><li>Details to be supplied later! </li></ul></li></ol><FONT COLOR = "000000"><BR><HR><BR><U><B>In-Page Footnotes</B></U><a name="On-Page_Link_972_3"></A><BR><BR><B>Footnote 3</B>: <ul type="disc"><li>Currently the links are one-way. </li></ul> <a name="On-Page_Link_972_4"></A><B>Footnote 4</B>: <ul type="disc"><li>Or, 'seemed'! 
</li></ul><a name="On-Page_Link_972_5"></A><B>Footnote 5</B>: <ul type="disc"><li>Some of these links now fail, as indicated. </li><li>Some other links work, but don't have the same text. </li><li>I've not had time to chase them up and make repairs, if possible. </li></ul><BR><BR><FONT COLOR = "000000"></P><B>Note last updated:</B> 07/08/2018 21:18:43<BR> </P><HR> <P ALIGN="Left"><FONT Size = 2 FACE="Arial"><B><U>Footnote 1: (Transhumanism)</U></B></P> <P ALIGN="Justify"><FONT Size = 2 FACE="Arial"> <u><U><A HREF="#On-Page_Link_939_1">Plug Note</A></U><SUB>1</SUB><a name="On-Page_Return_939_1"></A></u><ul type="disc"><li>Transhumanism is the thesis that we human beings can (in principle at least) transcend our animal nature and escape, or at least augment (in whole or part), our animal bodies. </li><li>The movement hopes to extend our lifespans, either considerably or indefinitely. </li><li>One particular strand of this hope is to escape our mortal bodies altogether by <a name="29"></a>'uploading' ourselves to a digital computer. </li><li>I'm very doubtful about the possibility (practical or theoretical) of most of these aims, as well as their desirability. However, while this topic is on the borders of sci-fi, it is a challenge to <a name="29"></a>animalism in that it presupposes that <a name="29"></a>we can transcend our biological origins in some way or other.</li><li>The premier transhumanist of my acquaintance is <A HREF = "../../../Authors/B/Author_Bostrom (Nick).htm">Nick Bostrom</A>. He has also argued that we might be (and indeed probably are) living in a computer simulation. 
See:- <BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_06/Abstract_6319.htm">Bostrom (Nick) - Are You Living in a Computer Simulation?</A>", <BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_06/Abstract_6318.htm">Weatherson (Brian) - Are You a Sim?</A>", and <BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_06/Abstract_6321.htm">Bostrom (Nick) - The Simulation Argument: Reply to Weatherson</A>".</li><li>A light-hearted introduction to the ideas and personalities is <BR>&rarr; "<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6513.htm">O'Connell (Mark) - To be a Machine</A>", </li><li>And the main text for this topic is <BR>&rarr; "<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6339.htm">More (Max) & Vita-More (Natasha) - The Transhumanist Reader</A>". <ol type="i"><li>It's probably best to start with the Introductions to the book's nine Parts:- <ul type="square"><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20798.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Roots and Core Themes - Introduction</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20803.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Human Enhancement: The Somatic Sphere - Introduction</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20804.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Human Enhancement: The Cognitive Sphere - Introduction</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20805.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Core Technologies - Introduction</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20806.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Engines of Life: Identity and Beyond Death - Introduction</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20807.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Enhanced Decision-Making - 
Introduction</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20808.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Biopolitics and Policy - Introduction</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20809.htm">More (Max) & Vita-More (Natasha) - Transhumanism: Future Trajectories: Singularity - Introduction</A>", and</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20810.htm">More (Max) & Vita-More (Natasha) - Transhumanism: The World's Most Dangerous Idea - Introduction</A>". </li></ul></li><li>While the whole book is interesting, the other papers (from a quick look) that are most germane to my Thesis are:- <ul type="square"><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20813.htm">Bostrom (Nick) - Why I Want to be a Posthuman When I Grow Up</A>", </li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20847.htm">Brin (David), Broderick (Damien), Bostrom (Nick), Chislenko (Alexander), Hanson (Robin), More (Max), Nielsen (Michael) & Sandberg (Anders) - A Critical Discussion of Vinge's Singularity Concept</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20849.htm">Broderick (Damien) - Trans and Post</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20821.htm">Clark (Andy) - Re-Inventing Ourselves: The Plasticity of Embodiment, Sensing, and Mind</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20822.htm">Goertzel (Ben) - Artificial General Intelligence and the Future of Humanity</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20828.htm">Hall (J. Storrs) - Nanocomputers</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20833.htm">Hughes (James) - Transhumanism and Personal Identity</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20824.htm">Koene (Randal A.) 
- Uploading to Substrate-Independent Minds</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20830.htm">Kurzweil (Ray) & Drexler (K. Eric) - Dialogue between Ray Kurzweil and Eric Drexler</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20825.htm">Merkle (Ralph C.) - Uploading</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20827.htm">Moravec (Hans) - Pigs in Cyberspace</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20851.htm">More (Max) - A Letter to Mother Nature</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20811.htm">More (Max) - The Philosophy of Transhumanism</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20834.htm">Prisco (Giulio) - Transcendent Engineering</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20829.htm">Rose (Michael R.) - Immortalist Fictions and Strategies</A>",</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20846.htm">Sandberg (Anders) - An Overview of Models of Technological Singularity</A>", and</li><li>"<A HREF = "../../../PaperSummaries/PaperSummary_20/PaperSummary_20845.htm">Vinge (Vernor) - Technological Singularity</A>". 
</li></ul></li></ol></li><li>This topic connects to a number of related items:- <ol type="1"><li><a name="29"></a>Androids,</li><li><a name="29"></a>Chimera, </li><li><a name="29"></a>Cyborgs,</li><li><a name="29"></a>Non-Human Persons</li><li>Superintelligence<BR>&rarr; see "<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6362.htm">Bostrom (Nick) - Superintelligence: Paths, Dangers, Strategies</A>"</li><li>The Singularity<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_21/Abstract_21672.htm">Chalmers (David) - The singularity: A philosophical analysis</A>"</li><li><a name="29"></a>Teletransportation, </li></ol>&rarr; and maybe others &hellip; </li><li>For a page of <U><A HREF="#On-Page_Link_939_10">Links</A></U><SUB>10</SUB><a name="On-Page_Return_939_10"></A> to this Note, <A HREF = "../../Notes_9/Notes_939_Links.htm">Click here</A>.</li><li>Works on this topic that <U><A HREF="#On-Page_Link_939_11">I've actually read</A></U><SUB>11</SUB><a name="On-Page_Return_939_11"></A>, <U><A HREF="#On-Page_Link_939_12">include</A></U><SUB>12</SUB><a name="On-Page_Return_939_12"></A> the following:- <ol type="i"><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21149.htm">Dainton (Barry) - Self: Philosophy In Transit: Prologue</A>", Dainton</li><li>"<A HREF = "../../../Abstracts/Abstract_23/Abstract_23360.htm">Hawthorne (John X.) - Are You Ready For The Cyborg Technology Coming In 2018?</A>", Hawthorne</li><li>"<A HREF = "../../../Abstracts/Abstract_17/Abstract_17206.htm">Jones (D. Gareth) - A Christian Perspective on Human Enhancement</A>", Jones</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20229.htm">Marshall (Richard) & Olson (Eric) - Eric T. Olson: The Philosopher with No Hands</A>", Marshall & Olson</li><li>"<A HREF = "../../../Abstracts/Abstract_23/Abstract_23280.htm">Price (Huw), Cave (Stephen), Iida (Fumiya), Etc. 
- Preparing for the future: artificial intelligence and us: Part 1</A>", Price Etc</li><li>"<A HREF = "../../../Abstracts/Abstract_23/Abstract_23286.htm">Price (Huw), Cave (Stephen), Iida (Fumiya), Etc. - Preparing for the future: artificial intelligence and us: Part 2</A>", Price Etc</li></ol></li><li>A reading list (where not covered elsewhere) might start with:- <ol type="i"><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20727.htm">Agar (Nicholas) - Enhancing Humanity</A>", Agar</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20995.htm">Agar (Nicholas) - Whereto Transhumanism?: The Literature Reaches a Critical Mass</A>", Agar</li><li>"<A HREF = "../../../Abstracts/Abstract_15/Abstract_15973.htm">Alexander (Denis) - Enhancing humans or a new creation?</A>", Alexander</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22108.htm">Andersen (Ross) - Omens</A>", Andersen</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21651.htm">Cerullo (Michael A.) 
- Uploading and Branching Identity</A>", Cerullo</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22309.htm">Chatfield (Tom) - Automated ethics</A>", Chatfield</li><li>"<A HREF = "../../../BookSummaries/BookSummary_05/BookPaperAbstracts/BookPaperAbstracts_5744.htm">Christian (Brian) - The Most Human Human: A Defence of Humanity in the Age of the Computer</A>", Christian</li><li>"<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6364.htm">Dainton (Barry) - Self: Philosophy In Transit</A>", Dainton</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22322.htm">Deutsch (David) - Creative blocks</A>", Deutsch</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22232.htm">Graziano (Michael) - Endless fun</A>", Graziano</li><li>"<A HREF = "../../../Abstracts/Abstract_16/Abstract_16893.htm">Grossman (Lev), Kurzweil (Ray) - 2045: The Year Man Becomes Immortal</A>", Grossman</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21671.htm">Hayworth (Kenneth) - Killed by bad philosophy: Why brain preservation followed by mind uploading is a cure for death</A>", Hayworth</li><li>"<A HREF = "../../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4136.htm">Kurzweil (Ray) - The Age of Spiritual Machines</A>", Kurzweil<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_17/Abstract_17003.htm">Searle (John) - 'I Married a Computer': An Exchange (between Ray Kurzweil and John Searle)</A>", Searle<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_17/Abstract_17002.htm">Kurzweil (Ray) - 'I Married a Computer': An Exchange (between Ray Kurzweil and John Searle)</A>", Kurzweil</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22174.htm">Medlock (Ben) - The body is the missing link for truly intelligent machines</A>", Medlock</li><li>"<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6339.htm">More (Max) & Vita-More (Natasha) - The Transhumanist 
Reader</A>", More</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21126.htm">Oderberg (David) - Could There Be a Superhuman Species?</A>", Oderberg</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22240.htm">Price (Huw) - Now it's time to prepare for the Machinocene</A>", Price</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21673.htm">Sandberg (Anders) & Bostrom (Nick) - Whole Brain Emulation: A Roadmap</A>", Sandberg & Bostrom </li></ol></li><li>This is mostly a <a name="29"></a>place-holder. </li></ul><BR><BR><BR><HR><BR><U><B>In-Page Footnotes</B></U><a name="On-Page_Link_939_1"></A><BR><BR><B>Footnote 1</B>: <ul type="disc"><li>A number of my philosophical Notes are 'promissory notes', currently only listing the books and papers (if any) I possess on the topic concerned. </li><li>I've decided to add some text (whether by way of motivation, or something more substantive) for all these identified topics related to my Thesis.</li><li>As I want to do this fairly quickly, the text may be confused or show surprising ignorance. </li><li>The reader (if such exists) will have to bear with me, and display the principle of charity while this footnote exists. </li></ul><a name="On-Page_Link_939_10"></A><B>Footnote 10</B>: <ul type="disc"><li>If only a 'non-updating' run has been made, the links are only one-way, ie. from the page of links to the objects that reference this Note by mentioning the appropriate key-word(s). The links are also only indicative, as they haven't yet been confirmed as relevant. </li><li>Once an updating run has been made, links run both ways, and links from this Notes page (from the 'Authors, Books & Papers Citing this Note' and 'Summary of Note Links to this Page' sections) are to the 'point of link' within the page rather than to the page generically. Links from the 'links' page remain generic. </li><li>There are two sorts of updating runs: for Notes and for other Objects. 
The reason for this is that Notes are archived, and too many archived versions would be created if this process were repeatedly run. </li></ul> <a name="On-Page_Link_939_11"></A><B>Footnote 11</B>: <ul type="disc"><li>Frequently I'll have made copious marginal annotations, and sometimes have written up a review-note. </li><li>In the former case, I intend to transfer the annotations into electronic form as soon as I can find the time. </li><li>In the latter case, I will have remarked on the fact against the citation, and will integrate the comments into this Note in due course. </li><li>My intention is to incorporate into these Notes comments on material I've already read rather than engage with unread material at this stage. </li></ul><a name="On-Page_Link_939_12"></A><B>Footnote 12</B>: <ul type="disc"><li>I may have read others in between updates of this Note, in which case they will be marked as such in the 'References and Reading List' below.</li><li>Papers or Books partially read have a rough %age based on the time spent versus the time expected. </li></ul> </P><B>Note last updated:</B> 17/08/2018 21:59:02<BR><BR><HR> <P ALIGN="Left"><FONT Size = 2 FACE="Arial"><B><U>Footnote 2: (2045: The Year Man Becomes Immortal)</U></B></P> <P ALIGN="Justify"><FONT Size = 2 FACE="Arial"> <u>Introduction</u><ol type="1"><li>Extracted from <em>Time On-Line</em> on 14th February 2011; there were some extra diagrams / photos in the hard-copy edition that were not repeated in the on-line version. The article bears comparison with "<A HREF = "../../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4091.htm">Regis (Ed) - Great Mambo Chicken and the Transhuman Condition: Science Slightly over the Edge</A>", which hails from 1990, and which was then reporting the making of similar claims. This is a very superficial article, and there's obviously a lot more detailed stuff on-line (I've given some links below), but this is a useful jumping-off point. 
</li><li>The article is (currently) available on-line (at <A HREF = "http://content.time.com/time/magazine/article/0,9171,2048299,00.html" TARGET = "_top">Time: 2045 - The Year Man Becomes Immortal</A> (http://content.time.com/time/magazine/article/0,9171,2048299,00.html)). I intend to make a lot of brief footnotes, but more extensive commentary will become <a name="29"></a>available here. The way to connect the <em>Time</em> article to this Note is via the footnotes on this page – they link directly to the sections in the Note. Currently there may be nothing extra added, but over time I'll reduce the duplication. </li><li>There was a very extensive commentary on-line, which ran to over 170 pages when I extracted it 5 days after the article was published. It's of very variable quality. If I get time I'll try to review it and pick out the popular themes. </li></ol><BR><BR><U>Full Text</U><FONT COLOR = "800080"><ol type="1"><li>On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond <U><A HREF="#On-Page_Link_1273_2">Kurzweil</A></U><SUB>2</SUB><a name="On-Page_Return_1273_2"></A> appeared as a guest on a game show called <EM>I've Got a Secret</EM>. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panellists – they included a comedian and a former Miss America – had to guess what it was. </li><li>On the show, the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200. </li><li>Kurzweil then demonstrated the computer, which he built himself – a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panellists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. 
Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher. </li><li>But Kurzweil would spend much of the rest of his career working out what his demonstration meant. <U><A HREF="#On-Page_Link_1273_3">Creating</A></U><SUB>3</SUB><a name="On-Page_Return_1273_3"></A> a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a <U><A HREF="#On-Page_Link_1273_4">self</A></U><SUB>4</SUB><a name="On-Page_Return_1273_4"></A>. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic <U><A HREF="#On-Page_Link_1273_5">intelligence</A></U><SUB>5</SUB><a name="On-Page_Return_1273_5"></A> and artificial intelligence. </li><li>That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity – our bodies, our minds, our civilization – will be completely and irreversibly transformed. He believes that this moment is not only inevitable but <U><A HREF="#On-Page_Link_1273_6">imminent</A></U><SUB>6</SUB><a name="On-Page_Return_1273_6"></A>. According to his calculations, the end of human <U><A HREF="#On-Page_Link_1273_7">civilization</A></U><SUB>7</SUB><a name="On-Page_Return_1273_7"></A> as we know it is about 35 years away. </li><li>Computers are getting faster. Everybody knows that. Also, computers are getting faster <EM>faster</EM> – that is, the rate at which they're getting faster is increasing. </li><li>True? <U><A HREF="#On-Page_Link_1273_8">True</A></U><SUB>8</SUB><a name="On-Page_Return_1273_8"></A>. 
</li><li>So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of <U><A HREF="#On-Page_Link_1273_9">emulating</A></U><SUB>9</SUB><a name="On-Page_Return_1273_9"></A> whatever it is our brains are doing when they create consciousness – not just doing arithmetic very quickly or composing piano music but <U><A HREF="#On-Page_Link_1273_10">also</A></U><SUB>10</SUB><a name="On-Page_Return_1273_10"></A> driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties. </li><li>If you can swallow that idea, and Kurzweil and a lot of other very <U><A HREF="#On-Page_Link_1273_11">smart</A></U><SUB>11</SUB><a name="On-Page_Return_1273_11"></A> people can, then all bets are off. From that point on, there's no <U><A HREF="#On-Page_Link_1273_12">reason</A></U><SUB>12</SUB><a name="On-Page_Return_1273_12"></A> to think computers would <U><A HREF="#On-Page_Link_1273_13">stop</A></U><SUB>13</SUB><a name="On-Page_Return_1273_13"></A> getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own <U><A HREF="#On-Page_Link_1273_14">development</A></U><SUB>14</SUB><a name="On-Page_Return_1273_14"></A> from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville. </li><li>Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) 
we might one day share the planet, <U><A HREF="#On-Page_Link_1273_15">because</A></U><SUB>15</SUB><a name="On-Page_Return_1273_15"></A> if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent <U><A HREF="#On-Page_Link_1273_16">cyborgs</A></U><SUB>16</SUB><a name="On-Page_Return_1273_16"></A>, using computers to extend our intellectual abilities the <U><A HREF="#On-Page_Link_1273_17">same</A></U><SUB>17</SUB><a name="On-Page_Return_1273_17"></A> way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans <U><A HREF="#On-Page_Link_1273_18">indefinitely</A></U><SUB>18</SUB><a name="On-Page_Return_1273_18"></A>. Maybe we'll scan our <U><A HREF="#On-Page_Link_1273_19">consciousnesses</A></U><SUB>19</SUB><a name="On-Page_Return_1273_19"></A> into computers and live inside them as <U><A HREF="#On-Page_Link_1273_20">software</A></U><SUB>20</SUB><a name="On-Page_Return_1273_20"></A>, forever, virtually. Maybe the computers will turn on humanity and <U><A HREF="#On-Page_Link_1273_21">annihilate</A></U><SUB>21</SUB><a name="On-Page_Return_1273_21"></A> us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the <U><A HREF="#On-Page_Link_1273_22">Singularity</A></U><SUB>22</SUB><a name="On-Page_Return_1273_22"></A>. </li><li>The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science <U><A HREF="#On-Page_Link_1273_23">fiction</A></U><SUB>23</SUB><a name="On-Page_Return_1273_23"></A>, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. 
There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal <a name="29"></a>cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation. </li><li>People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of <U><A HREF="#On-Page_Link_1273_25">language</A></U><SUB>25</SUB><a name="On-Page_Return_1273_25"></A>. </li><li>The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion": <ul type="disc"><EM>Let an <U><A HREF="#On-Page_Link_1273_26">ultraintelligent</A></U><SUB>26</SUB><a name="On-Page_Return_1273_26"></A> machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last <U><A HREF="#On-Page_Link_1273_27">invention</A></U><SUB>27</SUB><a name="On-Page_Return_1273_27"></A> that man need ever make. 
</EM> </ul></li><li>The word <EM>singularity</EM> is borrowed from astrophysics: it refers to a point in space-time – for example, inside a black hole – at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 <U><A HREF="#On-Page_Link_1273_28">years</A></U><SUB>28</SUB><a name="On-Page_Return_1273_28"></A>, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended." </li><li>By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on <EM>I've Got a Secret</EM>. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind – Stevie Wonder was customer No. 1 – and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology. </li><li>But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in <EM>The Singularity Is Near</EM>, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called <EM>The Transcendent Man</EM>.) 
Bill Gates has called him "the best person I know at <U><A HREF="#On-Page_Link_1273_29">predicting</A></U><SUB>29</SUB><a name="On-Page_Return_1273_29"></A> the future of artificial intelligence."</li><li>In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the <U><A HREF="#On-Page_Link_1273_30">numbers</A></U><SUB>30</SUB><a name="On-Page_Return_1273_30"></A>, and this is what they say, so what else can I tell you? </li><li>Kurzweil's interest in humanity's <a name="29"></a>cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you <U><A HREF="#On-Page_Link_1273_32">finished</A></U><SUB>32</SUB><a name="On-Page_Return_1273_32"></A> a project," he says. "So it's like skeet shooting – you can't shoot at the target." He knew about Moore'<U><A HREF="#On-Page_Link_1273_33">s</A></U><SUB>33</SUB><a name="On-Page_Return_1273_33"></A> law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. 
Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can <U><A HREF="#On-Page_Link_1273_34">buy</A></U><SUB>34</SUB><a name="On-Page_Return_1273_34"></A> for $1,000. <IMG ALIGN=RIGHT ALT="Kurzweil's Graph" WIDTH=412 HEIGHT=293 SRC="../../../Photos/Notes/Kurzweil_Graph.jpg"></li><li>As it turned out, Kurzweil's numbers looked a lot <U><A HREF="#On-Page_Link_1273_35">like</A></U><SUB>35</SUB><a name="On-Page_Return_1273_35"></A> Moore's. They doubled every couple of years. Drawn as graphs, they both made <U><A HREF="#On-Page_Link_1273_36">exponential</A></U><SUB>36</SUB><a name="On-Page_Return_1273_36"></A> curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his backward through the decades of pre-transistor computing technologies like relays and vacuum tubes, all the way back to 1900. </li><li>Kurzweil then ran the numbers on a whole bunch of other key technological <U><A HREF="#On-Page_Link_1273_37">indexes</A></U><SUB>37</SUB><a name="On-Page_Return_1273_37"></A> – the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and <U><A HREF="#On-Page_Link_1273_38">beyond</A></U><SUB>38</SUB><a name="On-Page_Return_1273_38"></A> – the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how <U><A HREF="#On-Page_Link_1273_39">smooth</A></U><SUB>39</SUB><a name="On-Page_Return_1273_39"></A> these trajectories are," he says. 
"Through thick and thin, war and <U><A HREF="#On-Page_Link_1273_40">peace</A></U><SUB>40</SUB><a name="On-Page_Return_1273_40"></A>, boom times and recessions." Kurzweil calls it the law of accelerating <U><A HREF="#On-Page_Link_1273_41">returns</A></U><SUB>41</SUB><a name="On-Page_Return_1273_41"></A>: technological progress happens exponentially, not linearly.</li><li>Then he extended the curves into the <U><A HREF="#On-Page_Link_1273_42">future</A></U><SUB>42</SUB><a name="On-Page_Return_1273_42"></A>, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not <U><A HREF="#On-Page_Link_1273_43">evolved</A></U><SUB>43</SUB><a name="On-Page_Return_1273_43"></A> to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains." </li><li>Here's what the exponential curves told him. We will successfully <U><A HREF="#On-Page_Link_1273_44">reverse-engineer</A></U><SUB>44</SUB><a name="On-Page_Return_1273_44"></A> the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity – never say he's not conservative – at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today. </li><li>The Singularity isn't just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. 
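The arithmetic behind these extrapolations is simple compound doubling. A minimal sketch (using illustrative round figures and an assumed fixed two-year doubling period, not Kurzweil's actual data):

```python
# Sketch of the arithmetic behind the "law of accelerating returns".
# The doubling period and baseline are illustrative assumptions.

def price_performance(base: float, years: float, doubling_period: float = 2.0) -> float:
    """Compute-per-$1,000 after `years`, if it doubles every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

# Exponential vs. linear growth over the 34 years from 2011 to 2045:
exponential = price_performance(1.0, 34)   # 2**17 = 131,072-fold
linear = 1.0 + 34 * 0.5                    # a linear trend adds, it doesn't multiply
print(f"exponential: {exponential:,.0f}x, linear: {linear:.0f}x")
```

The contrast with the linear row is the point of Kurzweil's "our built-in predictors are linear" remark: a two-year doubling compounds to a factor of 2^17 over 34 years, while a linear trend merely accumulates.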
Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as <U><A HREF="#On-Page_Link_1273_45">Singularitarians</A></U><SUB>45</SUB><a name="On-Page_Return_1273_45"></A>. </li><li>Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable <U><A HREF="#On-Page_Link_1273_46">diversity</A></U><SUB>46</SUB><a name="On-Page_Return_1273_46"></A> of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a <U><A HREF="#On-Page_Link_1273_46">worldview</A></U><SUB>47=46</SUB><a name="On-Page_Return_1273_47"></A>. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely <U><A HREF="#On-Page_Link_1273_46">everything</A></U><SUB>48=46</SUB><a name="On-Page_Return_1273_48"></A>. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence. </li><li>In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) 
Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology. </li><li>At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in grey parrots and the professional magician and debunker James "the Amazing" <U><A HREF="#On-Page_Link_1273_46">Randi</A></U><SUB>49=46</SUB><a name="On-Page_Return_1273_49"></A>. The atmosphere was a curious blend of Davos and UFO convention. Proponents of sea-steading – the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters – handed out pamphlets. An <a name="29"></a>android chatted with visitors in one corner. </li><li>After artificial intelligence, the most talked-about topic at the 2010 summit was life <U><A HREF="#On-Page_Link_1273_46">extension</A></U><SUB>51=46</SUB><a name="On-Page_Return_1273_51"></A>. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but <U><A HREF="#On-Page_Link_1273_46">solvable</A></U><SUB>52=46</SUB><a name="On-Page_Return_1273_52"></A> problems. Death is one of them. Old age is an <U><A HREF="#On-Page_Link_1273_46">illness</A></U><SUB>53=46</SUB><a name="On-Page_Return_1273_53"></A> like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here. 
</li><li>For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with <U><A HREF="#On-Page_Link_1273_46">telomerase</A></U><SUB>54=46</SUB><a name="On-Page_Return_1273_54"></A>? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger. </li><li>Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable – rather like the heat death of the universe – is simply ridiculous," he says. "It's just childish. The human body is a <U><A HREF="#On-Page_Link_1273_46">machine</A></U><SUB>55=46</SUB><a name="On-Page_Return_1273_55"></A> that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. 
The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable." </li><li>Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.</li><li>But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive <U><A HREF="#On-Page_Link_1273_46">until</A></U><SUB>56=46</SUB><a name="On-Page_Return_1273_56"></A> the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced <U><A HREF="#On-Page_Link_1273_46">nanotechnology</A></U><SUB>57=46</SUB><a name="On-Page_Return_1273_57"></A>, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to <U><A HREF="#On-Page_Link_1273_46">transfer</A></U><SUB>58=46</SUB><a name="On-Page_Return_1273_58"></A> our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being <U><A HREF="#On-Page_Link_1273_46">functionally</A></U><SUB>59=46</SUB><a name="On-Page_Return_1273_59"></A> <U><A HREF="#On-Page_Link_1273_46">immortal</A></U><SUB>60=46</SUB><a name="On-Page_Return_1273_60"></A>. </li><li>It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. 
Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity – that seems to be particularly <U><A HREF="#On-Page_Link_1273_46">controversial</A></U><SUB>61=46</SUB><a name="On-Page_Return_1273_61"></A>. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have <U><A HREF="#On-Page_Link_1273_46">religion</A></U><SUB>62=46</SUB><a name="On-Page_Return_1273_62"></A>." </li><li>Of course, a lot of people think the Singularity is nonsense – a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become <U><A HREF="#On-Page_Link_1273_46">intelligent</A></U><SUB>63=46</SUB><a name="On-Page_Return_1273_63"></A>. </li><li>The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the <U><A HREF="#On-Page_Link_1273_46">kind</A></U><SUB>64=46</SUB><a name="On-Page_Return_1273_64"></A> of intelligence we associate with humans or even with talking computers in movies – HAL or C3PO or Data. Actual AIs tend to be able to master only one highly <U><A HREF="#On-Page_Link_1273_46">specific</A></U><SUB>65=46</SUB><a name="On-Page_Return_1273_65"></A> domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. 
They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't <U><A HREF="#On-Page_Link_1273_46">exist</A></U><SUB>66=46</SUB><a name="On-Page_Return_1273_66"></A> yet. </li><li>Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can'<U><A HREF="#On-Page_Link_1273_46">t</A></U><SUB>67=46</SUB><a name="On-Page_Return_1273_67"></A> be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and <U><A HREF="#On-Page_Link_1273_46">analog</A></U><SUB>68=46</SUB><a name="On-Page_Return_1273_68"></A> to replicate in <U><A HREF="#On-Page_Link_1273_46">digital</A></U><SUB>69=46</SUB><a name="On-Page_Return_1273_69"></A> silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial <U><A HREF="#On-Page_Link_1273_46">explosion</A></U><SUB>70=46</SUB><a name="On-Page_Return_1273_70"></A> of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude. 
</li><li>Underlying the practical challenges are a host of <U><A HREF="#On-Page_Link_1273_46">philosophical</A></U><SUB>71=46</SUB><a name="On-Page_Return_1273_71"></A> ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being – in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical <U><A HREF="#On-Page_Link_1273_46">automaton</A></U><SUB>72=46</SUB><a name="On-Page_Return_1273_72"></A> without the mysterious spark of consciousness – a machine with no ghost in it? And how would we <U><A HREF="#On-Page_Link_1273_46">know</A></U><SUB>73=46</SUB><a name="On-Page_Return_1273_73"></A>? </li><li>Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my <U><A HREF="#On-Page_Link_1273_46">consciousness</A></U><SUB>74=46</SUB><a name="On-Page_Return_1273_74"></A> into a computer, am I still <U><A HREF="#On-Page_Link_1273_46">me</A></U><SUB>75=46</SUB><a name="On-Page_Return_1273_75"></A>? What are the geopolitics and the <U><A HREF="#On-Page_Link_1273_46">socioeconomics</A></U><SUB>76=46</SUB><a name="On-Page_Return_1273_76"></A> of the Singularity? Who decides who gets to be immortal? Who draws the <U><A HREF="#On-Page_Link_1273_46">line</A></U><SUB>77=46</SUB><a name="On-Page_Return_1273_77"></A> between sentient and non-sentient? And as we approach immortality, omniscience and omnipotence, will our lives still have <U><A HREF="#On-Page_Link_1273_46">meaning</A></U><SUB>78=46</SUB><a name="On-Page_Return_1273_78"></A>? By beating death, will we have lost our essential <U><A HREF="#On-Page_Link_1273_46">humanity</A></U><SUB>79=46</SUB><a name="On-Page_Return_1273_79"></A>? 
</li><li>Kurzweil admits that there's a fundamental level of <U><A HREF="#On-Page_Link_1273_46">risk</A></U><SUB>80=46</SUB><a name="On-Page_Return_1273_80"></A> associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to <U><A HREF="#On-Page_Link_1273_46">do</A></U><SUB>81=46</SUB><a name="On-Page_Return_1273_81"></A>. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent <a name="29"></a>cyborg to understand that introducing a superior life-form into your own biosphere is a basic <U><A HREF="#On-Page_Link_1273_46">Darwinian</A></U><SUB>83=46</SUB><a name="On-Page_Return_1273_83"></A> error. </li><li>If the Singularity is coming, these questions are going to get <U><A HREF="#On-Page_Link_1273_46">answers</A></U><SUB>84=46</SUB><a name="On-Page_Return_1273_84"></A> whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by <U><A HREF="#On-Page_Link_1273_46">banning</A></U><SUB>85=46</SUB><a name="On-Page_Return_1273_85"></A> technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies <U><A HREF="#On-Page_Link_1273_46">underground</A></U><SUB>86=46</SUB><a name="On-Page_Return_1273_86"></A>, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools." </li><li>Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. 
He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.</li><li>Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental <U><A HREF="#On-Page_Link_1273_46">difference</A></U><SUB>87=46</SUB><a name="On-Page_Return_1273_87"></A> between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be <U><A HREF="#On-Page_Link_1273_46">modelled</A></U><SUB>88=46</SUB><a name="On-Page_Return_1273_88"></A> or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, 'Oh, Kurzweil is underestimating the complexity of <U><A HREF="#On-Page_Link_1273_46">reverse-engineering</A></U><SUB>89=46</SUB><a name="On-Page_Return_1273_89"></A> of the human brain or the complexity of biology.' But I don't believe I'm underestimating the challenge. I think they're underestimating the power of <U><A HREF="#On-Page_Link_1273_46">exponential</A></U><SUB>90=46</SUB><a name="On-Page_Return_1273_90"></A> growth." </li><li>This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a <U><A HREF="#On-Page_Link_1273_46">neuron-by-neuron</A></U><SUB>91=46</SUB><a name="On-Page_Return_1273_91"></A> simulation of a mammalian brain, using IBM's Blue Gene super-computer. 
So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a <U><A HREF="#On-Page_Link_1273_46">complete</A></U><SUB>92=46</SUB><a name="On-Page_Return_1273_92"></A> virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to <U><A HREF="#On-Page_Link_1273_46">educate</A></U><SUB>93=46</SUB><a name="On-Page_Return_1273_93"></A> the brain, and who knows how long that would take?) </li><li>By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the <U><A HREF="#On-Page_Link_1273_46">implications</A></U><SUB>94=46</SUB><a name="On-Page_Return_1273_94"></A> are too fantastic. I've tried to push myself to really look." </li><li>In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the <U><A HREF="#On-Page_Link_1273_46">molecular</A></U><SUB>95=46</SUB><a name="On-Page_Return_1273_95"></A> level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take <U><A HREF="#On-Page_Link_1273_46">charge</A></U><SUB>96=46</SUB><a name="On-Page_Return_1273_96"></A> of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, <U><A HREF="#On-Page_Link_1273_46">rewritten</A></U><SUB>97=46</SUB><a name="On-Page_Return_1273_97"></A>. 
Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father <U><A HREF="#On-Page_Link_1273_46">back</A></U><SUB>98=46</SUB><a name="On-Page_Return_1273_98"></A> to life. </li><li>We can scan our consciousnesses into computers and enter a <U><A HREF="#On-Page_Link_1273_46">virtual</A></U><SUB>99=46</SUB><a name="On-Page_Return_1273_99"></A> existence or swap our <U><A HREF="#On-Page_Link_1273_46">bodies</A></U><SUB>100=46</SUB><a name="On-Page_Return_1273_100"></A> for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of <U><A HREF="#On-Page_Link_1273_46">centuries</A></U><SUB>101=46</SUB><a name="On-Page_Return_1273_101"></A>, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species. </li><li>Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of <U><A HREF="#On-Page_Link_1273_46">nonsentience</A></U><SUB>102=46</SUB><a name="On-Page_Return_1273_102"></A>. </li><li>But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. 
Is it an unimaginable step to take the iPhones out of our hands and put them into our <U><A HREF="#On-Page_Link_1273_46">skulls</A></U><SUB>103=46</SUB><a name="On-Page_Return_1273_103"></A>? </li><li>Already 30,000 patients with Parkinson's disease have neural <U><A HREF="#On-Page_Link_1273_46">implants</A></U><SUB>104=46</SUB><a name="On-Page_Return_1273_104"></A>. Google is experimenting with computers that can drive cars. There are more than 2,000 <U><A HREF="#On-Page_Link_1273_46">robots</A></U><SUB>105=46</SUB><a name="On-Page_Return_1273_105"></A> fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire <U><A HREF="#On-Page_Link_1273_46">room</A></U><SUB>106=46</SUB><a name="On-Page_Return_1273_106"></A>, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive <U><A HREF="#On-Page_Link_1273_46">gradually</A></U><SUB>107=46</SUB><a name="On-Page_Return_1273_107"></A>, bit by bit, and this will have been one of the bits. </li><li>A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers (except that, unlike the Founding Fathers, they'll still be alive to get credit), or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future. </li><li>But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. 
You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further <U><A HREF="#On-Page_Link_1273_46">inside</A></U><SUB>108=46</SUB><a name="On-Page_Return_1273_108"></A> it than anyone ever has before. </li></ol></FONT><BR><BR><BR><HR><BR><U><B>In-Page Footnotes</B></U><a name="On-Page_Link_1273_2"></A><BR><BR><B>Footnote 2</B>: <ul type="disc"><li><B>Kurzweil</B>: See <A HREF = "https://en.wikipedia.org/wiki/Ray_Kurzweil" TARGET = "_top">Wikipedia: Ray Kurzweil</A> (https://en.wikipedia.org/wiki/Ray_Kurzweil), <A HREF = "http://www.kurzweilai.net/" TARGET = "_top">Kurzweil: Accelerating Intelligence</A> (http://www.kurzweilai.net/), and much else besides. I seem to have one of his books - "<A HREF = "../../../BookSummaries/BookSummary_04/BookPaperAbstracts/BookPaperAbstracts_4136.htm">Kurzweil (Ray) - The Age of Spiritual Machines</A>". <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Kurzweil">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_3"></A><B>Footnote 3</B>: <ul type="disc"><li><B>Creativity</B>: there's presumably a distinction between rules-based creativity, which is what (presumably) computers can do, and creativity of a less constrained sort, that we don't know how to get computers to do (yet)? <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Creativity">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_4"></A><B>Footnote 4</B>: <ul type="disc"><li><B>Self</B>: and is 'self-expression' a façon de parler? <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Self">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_5"></A><B>Footnote 5</B>: <ul type="disc"><li><B>Intelligence and Consciousness</B>: there's a sharp distinction between intelligence and consciousness. As far as we know, consciousness is the preserve of organic intelligence. We can presume that lots of rather dim animals are phenomenally conscious (even if not self-conscious, and the distinction is important), so there's no link between getting smarter and smarter and then (as a result) getting phenomenally conscious. I'm not sure of the link between intelligence and self-consciousness. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Intelligence">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_6"></A><B>Footnote 6</B>: <ul type="disc"><li><B>Imminence of the 'Singularity'</B>: this is predicated on the assumption of continued exponential growth. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Imminence">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_7"></A><B>Footnote 7</B>: <ul type="disc"><li><B>Human Civilization</B>: So far, computers have only enhanced human civilisation. 'Ending it' ('as we know it') depends on delivering (out of control) the various promissory-notes of this article. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Civilization">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_8"></A><B>Footnote 8</B>: <ul type="disc"><li><B>Faster <EM>Faster</EM></B>: Is this really so, and will it really continue to be so, if it is so? Note that Kurzweil's graph muddles together speed and cost. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Faster">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_9"></A><B>Footnote 9</B>: <ul type="disc"><li><B>Emulation</B>: Two points here. Firstly, emulation isn't the real thing. Models of hurricanes aren't wet and windy, so why should emulations of consciousness be conscious? Secondly, digital computers are serial devices in which the components are (now) very quick, and brains are massively parallel devices whose components are very slow. Why should simulating one by the other produce the same (phenomenal) effect, and even be possible at all? <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Emulation">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_10"></A><B>Footnote 10</B>: <ul type="disc"><li><B>Intelligent Actions</B>: The items on the list ('driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties') can all (presumably) be rules-based and situation-driven. No doubt this is true of human intelligence as well (ultimately) but modelling it is not straightforward, as we don't know how the brain does it. The issue isn't really (in this case) to do with 'whether', but 'when', as there are lots of major breakthroughs required before the promissory note can be delivered on. Also, all these functions can be delivered unconsciously (if they can be delivered at all). <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Intelligent_Actions">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_11"></A><B>Footnote 11</B>: <ul type="disc"><li><B>Smart people</B>: Does it matter how smart they are? Lots of equally smart people don't share the optimism of the futurologists. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Smart_people">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_12"></A><B>Footnote 12</B>: <ul type="disc"><li><B>Increasingly Powerful Computers</B>: are there really no reasons to doubt that their exponential growth will continue indefinitely? Miniaturisation of components has to stop soon due to QM effects. So, a radically-new technology is needed. Some ideas are there, but we might get 'stuck' on their delivery, as has been the case for controlled nuclear fusion (<A HREF = "https://en.wikipedia.org/wiki/Fusion_power#Current_status" TARGET = "_top">Wikipedia: Fusion Power</A> (https://en.wikipedia.org/wiki/Fusion_power#Current_status)), which in the 1950s was expected soon, in the 1970s by 2000, and in 2006 'not within 100 years'. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Computer_Power">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_13"></A><B>Footnote 13</B>: <ul type="disc"><li><B>Computing Power</B>: There's no doubt that computers will continue to get more powerful, as hardware and software continue to improve, as they always will. The issue is really over the rate of change (can exponential growth continue indefinitely?) and whether certain conceptual breakthroughs can be made. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Computer_Power">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_14"></A><B>Footnote 14</B>: <ul type="disc"><li><B>Bootstrapped Development</B>: This is certainly an important point, as we already use computers to help <U>manufacture</U> computers. But the extrapolation to development may involve the solution of the real 'machine creativity' problem. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Bootstrapping">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_15"></A><B>Footnote 15</B>: <ul type="disc"><li><B>Prediction</B>: is this true? It would be true if machines became 'smarter' than humans in every dimension of 'smartness'. But 'unpredictability' (ie. 
non-rules-based) is one of the aspects of machine-intelligence yet to be delivered by AI. Also, this argument sounds a bit like the 'you can't know the mind of God (at all)' arguments, which may or may not be sound. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Prediction">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_16"></A><B>Footnote 16</B>: <ul type="disc"><li><B><A HREF="../../../Notes/Notes_0/Notes_66.htm">Cyborgs</a></B>: This sounds a more promising approach than simulation, and it'd relieve computers from having to realise consciousness. But any cognitive interlinking would still require a fuller understanding of how the brain works than is currently on the horizon. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Cyborgs">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_17"></A><B>Footnote 17</B>: <ul type="disc"><li><B>Analogies</B>: We don't 'integrate' with cars and planes any more than we integrate with computers. They are just tools. Prosthetics are the nearest analogues, but there's a long way from that to true integration. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Analogies">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_18"></A><B>Footnote 18</B>: <ul type="disc"><li><B>Nanotechnology</B>: At this stage of the argument, it's not clear how intelligent machines will help repair our bodies and brains (especially 'indefinitely'). Usually nanotechnology is invoked at this stage (see <A HREF = "https://en.wikipedia.org/wiki/Nanotechnology" TARGET = "_top">Wikipedia: Nanotechnology</A> (https://en.wikipedia.org/wiki/Nanotechnology) for an overview). Now, it's true that intelligent machines would be needed to manufacture, and probably program, these myriads of tiny, very specialised machines, but the possibilities are very schematic. There's no evidence that anything workable is around the corner. 
<A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Nanotechnology">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_19"></A><B>Footnote 19</B>: <ul type="disc"><li><B>Consciousnesses</B>: Just what is meant here? Is this just loose speaking? A thing (an animal) is conscious, and the animal can't be scanned and downloaded anywhere. No-one really knows (at the theoretical level) what phenomenal consciousness is, though there are many theories. What's probably intended here is that 'the contents of our brains' would be read and <A HREF="../../../Notes/Notes_12/Notes_1246.htm">uploaded</a> to some device that can simulate our brains. This, of course, assumes that mind-body substance dualism is false (as it probably is), but even so, and admitting that whatever runs the downloaded software is at best a copy of the original, there's a long way to go before this sort of thing becomes even a worked-out theoretical possibility. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Consciousness">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_20"></A><B>Footnote 20</B>: <ul type="disc"><li><B>Software</B>: Well, philosophically-speaking, this is an outrageous idea. It depends on <A HREF="../../../Notes/Notes_7/Notes_734.htm">what we are</a>, and we're almost certainly not software, though software is important to us. And there are issues of identity: since software is easy to copy, and copies aren't identical, what reason would an individual have for thinking any particular installed copy was (identical to) him? <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Software">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_21"></A><B>Footnote 21</B>: <ul type="disc"><li><B>Annihilation</B>: Well, this is certainly something to watch out for, but I dare say it's a way off. It's more of a worry in genetic engineering or (if it gets going in the futurist mini-robot sense) nanotechnology. 
<A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Annihilation">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_22"></A><B>Footnote 22</B>: <ul type="disc"><li><B>The Singularity</B>: This term is defined later, but see <A HREF = "https://en.wikipedia.org/wiki/Technological_singularity" TARGET = "_top">Wikipedia: Technological Singularity</A> (https://en.wikipedia.org/wiki/Technological_singularity) and <A HREF = "https://en.wikipedia.org/wiki/The_Singularity_Is_Near" TARGET = "_top">Wikipedia: The Singularity Is Near</A> (https://en.wikipedia.org/wiki/The_Singularity_Is_Near) (amongst much else). <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Singularity">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_23"></A><B>Footnote 23</B>: <ul type="disc"><li><B>Science Fiction</B>: The difference, presumably, is that talk of the Singularity is intended as a prediction rather than as mere entertainment with no real concern with the facts. But the predictions don't really seem to be worked out in any detail; it's just the idea that throwing hardware at things will work, combined with the assumption of indefinitely-continued exponential growth. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_25"></A><B>Footnote 25</B>: <ul type="disc"><li><B>Importance of the Singularity</B>: It would certainly be important. Whether it's as important as language is debatable. Why not choose for comparison some other technological development, like the use of agriculture, or an intellectual one like the invention of writing? Also, language isn't something that was <U>invented</U>, is it? It <U>arose</U>, maybe as externalised inner thoughts: an external and public Language of Thought. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_26"></A><B>Footnote 26</B>: <ul type="disc"><li><B>Ultraintelligence</B>: This is a definition of <EM>ultraintelligent</EM>. It does not guarantee that there will ever be anything that falls under this category. Also, it seems a bit heavy-handed. <EM>Superintelligent</EM> machines (those that may not be ultimate, but will supplement human intelligence even more than current computers) might do the job. The idea is that there could be a human invention that obviates the need for any further human inventions, because any invention that a human could come up with, the machine could also come up with. Maybe all we need is that it (with human assistance) can come up with anything that a human can come up with (though a brick is such a 'machine'), or that it (with human assistance) can come up with something that no unaided human can come up with (but this is already satisfied). More thought required. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_27"></A><B>Footnote 27</B>: <ul type="disc"><li><B>Last Invention</B>: No doubt Hollywood would disagree. After the machines have taken over, human beings would have to invent a way of defeating them. This aside, is it really clear what '<EM>surpassing all the intellectual activities of any man however clever</EM>' really means? <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_28"></A><B>Footnote 28</B>: <ul type="disc"><li><B>Failed Predictions?</B>: This is by 2023; now, in 2011, that is just 12 years away. While the prediction hasn't yet failed, it will no doubt do so, as super-human intelligence seems as far away as ever, and the human era shows no sign of ending. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Imminence">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_29"></A><B>Footnote 29</B>: <ul type="disc"><li><B>Predicting the Future</B>: One could be styled 'good at predicting the future' if one's predictions had a habit of coming true. Is this the case with Kurzweil's predictions, or is it just that his predictions are the sort that Bill Gates likes? <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_30"></A><B>Footnote 30</B>: <ul type="disc"><li><B>The Numbers</B>: As always, this is the extrapolation of exponential growth. What if Moore's Law fails because we've reached QM-interference levels? What then? There was an article in <EM>Custom PC</EM> that made further progress look rather a struggle. Joining together microprocessors reduces miniaturisation and introduces light-speed effects. Compare with the stalled progress on nuclear fusion. Electricity 'too cheap to meter' is still a way off after 60 years of research. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_32"></A><B>Footnote 32</B>: <ul type="disc"><li><B>A Different Future</B>: This is certainly true (products have to be placed in a context to be useful), both because fashions change and because they need to link in with other technology and people's needs. Technology does become obsolete very quickly, and has been doing so for decades. But technologies eventually reach maturity, or have to await the maturing of other technologies before they can move on further. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_33"></A><B>Footnote 33</B>: <ul type="disc"><li><B>Moore's Law</B>: See <A HREF = "https://en.wikipedia.org/wiki/Moore%27s_law" TARGET = "_top">Wikipedia: Moore's Law</A> (https://en.wikipedia.org/wiki/Moore%27s_law). 
This article mentions Kurzweil and other futurologists, and the possible breakdown of Moore's Law within the next 5 years or so (ie. well before 2045). It also notes that Moore's Law is a self-fulfilling prophecy, in that the industry has taken it as a paradigm for R&D aims. Also, the R&D costs of keeping up with Moore's Law are themselves increasing exponentially. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Moore's_Law">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_34"></A><B>Footnote 34</B>: <ul type="disc"><li><B>Hardware Costs</B>: As any IT professional knows, the costs associated with any major development are almost all down to software; and residual hardware costs are mostly down to those of their minders. These costs aren't going to decay exponentially. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_35"></A><B>Footnote 35</B>: <ul type="disc"><li><B>Kurzweil's Graph</B>: This graph intentionally muddles together speed and cost, but so-doing can lead others to draw the wrong conclusions from it. While there continue to be improvements in computing power, the current driver behind the continuing exponential growth of Kurzweil's graph is economic, ie. computer hardware is being delivered <U>cheaper</U>, faster, not <U>faster</U> faster. Also, even if Kurzweil's graph did continue for ever, it might still not lead to the Singularity, in that the (infinitely cheap) computer hardware might still not deliver what Kurzweil needs. It might still be too slow. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_Kurzweil's_Graph">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_36"></A><B>Footnote 36</B>: <ul type="disc"><li><B>Exponential Curve</B>: Kurzweil's graph is slightly more than exponential (an exponential curve would appear as a straight line given the Y-axis is logarithmic). 
Maybe the <EM>Time</EM> editor made the curve <U>look</U> exponential, lest we failed to get the message. But this extra bit of hyper-exponentiality, which depends critically (it seems to me) on the last two points on the graph, has a huge impact on the date of the Singularity. If we were to fit a straight line to these points, the power in 2045 would be only 1/10,000,000,000 of that predicted by Kurzweil. But such is exponential growth that this would only defer the Singularity by 30 years or so. Unfortunately, while this is no time in the grand scheme of things, it will be disappointing to those who are 'waiting' for the Singularity, as it may come along too late, given that this would imply it's 64 years away. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_37"></A><B>Footnote 37</B>: <ul type="disc"><li><B>Technological Indexes</B>: It's true that it's not just micro-processor speeds that are important, and that other related technologies are always improving. The question is whether these will also hit the wall at some time. The trouble with exponentiation is that there are certain fundamental properties of the world that are not open to human manipulation. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_38"></A><B>Footnote 38</B>: <ul type="disc"><li><B>Exponentiation Beyond IT</B>: This should give us pause. Some of these indicators are clearly not open to indefinite exponential growth. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_39"></A><B>Footnote 39</B>: <ul type="disc"><li><B>Smooth Curves</B>: One would need to check this by investigating whether the smoothness is a point-selection effect. I suppose, however, that by choosing the 'best of breed' at any date, the chosen points will be accurate. 
But dates without points may (were points to be supplied) show periods of stasis, and a less smooth curve. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_40"></A><B>Footnote 40</B>: <ul type="disc"><li><B>Peace</B>: This claim (that exponential growth continues irrespective of the state of the world) is a critical one: if (on my calculations) the Singularity is (even assuming all Kurzweil's miracles take place) still 64 years away, that assumes some sort of stability is maintained for a period comparable to that between the rise of Nazism and the present. Now, traditionally wars have been stimuli for technological change, but whether this will remain so is open to doubt. Terrorism is more destructive of technological development than carpet bombing, as it can get anywhere (imagine the situation if the Nazis could have reached Los Alamos, or the industrial centres of America). <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_41"></A><B>Footnote 41</B>: <ul type="disc"><li><B>Law of Accelerating Returns</B>: Whether returns continue to accelerate depends on the maturity of a product. In the 'green fields' situation, exponentiation is possible, but eventually stasis kicks in. Consider the railways. The wonder is that exponentiation in IT has continued for so long. But it cannot last indefinitely. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_42"></A><B>Footnote 42</B>: <ul type="disc"><li><B>Future Extrapolation</B>: As noted <EM>passim</EM>, it's the extrapolation of indefinite exponential growth (rather than linear growth) that causes cognitive dissonance here. Kurzweil thinks he has an answer to the dissonance, but I don't believe it. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_43"></A><B>Footnote 43</B>: <ul type="disc"><li><B>Evolutionary Psychology</B>: There are several issues here. Firstly, arguments aren't won or lost by what we've evolved to think. Scientists (presumably) over-ride whatever their evolved prejudices might be all the time. We're not exactly evolved to favour curved space-time. Secondly, we may be right to intuit a suspicion of exponential growth as, in general, the environment can't cope with it. This is at the centre of Malthusian accounts of the practical necessity of natural culls of exponential population growth. Finally, we might note that digital computers are serial, and what Kurzweil needs (ultimately) for his continued exponential growth is massive parallelism, which hasn't been invented yet. <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_44"></A><B>Footnote 44</B>: <ul type="disc"><li><B>Reverse-engineering</B>: Where does this claim come from? This is not a problem that can be solved by throwing hardware at it. The human brain has billions of neurons with billions of connections; fine, this might be simulated. But the <U>contents</U> of the brain relate to just what these connections are, and no-one has the vaguest idea how the wiring works, so how could this be simulated, especially in the next 15 years? <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_45"></A><B>Footnote 45</B>: <ul type="disc"><li><B>Singularitarian Subculture</B>: One can be a thoroughgoing naturalist, and admit that the naturalist programme will eventually get there (as it will with controlled nuclear fusion), but claim that there are numerous technical saltations between now and the Singularity for which we have no warrant to suppose a near-immediate solution is available. 
<A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_46"></A><B>Footnote 46</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_47"></A><B>Footnote 47</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_48"></A><B>Footnote 48</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_49"></A><B>Footnote 49</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_51"></A><B>Footnote 51</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_52"></A><B>Footnote 52</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_53"></A><B>Footnote 53</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_54"></A><B>Footnote 54</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_55"></A><B>Footnote 55</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_56"></A><B>Footnote 56</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_57"></A><B>Footnote 57</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_58"></A><B>Footnote 58</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_59"></A><B>Footnote 59</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_60"></A><B>Footnote 60</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_61"></A><B>Footnote 61</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_62"></A><B>Footnote 62</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_63"></A><B>Footnote 63</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_64"></A><B>Footnote 64</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_65"></A><B>Footnote 65</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_66"></A><B>Footnote 66</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_67"></A><B>Footnote 67</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_68"></A><B>Footnote 68</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_69"></A><B>Footnote 69</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_70"></A><B>Footnote 70</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_71"></A><B>Footnote 71</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_72"></A><B>Footnote 72</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_73"></A><B>Footnote 73</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_74"></A><B>Footnote 74</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_75"></A><B>Footnote 75</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_76"></A><B>Footnote 76</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_77"></A><B>Footnote 77</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_78"></A><B>Footnote 78</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_79"></A><B>Footnote 79</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_80"></A><B>Footnote 80</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_81"></A><B>Footnote 81</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_83"></A><B>Footnote 83</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_84"></A><B>Footnote 84</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_85"></A><B>Footnote 85</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_86"></A><B>Footnote 86</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_87"></A><B>Footnote 87</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_88"></A><B>Footnote 88</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_89"></A><B>Footnote 89</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_90"></A><B>Footnote 90</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_91"></A><B>Footnote 91</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_92"></A><B>Footnote 92</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_93"></A><B>Footnote 93</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_94"></A><B>Footnote 94</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_95"></A><B>Footnote 95</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_96"></A><B>Footnote 96</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_97"></A><B>Footnote 97</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_98"></A><B>Footnote 98</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_99"></A><B>Footnote 99</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_100"></A><B>Footnote 100</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_101"></A><B>Footnote 101</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_102"></A><B>Footnote 102</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_103"></A><B>Footnote 103</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_104"></A><B>Footnote 104</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_105"></A><B>Footnote 105</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_106"></A><B>Footnote 106</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul><a name="On-Page_Link_1273_107"></A><B>Footnote 107</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. 
</li></ul><a name="On-Page_Link_1273_108"></A><B>Footnote 108</B>: <ul type="disc"><li><B>xxx</B>: <A HREF="../../../Notes/Notes_9/Notes_972.htm#Off-Page_Link_xxx">Click here for Note</A>. </li></ul></P><B>Note last updated:</B> 17/08/2018 17:35:31<BR><BR><HR> <P ALIGN="Left"><FONT Size = 2 FACE="Arial"><B><U>Footnote 6: (Self)</U></B></P> <P ALIGN="Justify"><FONT Size = 2 FACE="Arial"> <u><U><A HREF="#On-Page_Link_98_1">Plug Note</A></U><SUB>1</SUB><a name="On-Page_Return_98_1"></A></u><ul type="disc"><li>The Self is important, as it's the root of Baker's <a name="29"></a>FPP, and the motivator for all <a name="29"></a>psychological theories of PI, so understanding just what it is supposed to be is central to my concerns. </li><li>The self is what the reflexive pronouns refer to, but this doesn't get us far. Just what is a self?</li><li>There's a temptation to equate the Self with the <a name="29"></a>Person, but this is to waste a term, and it is likely that the two terms can <U><A HREF="#On-Page_Link_98_5">come apart</A></U><SUB>5</SUB><a name="On-Page_Return_98_5"></A>. </li><li>Nor is it just the personality, though the reification of the personality is probably at the root of the (misguided) intuition that personal identity is broken if the individual suffers a too-radical change of personality.</li><li>It's not clear to me that SELF is a <a name="29"></a>natural kind concept, so there may not be just one correct answer to its definition. </li><li>But my use will equate a self to an individual with a perspective on the world which, if that individual were a person (as many selves are), would equal a FPP. </li><li>In "<A HREF = "../../../Abstracts/Abstract_22/Abstract_22259.htm">Seth (Anil K.) 
- The real problem</A>", Anil Seth distinguishes five selves (or aspects of the self, considered as 'a complex construction generated by the brain'):- <FONT COLOR = "800080"><ol type="1"><li>The <U><A HREF="#On-Page_Link_98_7">bodily self</A></U><SUB>7</SUB><a name="On-Page_Return_98_7"></A>, which is the experience of being a body and of having a particular body. </li><li>The <U><A HREF="#On-Page_Link_98_8">perspectival self</A></U><SUB>8</SUB><a name="On-Page_Return_98_8"></A>, which is the experience of perceiving the world from a particular first-person point of view. </li><li>The <U><A HREF="#On-Page_Link_98_9">volitional self</A></U><SUB>9</SUB><a name="On-Page_Return_98_9"></A> involves experiences of intention and of agency, of urges to do this or that, and of being the causes of things that happen. </li><li>The <b>narrative self</b> is where the 'I' comes in, as the experience of being a continuous and distinctive person over time, built from a rich set of autobiographical memories. </li><li>And the <U><A HREF="#On-Page_Link_98_10">social self</A></U><SUB>10</SUB><a name="On-Page_Return_98_10"></A> is that aspect of self-experience that is refracted through the perceived minds of others, shaped by our unique social milieu. </li></ol> </FONT> </li><li>Not all individuals towards which we might adopt <A HREF = "../../../Authors/D/Author_Dennett (Daniel).htm">Daniel Dennett</A>'s <em>Intentional Stance</em> are selves. </li><li>While thermometers are excluded, I'm not sure whether having 'a sense of self' is essential for being a self. So, creatures that pass the Mirror Test will be Selves, though might not be persons, but others (human infants, gorillas, elephants, dogs, &c.) might be selves. 
</li><li>For a page of <U><A HREF="#On-Page_Link_98_11">Links</A></U><SUB>11</SUB><a name="On-Page_Return_98_11"></A> to this Note, <A HREF = "../../Notes_0/Notes_98_Links.htm">Click here</A>.</li><li>Works on this topic that <U><A HREF="#On-Page_Link_98_12">I've actually read</A></U><SUB>12</SUB><a name="On-Page_Return_98_12"></A>, <U><A HREF="#On-Page_Link_98_13">include</A></U><SUB>13</SUB><a name="On-Page_Return_98_13"></A> the following:- <ol type="i"><li></li></ol></li><li>As for a reading list, even the short-list immediately below (taken from the reading-list for the section on the Self in <a name="29"></a>Chapter 2 of my Thesis) is rather long, and contains many whole books. I may have to cull several of these further down the line, but it's worth preserving the full list here.</li><li>I've not checked this list recently, so maybe it should grow. </li><li>So, a reading list (where not covered elsewhere) might start with:- <ol type="i"><li>"<A HREF = "../../../BookSummaries/BookSummary_01/BookPaperAbstracts/BookPaperAbstracts_1382.htm">Alexander (Ronald) - The Self, Supervenience and Personal Identity</A>", <U><A HREF="#On-Page_Link_98_15">Alexander</A></U><SUB>15</SUB><a name="On-Page_Return_98_15"></A></li><li>"<A HREF = "../../../Abstracts/Abstract_13/Abstract_13015.htm">Brennan (Andrew) - Fragmented Selves and the Problem of Ownership</A>", Brennan</li><li>"<A HREF = "../../../BookSummaries/BookSummary_01/BookPaperAbstracts/BookPaperAbstracts_1027.htm">Bermudez (Jose Luis), Marcel (Anthony) & Eilan (Naomi), Eds. 
- The Body and the Self</A>", Bermudez</li><li>"<A HREF = "../../../BookSummaries/BookSummary_01/BookPaperAbstracts/BookPaperAbstracts_1174.htm">Campbell (John) - Past, Space and Self</A>", Campbell</li><li>"<A HREF = "../../../Abstracts/Abstract_07/Abstract_7463.htm">Cassam (Quassim) - Kant and Reductionism</A>", Cassam</li><li>"<A HREF = "../../../BookSummaries/BookSummary_01/BookPaperAbstracts/BookPaperAbstracts_1333.htm">Cassam (Quassim) - Self and World</A>", Cassam</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20741.htm">Cassam (Quassim) - The Embodied Self</A>", Cassam</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5084.htm">Churchland (Patricia) - Self and Self-Knowledge</A>", Churchland</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5288.htm">Dennett (Daniel) - The Reality of Selves</A>", Dennett</li><li>"<A HREF = "../../../PaperSummaries/PaperSummary_05/PaperSummary_5275.htm">Dennett (Daniel) - The Self as a Center of Narrative Gravity</A>", Dennett</li><li>"<A HREF = "../../../BookSummaries/BookSummary_01/BookPaperAbstracts/BookPaperAbstracts_1402.htm">Feinberg (Todd) - Altered Egos: How the Brain Creates the Self</A>", Feinberg</li><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_574.htm">Gallagher (Shaun) & Shear (Jonathan), Eds. - Models of the Self</A>", Gallagher</li><li>"<A HREF = "../../../Abstracts/Abstract_02/Abstract_2664.htm">Harre (Rom) - Persons and Selves</A>", Harre</li><li>"<A HREF = "../../../Abstracts/Abstract_15/Abstract_15601.htm">Jenkins (Phil) - Review of Galen Strawson's 'Selves'</A>", Jenkins</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5171.htm">Johnstone (Henry) - Persons and Selves</A>", Johnstone</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5200.htm">Lowe (E.J.) 
- Substance and Selfhood</A>", Lowe</li><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_642.htm">Ludwig (Arnold) - How do we Know who we are? A Biography of the Self</A>", Ludwig</li><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_645.htm">Madell (Geoffrey) - The Identity of the Self</A>", Madell</li><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_120.htm">Martin (Raymond) - Self-Concern: An Experiential Approach to what Matters in Survival</A>", Martin</li><li>"<A HREF = "../../../PaperSummaries/PaperSummary_04/PaperSummary_4161.htm">McGinn (Colin) - The Self</A>", McGinn</li><li>"<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6403.htm">Metzinger (Thomas) - Being No One: The Self-Model Theory of Subjectivity</A>", Metzinger</li><li>"<A HREF = "../../../BookSummaries/BookSummary_03/BookPaperAbstracts/BookPaperAbstracts_3601.htm">Metzinger (Thomas) - The Ego Tunnel: The Science of the Mind and the Myth of the Self</A>", Metzinger<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_15/Abstract_15417.htm">Godelek (Kamuran) - Review of Thomas Metzinger's 'The Ego Tunnel: The Science of the Mind and the Myth of the Self'</A>", Godelek</li><li>"<A HREF = "../../../Abstracts/Abstract_03/Abstract_3635.htm">Nagel (Thomas) - Mind and Body</A>", Nagel</li><li>"<A HREF = "../../../PaperSummaries/PaperSummary_00/PaperSummary_192.htm">Nagel (Thomas) - Subjective and Objective</A>", Nagel</li><li>"<A HREF = "../../../Abstracts/Abstract_03/Abstract_3636.htm">Nagel (Thomas) - The Objective Self</A>", Nagel</li><li>"<A HREF = "../../../Abstracts/Abstract_09/Abstract_9179.htm">Perry (John) - The Self</A>", Perry</li><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_139.htm">Popper (Karl) & Eccles (John) - The Self and Its Brain</A>", Popper&Eccles </li><li>"<A HREF = 
"../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_656.htm">Schechtman (Marya) - The Constitution of Selves</A>", Schechtman</li><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_577.htm">Shoemaker (Sydney) - Self-Knowledge and Self-Identity</A>", Shoemaker</li><li>"<A HREF = "../../../Abstracts/Abstract_00/Abstract_561.htm">Strawson (Galen) - The Self</A>", Strawson_G</li><li>"<A HREF = "../../../BookSummaries/BookSummary_02/BookPaperAbstracts/BookPaperAbstracts_2759.htm">Valberg (J.J.) - Dream, Death, and the Self</A>", Valberg</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5152.htm">Van Inwagen (Peter) - The Self: the Incredulous Stare Articulated</A>", Van Inwagen</li><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_160.htm">Williams (Bernard) - Problems of the Self</A>", Williams</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5174.htm">Wolf (Susan) - Self-Interest and Interest in Selves</A>", Wolf</li><li>"<A HREF = "../../../Abstracts/Abstract_04/Abstract_4732.htm">Wright (Crispin) - The Problem of Self-Knowledge (I)</A>", Wright </li><li>"<A HREF = "../../../BookSummaries/BookSummary_03/BookPaperAbstracts/BookPaperAbstracts_3603.htm">Zahavi (Dan) - Subjectivity and Selfhood: Investigating the First-Person Perspective</A>", Zahavi </li></ol></li><li>This is mostly a <a name="29"></a>place-holder. </li></ul><BR><BR><BR><HR><BR><U><B>In-Page Footnotes</B></U><a name="On-Page_Link_98_1"></A><BR><BR><B>Footnote 1</B>: <ul type="disc"><li>A number of my philosophical Notes are 'promissory notes', currently only listing the books and papers (if any) I possess on the topic concerned. </li><li>I've decided to add some text (whether by way of motivation, or something more substantive) for all these identified topics related to my Thesis.</li><li>As I want to do this fairly quickly, the text may be confused or show surprising ignorance. 
</li><li>The reader (if such exists) will have to bear with me, and display the principle of charity while this footnote exists. </li></ul><a name="On-Page_Link_98_5"></A><B>Footnote 5</B>: There is no unanimity on what a person is; but it will be worth taking candidate definitions and seeing whether we would be willing to assign selfhood to some non-persons. <a name="On-Page_Link_98_7"></A><BR><BR><B>Footnote 7</B>: We are referred to "<A HREF = "../../../Abstracts/Abstract_22/Abstract_22275.htm">Seth (Anil K.) - Interoceptive inference, emotion, and the embodied self</A>". <a name="On-Page_Link_98_8"></A><BR><BR><B>Footnote 8</B>: We are referred to "<A HREF = "../../../Abstracts/Abstract_22/Abstract_22276.htm">Ehrsson (H. Henrik) - The Experimental Induction of Out-of-Body Experiences</A>". <a name="On-Page_Link_98_9"></A><BR><BR><B>Footnote 9</B>: We are referred to "<A HREF = "../../../Abstracts/Abstract_22/Abstract_22274.htm">Haggard (Patrick) - Human volition: towards a neuroscience of will</A>". <a name="On-Page_Link_98_10"></A><BR><BR><B>Footnote 10</B>: <ul type="disc"><li>We are referred to 'Mechanisms of Social Cognition' by Chris & Uta Frith, Annual Review of Psychology, Vol. 63:287-313 (January 2012) </li><li>I don't have access to this, but the abstract is as below &darr;<BR><FONT COLOR = "800080"><ol type="1"><li>Social animals including humans share a range of social mechanisms that are automatic and implicit and enable learning by observation. Learning from others includes imitation of actions and mirroring of emotions. Learning about others, such as their group membership and reputation, is crucial for social interactions that depend on trust. </li><li>For accurate prediction of others' changeable dispositions, mentalizing is required, i.e., tracking of intentions, desires, and beliefs. </li><li>Implicit mentalizing is present in infants less than one year old as well as in some nonhuman species. 
</li><li>Explicit mentalizing is a meta-cognitive process and enhances the ability to learn about the world through self-monitoring and reflection, and may be uniquely human. </li><li>Meta-cognitive processes can also exert control over automatic behavior, for instance, when short-term gains oppose long-term aims or when selfish and prosocial interests collide. We suggest that they also underlie the ability to explicitly share experiences with other agents, as in reflective discussion and teaching. These are key in increasing the accuracy of the models of the world that we construct.</li></ol> </FONT></li></ul><a name="On-Page_Link_98_11"></A><B>Footnote 11</B>: <ul type="disc"><li>If only a 'non-updating' run has been made, the links are only one-way, i.e. from the page of links to the objects that reference this Note by mentioning the appropriate key-word(s). The links are also only indicative, as they haven't yet been confirmed as relevant. </li><li>Once an updating run has been made, links are both ways, and links from this Notes page (from the 'Authors, Books & Papers Citing this Note' and 'Summary of Note Links to this Page' sections) are to the 'point of link' within the page rather than to the page generically. Links from the 'links' page remain generic. </li><li>There are two sorts of updating runs: for Notes and for other Objects. The reason for this is that Notes are archived, and too many archived versions would be created if this process were repeatedly run. </li></ul> <a name="On-Page_Link_98_12"></A><B>Footnote 12</B>: <ul type="disc"><li>Frequently I'll have made copious marginal annotations, and sometimes have written up a review-note. </li><li>In the former case, I intend to transfer the annotations into electronic form as soon as I can find the time. </li><li>In the latter case, I will have remarked on the fact against the citation, and will integrate the comments into this Note in due course. 
</li><li>My intention is to incorporate into these Notes comments on material I've already read rather than engage with unread material at this stage. </li></ul><a name="On-Page_Link_98_13"></A><B>Footnote 13</B>: <ul type="disc"><li>I may have read others in between updates of this Note, in which case they will be marked as such in the 'References and Reading List' below.</li><li>Papers or Books partially read have a rough %age based on the time spent versus the time expected. </li></ul> <a name="On-Page_Link_98_15"></A><B>Footnote 15</B>: <ul type="disc"><li>Alexander thinks that we are Selves, and that Selves are tropes (abstract particulars), which by my lights is about as far from the truth as you can get, so I need to consider his arguments carefully. </li></ul> </P><B>Note last updated:</B> 17/08/2018 21:59:02<BR><BR><HR> <P ALIGN="Left"><FONT Size = 2 FACE="Arial"><B><U>Footnote 7: (Self-Consciousness)</U></B></P> <P ALIGN="Justify"><FONT Size = 2 FACE="Arial"> <u><U><A HREF="#On-Page_Link_21_1">Plug Note</A></U><SUB>1</SUB><a name="On-Page_Return_21_1"></A></u><ul type="disc"><li>This is more than just phenomenal <a name="29"></a>consciousness (which may be a watershed in itself with moral consequences greater than generally accepted) but the consciousness of oneself as a <a name="29"></a>self (as <a name="29"></a>Locke noted). 
</li><li>But we need also consider the view that this 'watcher' is an illusion, a falsely assumed <a name="29"></a>Cartesian Ego whose existence is undermined by neuroscience, the modularity of mind, and such-like.</li><li>I was alerted to a <U><A HREF="#On-Page_Link_21_6">quotation</A></U><SUB>6</SUB><a name="On-Page_Return_21_6"></A> from <A HREF = "https://en.wikipedia.org/wiki/John_Updike" TARGET = "_top">John Updike</A> (https://en.wikipedia.org/wiki/John_Updike)'s "<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6705.htm">Updike (John) - Self-Consciousness</A>":- <ul type="square"> <FONT COLOR = "800080">Not only are selves conditional but they die. Each day, we wake slightly altered, and the person we were yesterday is dead. So why, one could say, be afraid of death, when death comes all the time? </FONT> </ul>I think <U><A HREF="#On-Page_Link_21_7">this thought</A></U><SUB>7</SUB><a name="On-Page_Return_21_7"></A> is muddled in several respects:- <ol type="1"><li><a name="29"></a>Death is a biological event that, at least in the ordinary case, can happen to an organism only once. Whatever <a name="29"></a>Selves are, they don't die every night. Follow the links for further discussion. </li><li>We do indeed 'wake slightly altered'; indeed, we alter slightly whenever we encounter an event that has an impact on us. </li><li>I'm not sure what Updike means by our selves being 'conditional', but I can well believe it. </li><li>Updike seems to subscribe to some 'strict and philosophical' view of identity, whereby nothing survives change. This is not a useful understanding. </li><li>Any comfort we might get from such thoughts concerning our inevitable deaths is entirely spurious. 
</li></ol></li><li>For a page of Links to this Note, <A HREF = "../../Notes_0/Notes_21_Links.htm">Click here</A>.</li><li>The categorised reading list is rather small; naturally, see also those on <a name="29"></a>Self and <a name="29"></a>Consciousness.</li><li>Works on this topic that <U><A HREF="#On-Page_Link_21_12">I've actually read</A></U><SUB>12</SUB><a name="On-Page_Return_21_12"></A>, <U><A HREF="#On-Page_Link_21_13">include</A></U><SUB>13</SUB><a name="On-Page_Return_21_13"></A> the following:- <ol type="i"><li>"<A HREF = "../../../BookSummaries/BookSummary_00/BookPaperAbstracts/BookPaperAbstracts_97.htm">Garrett (Brian) - Personal Identity and Self-consciousness</A>", Garrett</li><li>"<A HREF = "../../../Abstracts/Abstract_11/Abstract_11981.htm">Kriegel (Uriah) - Strange Loops and Self-conscious Marbles</A>", Kriegel</li></ol></li><li>A reading list (where not covered elsewhere) might start with:- <ol type="i"><li>"<A HREF = "../../../PaperSummaries/PaperSummary_02/PaperSummary_2053.htm">Eilan (Naomi), Marcel (Anthony) & Bermudez (Jose Luis) - Self-Consciousness and the Body: An Interdisciplinary Approach</A>", Eilan</li><li>"<A HREF = "../../../Abstracts/Abstract_02/Abstract_2690.htm">Laycock (Stephen) - Consciousness It/Self</A>", Laycock</li><li>"<A HREF = "../../../Abstracts/Abstract_03/Abstract_3697.htm">Neisser (Ulric) - Five Kinds of Self-Knowledge</A>", Neisser</li><li>"<A HREF = "../../../Abstracts/Abstract_04/Abstract_4953.htm">Pollock (John L.) - The Self-Conscious Machine</A>", Pollock</li><li>"<A HREF = "../../../PaperSummaries/PaperSummary_03/PaperSummary_3831.htm">Shoemaker (Sydney) - The Self and the Contents of Consciousness</A>", Shoemaker</li><li>"<A HREF = "../../../PaperSummaries/PaperSummary_03/PaperSummary_3920.htm">Vesey (Godfrey N.A.) 
- Are We Intimately Conscious of What We Call Our Self</A>", Vesey</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5067.htm">Vjecsner (Paul) - Searching for the Heart of Human Nature</A>", Vjecsner</li></ol></li><li>This is mostly a <a name="29"></a>place-holder. </li></ul><BR><BR><BR><HR><BR><U><B>In-Page Footnotes</B></U><a name="On-Page_Link_21_1"></A><BR><BR><B>Footnote 1</B>: <ul type="disc"><li>A number of my philosophical Notes are 'promissory notes', currently only listing the books and papers (if any) I possess on the topic concerned. </li><li>I've decided to add some text (whether by way of motivation, or something more substantive) for all these identified topics related to my Thesis.</li><li>As I want to do this fairly quickly, the text may be confused or show surprising ignorance. </li><li>The reader (if such exists) will have to bear with me, and display the principle of charity while this footnote exists. </li></ul><a name="On-Page_Link_21_6"></A><B>Footnote 6</B>: <ul type="disc"><li>It appeared in <em>The Week</em>, but it seems to be a popular one. </li><li>See <A HREF = "https://www.goodreads.com/quotes/939545-not-only-are-selves-conditional-but-they-die-each-day" TARGET = "_top">Link</A> (https://www.goodreads.com/quotes/939545-not-only-are-selves-conditional-but-they-die-each-day). </li><li>I'm not yet clear about the context: the book is on order. </li></ul> <a name="On-Page_Link_21_7"></A><B>Footnote 7</B>: <ul type="disc"><li>Which has little to do with self-consciousness other than the book's title. </li></ul> <a name="On-Page_Link_21_12"></A><B>Footnote 12</B>: <ul type="disc"><li>Frequently I'll have made copious marginal annotations, and sometimes have written up a review-note. </li><li>In the former case, I intend to transfer the annotations into electronic form as soon as I can find the time. 
</li><li>In the latter case, I will have remarked on the fact against the citation, and will integrate the comments into this Note in due course. </li><li>My intention is to incorporate into these Notes comments on material I've already read rather than engage with unread material at this stage. </li></ul><a name="On-Page_Link_21_13"></A><B>Footnote 13</B>: <ul type="disc"><li>I may have read others in between updates of this Note, in which case they will be marked as such in the 'References and Reading List' below.</li><li>Papers or Books partially read have a rough %age based on the time spent versus the time expected. </li></ul> </P><B>Note last updated:</B> 17/08/2018 17:35:31<BR><BR><HR> <P ALIGN="Left"><FONT Size = 2 FACE="Arial"><B><U>Footnote 8: (Cyborgs)</U></B></P> <P ALIGN="Justify"><FONT Size = 2 FACE="Arial"> <u><U><A HREF="#On-Page_Link_66_1">Plug Note</A></U><SUB>1</SUB><a name="On-Page_Return_66_1"></A></u><ul type="disc"><li>Briefly, a Cyborg (Cybernetic Organism) is a human being (or any organic being) with some inorganic parts. See the entry in Wikipedia (<A HREF = "https://en.wikipedia.org/wiki/Cyborg" TARGET = "_top">Link</A> (https://en.wikipedia.org/wiki/Cyborg)).</li><li>Compare and contrast with <a name="29"></a>Android, which is a humanoid robot. </li><li>See also <a name="29"></a>Siliconisation, the <a name="29"></a>TE wherein we have the gradual replacement of (human) neural tissue with microchips while, allegedly, preserving consciousness. </li><li>And again, connect to <a name="29"></a>Chimeras. <U><A HREF="#On-Page_Link_66_6">In this case</A></U><SUB>6</SUB><a name="On-Page_Return_66_6"></A>, biological material from other animals is merged with human tissue to provide an enhancement. </li><li>All of the above is beloved of the <a name="29"></a>Transhumanists, who want to enhance the human condition by all means possible, even if this means that humans are no longer, strictly speaking, <a name="29"></a>human beings. 
</li><li>My interest in Cyborgs stems from the impact of their possibility on the truth of <a name="29"></a>Animalism. </li><li>If we are (human) <a name="29"></a>animals, would we continue to exist if increasingly enhanced by technological implants and extensions? I see no immediate problem: just a bit further along the lines of spectacles and hip replacements. But no doubt there would eventually be a tipping point at which we become more inorganic than organic. Our <a name="29"></a>persistence conditions would then be mixed between those of <a name="29"></a>organisms and <a name="29"></a>artefacts. Or is the situation better described as our shrinking (if our parts are replaced) or, if the technological parts are add-ons, remaining unchanged? Currently we're unchanged by our spectacles, but hip replacements become part of us. Is this not so?</li><li>For a page of <U><A HREF="#On-Page_Link_66_14">Links</A></U><SUB>14</SUB><a name="On-Page_Return_66_14"></A> to this Note, <A HREF = "../../Notes_0/Notes_66_Links.htm">Click here</A>.</li><li>Works on this topic that <U><A HREF="#On-Page_Link_66_15">I've actually read</A></U><SUB>15</SUB><a name="On-Page_Return_66_15"></A>, <U><A HREF="#On-Page_Link_66_16">include</A></U><SUB>16</SUB><a name="On-Page_Return_66_16"></A> the following:- <ol type="i"><li>"<A HREF = "../../../Abstracts/Abstract_16/Abstract_16893.htm">Grossman (Lev), Kurzweil (Ray) - 2045: The Year Man Becomes Immortal</A>", Grossman</li><li>"<A HREF = "../../../Abstracts/Abstract_23/Abstract_23360.htm">Hawthorne (John X.) - Are You Ready For The Cyborg Technology Coming In 2018?</A>", Hawthorne</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22749.htm">Iida (Fumiya) - Could we build a Blade Runner-style 'replicant'?</A>", Iida</li><li>"<A HREF = "../../../Abstracts/Abstract_17/Abstract_17206.htm">Jones (D. 
Gareth) - A Christian Perspective on Human Enhancement</A>", Jones</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22268.htm">Mayor (Adrienne) - Bio-techne</A>", Mayor</li><li>"<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6513.htm">O'Connell (Mark) - To be a Machine</A>", O'Connell</li></ol></li><li>A reading list (where not covered elsewhere) might start with:- <ol type="i"><li>"<A HREF = "../../../Abstracts/Abstract_15/Abstract_15973.htm">Alexander (Denis) - Enhancing humans or a new creation?</A>", Alexander</li><li>"<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6715.htm">Clark (Andy) - Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence</A>", Clark<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_23/Abstract_23243.htm">Erickson (Mark) - Review of Andy Clark's 'Natural-Born Cyborgs'</A>", Erickson<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_04/Abstract_4531.htm">Shipley (G.J.) 
- Review of Andy Clark's 'Natural-Born Cyborgs'</A>", Shipley</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20821.htm">Clark (Andy) - Re-Inventing Ourselves: The Plasticity of Embodiment, Sensing, and Mind</A>", Clark</li><li>"<A HREF = "../../../Abstracts/Abstract_06/Abstract_6464.htm">Clark (Andy) - That Special Something: Dennett on the Making of Minds and Selves</A>", Clark</li><li>"<A HREF = "../../../Abstracts/Abstract_17/Abstract_17207.htm">CSC WG - Human Enhancement - A Discussion Document</A>", CSC WG</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20183.htm">Kaku (Michio) - The Future of the Mind: The Scientific Quest to Understand, Enhance and Empower the Mind (YouTube Lecture)</A>", Kaku</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20839.htm">Miah (Andy) - Justifying Human Enhancement: The Accumulation of Biocultural Capital</A>", Miah</li><li>"<A HREF = "../../../Abstracts/Abstract_16/Abstract_16565.htm">Puccetti (Roland) - Conquest of Death</A>", Puccetti</li></ol></li><li>This is mostly a <a name="29"></a>place-holder. </li></ul><BR><BR><BR><HR><BR><B><U>In-Page Footnotes</U></B><a name="On-Page_Link_66_1"></A><BR><BR><B>Footnote 1</B>: <ul type="disc"><li>A number of my philosophical Notes are 'promissory notes', currently only listing the books and papers (if any) I possess on the topic concerned. </li><li>I've decided to add some text, whether by way of motivation or something more substantive, for all these identified topics related to my Thesis.</li><li>As I want to do this fairly quickly, the text may be confused or show surprising ignorance. </li><li>The reader (if such exists) will have to bear with me, and display the principle of charity while this footnote exists. </li></ul><a name="On-Page_Link_66_6"></A><B>Footnote 6</B>: <ul type="disc"><li>There are other situations where human tissue is to be harvested from other animals, after genetic modification or other means, for the purpose of implantation. 
</li></ul> <a name="On-Page_Link_66_14"></A><B>Footnote 14</B>: <ul type="disc"><li>If only a 'non-updating' run has been made, the links are only one-way, ie. from the page of links to the objects that reference this Note by mentioning the appropriate key-word(s). The links are also only indicative, as they haven't yet been confirmed as relevant. </li><li>Once an updating run has been made, links are both ways, and links from this Notes page (from the 'Authors, Books & Papers Citing this Note' and 'Summary of Note Links to this Page' sections) are to the 'point of link' within the page rather than to the page generically. Links from the 'links' page remain generic. </li><li>There are two sorts of updating runs: for Notes and for other Objects. The reason for this is that Notes are archived, and too many archived versions would be created if this process were repeatedly run. </li></ul> <a name="On-Page_Link_66_15"></A><B>Footnote 15</B>: <ul type="disc"><li>Frequently I'll have made copious marginal annotations, and sometimes have written up a review-note. </li><li>In the former case, I intend to transfer the annotations into electronic form as soon as I can find the time. </li><li>In the latter case, I will have remarked on the fact against the citation, and will integrate the comments into this Note in due course. </li><li>My intention is to incorporate into these Notes comments on material I've already read rather than engage with unread material at this stage. </li></ul><a name="On-Page_Link_66_16"></A><B>Footnote 16</B>: <ul type="disc"><li>I may have read others in between updates of this Note, in which case they will be marked as such in the 'References and Reading List' below.</li><li>Papers or Books partially read have a rough %age based on the time spent versus the time expected. 
</li></ul> </P><B>Note last updated:</B> 17/08/2018 21:59:02<BR><BR><HR> <P ALIGN="Left"><FONT Size = 2 FACE="Arial"><B><U>Footnote 9: (What are We?)</U></B></P> <P ALIGN="Justify"><FONT Size = 2 FACE="Arial"> <u><U><A HREF="#On-Page_Link_734_1">Plug Note</A></U><SUB>1</SUB><a name="On-Page_Return_734_1"></A></u><ul type="disc"><li>This Note cannot answer this question. Rather, it'll try to consider the sort of desiderata necessary for formulating and answering the question, and for deciding between the various candidate answers. </li><li>For the present, I just mention that I need to distinguish, as candidates for what we are, (human-)<BR>&rarr; <a name="29"></a>animals, <BR>&rarr; <a name="29"></a>organisms, <BR>&rarr; <a name="29"></a>persons, <BR>&rarr; <a name="29"></a>bodies, <BR>&rarr; <a name="29"></a>beings and <BR>&rarr; <a name="29"></a>brains. </li><li>Additionally, I need to treat of <BR>&rarr; <a name="29"></a>selves <BR>and maybe contrast terms like '<A HREF = "https://en.wikipedia.org/wiki/Mensch" TARGET = "_top">Mensch</A>' (https://en.wikipedia.org/wiki/Mensch) with 'person'.</li><li><B>We</B>: the use of the plural is significant. However, the determination of 'we' as 'the sort of entity likely to be reading this paper' isn't quite right, even though Dennett and others use similar expressions. Refer to the first parts of "<A HREF = "../../../Abstracts/Abstract_12/Abstract_12473.htm">Brandom (Robert) - Toward a Normative Pragmatics</A>" in "<A HREF = "../../../BookSummaries/BookSummary_02/BookPaperAbstracts/BookPaperAbstracts_2711.htm">Brandom (Robert) - Making It Explicit: Reasoning, Representing & Discursive Commitment</A>" for inspiration on 'We'.</li><li><B>Intelligibility</B>: this is a reciprocal relationship. We find others (of 'our sort') intelligible, and it is important that they find us intelligible in return. 
Does this thereby make R = 'finds intelligible' an equivalence relation, dividing the world into equivalence classes of mutually intelligible individuals, or does R come in degrees and fall prey to <a name="29"></a>Sorites paradoxes?</li><li>For my Thesis Chapter on this topic, follow this <a name="29"></a>link.</li><li>For a page of Links to this Note, <A HREF = "../../Notes_7/Notes_734_Links.htm">Click here</A>.</li><li>The reading lists below are somewhat bloated; but, in general, only a small portion of the works cited needs to be addressed in the context of this question. No doubt the best place to start is<BR>&rarr; "<A HREF = "../../../Abstracts/Abstract_12/Abstract_12470.htm">Olson (Eric) - What Are We?</A>" (the Paper), followed by<BR>&rarr; "<A HREF = "../../../BookSummaries/BookSummary_02/BookPaperAbstracts/BookPaperAbstracts_2710.htm">Olson (Eric) - What are We?</A>" (the Book). </li><li>Works on this topic that <U><A HREF="#On-Page_Link_734_11">I've actually read</A></U><SUB>11</SUB><a name="On-Page_Return_734_11"></A>, <U><A HREF="#On-Page_Link_734_12">include</A></U><SUB>12</SUB><a name="On-Page_Return_734_12"></A> the following:- <ol type="i"><li>"<A HREF = "../../../Abstracts/Abstract_03/Abstract_3803.htm">Baillie (James) - What Am I?</A>", Baillie</li><li>"<A HREF = "../../../Abstracts/Abstract_14/Abstract_14448.htm">Baker (Lynne Rudder) - Big-Tent Metaphysics</A>", Baker</li><li>"<A HREF = "../../../Abstracts/Abstract_03/Abstract_3674.htm">Baker (Lynne Rudder) - Persons in the Material World</A>", Baker</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5137.htm">Baker (Lynne Rudder) - Precis of 'Persons & Bodies: A Constitution View'</A>", Baker</li><li>"<A HREF = "../../../Abstracts/Abstract_14/Abstract_14452.htm">Baker (Lynne Rudder) - Response to Eric Olson</A>", Baker</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21333.htm">Baker (Lynne Rudder) - Review of 'What Are We? A Study in Personal Ontology' by Eric T. 
Olson</A>", Baker</li><li>"<A HREF = "../../../Abstracts/Abstract_04/Abstract_4282.htm">Baker (Lynne Rudder) - What Am I?</A>", Baker</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21015.htm">Belshaw (Christopher) - Review of Paul Snowdon's 'Persons, Animals, Ourselves'</A>", Belshaw</li><li>"<A HREF = "../../../Abstracts/Abstract_23/Abstract_23284.htm">Blatti (Stephan) - Animalism (SEP)</A>", Blatti</li><li>"<A HREF = "../../../Abstracts/Abstract_23/Abstract_23281.htm">Blatti (Stephan) - We Are Animals</A>", Blatti</li><li>"<A HREF = "../../../Abstracts/Abstract_12/Abstract_12473.htm">Brandom (Robert) - Toward a Normative Pragmatics</A>", Brandom</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21758.htm">Claxton (Guy) - Intelligence in the Flesh - Limbering Up: An Introduction</A>", Claxton</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5807.htm">DeGrazia (David) - Are we essentially persons? Olson, Baker, and a reply</A>", DeGrazia</li><li>"<A HREF = "../../../Abstracts/Abstract_00/Abstract_262.htm">Johnston (Mark) - Human Beings</A>", Johnston</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20173.htm">Liao (S. 
Matthew) - The Organism View Defended</A>", Liao</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5910.htm">Lockwood (Michael) - When Does a Life Begin?</A>", Lockwood</li><li>"<A HREF = "../../../Abstracts/Abstract_03/Abstract_3586.htm">Nozick (Robert) - The Identity of the Self: Introduction</A>", Nozick</li><li>"<A HREF = "../../../Abstracts/Abstract_12/Abstract_12470.htm">Olson (Eric) - What Are We?</A>", Olson</li><li>"<A HREF = "../../../Abstracts/Abstract_03/Abstract_3583.htm">Parfit (Derek) - Nagel's Brain</A>", Parfit</li><li>"<A HREF = "../../../Abstracts/Abstract_15/Abstract_15140.htm">Shoemaker (David) - Personal Identity, Rational Anticipation, and Self-Concern</A>", Shoemaker</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21153.htm">Smith (Barry C.), Broks (Paul), Kennedy (A.L.) & Evans (Jules) - What Does It Mean to Be Me?</A>", Smith, etc.</li></ol></li><li>A reading list (where not covered elsewhere) might start with:- <ol type="i"><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20779.htm">Bailey (Andrew M.) - The Elimination Argument</A>", Bailey</li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20778.htm">Bailey (Andrew M.) - You Needn t be Simple</A>", Bailey</li><li>"<A HREF = "../../../Abstracts/Abstract_22/Abstract_22072.htm">Baker (Lynne Rudder) - Animalism vs. 
Constitutionalism</A>", Baker</li><li>"<A HREF = "../../../Abstracts/Abstract_01/Abstract_1327.htm">Blackburn (Simon) - Has Kant Refuted Parfit?</A>", Blackburn</li><li>"<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6366.htm">Broks (Paul) - Into the Silent Land: Travels in Neuropsychology</A>", Broks</li><li>"<A HREF = "../../../Abstracts/Abstract_15/Abstract_15968.htm">Bynum (Terrell Ward) - Two Philosophers of the Information Age</A>", Bynum</li><li>"<A HREF = "../../../Abstracts/Abstract_13/Abstract_13016.htm">Chitty (Andrew) - First Person Plural Ontology and Praxis</A>", Chitty</li><li>"<A HREF = "../../../BookSummaries/BookSummary_06/BookPaperAbstracts/BookPaperAbstracts_6335.htm">Corcoran (Kevin) - Rethinking Human Nature: A Christian Materialist Alternative to the Soul</A>", Corcoran</li><li>"<A HREF = "../../../Abstracts/Abstract_09/Abstract_9451.htm">Dennett (Daniel) - Natural Freedom</A>", Dennett</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21113.htm">Hershenov (David) - Animals, Persons and Bioethics</A>", Hershenov</li><li>"<A HREF = "../../../Abstracts/Abstract_05/Abstract_5838.htm">McMahan (Jeff) - Identity</A>", McMahan</li><li>"<A HREF = "../../../BookSummaries/BookSummary_03/BookPaperAbstracts/BookPaperAbstracts_3602.htm">Noe (Alva) - Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness</A>", Noe</li><li>"<A HREF = "../../../BookSummaries/BookSummary_02/BookPaperAbstracts/BookPaperAbstracts_2710.htm">Olson (Eric) - What are We?</A>", <U><A HREF="#On-Page_Link_734_13">Olson</A></U><SUB>13</SUB><a name="On-Page_Return_734_13"></A></li><li>"<A HREF = "../../../Abstracts/Abstract_20/Abstract_20709.htm">Parfit (Derek) - We Are Not Human Beings</A>", Parfit</li><li>"<A HREF = "../../../Abstracts/Abstract_07/Abstract_7629.htm">Richards (Janet Radcliffe) - Internecine Strife</A>", Richards</li><li>"<A HREF = 
"../../../Abstracts/Abstract_21/Abstract_21010.htm">Snowdon (Paul) - [P & not-A] Cases: An Introduction</A>", Snowdon</li><li>"<A HREF = "../../../Abstracts/Abstract_21/Abstract_21029.htm">Snowdon (Paul) - The Self and Personal Identity</A>", Snowdon</li><li>"<A HREF = "../../../Abstracts/Abstract_00/Abstract_550.htm">Taylor (Charles) - Responsibility For Self</A>", Taylor</li><li>"<A HREF = "../../../Abstracts/Abstract_06/Abstract_6980.htm">Wilson (Robert) - Persons, Social Agency, and Constitution</A>", Wilson</li></ol></li><li>This is mostly a <a name="29"></a>place-holder. </li></ul><BR><BR><BR><HR><BR><B><U>In-Page Footnotes</U></B><a name="On-Page_Link_734_1"></A><BR><BR><B>Footnote 1</B>: <ul type="disc"><li>A number of my philosophical Notes are 'promissory notes', currently only listing the books and papers (if any) I possess on the topic concerned. </li><li>I've decided to add some text, whether by way of motivation or something more substantive, for all these identified topics related to my Thesis.</li><li>As I want to do this fairly quickly, the text may be confused or show surprising ignorance. </li><li>The reader (if such exists) will have to bear with me, and display the principle of charity while this footnote exists. </li></ul><a name="On-Page_Link_734_11"></A><B>Footnote 11</B>: <ul type="disc"><li>Frequently I'll have made copious marginal annotations, and sometimes have written up a review-note. </li><li>In the former case, I intend to transfer the annotations into electronic form as soon as I can find the time. </li><li>In the latter case, I will have remarked on the fact against the citation, and will integrate the comments into this Note in due course. </li><li>My intention is to incorporate into these Notes comments on material I've already read rather than engage with unread material at this stage. 
</li></ul><a name="On-Page_Link_734_12"></A><B>Footnote 12</B>: <ul type="disc"><li>I may have read others in between updates of this Note, in which case they will be marked as such in the 'References and Reading List' below.</li><li>Papers or Books partially read have a rough %age based on the time spent versus the time expected. </li></ul> <a name="On-Page_Link_734_13"></A><B>Footnote 13</B>: <ul type="disc"><li>There are hosts of papers by Olson that touch on this topic, but this book, and the paper of the same name, are enough in this context. </li></ul></P><B>Note last updated:</B> 17/08/2018 21:59:02<BR><BR><HR> <a name="ColourConventions"></a><BR><P ALIGN="Left"><FONT Size = 2 FACE="Arial"><B><U>Text Colour Conventions</U></B><OL TYPE="1"><LI><FONT COLOR = "000000">Black</FONT>: Printable Text by me; &copy; Theo Todman, 2018<LI><FONT COLOR = "0000FF">Blue</FONT>: Text by me; &copy; Theo Todman, 2018<LI><FONT COLOR = "800080">Mauve</FONT>: Text by correspondent(s) or other author(s); &copy; the author(s)</OL><hr><BR><a href = "../../../index.htm">Return to Home page</a><BR><B>Timestamp: 17/08/2018 22:30:43. Comments to <U>theo@theotodman.com</U>.</B></P></BODY></HTML>