<!DOCTYPE html><HTML lang="en"> <head><meta charset="utf-8"> <title>Chalmers (David) - The singularity: A philosophical analysis (Theo Todman's Book Collection - Paper Abstracts) </title> <link href="../../TheosStyle.css" rel="stylesheet" type="text/css"><link rel="shortcut icon" href="../../TT_ICO.png" /></head> <BODY> <CENTER> <div id="header"><HR><h1>Theo Todman's Web Page - Paper Abstracts</h1><HR></div><A name="Top"></A> <TABLE class = "Bridge" WIDTH=950> <tr><th><A HREF = "../../PaperSummaries/PaperSummary_21/PaperSummary_21672.htm">The singularity: A philosophical analysis</A></th></tr> <tr><th><A HREF = "../../Authors/C/Author_Chalmers (David).htm">Chalmers (David)</a></th></tr> <tr><th>Source: Journal of Consciousness Studies, Volume 17, Issue 01-02 (2010)</th></tr> <tr><th>Paper - Abstract</th></tr> </TABLE> </CENTER> <P><CENTER><TABLE class = "Bridge" WIDTH=800><tr><td><A HREF = "../../PaperSummaries/PaperSummary_21/PaperSummary_21672.htm">Paper Summary</A></td><td><A HREF = "../../PaperSummaries/PaperSummary_21/PaperCitings_21672.htm">Books / Papers Citing this Paper</A></td><td><A HREF = "../../PaperSummaries/PaperSummary_21/PapersToNotes_21672.htm">Notes Citing this Paper</A></td><td><A HREF="#ColourConventions">Text Colour-Conventions</a></td></tr></TABLE></CENTER></P> <hr><P><FONT COLOR = "0000FF"><u>Author's Introduction</u> (Extracts)<FONT COLOR = "800080"><ol type="1"><li>What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the &lsquo;singularity&rsquo;.</li><li>&hellip;</li><li><b>Practically</b>: If there is a singularity, it will be one of the most important events in the history of the planet. 
An intelligence explosion has enormous potential benefits: a cure for all known diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet. So if there is even a small chance that there will be a singularity, we would do well to think about what forms it might take and whether there is anything we can do to influence the outcomes in a positive direction. </li><li><b>Philosophically</b>: The singularity raises many important philosophical questions. The basic argument for an intelligence explosion is philosophically interesting in itself, and forces us to think hard about the nature of intelligence and about the mental capacities of artificial machines. The potential consequences of an intelligence explosion force us to think hard about values and morality and about consciousness and personal identity. In effect, the singularity brings up some of the hardest traditional questions in philosophy and raises some new philosophical questions as well. </li><li>Furthermore, the philosophical and practical questions intersect. To determine whether there might be an intelligence explosion, we need to better understand what intelligence is and whether machines might have it. To determine whether an intelligence explosion will be a good or a bad thing, we need to think about the relationship between intelligence and value. To determine whether we can play a significant role in a post-singularity world, we need to know whether human identity can survive the enhancing of our cognitive systems, perhaps through <a name="1"></a><A HREF="../../Notes/Notes_12/Notes_1246.htm">uploading</A><SUP>1</SUP> onto new technology. These are life-or-death questions that may confront us in coming decades or centuries. To have any hope of answering them, we need to think clearly about the philosophical issues. 
</li><li>In what follows, I address some of these philosophical and practical questions. I start with the argument for a singularity: is there good reason to believe that there will be an intelligence explosion? Next, I consider how to negotiate the singularity: if it is possible that there will be a singularity, how can we maximize the chances of a good outcome? Finally, I consider the place of humans in a post-singularity world, with special attention to questions about <a name="2"></a><A HREF="../../Notes/Notes_12/Notes_1246.htm">uploading</A><SUP>2</SUP>: can an <a name="3"></a><A HREF="../../Notes/Notes_12/Notes_1246.htm">uploaded</A><SUP>3</SUP> human be conscious, and will <a name="4"></a><A HREF="../../Notes/Notes_12/Notes_1246.htm">uploading</A><SUP>4</SUP> preserve personal identity? </li><li>My discussion will necessarily be speculative, but I think it is possible to reason about speculative outcomes with at least a modicum of rigor. For example, by formalizing arguments for a speculative thesis with premises and conclusions, one can see just what opponents need to deny in order to deny the thesis, and one can then assess the costs of doing so. I will not try to give knockdown arguments in this paper, and I will not try to give final and definitive answers to the questions above, but I hope to encourage others to think about these issues further. 
</li></ol></FONT><hr><FONT COLOR = "0000FF"><B>Comment: </B><BR><BR>See <a name="W3312W"></a><A HREF = "http://consc.net/papers/singularity.pdf" TARGET = "_top">Link</A>.<BR><FONT COLOR = "0000FF"><HR></P><a name="ColourConventions"></a><p><b>Text Colour Conventions (see <A HREF="../../Notes/Notes_10/Notes_1025.htm">disclaimer</a>)</b></p><OL TYPE="1"><LI><FONT COLOR = "0000FF">Blue</FONT>: Text by me; &copy; Theo Todman, 2018</li><LI><FONT COLOR = "800080">Mauve</FONT>: Text by correspondent(s) or other author(s); &copy; the author(s)</li></OL> <BR><HR><BR><CENTER> <TABLE class = "Bridge" WIDTH=950> <TR><TD WIDTH="30%">&copy; Theo Todman, June 2007 - August 2018.</TD> <TD WIDTH="40%">Please address any comments on this page to <A HREF="mailto:theo@theotodman.com">theo@theotodman.com</A>.</TD> <TD WIDTH="30%">File output: <time datetime="2018-08-02T09:40" pubdate>02/08/2018 09:40:37</time> <br><A HREF="../../Notes/Notes_10/Notes_1010.htm">Website Maintenance Dashboard</A></TD></TR> <TD WIDTH="30%"><A HREF="#Top">Return to Top of this Page</A></TD> <TD WIDTH="40%"><A HREF="../../Notes/Notes_11/Notes_1140.htm">Return to Theo Todman's Philosophy Page</A></TD> <TD WIDTH="30%"><A HREF="../../index.htm">Return to Theo Todman's Home Page</A></TD> </TR></TABLE></CENTER><HR> </BODY> </HTML>