Transhumanism: Future Trajectories: Singularity - Introduction
More (Max) & Vita-More (Natasha)
Source: More & Vita-More - The Transhumanist Reader, Introduction to Part VIII
Paper - Abstract


Full Text1

  1. How does the concept of the singularity relate to that of transhumanism? In science, the singularity may refer to a discontinuity or a mathematical point at which an object is not defined, or to a cosmological event where the measure of the gravitational field becomes infinite.
  2. In theory, the technological singularity is a conjecture about the emergence of super-intelligent minds. Transhumanism is a worldview that seeks to understand the unknown, anticipate risks, and create an advantageous future for humanity, including the nonbiological superintelligences we may become or create.
  3. However, too often, observers conflate the two concepts, assuming that all transhumanists anticipate a technological singularity. The considerable overlap of interests and expectations represented by both views feeds that confusion. After all, both transhumanists and proponents of the technological singularity (i.e., singularitarians, as they sometimes call themselves) expect drastic changes in the future.
  4. Because the term has had wide appeal, it is now referred to simply as "the singularity." Some transhumanists expect a singularity, and most of those who expect a singularity are broadly transhumanist. But, while transhumanism is a broad worldview that anticipates using technology to overcome human limits, the singularity is a specific model (or set of models) of technological change and its trajectory into the future.
  5. To clearly separate specific singularitarian expectations from the philosophy of transhumanism requires first defining the former. The original meaning of "technological singularity", as coined by Vernor Vinge in his 1993 essay ("Vinge (Vernor) - Technological Singularity", the first in this section), is the Event Horizon view. This view links to Alan Turing's seminal writing about intelligent machinery outstripping human intelligence, and more directly to I.J. Good's term "intelligence explosion," which suggests not only a growth of machine intelligence but its acceleration. Accordingly, technological advance will lead to the advent of superhuman intelligence.
  6. Superhuman intelligence will not only accelerate technological progress; it will also create even more intelligent entities ever faster (e.g. Good's concept). As Vinge puts it, "This change will be a throwing-away of all the human rules, perhaps in the blink of an eye." One result of this sudden shift will be the unfolding of a future that is incomprehensible to us. Sometimes this view includes the claim that to know what a superhuman intelligence would do at this point, you would have to be superintelligent too. The distinctive consequence of the Event Horizon version of the singularity is that the future beyond that point will be completely unpredictable, as Vinge states:
      My use of the word "Singularity" is not meant to imply that some variable blows up to infinity. My use of the term comes from the notion that if physical progress with computation gets good enough, then we will have creatures that are smarter than human. At that point the human race will no longer be at center stage. The world will be driven by those other intelligences. This is a fundamentally different form of technical progress. The change would be essentially unknowable, unknowable in a different way than technology change has been in the past. ("Arterati On Ideas: Vinge's View of the Singularity," interview by Natasha Vita-More in Extropy Online [1998])
  7. Ironically, the concept of the "singularity" is not itself singular. In "Sandberg (Anders) - An Overview of Models of Technological Singularity", Anders Sandberg delves in great detail into a range of differing models of technological singularity. A less complex map of this territory captures the three primary models on which most people seem to agree. These are the Event Horizon, Accelerating Change, and Intelligence Explosion.
  8. The Accelerating Change conception of the technological singularity has become strongly associated with inventor and visionary Ray Kurzweil. According to this view, technological change is a positive feedback loop and so is exponential rather than linear. Because change in the past was slower than change in the present, and future change will be faster still, our typically linear expectations of change will be drastically conservative, especially as we look out further ahead. If, as Kurzweil argues, technological advance follows smooth exponential curves, then - contrary to the Event Horizon view - we can make accurate forecasts of some new technologies, including the development of artificial intelligence.
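The contrast between linear expectations and exponential change can be made concrete with a toy calculation. The figures below are purely illustrative assumptions, not data from Kurzweil or the text; the point is only that the two forecasting rules agree over short horizons and diverge enormously over long ones:

```python
# Toy illustration (illustrative numbers, not from the text): why
# linear extrapolation of an exponentially growing capability is
# "drastically conservative" over long horizons.

def linear_forecast(current, annual_gain, years):
    """Extrapolate by adding the same absolute gain each year."""
    return current + annual_gain * years

def exponential_forecast(current, annual_factor, years):
    """Extrapolate by multiplying by the same factor each year."""
    return current * annual_factor ** years

# Suppose some capability index is 100 today and grew 40% last year.
# Both rules agree after one year, then diverge rapidly.
for years in (1, 10, 30):
    lin = linear_forecast(100, 40, years)
    exp = exponential_forecast(100, 1.4, years)
    print(f"{years:>2} years: linear={lin:,.0f}  exponential={exp:,.0f}")
```

After one year both rules predict 140; after thirty years the linear rule predicts 1,300 while the exponential rule predicts over two million, which is the sense in which linear intuitions undershoot.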
  9. Finally, the Intelligence Explosion throws out smooth exponential change in favor of a positive feedback cycle of cognitive improvement. Once technology leads to minds of superhuman intelligence, a powerful positive feedback cycle comes into play. Recursive cognitive self-improvement causes vast change extremely rapidly before running into upper limits imposed by the laws of physics or of computation.
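A minimal sketch of that feedback cycle, under assumptions of my own (the function, its parameters, and the numbers are hypothetical, not drawn from the text): each generation's improvement is proportional to how intelligent it already is, until a hard ceiling representing physical or computational limits is reached.

```python
# Toy sketch (hypothetical model, not from the text): recursive
# self-improvement as a positive feedback loop with a hard ceiling
# standing in for the limits of physics and computation.

def intelligence_explosion(start, gain_rate, ceiling, generations):
    """Each generation improves itself in proportion to its current
    level, capped at the ceiling; returns the whole trajectory."""
    level = start
    history = [level]
    for _ in range(generations):
        level = min(level * (1 + gain_rate), ceiling)
        history.append(level)
    return history

trajectory = intelligence_explosion(start=1.0, gain_rate=0.5,
                                    ceiling=1000.0, generations=25)
# Growth is explosive at first, then flattens once the ceiling binds.
print(trajectory[0], trajectory[-1])
```

The design choice worth noting is the `min(..., ceiling)` step: without it the model is just the smooth exponential of the Accelerating Change view, whereas the cap captures the claim that the explosion runs "into upper limits imposed by the laws of physics".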
  10. Casual observers frequently mix these models of the singularity into a single, cloudy suspension. The more sharply defined you make these three views, the more they contradict one another. Most obviously, the inherent unpredictability of the Event Horizon and Intelligence Explosion views conflicts with the predictability of the Accelerating Change view.
  11. Those who anticipate a singularity often display considerable confidence in their forecasts of when the singularity will happen, whether or not they expect the post-singularity future to be predictable. Writing in 1993 and reaffirming his view in 2003, Vinge said that he would be "surprised if this event occurs before 2005 or after 2030." On the basis of sets of exponential curves, Kurzweil forecasts a singularity close to 2045, while others think it might take several hundred years to achieve superintelligent AI. Other transhumanists are much more skeptical about the ability to make remotely precise forecasts of this nature. Some individuals who would like to experience a singularity nevertheless worry that technological and economic progress is actually slowing, not accelerating. In addition, there is no shortage of those who are much less favorable to the singularity and see more stagnation than acceleration.
  12. It is entirely possible to expect a technological singularity of one of these types and yet not to be a transhumanist. The advent of superhuman intelligence might involve augmenting human intelligence to superhuman levels, or it might mean that synthetic intelligences leave us far behind while we remain mired in the human condition. Some writers, such as Hans Moravec, at least sometimes seem to expect this outcome and are unconcerned about it. This might better be described as a type of posthumanism, except that there is already a vaguely defined academic view using that term. It is also entirely possible to affirm the core values and goals of transhumanism while doubting that we will experience anything resembling a singularity. Technological advance may come in fits and starts, eventually leading to a posthuman condition, but without any singular event.

In-Page Footnotes

Footnote 1: Chapter summaries have been removed and used as the Abstracts of the Chapters themselves; though, in this case, very little of the Introduction was left!



© Theo Todman, June 2007 - Jan 2021.