Amazon Book Description
- AI is the future - but what will that future look like? Will superhuman intelligence be our slave, or become our god?
- Taking us to the heart of the latest thinking about AI, Max Tegmark, the MIT professor whose work has helped mainstream research on how to keep AI beneficial, separates myths from reality, utopias from dystopias, to explore the next phase of our existence.
- How can we grow our prosperity through automation, without leaving people lacking income or purpose? How can we ensure that future AI systems do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will AI help life flourish as never before, or will machines eventually outsmart us at all tasks, and even, perhaps, replace us altogether?
Contents
Acknowledgments – xi
Prelude: The Tale of the Omega Team – 3
- Welcome to the Most Important Conversation of Our Time – 22
→ A Brief History of Complexity
→ The Three Stages of Life
→ Controversies
→ Misconceptions
→ The Road Ahead
- Matter Turns Intelligent – 49
→ What Is Intelligence?
→ What Is Memory?
→ What Is Computation?
→ What Is Learning?
- The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs – 82
→ Breakthroughs
→ Bugs vs Robust AI
→ Laws
→ Weapons
- Intelligence Explosion? – 134
→ Totalitarianism
→ Prometheus Takes Over the World
→ Slow Takeoff and Multipolar Scenarios
→ Cyborgs and Uploads
→ What Will Actually Happen?
- Aftermath: The Next 10,000 Years – 161
→ Libertarian Utopia
→ Benevolent Dictator
→ Egalitarian Utopia
→ Gatekeeper
→ Protector God
→ Enslaved God
→ Conquerors
→ Descendants
→ Zookeeper
→ 1984
→ Reversion
→ Self-Destruction
→ What Do You Want?
- Our Cosmic Endowment: The Next Billion Years and Beyond – 203
→ Making the Most of Your Resources
→ Gaining Resources Through Cosmic Settlement
→ Cosmic Hierarchies
→ Outlook
- Goals – 249
→ Physics: The Origin of Goals
→ Biology: The Evolution of Goals
→ Psychology: The Pursuit of and Rebellion Against Goals
→ Engineering: Outsourcing Goals
→ Friendly AI: Aligning Goals
→ Ethics: Choosing Goals
→ Ultimate Goals?
- Consciousness – 281
→ Who Cares?
→ What Is Consciousness?
→ What’s the Problem?
→ Is Consciousness Beyond Science?
→ Experimental Clues About Consciousness
→ Theories of Consciousness
→ Controversies of Consciousness
→ How Might AI Consciousness Feel?
→ Meaning
- Epilogue: The Tale of the FLI Team – 316
Notes – 337
Index – 349
The Bottom Line [6], by Chapter
- Chapter 1
- Life, defined as a process that can retain its complexity and replicate, can develop through three stages: a biological stage (1.0), where its hardware and software are evolved; a cultural stage (2.0), where it can design its software (through learning); and a technological stage (3.0), where it can design its hardware as well, becoming the master of its own destiny.
- Artificial intelligence may enable us to launch Life 3.0 this century, and a fascinating conversation has sprung up regarding what future we should aim for and how this can be accomplished. There are three main camps in the controversy: techno-skeptics, digital Utopians and the beneficial-AI movement.
- Techno-skeptics view building superhuman AGI as so hard that it won’t happen for hundreds of years, making it silly to worry about it (and Life 3.0) now. Digital Utopians view it as likely this century and wholeheartedly welcome Life 3.0, viewing it as the natural and desirable next step in cosmic evolution.
- The beneficial-AI movement also views it as likely this century, but views a good outcome not as guaranteed, but as something that needs to be ensured by hard work in the form of AI-safety research.
- Beyond such legitimate controversies where world-leading experts disagree, there are also boring pseudo-controversies caused by misunderstandings. For example, never waste time arguing about “life,” “intelligence,” or “consciousness” before ensuring that you and your protagonist are using these words to mean the same thing! This book uses the definitions in table 1.1.
- Life: Process that can retain its complexity and replicate
- Life 1.0: Life that evolves its hardware and software (biological stage)
- Life 2.0: Life that evolves its hardware but designs much of its software (cultural stage)
- Life 3.0: Life that designs its hardware and software (technological stage)
- Intelligence: Ability to accomplish complex goals
- Artificial intelligence (AI): Non-biological intelligence
- Narrow intelligence: Ability to accomplish a narrow set of goals, e.g. play chess or drive a car
- General intelligence: Ability to accomplish virtually any goal, including learning
- Universal intelligence: Ability to acquire general intelligence given access to data and resources
- [Human-level] Artificial General Intelligence (AGI): Ability to accomplish any cognitive task at least as well as humans
- Human-level AI: AGI
- Strong AI: AGI
- Superintelligence: General intelligence far beyond human level
- Civilization: Interacting group of intelligent life forms
- Consciousness: Subjective experience
- Qualia: Individual instances of subjective experience
- Ethics: Principles that govern how we should behave
- Teleology: Explanation of things in terms of their goals or purposes rather than their causes
- Goal-oriented behavior: Behavior more easily explained via its effect than via its cause
- Having a goal: Exhibiting goal-oriented behavior
- Having purpose: Serving goals of one’s own or of another entity
- Friendly AI: Superintelligence whose goals are aligned with ours
- Cyborg: Human-machine hybrid
- Intelligence explosion: Recursive self-improvement rapidly leading to superintelligence
- Singularity: Intelligence explosion
- Universe: The region of space from which light has had time to reach us during the 13.8 billion years since our Big Bang
- Also beware the common misconceptions in figure 1.5:
→ “Superintelligence by 2100 is inevitable/impossible.”
→ “Only Luddites worry about AI.”
→ “The concern is about AI turning evil and/or conscious, and it’s just years away.”
→ “Robots are the main concern.”
→ “AI can’t control humans and can’t have goals.”
- In chapters 2 through 6, we’ll explore the story of intelligence from its humble beginning billions of years ago to possible cosmic futures billions of years from now. We’ll first investigate near-term challenges such as jobs, AI weapons and the quest for human-level AGI, then explore a fascinating spectrum of possible futures with intelligent machines and/or humans. I wonder which options you’ll prefer!
- In chapters 7 through 9, we’ll switch from cold factual descriptions to an exploration of goals, consciousness and meaning, and investigate what we can do right now to help create the future we want.
- I view this conversation about the future of life with AI as the most important one of our time — please join it!
Chapter 2
Intelligence, defined as ability to accomplish complex goals, can’t be measured by a single IQ, only by an ability spectrum across all goals.
Today’s artificial intelligence tends to be narrow, with each system able to accomplish only very specific goals, while human intelligence is remarkably broad.
Memory, computation, learning and intelligence have an abstract, intangible and ethereal feel to them because they’re substrate-independent: able to take on a life of their own that doesn’t depend on or reflect the details of their underlying material substrate. Any chunk of matter can be the substrate for memory as long as it has many different stable states.
Any matter can be computronium, the substrate for computation, as long as it contains certain universal building blocks that can be combined to implement any function. NAND gates and neurons are two important examples of such universal “computational atoms.”
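To make NAND universality concrete, here is a minimal Python sketch (my illustration, not the book’s) building NOT, AND, OR and XOR out of NAND alone:

    # Any Boolean function can be wired up from NAND gates alone,
    # which is why NAND qualifies as a universal "computational atom".
    def nand(a, b):
        return not (a and b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    def xor_(a, b):
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    # Sanity check: the XOR built purely from NAND matches its truth table.
    assert [xor_(a, b) for a in (False, True) for b in (False, True)] == \
           [False, True, True, False]

The same construction works in any substrate that implements NAND, whether transistors or neurons; that is exactly the substrate independence described above.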
A neural network is a powerful substrate for learning because, simply by obeying the laws of physics, it can rearrange itself to get better and better at implementing desired computations.
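As a toy illustration of that self-rearranging (my sketch, not the book’s), here is a single sigmoid neuron nudging its weights by gradient descent until it implements the OR function:

    # A one-neuron "network" learns OR by repeatedly nudging its weights
    # downhill on the squared error, getting better at the desired computation.
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
    w1 = w2 = b = 0.0
    lr = 1.0  # learning rate

    for _ in range(2000):
        for (x1, x2), target in data:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            delta = (y - target) * y * (1 - y)  # gradient of squared error w.r.t. pre-activation
            w1 -= lr * delta * x1
            w2 -= lr * delta * x2
            b -= lr * delta

    # After training, outputs approximate the OR targets.
    for (x1, x2), target in data:
        print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b), 2), target)

In a brain the nudging is done by physical processes such as synaptic strengthening rather than explicit arithmetic, which is the chapter’s point: the rearranging can be driven purely by the laws of physics.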
Because of the striking simplicity of the laws of physics, we humans only care about a tiny fraction of all imaginable computational problems, and neural networks tend to be remarkably good at solving precisely this tiny fraction.
Once technology gets twice as powerful, it can often be used to design and build technology that’s twice as powerful in turn, triggering repeated capability doubling in the spirit of Moore’s law. The cost of information technology has now halved roughly every two years for about a century, enabling the information age.
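A quick back-of-the-envelope check of what that compounding implies (my arithmetic, not a quote from the book):

    # Halving cost every 2 years for ~100 years means about 50 halvings,
    # i.e. roughly a quadrillion-fold drop in the cost of information technology.
    doublings = 100 / 2
    print(f"{2 ** doublings:.2e}")  # ~1.13e15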
If AI progress continues, then long before AI reaches human level for all skills, it will give us fascinating opportunities and challenges involving issues such as bugs, laws, weapons and jobs — which we’ll explore in the next chapter.
Chapter 3
Near-term AI progress has the potential to greatly improve our lives in myriad ways, from making our personal lives, power grids and financial markets more efficient to saving lives with self-driving cars, surgical bots and AI diagnosis systems.
When we allow real-world systems to be controlled by AI, it’s crucial that we learn to make AI more robust, doing what we want it to do. This boils down to solving tough technical problems related to verification, validation, security and control.
This need for improved robustness is particularly pressing for AI-controlled weapon systems, where the stakes can be huge.
Many leading AI researchers and roboticists have called for an international treaty banning certain kinds of autonomous weapons, to avoid an out-of-control arms race that could end up making convenient assassination machines available to everybody with a full wallet and an axe to grind.
AI can make our legal systems more fair and efficient if we can figure out how to make robojudges transparent and unbiased.
Our laws need rapid updating to keep up with AI, which poses tough legal questions involving privacy, liability and regulation.
Long before we need to worry about intelligent machines replacing us altogether, they may increasingly replace us on the job market.
This need not be a bad thing, as long as society redistributes a fraction of the AI-created wealth to make everyone better off.
Otherwise, many economists argue, inequality will greatly increase.
With advance planning, a low-employment society should be able to flourish not only financially but also socially, with people getting their sense of purpose from activities other than jobs.
Career advice for today’s kids: Go into professions that machines are bad at — those involving people, unpredictability and creativity.
There’s a non-negligible possibility that AGI progress will proceed to human levels and beyond – we’ll explore that in the next chapter!
Chapter 4
If we one day succeed in building human-level AGI, this may trigger an intelligence explosion, leaving us far behind.
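One common toy model of such an explosion (a hedged illustration, not Tegmark’s own calculation): if each doubling of capability lets the system complete its next self-improvement in half the previous time, the doubling times form a geometric series, so capability blows up within a finite, bounded takeoff time:

    # Toy model: doubling times halve each generation, so the total time to
    # any capability level stays below 2 * t_first, however many doublings occur.
    t_first = 2.0  # hypothetical years for the first capability doubling
    total, dt = 0.0, t_first
    for _ in range(60):
        total += dt
        dt /= 2  # each generation improves itself twice as fast
    print(round(total, 6))  # converges toward 4.0 years

Real takeoff speed depends on whether each self-improvement step actually gets faster, slower or stays constant, which is exactly the fast-versus-slow-takeoff question this chapter explores.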
If a group of humans manage to control an intelligence explosion, they may be able to take over the world in a matter of years.
If humans fail to control an intelligence explosion, the AI itself may take over the world even faster.
Whereas a rapid intelligence explosion is likely to lead to a single world power, a slow one dragging on for years or decades may be more likely to lead to a multipolar scenario with a balance of power between a large number of rather independent entities.
The history of life shows it self-organizing into an ever more complex hierarchy shaped by collaboration, competition and control. Superintelligence is likely to enable coordination on ever-larger cosmic scales, but it’s unclear whether it will ultimately lead to more totalitarian top-down control or more individual empowerment.
Cyborgs and uploads are plausible, but arguably not the fastest route to advanced machine intelligence.
The climax of our current race toward AI may be either the best or the worst thing ever to happen to humanity, with a fascinating spectrum of possible outcomes that we’ll explore in the next chapter.
We need to start thinking hard about which outcome we prefer and how to steer in that direction, because if we don’t know what we want, we’re unlikely to get it.
Chapter 5
The current race toward AGI can end in a fascinatingly broad range of aftermath scenarios for upcoming millennia.
Superintelligence can peacefully coexist with humans either because it’s forced to (enslaved-god scenario) or because it’s “friendly AI” that wants to (libertarian-utopia, protector-god, benevolent-dictator and zookeeper scenarios).
Superintelligence can be prevented by an AI (gatekeeper scenario) or by humans (1984 scenario), by deliberately forgetting the technology (reversion scenario) or by lack of incentives to build it (egalitarian-utopia scenario).
Humanity can go extinct and get replaced by AIs (conqueror and descendant scenarios) or by nothing (self-destruction scenario).
There’s absolutely no consensus on which, if any, of these scenarios are desirable, and all involve objectionable elements. This makes it all the more important to continue and deepen the conversation around our future goals, so that we don’t inadvertently drift or steer in an unfortunate direction.
Chapter 6
Compared to cosmic timescales of billions of years, an intelligence explosion is a sudden event where technology rapidly plateaus at a level limited only by the laws of physics.
This technological plateau is vastly higher than today’s technology, allowing a given amount of matter to generate about ten billion times more energy (using sphalerons or black holes), store 12-18 orders of magnitude more information or compute 31-41 orders of magnitude faster, or to be converted to any other desired form of matter.
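To see where a factor like “ten billion” for energy can come from (my back-of-the-envelope numbers, not the book’s table), compare complete mass-to-energy conversion via E = mc², which sphalerons or black holes could in principle approach, with chemically burning the same kilogram:

    # E = mc^2 for 1 kg versus the chemical energy from burning 1 kg of fuel.
    c = 3.0e8                # speed of light, m/s
    e_total = 1.0 * c ** 2   # ~9e16 J locked in 1 kg of matter
    e_burn = 5.0e7           # ~5e7 J from burning 1 kg of gasoline
    print(f"{e_total / e_burn:.1e}")  # ~1.8e9, within an order of magnitude of "ten billion"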
Superintelligent life would not only make such dramatically more efficient use of its existing resources, but would also be able to grow today’s biosphere by about 32 orders of magnitude by acquiring more resources through cosmic settlement at near light speed.
Dark energy limits the cosmic expansion of superintelligent life and also protects it from distant expanding death bubbles or hostile civilizations. The threat of dark energy tearing cosmic civilizations apart motivates massive cosmic engineering projects, including wormhole construction if this turns out to be feasible.
The main commodity shared or traded across cosmic distances is likely to be information.
Barring wormholes, the light-speed limit on communication poses severe challenges for coordination and control across a cosmic civilization. A distant central hub may incentivize its superintelligent “nodes” to cooperate either through rewards or through threats, say by deploying a local guard AI programmed to destroy the node by setting off a supernova or quasar unless the rules are obeyed.
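For a feel of those coordination delays (my illustrative figures, using standard astronomical distances):

    # Round-trip light delay: the minimum time between asking a distant
    # node a question and receiving its answer.
    routes_ly = {
        "across the Milky Way": 100_000,       # ~1e5 light-years
        "to the Andromeda galaxy": 2_500_000,  # ~2.5e6 light-years
    }
    for route, d in routes_ly.items():
        print(f"{route}: ~{2 * d:,} years round trip")

At such delays a hub can only set incentives in advance; it can never supervise its nodes in real time.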
The collision of two expanding civilizations may result in assimilation, cooperation or war, where the latter is arguably less likely than it is between today’s civilizations.
Despite popular belief to the contrary, it’s quite plausible that we’re the only life form capable of making our observable Universe come alive in the future.
If we don’t improve our technology, the question isn’t whether humanity will go extinct, but merely how: will an asteroid, a super-volcano, the burning heat of the aging Sun or some other calamity get us first?
If we do keep improving our technology with enough care, foresight and planning to avoid pitfalls, life has the potential to flourish on Earth and far beyond for many billions of years, beyond the wildest dreams of our ancestors.
Chapter 7
The ultimate origin of goal-oriented behavior lies in the laws of physics, which involve optimization.
Thermodynamics has the built-in goal of dissipation: to increase a measure of messiness that’s called entropy.
Life is a phenomenon that can help dissipate (increase overall messiness) even faster by retaining or growing its complexity and replicating while increasing the messiness of its environment.
Darwinian evolution shifts the goal-oriented behavior from dissipation to replication.
Intelligence is the ability to accomplish complex goals.
Since we humans don’t always have the resources to figure out the truly optimal replication strategy, we’ve evolved useful rules of thumb that guide our decisions: feelings such as hunger, thirst, pain, lust and compassion.
We therefore no longer have a simple goal such as replication; when our feelings conflict with the goal of our genes, we obey our feelings, as by using birth control.
We’re building increasingly intelligent machines to help us accomplish our goals. Insofar as we build such machines to exhibit goal-oriented behavior, we strive to align the machine goals with ours.
Aligning machine goals with our own involves three unsolved problems: making machines learn them, adopt them and retain them.
AI can be created to have virtually any goal, but almost any sufficiently ambitious goal can lead to subgoals of self-preservation, resource acquisition and curiosity to understand the world better – the former two may potentially lead a superintelligent AI to cause problems for humans, and the latter may prevent it from retaining the goals we give it.
Although many broad ethical principles are agreed upon by most humans, it’s unclear how to apply them to other entities, such as non-human animals and future AIs.
It’s unclear how to imbue a superintelligent AI with an ultimate goal that neither is undefined nor leads to the elimination of humanity, making it timely to rekindle research on some of the thorniest issues in philosophy!
Chapter 8
There’s no undisputed definition of “consciousness.” I use the broad and non-anthropocentric definition consciousness = subjective experience.
Whether AIs are conscious in that sense is what matters for the thorniest ethical and philosophical problems posed by the rise of AI:
→ Can AIs suffer?
→ Should they have rights?
→ Is uploading a subjective suicide?
→ Could a future cosmos teeming with AIs be the ultimate zombie apocalypse?
The problem of understanding intelligence shouldn’t be conflated with three separate problems of consciousness:
→ the “pretty hard problem” of predicting which physical systems are conscious,
→ the “even harder problem” of predicting qualia, and
→ the “really hard problem” of why anything at all is conscious.
The “pretty hard problem” of consciousness is scientific, since a theory that predicts which of your brain processes are conscious is experimentally testable and falsifiable, while it’s currently unclear how science could fully resolve the two harder problems.
Neuroscience experiments suggest that many behaviors and brain regions are unconscious, with much of our conscious experience representing an after-the-fact summary of vastly larger amounts of unconscious information.
Generalizing consciousness predictions from brains to machines requires a theory. Consciousness appears to require not a particular kind of particle or field, but a particular kind of information processing that’s fairly autonomous and integrated, so that the whole system is rather autonomous but its parts aren’t.
Consciousness might feel so non-physical because it’s doubly substrate-independent: if consciousness is the way information feels when being processed in certain complex ways, then it’s merely the structure of the information processing that matters, not the structure of the matter doing the information processing.
If artificial consciousness is possible, then the space of possible AI experiences is likely to be huge compared to what we humans can experience, spanning a vast spectrum of qualia and timescales, all sharing a feeling of having free will.
Since there can be no meaning without consciousness, it’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
This suggests that as we humans prepare to be humbled by ever smarter machines, we take comfort mainly in being Homo sentiens, not Homo sapiens.
Links from Book
- Chapter 1
→ The Future Of AI – What Do You Think?
→ The AI Revolution: Our Immortality or Extinction
→ Research Priorities For Robust And Beneficial Artificial Intelligence
→ Mail Online: Stephen Hawking on AI
In-Page Footnotes ("Tegmark (Max) - Life 3.0: Being Human in the Age of Artificial Intelligence")
Footnote 6:
- Chapter summaries provided by the author.
Book Comment
Penguin (5 July 2018), Paperback
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2023
- Mauve: Text by correspondent(s) or other author(s); © the author(s)