The Reith Lectures 2021: Living With Artificial Intelligence
Russell (Stuart)
Source: BBC Radio 4 Website, December 2021
Full Text

  1. Introduction: Nine Things You Should Know About AI
    • AI – the use of computers or machines to produce intelligent behaviour – is expanding rapidly. In the last decade it has conquered a number of tasks, including agile legged-robot locomotion, recognition of objects in images, speech recognition and machine translation, with self-driving cars one of the developments likely to be perfected in the next wave.
    • But it won’t stop there. General-purpose AI – with the capability of doing everything that a human can do, or better – is the ultimate game-changer. According to this year’s BBC Reith lecturer, Prof. Stuart Russell, the development of this technology could be “the biggest event in human history”. In an essay co-written with Stephen Hawking in 2014, he concluded: “Unfortunately, it might also be the last.” It’s this stark warning that runs through Prof. Russell’s fascinating – and sometimes shocking – series of four Reith Lectures.
    • General-purpose AI is not here yet, but Russell argues that we must plan for it in order to avoid losing control over our own future.
  2. AI is already a big part of your life
    • “Every time you use a credit card or debit card, there's an AI system deciding ‘is this a real transaction or a fraudulent one?’,” explains Prof. Russell. “Every time you ask Siri a question on your iPhone there's an AI system there that has to understand your speech and then understand the question and then figure out how to answer it.”
  3. AI could give us so much more
    • General-purpose AI could – theoretically – have universal access to all the knowledge and skills of the human race, meaning that a multitude of tasks could be carried out more effectively, at far less cost and on a far greater scale. This potentially means that we could raise the living standard of everyone on Earth. Professor Russell estimates that doing so would mean roughly a tenfold increase in global GDP – a prize whose cash value he puts at around 14 quadrillion dollars.
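    • A rough order-of-magnitude check on that figure, as a short Python sketch. The GDP level, multiplier and discount rate below are illustrative assumptions, not Russell’s exact inputs; the idea is simply to treat the extra output as a perpetuity and discount it to a present value:

        # Order-of-magnitude check on the "14 quadrillion dollars" figure.
        # All inputs are illustrative assumptions, not Russell's own numbers.
        world_gdp = 100e12    # assume current world GDP of ~$100 trillion/year
        multiplier = 10       # assume a tenfold increase from general-purpose AI
        discount_rate = 0.05  # assume a 5% annual discount rate

        annual_gain = world_gdp * (multiplier - 1)  # extra output per year
        npv = annual_gain / discount_rate           # perpetuity NPV: gain / rate
        print(f"NPV of the gain: ${npv / 1e15:.0f} quadrillion")  # -> $18 quadrillion

      With these made-up inputs the answer lands in the same ballpark as Russell’s figure; different but reasonable assumptions move it by a factor of two or so either way.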
  4. AI can harm us
    • There are already a number of negative consequences from the misuse of AI, including racial and gender bias, disinformation, deepfakes and cybercrime. However, Professor Russell says that even the normal way AI is programmed is potentially harmful.
    • The “standard model” for AI involves specifying a fixed objective, for which the AI system is supposed to find and execute the best solution. The problem, once AI moves into the real world, is that we cannot specify objectives completely and correctly.
    • Having fixed but imperfect objectives could lead to an uncontrollable AI that stops at nothing to achieve its aim. Professor Russell gives a number of examples, including a domestic robot programmed to look after children. In this scenario, the robot tries to feed the children but finds nothing in the fridge.
    • “And then… the robot sees the cat… Unfortunately, the robot lacks the understanding that the cat’s sentimental value is far more important than its nutritional value. So, you can imagine what happens next!”
    • The clickbait-amplifying, manipulative algorithms of social media illustrate the pitfalls of pursuing fixed but misspecified objectives. While these algorithms are very simple, things could get far worse in future as AI systems become more powerful and we place greater reliance on them. In a chilling example, Russell imagines a future COP36 asking an AI system for help with ocean acidification. Even though the pitfalls of specifying narrow objectives have been factored in, the solution the system finds is a chemical reaction that uses up a quarter of all the oxygen in the atmosphere.
    • One could say, “Just be more careful in specifying the objective!” But even for a relatively narrow task such as driving, it turns out to be very difficult to balance speed, safety, legality, passenger comfort, and politeness to other road users – as the sketch below illustrates, any fixed weighting of these factors is an arbitrary commitment.
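    • A minimal sketch, in Python, of what a fixed driving objective looks like under the standard model. Every cost term and weight here is invented for illustration; the point is that the designer must commit to the trade-offs in advance, and the system then optimises them without question:

        # Standard-model objective: a fixed weighted sum of costs.
        # All terms and weights are invented for illustration.
        WEIGHTS = {"time": 1.0, "risk": 50.0, "illegality": 20.0,
                   "discomfort": 2.0, "rudeness": 0.5}

        def trajectory_cost(traj):
            """Score a candidate driving plan under the fixed objective."""
            return sum(WEIGHTS[k] * traj[k] for k in WEIGHTS)

        # Is shaving three time-units worth extra risk and rudeness?
        # Whatever the weights say - forever, in every situation.
        cautious  = {"time": 10, "risk": 0.01, "illegality": 0, "discomfort": 1, "rudeness": 0}
        assertive = {"time": 7,  "risk": 0.05, "illegality": 0, "discomfort": 2, "rudeness": 3}
        print(trajectory_cost(cautious), trajectory_cost(assertive))  # -> 12.5 15.0

      Change any single weight and a different plan becomes “optimal”; nothing in the formalism says which weights are right.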
  5. AI needs humility
    • If specifying complete and correct objectives in the real world is practically impossible, then, Russell suggests, we need a different way to think about building AI systems. Instead of requiring a fixed objective, the AI system needs to know that it doesn’t know what the real human objective is, even though it is required to pursue it. This humility leads the AI system to defer to human control, to ask permission before doing something that might violate human preferences, and to allow itself to be switched off if that’s what humans want.
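    • A toy numerical sketch of why that humility helps, loosely based on the “off-switch game” analysis from Russell’s research group (Hadfield-Menell et al.). The robot does not know the human utility u of its proposed action; it only has a belief about it. The belief distribution and all numbers below are invented for illustration:

        import random

        # The robot's belief about the unknown human utility u of acting.
        random.seed(0)
        belief = [random.gauss(0.2, 1.0) for _ in range(100_000)]

        act = sum(belief) / len(belief)        # act unilaterally: E[u]
        switch_off = 0.0                       # switch itself off: u = 0
        defer = sum(max(u, 0.0) for u in belief) / len(belief)
        # Deferring lets a rational human veto bad actions, so the robot
        # gets E[max(u, 0)], which is never less than max(E[u], 0).

        print(f"act: {act:.2f}  off: {switch_off:.2f}  defer: {defer:.2f}")
        # -> deferring wins: uncertainty about the true objective makes
        #    it rational for the machine to accept human oversight.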
  6. AI warfare is not science fiction – it’s already here
    • The idea of a malevolent and warlike AI was popularised in the 1984 film The Terminator, with the rogue Skynet and its quest for world domination. “It makes people think that autonomous weapons are science fiction. They’re not. You can buy them today,” warns Professor Russell.
    • The Kargu 2 drone is one example. Advertised as capable of "anti-personnel autonomous hits" with "targets selected on images and face recognition", this dinner plate-sized drone, made by STM (“an arm of the Turkish government”), was used in 2020 to hunt down and attack human targets in Libya – according to a recent UN report.
  7. Your job is at risk
    • Despite the prospect of killer robots, it’s AI’s effect on the world of work that scares most people. Concerns range from parents watching their children’s academic or job prospects being decided by AI sifting through applications, to a succession of Nobel prize-winning economists who describe it as “the biggest problem we face in the world economy”.
    • As technology advances, employment tends to trace an inverted “U-curve”. Professor Russell explains: “The direct effects of technology work both ways: at first, technology can increase employment by reducing costs and increasing demand; subsequently, further increases in technology mean that fewer and fewer humans are required once demand saturates.”
    • “In the early 20th century, technology put tens of millions of horses out of work,” says Professor Russell. “Their ‘new job’ was to be pet food and glue. Human workers might not be so compliant.”
  8. Wall-E could have been a documentary
    • The reaction of economists to the threats presented by AI has seen two camps emerge – one favouring UBI (Universal Basic Income), the other warning that UBI is an admission of failure. The latter group argues that guaranteeing a level of income will result in a lack of striving, taking us closer to a life of the kind depicted in the animated film Wall-E, where robots do all the work and humans have become lazy and enfeebled.
  9. Our humanity is our greatest resource
    • The economic challenge of AI might teach us to have a new appreciation of the interpersonal professions – such as psychotherapists, executive coaches, tutors, counsellors, social workers, companions and those who care for children and the elderly. Prof. Russell suggests that these roles should be perceived as emphasising personal growth rather than, as they often are, about creating a dependence. “If we can no longer supply routine physical labour and routine mental labour,” he says, “we can still supply our humanity.”
  10. AI has no endgame
    • Prof. Russell urges people not to think of AI as an arms race. “We have Putin, we have US presidents, Chinese general secretaries talking about this as if we're going to use AI to enable us to rule the world. And I think that's a huge mistake.”
    • Wrangling over the control of general-purpose AI would be, Professor Russell says, as futile as arguing over who has more digital copies of a newspaper. “If I have a digital copy, it doesn't prevent other people from having digital copies, and it doesn't matter how many more copies I have than they do, it doesn't do me a lot of good.”
  11. Professor Stuart Russell
    • Stuart Russell is founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.
  12. The Reith Lectures 2021: Living With Artificial Intelligence
  13. AI and why people should be scared: Former BBC correspondent Rory Cellan-Jones spoke to Prof Stuart Russell ahead of his four Reith Lectures.
    • Prof Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence, at the University of California, Berkeley, is giving this year's Reith Lectures.
      • His four lectures, Living With Artificial Intelligence, address the existential threat from machines more powerful than humans - and offer a way forward.
      • The Reith Lectures are in Newcastle, Manchester, Edinburgh and London
      • Last month, he spoke to then BBC News technology correspondent Rory Cellan-Jones about what to expect.
    • How have you shaped the lectures?
      • The first drafts that I sent them were much too pointy-headed, much too focused on the intellectual roots of AI and the various definitions of rationality and how they emerged over history and things like that.
      • So I readjusted - and we have one lecture that introduces AI and the future prospects both good and bad.
      • And then, we talk about weapons and we talk about jobs.
      • And then, the fourth one will be: "OK, here's how we avoid losing control over AI systems in the future."
    • Do you have a formula, a definition, for what artificial intelligence is?
      • Yes, it's machines that perceive and act and hopefully choose actions that will achieve their objectives.
      • All these other things that you read about, like deep learning and so on, they're all just special cases of that.
    • But could a dishwasher not fit into that definition?
      • [Image caption: Increasingly, home appliances have a degree of intelligence.]
      • It's a continuum.
      • Thermostats perceive and act and, in a sense, they have one little rule that says: “If the temperature is below this, turn on the heat. If the temperature is above this, turn off the heat.”
      • So that’s a trivial program and it’s a program that was completely written by a person, so there was no learning involved. [A minimal sketch of such a rule follows this answer.]
      • All the way up the other end - you have the self-driving cars, where the decision-making is much more complicated, where a lot of learning was involved in achieving that quality of decision-making.
      • But there's no hard-and-fast line.
      • We can't say anything below this doesn't count as AI and anything above this does count.
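      • A minimal sketch of that thermostat rule as a complete, hand-written Python program – no learning involved. The setpoints are arbitrary examples:

          # The thermostat rule as a trivial hand-written program.
          # Setpoints are arbitrary examples; nothing here is learned.
          def thermostat(temperature_c, low=19.0, high=21.0):
              """Perceive a temperature reading; act on the heater."""
              if temperature_c < low:
                  return "turn on the heat"
              if temperature_c > high:
                  return "turn off the heat"
              return "do nothing"

          print(thermostat(17.5))  # -> turn on the heat
          print(thermostat(23.0))  # -> turn off the heat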
    • And is it fair to say there have been great advances in the past decade in particular?
      • In object recognition, for example, which was one of the things we've been trying to do since the 1960s, we've gone from completely pathetic to superhuman, according to some measures.
      • And in machine translation, again we've gone from completely pathetic to really pretty good.
    • So what is the destination for AI?
      • [Image caption: Robots are increasingly being used as a teaching resource in schools – but will they one day build one?]
      • If you look at what the founders of the field said their goal was, it’s general-purpose AI – which means not a program that’s really good at playing Go, or a program that’s really good at machine translation, but something that can do pretty much anything a human could do, and probably a lot more besides, because machines have huge bandwidth and memory advantages over humans.
      • Just say we need a new school.
      • The robots would show up.
      • The robot trucks, the construction robots and the construction management software would know how to build it, how to get the permits, how to talk to the school district and the principal to figure out the right design for the school, and so on and so forth – and a week later, you have a school.
    • And where are we in terms of that journey?
      • I'd say we're a fair bit of the way.
      • Clearly, there are some major breakthroughs that still have to happen.
      • And I think the biggest one is around complex decision-making.
      • So if you think about the example of building a school - how do we start from the goal that we want a school, and then all the conversations happen, and then all the construction happens, how do humans do that?
      • Well, humans have an ability to think at multiple scales of abstraction.
      • So we might say: "OK, well the first thing we need to figure out is where we're going to put it. And how big should it be?"
      • We don't start thinking about should I move my left finger first or my right foot first, we focus on the high-level decisions that need to be made.
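      • A toy Python illustration of that kind of hierarchical refinement. The task breakdown is invented for illustration; real hierarchical planners (and human planning) are far richer:

          # Planning at multiple scales of abstraction: a high-level goal
          # expands into subgoals, which expand into concrete steps.
          # The hierarchy below is invented for illustration.
          HIERARCHY = {
              "build a school": ["choose the site", "design the building", "construct it"],
              "choose the site": ["survey locations", "check zoning"],
              "design the building": ["set capacity", "draw plans", "get permits"],
              "construct it": ["pour foundations", "erect the structure", "fit it out"],
          }

          def expand(task, depth=0):
              """Recursively refine an abstract task into concrete steps."""
              print("  " * depth + task)
              for subtask in HIERARCHY.get(task, []):
                  expand(subtask, depth + 1)

          expand("build a school")
          # Note what never appears: "move my left finger first".
          # Abstraction lets the planner settle high-level decisions first.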
    • You’ve painted a picture showing AI has made quite a lot of progress – but not as much as people might think. Are we at a point, though, of extreme danger?
      • I think so, yes.
      • There are two arguments as to why we should pay attention.
      • One is that even though our algorithms right now are nowhere close to general human capabilities, when you have billions of them running they can still have a very big effect on the world.
      • The other reason to worry is that it's entirely plausible - and most experts think very likely - that we will have general-purpose AI within either our lifetimes or in the lifetimes of our children.
      • I think if general-purpose AI is created in the current context of superpower rivalry - you know, whoever rules AI rules the world, that kind of mentality - then I think the outcomes could be the worst possible.
    • Your second lecture is about military use of AI and the dangers there. Why does that deserve a whole lecture?
      • [Image caption: The military is already experimenting with AI and robots on the battlefield.]
      • Because I think it's really important and really urgent.
      • And the reason it's urgent is because the weapons that we have been talking about for the last six years or seven years are now starting to be manufactured and sold.
      • So in 2017, for example, we produced a movie called Slaughterbots about a small quadcopter about 3in [8cm] in diameter that carries an explosive charge and can kill people by getting close enough to them to blow up.
      • We showed this first at diplomatic meetings in Geneva and I remember the Russian ambassador basically sneering and sniffing and saying: "Well, you know, this is just science fiction, we don't have to worry about these things for 25 or 30 years."
      • I explained what my robotics colleagues had said, which is that no, they could put a weapon like this together in a few months with a few graduate students.
      • And in the following month, so three weeks later, the Turkish manufacturer STM [Savunma Teknolojileri Mühendislik ve Ticaret AŞ] actually announced the Kargu drone, which is basically a slightly larger version of the Slaughterbot.
    • What are you hoping for in terms of the reaction to these lectures - that people will come away scared, inspired, determined to see a path forward with this technology?
      • All of the above - I think a little bit of fear is appropriate, not fear when you get up tomorrow morning and think my laptop is going to murder me or something, but thinking about the future - I would say the same kind of fear we have about the climate or, rather, we should have about the climate.
      • I think some people just say: “Well, it looks like a nice day today,” and they don’t think.
      • And I think a little bit of fear is necessary, because that's what makes you act now rather than acting when it's too late, which is, in fact, what we have done with the climate.
      Cellan-Jones: Reith Lectures: AI and why people should be scared

Comment:

See Reith Lectures: Nine Things You Should Know About AI.
