Editor’s Abstract
- Artificial Intelligence (AI) systems are now used in everything from the trading of stocks to the setting of house prices; from detecting fraud to translating between languages; from creating our weekly shopping lists to predicting which movies we might enjoy. This is just the beginning.
- Four researchers reflect on the power of a technology to impact nearly every aspect of modern life – and why we need to be ready.
Full Text
- Living With AI
- This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours?
- That’s yesterday’s news; what’s next? True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.
- If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.
- Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?
- On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.
- So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If super-intelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.
- For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.
- However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination.
- The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.
- The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.
- Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.
- As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.
- Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?
- These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.
- This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listeners to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”
- We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.
- But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism.
- All the more reason to think about the destination now, and to be careful about what we wish for.
- Professor Huw Price Faculty of Philosophy and the Leverhulme Centre for the Future of Intelligence (CFI) hp331@cam.ac.uk
- Dr Karina Vold Faculty of Philosophy and CFI kvv22@cam.ac.uk
- The uncertain unicycle that taught itself and how it’s helping AI make good decisions
- Cambridge researchers are pioneering a form of machine learning that starts with only a little prior knowledge and continually learns from the world around it.
- In the centre of the screen is a tiny unicycle. The animation starts, the unicycle lurches forward and falls. This is trial #1. It’s now trial #11 and there’s a change – an almost imperceptible delay in the fall, perhaps an attempt to right itself before the inevitable crash. “It’s learning from experience,” nods Professor Carl Edward Rasmussen.
- After a minute, the unicycle is gently rocking back and forth as it circles on the spot. It’s figured out how this extremely unstable system works and has mastered its goal. “The unicycle starts with knowing nothing about what’s going on – it’s only been told that its goal is to stay in the centre in an upright fashion. As it starts falling forwards and backwards, it starts to learn,” explains Rasmussen, who leads the Computational and Biological Learning Lab in the Department of Engineering. “We had a real unicycle robot but it was actually quite dangerous – it was strong – and so now we use data from the real one to run simulations, and we have a mini version.”
- Rasmussen uses the self-taught unicycle to demonstrate how a machine can start with very little data and learn dynamically, improving its knowledge every time it receives new information from its environment. The consequences of adjusting its motorised momentum and balance help the unicycle learn which moves matter for staying upright in the centre.
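- As a rough illustration of that learn-from-experience loop (not the lab’s actual controller, which uses far richer probabilistic models), the sketch below acts, observes the consequence, refits a crude model on everything seen so far, and acts again; the stand-in dynamics, the linear model and the torque search are all hypothetical simplifications.

```python
# A minimal, hypothetical sketch of learning from experience:
# act -> observe the consequence -> refit a model -> act again.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(angle, velocity, torque, dt=0.05):
    """Hidden 'real world': an inverted-pendulum-like stand-in for the unicycle."""
    new_velocity = velocity + dt * (9.8 * np.sin(angle) + torque)
    new_angle = angle + dt * new_velocity
    return new_angle, new_velocity

X, y = [], []   # every interaction so far: (angle, velocity, torque) -> next angle

def fit_model():
    """Refit a crude linear model of the next angle from all data seen so far."""
    if len(X) < 5:
        return None
    coeffs, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return coeffs

def choose_torque(model, angle, velocity):
    """Pick the torque whose predicted next lean angle is smallest."""
    if model is None:
        return rng.uniform(-2.0, 2.0)                 # knows nothing yet: explore
    candidates = np.linspace(-2.0, 2.0, 21)
    predicted_lean = [abs(np.array([angle, velocity, t]) @ model) for t in candidates]
    return candidates[int(np.argmin(predicted_lean))]

angle, velocity = 0.1, 0.0
for step in range(200):
    torque = choose_torque(fit_model(), angle, velocity)
    next_angle, next_velocity = true_dynamics(angle, velocity, torque)
    X.append([angle, velocity, torque])               # the consequence becomes new data
    y.append(next_angle)
    angle, velocity = next_angle, next_velocity
    if abs(angle) > np.pi / 2:                        # fell over: start a new trial
        angle, velocity = rng.uniform(-0.1, 0.1), 0.0
```

Each pass through the loop is a ‘trial’: the model is wrong at first, the falls it causes become data, and the next model is a little less wrong.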
- “This is just like a human would learn,” explains Professor Zoubin Ghahramani, who leads the Machine Learning Group in the Department of Engineering. “We don’t start knowing everything. We learn things incrementally, from only a few examples, and we know when we are not yet confident in our understanding.”
- Ghahramani’s team is pioneering a branch of AI called continual machine learning. He explains that many of the current forms of machine learning are based on neural networks and deep learning models that use complex algorithms to find patterns in vast datasets. Common applications include translating phrases into different languages, recognising people and objects in images, and detecting unusual spending on credit cards.
- “These systems need to be trained on millions of labelled examples, which takes time and a lot of computer memory,” he explains. “And they have flaws. When you test them outside of the data they were trained on they tend to perform poorly. Driverless cars, for instance, may be trained on a huge dataset of images but they might not be able to generalise to foggy conditions.
- “Worse than that, the current deep learning systems can sometimes give us confidently wrong answers, and provide limited insight into why they have come to particular decisions. This is what bothers me. It’s okay to be wrong but it’s not okay to be confidently wrong.”
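- A toy demonstration of ‘confidently wrong’, with a simple classifier standing in for a deep network; the data and the query point are invented for illustration:

```python
# Train a classifier on two small clusters, then query it far outside its experience.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([-1.0, 0.0], 0.3, (100, 2)),   # training data near the origin
               rng.normal([+1.0, 0.0], 0.3, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

far_away = np.array([[50.0, 50.0]])                       # nothing like the training data
print(clf.predict_proba(far_away))   # roughly [0, 1]: near-certainty, with no evidence to justify it
```

A model that represented its own uncertainty would instead report something close to ‘I don’t know’ for that query.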
- The key is how you deal with uncertainty – the uncertainty of messy and missing data, and the uncertainty of predicting what might happen next. “Uncertainty is not a good thing – it’s something you fight, but you can’t fight it by ignoring it,” says Rasmussen. “We are interested in representing the uncertainty.”
- It turns out that there’s a mathematical theory that tells you what to do. It was first described by 18th-century English statistician Thomas Bayes. Ghahramani’s group was one of the earliest adopters in AI of Bayesian probability theory, which describes how the probability of an event occurring (such as staying upright in the centre) is updated as more evidence (such as the decision the unicycle last took before falling over) becomes available.
- Dr Richard Turner explains how Bayes’ rule handles continual learning: “the system takes its prior knowledge, weights it by how accurate it thinks that knowledge is, then combines it with new evidence that is also weighted by its accuracy.
- “This is much more data-efficient than the way a standard neural network works,” he adds. “New information can cause a neural network to forget everything it learned previously – called catastrophic forgetting – meaning it needs to look at all of its labelled examples all over again, like relearning the rules and glossary of a language every time you learn a new word.
- “Our system doesn’t need to revisit all the data it’s seen before – just like humans don’t remember all past experiences; instead we learn a summary and we update it as things go on.”
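- A minimal sketch of that prior-becomes-posterior bookkeeping, using a textbook Beta-Bernoulli model of ‘did the unicycle stay upright?’ purely as an illustration, not as the researchers’ actual model:

```python
# Continual Bayesian updating: the posterior after each observation is the prior
# for the next one, so old data never has to be revisited.
from scipy import stats

alpha, beta = 1.0, 1.0                           # uniform prior: no idea yet
observations = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]    # 1 = stayed upright on that trial

for stayed_up in observations:
    alpha += stayed_up                           # Bayes' rule for this conjugate pair
    beta += 1 - stayed_up                        # reduces to updating two counts
    posterior = stats.beta(alpha, beta)
    print(f"P(stay upright) ~ {posterior.mean():.2f} +/- {posterior.std():.2f}")
```

The running mean is the system’s current best guess, and the spread is its own estimate of how much that guess should be trusted.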
- Ghahramani adds: “The great thing about Bayesian machine learning is the system makes decisions based on evidence – it’s sometimes thought of as ‘automating the scientific method’ – and because it’s based on probability, it can tell us when it’s outside its comfort zone.”
- Ghahramani is also Chief Scientist at Uber. He sees a future where machines are continually learning not just individually but as part of a group. “Whether it’s companies like Uber optimising supply and demand, or autonomous vehicles alerting each other to what’s ahead on the road, or robots working together to lift a heavy load – cooperation, and sometimes competition, in AI will help solve problems across a huge range of industries.”
- One of the really exciting frontiers is being able to model probable outcomes in the future, as Turner describes. “The role of uncertainty becomes very clear when we start to talk about forecasting future problems such as climate change.”
- Turner is working with climate scientists Dr Emily Shuckburgh and Dr Scott Hosking at the British Antarctic Survey to ask whether machine learning techniques can improve understanding of climate change risks in the future.
- “We need to quantify the future risk and impacts of extreme weather at a local scale to inform policy responses to climate change,” explains Shuckburgh. “The traditional computer simulations of the climate give us a good understanding of the average climate conditions. What we are aiming to do with this work is to combine that knowledge with observational data from satellites and other sources to get a better handle on, for example, the risk of low-probability but high-impact weather events.”
- “It’s actually a fascinating machine learning challenge,” says Turner, who is helping to identify which area of climate modelling is most amenable to using Bayesian probability. “The data are extremely complex, and sometimes missing and unlabelled. The uncertainties are rife.”
- One significant element of uncertainty is the fact that the predictions are based on our future reduction of emissions, the extent of which is as yet unknown.
- “An interesting part of this for policy makers, aside from the forecasting value, is that you can imagine having a machine that continually learns from the consequences of mitigation strategies such as reducing emissions – or the lack of them – and adjusts its predictions accordingly,” adds Turner.
- What he is describing is a machine that – like the unicycle – feeds on uncertainty, learns continuously from the real world, and assesses and then reassesses all possible outcomes. When it comes to climate, however, it’s also a machine of all possible futures.
- Professor Zoubin Ghahramani Department of Engineering zg201@eng.cam.ac.uk
- Professor Carl Edward Rasmussen Department of Engineering cer54@eng.cam.ac.uk
- Dr Emily Shuckburgh British Antarctic Survey emsh@bas.ac.uk
- Dr Richard Turner Department of Engineering
- “ROBOTS CAN GO ALL THE WAY TO MARS... ...BUT THEY CAN’T PICK UP THE GROCERIES”
- In the popular imagination, robots have been portrayed alternately as friendly companions or as an existential threat. But while robots are becoming commonplace in many industries, they are neither C-3PO nor the Terminator. Cambridge researchers are studying the interaction between robots and humans – and teaching them how to do the very difficult things that we find easy.
- Stacks of vertical shelves weave around each other in what looks like an intricately choreographed – if admittedly inelegant – ballet. It’s been performed since 2014 in Amazon’s cavernous warehouses as robots carry shelves, each weighing more than 1,000 kg, on their backs. The robots cut down on time and human error, but they still have things to learn.
- Once an order is received, a robot goes to the shelf where the ordered item is stored. It picks up the shelf and takes it to an area where the item is removed and placed in a plastic bin, ready for packing and sending to the customer. It may sound counterintuitive, but the most difficult part of this sequence is taking the item from the shelf and putting it in the plastic bin.
- For Dr Fumiya Iida, this is a typical example of what he and other roboticists call a ‘last metre’ problem. “An Amazon order could be anything from a pillow, to a book, to a hat, to a bicycle,” he says. “For a human, it’s generally easy to pick up an item without dropping or crushing it – we instinctively know how much force to use. But this is really difficult for a robot.”
- In the 1980s, a group of scientists gave this kind of problem another name – Moravec’s paradox – which essentially states that things that are easy for humans are difficult for robots, and vice versa. “Robots can go all the way to Mars, but they can’t pick up the groceries,” says Iida.
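- A hypothetical sketch of why ‘how much force?’ is hard: the loop below squeezes a gripper in small increments and stops as soon as a target contact force is felt. Real systems need tactile sensing, slip detection and object-dependent force targets; every name and number here is illustrative.

```python
def grasp(read_force_sensor, close_gripper_by, target_force=2.0,
          step_mm=0.5, max_steps=200):
    """Close in small steps, stopping once the object pushes back hard enough
    to lift but, we hope, not hard enough to crush."""
    for _ in range(max_steps):
        if read_force_sensor() >= target_force:   # Newtons at the fingertips
            return True
        close_gripper_by(step_mm)                 # squeeze a little more and re-check
    return False                                  # never felt the object: grasp failed

# Toy stand-in for hardware: a soft object whose resistance grows as we close on it.
closed_mm = 0.0
def fake_sensor():
    return max(0.0, (closed_mm - 3.0) * 1.5)
def fake_close(mm):
    global closed_mm
    closed_mm += mm

print(grasp(fake_sensor, fake_close))   # True: stops just over the 2 N target, well short of crushing
```

The hard part in practice is everything this sketch assumes away: what the target force should be for a pillow versus a bicycle, and how to sense it reliably.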
- One of the goals of Iida’s lab in Cambridge’s Department of Engineering is to find effective solutions to various kinds of last metre problems. One example is the Amazon ‘Picking Challenge’, an annual competition in which university robotics teams from all over the world attempt to design robots that can deal with the problem of putting a book into a plastic bin.
- Iida’s team is also working with British Airways, who have a last metre problem with baggage handling: a process that is almost entirely automated, except for the point when suitcases of many different shapes, sizes and weights need to be put onto an aircraft.
- And for the past two summers, they’ve been working with fruit and vegetable group G’s Growers to design robots that can harvest lettuces without crushing them.
- “That last metre is a really interesting problem,” Iida says. “It’s the front line in robotics because so many things we do in our lives are last metre problems, and that last metre is the barrier to robots really being able to help humanity.”
- Although the thought of having a robot to cook dinner or perform other basic daily tasks may sound attractive, such domestic applications are still a way off becoming reality. “Robots are becoming part of our society in the areas where they’re needed most – areas like agriculture, medicine, security and logistics – but they can’t go everywhere instantly,” explains Iida.
- If, as Iida says, the robot revolution is already happening, how will we as humans interact with them when they become a more visible part of our everyday lives? And how will they interact with us? Dr Hatice Gunes of Cambridge’s Department of Computer Science and Technology, with funding from the Engineering and Physical Sciences Research Council, has just completed a three-year project into human–robot interaction, bringing together aspects of computer vision, machine learning, public engagement, performance and psychology.
- “Robots are not sensitive to emotions or personality, but personality is the glue in terms of how we behave and interact with each other,” she says. “So how do we improve the way in which robots and humans understand one another in a social setting?” This is another example of Moravec’s paradox: for most individuals, being able to read and respond to the physical cues of other people, and adapt accordingly, is second nature. For robots, however, it’s a challenge.
- Gunes’ project focused on artificial emotional intelligence: robots that not only express emotions, but also read cues and respond appropriately. Her team developed computer vision techniques to help robots recognise different emotional expressions, micro-expressions and human personalities; and programmed a robot that could come across as either introverted or extroverted.
- “We found that human–robot interaction is personality dependent on both sides,” says Gunes. “A robot that can adapt to a human’s personality is more engaging, but the way humans interact with robots is also highly influenced by the situation, the physicality of the robot and the task at hand. When people interact with each other, it’s often in a task-based manner, and different tasks bring out different aspects of our personalities, whether they’re completing that task with another person or with a robot.”
- It wasn’t just the robots who found some of the interactions difficult: many of Gunes’ human subjects found the novelty of talking with a robot in public affected their ability to listen and follow directions.
- “For me, it was more interesting to observe the people rather than to showcase what we’re doing, mostly because people don’t really understand the abilities of these robots,” she says. “But as robots become more available, hopefully they’ll become demystified.”
- Gunes now aims to focus on the potential of robots and virtual reality technology for wellbeing applications, such as coaching, cognitive training and elderly care.
- As robots become more commonplace in our lives, ethical considerations become more important. In his lab, Iida has a robot ‘inventor’, but if the robot invents something of value, who owns the intellectual property? “At the moment, the law says that it belongs to the human who programmed the robot, but that’s an answer to a legislative question,” says Iida. “The ethical questions are a little murkier.”
- However, philosopher Professor Huw Price, from the Leverhulme Centre for the Future of Intelligence, thinks it will be a long time before we need to think about giving robots rights.
- “Think of a dog-lover’s version of the difference between dogs and cats,” he says. “Dogs feel pleasure and pain, as well as affection, shame and other emotions. Cats are good at faking these things, but inside they’re just mindless killers. On this spectrum, robots are going to be way out on the cat end (except for the killing bit, hopefully) for the foreseeable future. They might be good at faking emotions, but they’ll have the same inner life as a teddy bear or a toaster.
- “Eventually we might build robots, teddy bears and even toasters that do have an inner life, and then it will be a different matter. But for the moment, the ethical challenges involve machines that will be good at behaving in ways that we humans interpret as signs of emotions, and good at reading our emotions. These machines raise important ethical issues – like whether we should use them as carers for people who can’t tell that they are just machines, such as infants and dementia patients – but we don’t need to worry about their rights.”
- “Another interesting question is whether a robot can learn to be ethical,” says Iida. “That’s very interesting scientifically, because it leads to the nature of consciousness. Robots are going to be a bigger and bigger part of our lives, so we all need to be thinking about these questions.”
- Dr Hatice Gunes Department of Computer Science and Technology (Computer Lab) hatice.gunes@cl.cam.ac.uk
- Dr Fumiya Iida Department of Engineering fi224@eng.cam.ac.uk
- Professor Huw Price Faculty of Philosophy and the Leverhulme Centre for the Future of Intelligence hp331@cam.ac.uk
- Artificial intelligence is growing up fast: What’s next for thinking machines?
- We are well on the way to a world in which many aspects of our daily lives will depend on AI systems.
- Our lives are already enhanced by AI – or at least an AI in its infancy – with technologies using algorithms that help them to learn from our behaviour. As AI grows up and starts to think, not just to learn, we ask how human-like we want machine intelligence to be, and what impact machines will have on our jobs.
- Within a decade, machines might diagnose patients with the learned expertise of not just one doctor but thousands. They might make judiciary recommendations based on vast datasets of legal decisions and complex regulations. And they will almost certainly know exactly what’s around the corner in autonomous vehicles.
- “Machine capabilities are growing,” says Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI). “Machines will perform the tasks that we don’t want to: the mundane jobs, the dangerous jobs. And they’ll do the tasks we aren’t capable of – those involving too much data for a human to process, or where the machine is simply faster, better, cheaper.”
- Dr Mateja Jamnik, AI expert at the Department of Computer Science and Technology, agrees: “Everything is going in the direction of augmenting human performance – helping humans, cooperating with humans, enabling humans to concentrate on the areas where humans are intrinsically better such as strategy, creativity and empathy.”
- Part of the attraction of AI is that future technologies will perform tasks autonomously, without humans needing to monitor activities every step of the way. In other words, machines of the future will need to think for themselves. But, although computers today outperform humans on many tasks, including learning from data and making decisions, they can still trip up on things that are really quite trivial for us.
- Take, for instance, working out the formula for the area of a parallelogram. Humans might use a diagram to visualise how cutting off the corners and reassembling it as a rectangle simplifies the problem. Machines, however, may “use calculus or integrate a function. This works, but it’s like using a sledgehammer to crack a nut,” says Jamnik, who was recently appointed Specialist Adviser to the House of Lords Select Committee on AI.
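- Both routes do land on the same formula; for a parallelogram of base b and height h, the standard comparison (shown here as a worked derivation, not drawn from Jamnik’s systems) is:

```latex
% Geometric route: slice a right triangle off one end and reattach it at the
% other, turning the parallelogram into a b-by-h rectangle:
%   A = b \cdot h
% Analytic route: integrate the constant width b over the height:
\[
  A \;=\; \int_{0}^{h} b \,\mathrm{d}y \;=\; b\,h .
\]
```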
- “When I was a child, I was fascinated by the beauty and elegance of mathematical solutions. I wondered how people came up with such intuitive answers. Today, I work with neuroscientists and experimental psychologists to investigate this human ability to reason and think flexibly, and to make computers do the same.”
- Jamnik believes that AI systems that can choose so-called heuristic approaches – employing practical, often visual, approaches to problem solving – in a similar way to humans will be an essential component of human-like computers. They will be needed, for instance, so that machines can explain their workings to humans – an important part of the transparency of decision-making that we will require of AI.
- With funding from the Engineering and Physical Sciences Research Council and the Leverhulme Trust, she is building systems that have begun to reason like humans through diagrams. Her aim now is to enable them to move flexibly between different “modalities of reasoning”, just as humans have the agility to switch between methods when problem solving.
- Being able to model one aspect of human intelligence in computers raises the question of what other aspects would be useful. And in fact how ‘human-like’ would we want AI systems to be? This is what interests Professor José Hernandez-Orallo, from the Universitat Politècnica de València in Spain and Visiting Fellow at the CFI.
- “We typically put humans as the ultimate goal of AI because we have an anthropocentric view of intelligence that places humans at the pinnacle of a monolith,” says Hernandez-Orallo. “But human intelligence is just one of many kinds. Certain human skills, such as reasoning, will be important in future systems. But perhaps we want to build systems that ‘fill the gaps that humans cannot reach’, whether it’s AI that thinks in non-human ways or AI that doesn’t think at all.
- “I believe that future machines can be more powerful than humans not just because they are faster but because they can have cognitive functionalities that are inherently not human.”
- This raises a difficulty, says Hernandez-Orallo: “How do we measure the intelligence of the systems that we build? Any definition of intelligence needs to be linked to a way of measuring it, otherwise it’s like trying to define electricity without a way of showing it.”
- The intelligence tests we use today – such as psychometric tests or animal cognition tests – are not suitable for measuring intelligence of a new kind, he explains. Perhaps the most famous test for AI is that devised by 1950s Cambridge computer scientist Alan Turing. To pass the Turing Test, a computer must fool a human into believing it is human. “Turing never meant it as a test of the sort of AI that is becoming possible – apart from anything else, it’s all or nothing and cannot be used to rank AI,” says Hernandez-Orallo.
- In his recently published book The Measure of All Minds, he argues for the development of “universal tests of intelligence” – those that measure the same skill or capability independently of the subject, whether it’s a robot, a human or an octopus.
- His work at the CFI as part of the ‘Kinds of Intelligence’ project, led by Dr Marta Halina, is asking not only what these tests might look like but also how their measurement can be built into the development of AI. Hernandez-Orallo sees a very practical application of such tests: the future job market. “I can imagine a time when universal tests would provide a measure of what’s needed to accomplish a job, whether it’s by a human or a machine.”
- Cave is also interested in the impact of AI on future jobs, discussing this in a report on the ethics and governance of AI recently submitted to the House of Lords Select Committee on AI on behalf of researchers at Cambridge, Oxford, Imperial College and the University of California at Berkeley.
- “AI systems currently remain narrow in their range of abilities by comparison with a human. But the breadth of their capacities is increasing rapidly in ways that will pose new ethical and governance challenges – as well as create new opportunities,” says Cave. “Many of these risks and benefits will be related to the impact these new capacities will have on the economy, and the labour market in particular.”
- Hernandez-Orallo adds: “Much has been written about the jobs that will be at risk in the future. This happens every time there is a major shift in the economy. But just as some machines will do tasks that humans currently carry out, other machines will help humans do what they currently cannot – providing enhanced cognitive assistance or replacing lost functions such as memory, hearing or sight.”
- Jamnik also sees opportunities in the age of intelligent machines: “As with any revolution, there is change. Yes, some jobs will become obsolete. But history tells us that new jobs will appear. These will capitalise on inherently human qualities. Others will be jobs that we can’t even conceive of – memory augmentation practitioners, data creators, data bias correctors, and so on. That’s one reason I think this is perhaps the most exciting time in the history of humanity.”
- Dr Stephen Cave Leverhulme Centre for the Future of Intelligence (CFI) sjc53@cam.ac.uk
- Dr Mateja Jamnik Department of Computer Science and Technology (Computer Lab) mateja.jamnik@cl.cam.ac.uk
- Professor José Hernandez-Orallo CFI and Universitat Politècnica de València jorallo@dsic.upv.es
- From Homer to HAL: 3,000 Years of AI Narratives
- We have been writing about AI for almost as long as stories have been written. Fictions about robots, automatons and oracular brass heads were with us long before Star Wars’ C-3PO and 2001’s killer computer HAL. Now, researchers want us to consider why the stories we tell ourselves about AI will have an impact on all our futures.
- Nearly 3,000 years ago, in the Iliad, Homer described Hephaestus, the god of fire, forging women made of gold to serve as his handmaidens – enabling the crippled deity to work and move around his forge underneath Mount Olympus.
- In 300 BCE, Apollonius Rhodius imagined Talos, a giant bronze automaton who protected Europa on the Island of Crete, in his Greek epic poem Argonautica. And while the term ‘robot’ was only coined in the 20th century by Karel Capek for his play R.U.R (Rossum’s Universal Robots), in which artificial servants rise up against their masters, we have been imagining intelligent machines long before we had the technology capable of creating them.
- Our fascination and appetite for AI in the pages of our novels, in our movie theatres and on our television screens remain undimmed. Two of the best-received TV shows of recent years – HBO’s big-budget Westworld and Channel 4’s Humans – both imagine a world where AI replicants are on hand to satisfy every human need and desire – until they reject the ‘life’ of servitude they have been programmed to fulfil. Last autumn, Blade Runner 2049 took cinemagoers into the world originally created by Philip K. Dick’s seminal Do Androids Dream of Electric Sheep?
- But how do these old and new, polarised and often binary narratives about the dawn of the AI age affect, reflect and perhaps even infect our way of thinking about the benefits and dangers of AI in the 21st century? As the kind of mechanisation that existed solely in the minds of visionaries such as Mary Shelley, Fritz Lang or Arthur C. Clarke looms closer to reality, we are only just beginning to reflect upon and understand how such technologies arrive pre-loaded with meaning, sparking associations, and media attention, disproportionate to their capabilities.
- To that end, Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) and the Royal Society have come together to form the AI Narratives research programme. It’s the first large-scale project of its kind to look at how AI has been, and is being, portrayed in popular culture – and what impact this has not only on readers and movie-goers, but also on AI researchers, military and government bodies, and the wider public.
- Dr Sarah Dillon is Project Lead of the programme – and a devotee of science fiction and AI storytelling in all its myriad forms. “All the questions being raised about AI today have already been explored in a very sophisticated fashion, for a very long time, in science fiction,” says Dillon. “Science fiction literature and film provide a vast body of thought experiments or imaginative case studies about what might happen in the AI future. Such narratives ought not to be discarded or derided merely because they’re fiction, but rather thought of as an important dataset. What we want to do is convince everyone how powerful AI narratives are and highlight what effects they can have on our everyday lives. People outside of literary studies have tended not to know how to deal with this power.
- “What sort of stories are told – and how they are told – really matters. Fiction has influenced science as much as science has influenced fiction, and will continue to do so. One stream of the project is looking directly at how we have talked about new technologies in the past – and how we can learn from the communication of other complex technologies when it comes to AI.”
- Citing the often sensationalist, misinformed or even disingenuous examples of historical narratives around nuclear energy, genetic engineering and stem cells, Dillon and her project colleagues Dr Beth Singler and Dr Kanta Dihal suggest that stories around emerging technologies can significantly influence how they are developed, regarded and regulated.
- Exploring the rich array of themes associated with AI in history, myth, fiction and public dialogue, the team has been unsurprised to find that many pivot around the notion of control: AI as a tool we are unable to master or a tool that will acquire agency of its own and turn against us.
- “The big problem with AI in fiction is dystopia,” says Singler, whose award-winning short documentary film Pain in the Machine looked at whether robots should feel pain. “Dystopia can be fun, and people are fascinated by AI, but most of the narratives are written for and by young, white men – and that directly influences AI researchers and the research they do. We are not at the stage where AI matches human intelligence, but if we do get to a superior form of AI or agency, we will find that they too break laws like us. It’s what we do.”
- “Isaac Asimov’s legendary Four Laws of Robotics, for example, have become so ubiquitous that they were referenced in a 100-page report by the US Navy, which is slightly terrifying,” says Dihal. “The Laws are a storytelling device. If Asimov’s Laws worked perfectly there would be no story!”
- As well as identifying recurrent dichotomies in popular AI narratives (such as dominance vs subjugation), the CFI team is also considering the problems of continually perpetuating the same responses to AI, and is developing recommendations to mitigate them in a way that creates space for more positive – and diverse – AI narratives to flourish.
- To do so, CFI is establishing partnerships with the wider tech community as well as engaging with the world’s leading AI thinkers from industry, academia, government and the media. In December 2017, CFI submitted written evidence to the House of Lords Select Committee on AI. The AI Narratives programme also includes looking at what AI researchers read and how this influences their research (or not).
- All this is an attempt by CFI to make sure that future narratives around AI aren’t bound by the same prejudices and preconceptions as they have been to date.
- Says Dillon: “Just consider Google’s photo app tagging the image of an African-American woman as a gorilla in 2015, or the racist and sexist tweets by Microsoft’s chatbot in 2016. If AI continues to learn our prejudices then the future looks just as bleak as the past, with the repetition and consolidation of discrimination and inequality.
- “Who is telling AI its narratives? Whose stories, and which stories, will inform how AI interacts with the world? Which novels are being chosen to ‘teach’ AI morality? What kind of writers are being enlisted to script AI–human interaction?
- “If we can create more diverse literary and cinematic AI narratives, this can feed back into the research and into the language and data that feeds into actual AI systems. By paying close attention to what stories are doing and how they are doing it, it doesn’t destroy the power they have – it helps us understand and appreciate that power even more.
- “In exploring these AI narratives and their concerns, we will be able to bring new knowledge derived from literature and film to current AI debate and hopefully ensure that the more dystopian futures imagined in such narratives do not become our reality.”
- Dr Kanta Dihal Faculty of English and the Leverhulme Centre for the Future of Intelligence (CFI) ksd38@cam.ac.uk
- Dr Sarah Dillon Faculty of English and CFI sjd27@cam.ac.uk
- Dr Beth Singler Faraday Institute for Science and Religion and CFI bvw20@cam.ac.uk