- What does it really mean for computers to be smarter than humans? We explore the singularity[1].
- “I didn’t ask to be made: no one consulted me or considered my feelings in the matter. I don’t think it even occurred to them that I might have feelings. After I was made, I was left in a dark room for six months... and me with this terrible pain in all the diodes down my left side. I called for succour in my loneliness, but did anyone come? Did they hell.”
- Poor Marvin. Being 50,000 times more intelligent than the average human is, as Douglas Adams (St John’s 1971) points out in The Hitchhiker’s Guide to the Galaxy, a depressing business for this Paranoid Android.
- The creation of machines that can think for themselves - and don’t necessarily have the future of the human race in mind - isn’t great from a human perspective either, and has long proven a rich seam for science fiction.
- Whatever the state of the technology, over the past 50 years the idea that, at some point in the near future, human intelligence[2] will be overtaken by superpowered AI has moved from niche obsession to the subject of research. It even has a name: the technological singularity, or, simply, the singularity[3].
- And as it has moved into the realm of serious debate, academics have started to ask broader questions about what the singularity[4] might really mean. So, not just ‘Are machines about to take over the Earth?’ (or the perennial ‘Will AI take my job?’), but also ‘What is intelligence[5]?’ What is it for? And what ethical framework - if any - is required to underpin AI research?
- To start with intelligence[6], Dr Adrian Weller, Programme Director for AI at the Alan Turing Institute, says that our concept of intelligence[7] tends to be rather egocentric. “Humans like to think of intelligence[8] on a scale, with ourselves at the top,” he says. “But, actually, there are different kinds of intelligence[9]: machines are already better at doing arithmetic, as we saw when the Deep Blue computer program beat Garry Kasparov at chess in 1997. But they are very poor at other things - such as general knowledge and common sense. Take an autonomous vehicle. You can ask it to go from A to B as fast as it can, but how does it know you don’t want it to just accelerate and smash through lights to get there? And how does it differentiate trees from people, or cope with bad weather? We need somehow to input all those parameters[10].”
- And that means intelligence[11] doesn’t exist in a vacuum. Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence[12] (CFI), calls this the “hidden human labour” behind AI. “Take convincing text[13] written by a machine,” he says. “Thousands of hours of human work have gone into training the AI[14]. But that work is hidden, giving AI an element of the parlour trick. We tend to project agency on to tools and machines. We anthropomorphise them.”
- There’s another problem: intelligence[15] isn’t particularly linear. “With every tech advance, we gain both power and dependence. Take Google Maps. It’s empowering to always be able to find our way, when our ancestors had to look at which side of the tree the moss was on. But now we have forgotten how to read the moss[16],” Cave says. The singularity[17] is supposed to be the moment when computers become ‘more intelligent’ than us, but they already are better at many things, he says - because we’ve been building them to be so. “Since the pocket calculator we have been building computers to be[18] faster at certain things [and to do them more cheaply]. But does that mean they’re more intelligent?”
- Another challenge is that the singularity[19] is remarkably culturally specific, says Dr Kanta Dihal, research fellow at CFI. “In Japan, which struggles with its ageing society and declining working-age population, there is a tradition of representing AI as a helper or carer. In Singapore, the utopian vision of technology is government driven. In the Middle East and North Africa, technology is perceived as coming from outside, with no real sense of control,” she says.
- This cultural specificity - both between and within cultures - can have unexpected side-effects. “In the west, AI is imagined as humanoid, like the Terminator. But, actually, what we’re developing are weapons of mass destruction[20]. Drones look like toys for teenagers[21], but they track and shoot people,” she points out. “Similarly, white-collar workers worry that they might lose their job to a robot, when automation has already cost hundreds of thousands of blue-collar jobs.” Worrying about the singularity[22] in terms of the super-intelligence[23] of computers is for those who see themselves at the top of the intelligence[24] hierarchy[25].
- Which brings us to AI’s diversity problem. “The developers of AI are extremely homogeneous,” says Dihal, “so they are unaware of, ignore or minimise the risks to groups they are not part of. We see so many errors being made with huge consequences for those who don’t exist in datasets. We’ve seen facial recognition not recognising darker skin, or misgendering black people. We’ve seen friends in east Asia who can unlock each other’s phones using facial recognition.”
- Fairness, like intelligence[26], is tricky[27], says Weller. “Much of the technical community has focused on statistical notions of fairness. But fairness can be more complex than statistical parity. For instance, should you use different prediction algorithms for different groups? Notions of equality between groups can increase individual unfairness. We’re starting to see algorithms being used in criminal justice - to help judges decide how long to lock people up for, for example. But if we use historical data about the racial background of people who’ve been arrested, we write bias into the datasets. And we also need transparency[28] - to be able to see the legal process and enable meaningful challenge.”
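The “statistical notions of fairness” Weller refers to can be made concrete. The short sketch below (in Python, with invented predictions and hypothetical group labels ‘A’ and ‘B’ - none of this comes from any real system) computes one common measure, the statistical-parity gap: the difference in positive-prediction rates between demographic groups.

```python
# Illustrative sketch of "statistical parity": comparing an algorithm's
# positive-prediction rate across demographic groups.
# All data below is invented purely for illustration.

def positive_rate(predictions, groups, group):
    """Fraction of members of `group` who received a positive prediction (1)."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def statistical_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical predictions (1 = favourable outcome) for two groups, A and B.
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']

print(f"Statistical parity gap: {statistical_parity_gap(preds, groups):.2f}")
# Here group A's rate is 0.75 and group B's is 0.25, so the gap is 0.50.
```

A gap of zero means both groups receive favourable predictions at equal rates - which, as Weller notes, can still be unfair to individuals within each group, since forcing the rates to match may mean treating otherwise similar people differently.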
- Often, it’s not whether or not we can trust the machine, he says, but whether we have built in the right measures of trustworthiness. So, should preparing for the singularity[29] be focused on developing ethical frameworks, rather than a robot takeover? Jess Whittlestone, Senior Research Associate at the Centre for the Study of Existential Risk and CFI, thinks so. “The big challenges we face - like a pandemic or climate change - really need global collaboration and action. AI could help: machine learning can filter lots of information, pull out what’s relevant and make sense of the noise. It can help detect fake news, and it helped track the spread of Covid-19 during the first wave. But what we need is more funding and attention concentrated on researching how we can mitigate the risks of AI, rather than funnelling money towards helping tech companies make better adverts.”
- Indeed, crisis pushes us to deploy AI before it’s ready, and certainly before ethical practices have been considered. “So, we need to establish a system now that will incorporate risk analysis, ensure the effectiveness of the AI, and determine its effects on different communities. AI policy and ethics is a relatively new field, but it needs to move fast to keep up.”
- Next year, Cambridge will offer a Master’s in AI Ethics and Society for the first time, and this interdisciplinary approach is crucial, says Whittlestone. “We need to make sure that systems developers work with[30] people who understand pandemics. AI can solve optimisation problems around hospital resource allocation, for example, but needs to be co-designed by experts in systems, health infrastructure and ethics.”
- “Medicine and law had to develop professional ethics, and data science will have to do the same,” says Cave. “Data scientists see themselves as meritocratic, having risen by their brilliance. The geeks in the basement[31] are now the masters of the universe, but they don’t see themselves as ethical actors. They don’t think they have responsibility for social justice.”
- For Whittlestone, if we can solve these issues by increasing diversity and working together, then AI, and even the singularity[32], holds enormous potential for good. “An AI system might surpass us in certain tasks, such as analysing huge amounts of data and helping us control complex systems, such as energy or water infrastructure. If we can combine the adaptability of humans and the precision of machines, we could solve many problems,” she says. “For example, climate change is such a complex system it is difficult for humans to understand the effects of different interventions, and therefore we feel overwhelmed. But if we can build better models, which can make better recommendations, then computers could help us overcome that inertia.” And that’s something that even Marvin the Paranoid Android would probably support.
- Find out more at the Leverhulme Centre for the Future of Intelligence (LCFI).
Historical AIs in Fiction[33]
- 400 BC. Talos. Apollonius Rhodius’s Argonautica. Single-minded, violent, protective. A giant, bronze robot made at Zeus’s request, Talos protected Crete from pirates and invaders by swimming around the island three times a day - but was successfully decommissioned by the sorceress Medea, who hypnotised him into removing a nail from his own foot, causing his life force to drain out. Now rarely seen without his Brasso.
- 1818. Frankenstein’s monster. Mary Shelley’s Frankenstein; innumerable TV, film, stage and videogame appearances. Articulate, sensitive, multilingual, vegetarian and acutely self-aware. Frankenstein’s monster might not technically be an AI, but as a lab-created monster out of control, able to think for itself and turn on its creators, he basically fits the bill. Though being such a celebrated monster does have its downsides.
- 1927. Gynoid False Maria, the Maschinenmensch. Fritz Lang’s 1927 film Metropolis. Chaotic, violent, psychopathic. Created by scientist CA Rotwang, False Maria is sent to incite violent revolution (she is eventually burned at the stake). Of course, leading violent resistance is less difficult if you have no moral compass.
- 1955. Multivac, a massive government-run mainframe computer accessible to everyone. Stories by Isaac Asimov. Omniscient. “Within reach of every human being was a Multivac station with circuits into which he could freely enter his own problems and questions without control or hindrance, and from which, in a matter of minutes, he could receive answers ... it was the central clearing house of all known facts about each individual Earthman.” I think we all know where this one is going.
- 1956. Robby the Robot. MGM’s film classic Forbidden Planet. Intelligent, witty, helpful. Forbidden Planet can be read as a version of The Tempest, with Robby’s creator, Dr Morbius, as Prospero and Robby himself as Ariel. Robby would go on to become a sci-fi icon, appearing in multiple television shows, including The Simpsons. Having spent 50 years in Hollywood, today Robby enjoys spending time with his family.
- 1968. HAL 9000. Arthur C Clarke’s novel and Stanley Kubrick’s film of 2001: A Space Odyssey. Dry, genial, murderous. During the film’s production, Kubrick consulted IBM. This later caused him to write anxiously to his production company: “Does IBM know that one of the main themes of the story is a psychotic computer? I don’t want to get anyone into trouble and I don’t want them to feel they have been swindled.” Fifty years later, even psychotic computers need glasses.
- 1968. Nexus-6 model androids. Philip K Dick’s Do Androids Dream of Electric Sheep? and the film Blade Runner. Highly intelligent and virtually indistinguishable from humans. Servant androids who rebel and come to Earth, where they live undetected. In the film Blade Runner, based on the novel, they become the ‘replicants’ sought by bounty hunter Deckard. Last seen loitering outside Fitzbillies.
- 1978. Marvin the Paranoid Android. Douglas Adams’s The Hitchhiker’s Guide to the Galaxy. 50,000 times more intelligent than a human. Marvin has “a brain the size of a planet” but few chances to use it. Consequently, he is bored, frustrated and deeply depressed. In his spare time, Marvin writes songs, such as the lullaby How I Hate the Night.
- 1978. The Imperious Leader, alien robot and supreme ruler of the Cylons. TV series Battlestar Galactica. Possessing three brains, the Imperious Leader is well qualified to achieve its ambition of utterly destroying mankind. The series implies that the Cylons were originally created by a reptilian race, also known as Cylons, but rose up against their creators a thousand years ago. Their true roots are lost in time, space and ambiguous fictional detail.
- 1988. Holly, 10th-generation AI hologrammatic computer. TV series Red Dwarf. Narcissistic. IQ[34] of 6,000. Becomes ‘computer senile’ after spending 3,000,000 years alone. Holly’s first incarnation is as a slightly balding middle-aged man. After meeting its female counterpart in a parallel universe, it created a new face based on hers. Holly’s achievements include the decimalisation of music, where each octave comprises 10, rather than the usual eight, notes.
- 1999. Agent Smith. The Wachowski sisters’ The Matrix. Emotionless, dedicated, homicidal but appears human. “Agents must terminate any humans who wise up to the fact that our reality is nothing but a massive simulation to keep our minds busy while our tethered bodies are milked as a power source, like human batteries.” Luckily, thanks to the film, we all now know this, thus rendering their job far more difficult.
- 2001. David, a Mecha child. Steven Spielberg’s A.I. Artificial Intelligence[35]. Unusually for an AI creation, David’s imprinting protocol allows him to feel love. However, he remains dangerous. The film is heavily influenced by Pinocchio - David’s adoptive mother reads it to him, and he remembers it when he is abandoned by his adoptive family, setting out on a quest to find the ‘Blue Fairy’ who can turn him into a real boy.
- 2009. GERTY, robot companion. Duncan Jones’ Moon. Genial and helpful. Should you find yourself having a personal crisis on the far side of the Moon and in need of someone to talk to, don’t fret: robot companion GERTY loves to chat.
- 2014. Ava, humanoid robot. Alex Garland’s Ex Machina. Highly intelligent, flirtatious, appears capable of human desires and emotions. The film drew criticism for featuring the latest in a long line of flirtatious fembots. “When the only female lead in your movie is one whose function is to turn the male lead on while being in a position to be turned off, that says a lot about what you think of the value of women in films,” wrote Angela Watercutter for Wired.
- 2016. Dolores Abernathy, host android. Jonathan Nolan and Lisa Joy’s television series Westworld. Sweet-faced; determined to wipe out the entire human race. Dolores has more reason to resent humans than most AIs. Created to serve Westworld’s guests, Dolores remembers a lifetime of abuse at human hands when a memory wipe fails.
- Because we can’t afford to let it crash around and learn for itself.
- But, might this be possible in a virtual world? Or would this beg the question?
- What is “convincing text”?
- True, as a matter of fact, but is it necessarily the case?
- What about machine learning, along the lines of AlphaZero?
- Is the idea that “you only get out what you’ve put in” now outdated?
- Only possible for autonomous machine learning in certain well-defined games?
- It’s centuries since most humans lost that skill.
- Most of us still manage to orient ourselves and find our way about without Google Maps.
- But some skills – like London cabbies’ “The Knowledge” – are now redundant.
- While speed and cheapness are important factors, accuracy – and, in certain circumstances, an audit trail – also matter.
- But I agree that these, too, may be peripheral to “intelligence”.
Footnotes 21, 25 and 27:
- Describing drones as WMDs is highly tendentious, not to say inaccurate.
- They are intended to be highly specific rather than broad-brush like barrel bombs.
- The fact that they are not yet quite up to the job – depending as they do on “on the ground” intelligence (and other factors) – is another matter entirely.
- I found this section difficult to make out. It seems there are several points being made.
- Treating groups equally does lead to individual unfairness – as well as group unfairness. A classic case is the winter fuel allowance for all pensioners.
- But using the past as a guide to the future is standard practice – how else do we learn from experience? Algorithms based on demographic groups shouldn’t have the final say, though.
- This is probably impossible for machine learning, where the logic of a decision isn’t encoded in discrete principles.
- Since when have systems developers not “worked with” the future users of their systems, or those who know what the systems model?
- The exceptions might be purely technical algorithms.
- Surely there are different categories of “geek”?
- Those who do the coding have no particular ethical responsibilities, any more than those who manufacture other stuff.
- But those who design what’s to be constructed do.
- These were dispersed through the text; I’ve segregated them here for easy viewing, and so the main text isn’t disrupted.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2021
- Mauve: Text by correspondent(s) or other author(s); © the author(s)