Incident: Interim Report: HQ Attack
- The future used to belong to science fiction writers. But the technologies once imagined by Philip K. Dick and Ray Bradbury now belong to the realm of science fact. What visions of the future might the world’s leading AI experts predict if you put them in a room together? Cambridge’s Centre for the Study of Existential Risk (CSER) and Oxford’s Future of Humanity Institute decided to find out…
- The scenario above [1] never happened. Or at least, it hasn’t happened yet. But it is one of several possible real-life scenarios envisaged by some of the world’s leading experts on the impacts of AI – who joined forces to author and sign a ground-breaking report that sounds the alarm about the potential future misuse of AI by rogue states, terrorists and malicious groups or individuals.
- The report forecasts dramatic growth during the next decade in the use of robots and drones that may be designed or repurposed for attacks – as well as an unprecedented rise in the use of ‘bots’ to manipulate everything from elections and the news agenda to social media. It issues a clarion call for governments and corporations worldwide to address the clear and present danger inherent in the myriad applications of AI.
- In addition, the report – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation – identifies potential solutions and interventions to allay some of the potentially catastrophic risks discussed. Experts in the fields of machine learning, AI safety, drones, cybersecurity, lethal autonomous weapons systems and counterterrorism, from organisations such as Google, OpenAI, DeepMind and Microsoft, as well as leading thinkers from Cambridge, Yale, Oxford and Princeton Universities (among others), came together in Oxford to address the critical challenges around AI in the 21st century.
- Together, the participants highlighted important changes to the strategic security landscape, which could include: more attacks, due to the scalable automation of attacks; attacks that are harder to defend against, due to the dynamic nature of AI; and more attackers, as skill and computing resources become increasingly available.
- “The consequences of such developments are difficult to predict in detail, and not all participants agreed on all conclusions,” says Dr Shahar Avin of CSER who co-chaired the workshop with Miles Brundage from Oxford. “However, there was broad consensus on predictions around attacks that are novel either in the form of attacks on AI systems or because they are carried out by AI systems; more targeted attacks, through automated identification of victims; and unattributable attacks through AI intermediaries.”
- “AI is a game changer and this report has imagined what the world could look like in the next five to ten years,” adds Dr Seán Ó hÉigeartaigh, Executive Director of CSER and one of the report signatories. “We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now. Our report is a call to arms for governments, institutions and individuals across the globe.”
- He adds: “For many decades, hype was outstripping fact in terms of AI and machine learning. Now, the situation is being reversed and we have to rethink all the ways we currently do things. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”
- The report also identifies three security domains (digital, physical and political) as particularly relevant to the malicious use of AI. It suggests that AI will disrupt the trade-off between scale and efficiency, and allow large-scale, highly efficient and targeted attacks on digital systems.
- Likewise, the proliferation of cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends (such as turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom). The rise of autonomous weapons systems in the battlefield also risks the loss of meaningful human control and increases the prospects of targeted autonomous attacks.
- Meanwhile, in the political sphere, detailed analytics and the automation of message creation present powerful tools for manipulating public opinion on previously unimaginable scales.
- “The aggregation of information by states and corporations, and the increasing ability to analyse and act on this information at scale using AI could enable new levels of surveillance and invasions of privacy, and threaten to radically shift the power between individuals, corporations and states,” adds Ó hÉigeartaigh.
- To mitigate such risk, the authors explore several interventions to reduce threats associated with the malicious use of AI. They include recommendations for more engaged policy making and more responsible development of the technology, an opportunity to learn from the best practices of other risky fields, and a call for a “broader conversation”.
- The report also highlights key areas for further research, including at the intersection of AI and cybersecurity, on openness and information sharing of risky capabilities, on the promotion of a culture of responsibility, and on seeking both institutional and technological solutions to tip the balance in favour of those defending against attacks.
- While the design and use of dangerous AI systems by malicious actors has been highlighted in high-profile settings (such as the US Congress and White House, separately), the intersection of AI and malicious use on a massive scale has not yet been analysed comprehensively – until now.
- “The field of AI has gone through several so-called ‘winters’, when overhyped promises failed to match the reality of how difficult it has been to make progress on these technologies,” explains Avin. “With all the rapid progress in recent years, brought about in part by much more capable processors, there has yet to be a clear point of maturation, of acknowledging that this technology is going to change everyone’s lives this time around, and we need to start planning for the potential risks and benefits.”
- Avin and Ó hÉigeartaigh suggest that CSER is uniquely placed to contribute to discussions around the study and mitigation of risks associated with emerging technologies and human activity. For the purpose of this report, this meant being able to convene experts in machine learning, cybersecurity and the broader legal, socio-political implications. The result is a report that lays out how and why AI will alter the landscape of risk for citizens, organisations and states.
- “It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass them,” says the report. “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”
- Adds Ó hÉigeartaigh: “Whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance and profiling – the full range of impacts on security is vast.”
- ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’ is the result of a workshop co-organised by CSER and the University of Oxford’s Future of Humanity Institute.
- Dr Shahar Avin Centre for the Study of Existential Risk (CSER) sa478@cam.ac.uk
- Dr Seán Ó hÉigeartaigh CSER so348@cam.ac.uk
- Needles & Haystacks
- Police at the “front line” of difficult risk-based judgements are trialling an AI system trained to give guidance using the outcomes of five years of criminal histories.
- “It’s 3am on Saturday morning. The man in front of you has been caught in possession of drugs. He has no weapons, and no record of any violent or serious crimes. Do you let the man out on police bail the next morning, or keep him locked up for two days to ensure he comes to court on Monday?”
- The scenario Dr Geoffrey Barnes is describing is fictitious, and yet the decision is one that happens hundreds of thousands of times a year across the UK: whether to detain a suspect in police custody or release them on bail. The outcome of this decision could have major consequences for the suspect, for public safety and for the police.
- “The police officers who make these custody decisions are highly experienced,” explains Barnes. “But all their knowledge and policing skills can’t tell them the one thing they need to know most about the suspect – how likely is it that he or she is going to cause major harm if they are released? This is a job that really scares people – they are at the front line of risk-based decision-making.”
- Barnes and Professor Lawrence Sherman, who leads the Jerry Lee Centre for Experimental Criminology in the University of Cambridge’s Institute of Criminology, have been working with police forces around the world to ask whether AI can help.
- “Imagine a situation where the officer has the benefit of a hundred thousand, and more, real previous experiences of custody decisions?” says Sherman. “No one person can have that number of experiences, but a machine can.”
- In mid-2016, with funding from the Monument Trust, the researchers installed the world’s first AI tool for helping police make custodial decisions in Durham Constabulary.
- Called the Harm Assessment Risk Tool (HART), the AI-based technology uses 104,000 histories of people previously arrested and processed in Durham custody suites over the course of five years, with a two-year follow-up for each custody decision. Using a method called “random forests”, the model looks at vast numbers of combinations of ‘predictor values’, the majority of which focus on the suspect’s offending history, as well as age, gender and geographical area.
- “These variables are combined in thousands of different ways before a final forecasted conclusion is reached,” explains Barnes. “Imagine a human holding this number of variables in their head, and making all of these connections before making a decision. Our minds simply can’t do it.”
- The aim of HART is to categorise whether in the next two years an offender is high risk (highly likely to commit a new serious offence such as murder, aggravated violence, sexual crimes or robbery); moderate risk (likely to commit a non-serious offence); or low risk (unlikely to commit any offence).
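- The report describes HART’s approach only in outline, but the general shape of a random-forest risk classifier can be sketched in a few lines of Python. Everything below – the feature names, the toy data and the three-way labels – is an illustrative assumption, not HART’s actual predictors or training set.

```python
# Minimal sketch (not HART itself): a random forest that maps custody-style
# predictor values to a three-way risk forecast. All names and data are
# illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical custody records: offending history plus age, gender and area.
records = pd.DataFrame({
    "prior_offences": [0, 3, 12, 1, 7, 0, 5, 2],
    "prior_violence": [0, 1, 4, 0, 2, 0, 1, 0],
    "age":            [19, 34, 27, 45, 22, 61, 30, 51],
    "male":           [1, 1, 1, 0, 1, 0, 1, 0],
    "area_code":      [2, 5, 5, 1, 3, 4, 2, 1],   # coded geographical area
})
# Outcome of a two-year follow-up: 0 = low, 1 = moderate, 2 = high risk.
outcomes = [0, 1, 2, 0, 1, 0, 2, 0]

X_train, X_test, y_train, y_test = train_test_split(
    records, outcomes, test_size=0.25, random_state=0)

# Many decision trees, each grown on a random slice of the records and
# predictors, vote on the final risk category for a new suspect.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print(model.predict(X_test))   # forecast categories for the held-out cases
```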
- “The need for good prediction is not just about identifying the dangerous people,” explains Sherman. “It’s also about identifying people who definitely are not dangerous. For every case of a suspect on bail who kills someone, there are tens of thousands of non-violent suspects who are locked up longer than necessary.”
- Durham Constabulary want to identify the ‘moderate-risk’ group – who account for just under half of all suspects according to the statistics generated by HART. These individuals might benefit from their Checkpoint programme, which aims to tackle the root causes of offending and offer an alternative to prosecution that they hope will turn moderate risks into low risks.
- “It’s needles and haystacks,” says Sherman. “On the one hand, the dangerous ‘needles’ are too rare for anyone to meet often enough to spot them on sight. On the other, the ‘hay’ poses no threat and keeping them in custody wastes resources and may even do more harm than good.”
- A randomised controlled trial is currently under way in Durham to test the use of Checkpoint among those forecast as moderate risk.
- HART is also being refreshed with more recent data – a step that Barnes explains will be an important part of this sort of tool: “A human decision-maker might adapt immediately to a changing context – such as a prioritisation of certain offences, like hate crime – but the same cannot necessarily be said of an algorithmic tool. This suggests the need for careful and constant scrutiny of the predictors used and for frequently refreshing the algorithm with more recent historical data.”
- No prediction tool can be perfect. An independent validation study of HART found an overall accuracy of around 63%. But, says Barnes, the real power of machine learning comes not from the avoidance of any error at all but from deciding which errors you most want to avoid.
- “Not all errors are equal,” says Sheena Urwin, head of criminal justice at Durham Constabulary and a graduate of the Institute of Criminology’s Police Executive Master of Studies Programme. “The worst error would be if the model forecasts low and the offender turned out high.”
- “In consultation with the Durham police, we built a system that is 98% accurate at avoiding this most dangerous form of error – the ‘false negative’ – the offender who is predicted to be relatively safe, but then goes on to commit a serious violent offence,” adds Barnes. “AI is infinitely adjustable and when constructing an AI tool it’s important to weigh up the most ethically appropriate route to take.”
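- The published report does not detail how HART weights its errors, but the underlying idea – penalising the dangerous ‘false negative’ far more heavily than the cautious ‘false positive’ – can be illustrated with a short, self-contained sketch. The class weights, threshold and synthetic data below are assumptions chosen purely for illustration.

```python
# Illustrative sketch only (not HART): make forecasting 'low risk' for a
# genuinely high-risk offender the costliest mistake. Every number here is
# an assumption, not a value from the Durham model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(300, 5))                    # hypothetical predictors
y = rng.choice([0, 1, 2], size=300, p=[0.5, 0.4, 0.1])    # low / moderate / high

# A heavier class weight makes misclassifying a high-risk case costlier,
# nudging the forest towards false positives rather than false negatives.
model = RandomForestClassifier(
    n_estimators=500,
    class_weight={0: 1, 1: 2, 2: 10},
    random_state=0,
).fit(X, y)

# A second lever: flag anyone whose predicted probability of being high risk
# exceeds a deliberately low threshold (0.2 here, purely illustrative).
high_col = list(model.classes_).index(2)
flag_high = model.predict_proba(X)[:, high_col] > 0.2
print(f"{flag_high.mean():.0%} of cases flagged for an extra layer of review")
```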
- The researchers also stress that HART’s output is for guidance only, and that the ultimate decision is that of the police officer in charge.
- “HART uses Durham’s data and so it’s only relevant for offences committed in the jurisdiction of Durham Constabulary. This limitation is one of the reasons why such models should be regarded as supporting human decision-makers not replacing them,” explains Barnes. “These technologies are not, of themselves, silver bullets for law enforcement, and neither are they sinister machinations of a so-called surveillance state.”
- Some decisions, says Sherman, have too great an impact on society and the welfare of individuals for them to be influenced by an emerging technology.
- Where AI-based tools show great promise, however, is in using forecasts of offenders’ risk levels for effective ‘triage’, as Sherman describes: “The police service is under pressure to do more with less, to target resources more efficiently, and to keep the public safe.
- “The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review. At the same time, better triaging can lead to the right offenders receiving release decisions that benefit both them and society.”
- Dr Geoffrey Barnes Jerry Lee Centre for Experimental Criminology, Institute of Criminology gcb1002@cam.ac.uk
- Professor Lawrence Sherman Jerry Lee Centre for Experimental Criminology, Institute of Criminology ls434@cam.ac.uk
- Sheena Urwin Durham Constabulary
- In Tech We Trust?
- Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.
- Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm has greeted their new Strategic Research Initiative on Trustworthy Technologies, which brings together science, technology and humanities researchers from across the University.
- In fact, Singh, a researcher in Cambridge’s Department of Computer Science and Technology, has been collaborating with lawyers for several years: “A legal perspective is paramount when you’re researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon.”
- Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which comes into force this year, has brought forward debates such as whether individuals have a ‘right to an explanation’ regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. “With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously,” he says.
- Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services. As we work, shop and travel, computers and mobile phones already collect, transmit and process much data about us; as the ‘Internet of Things’ continues to instrument the physical world, machines will increasingly mediate and influence our lives.
- It’s a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: “We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they’re doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law.”
- What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.
- “Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Weller recalls. “Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”
- But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars were wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.
- Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.
- How much we trust the ‘black box’ of machine learning systems, both as individuals and society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”
- But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”
- If we can make them trustworthy and transparent, how can we ensure that algorithms do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.
- When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that “black-sounding” names were 25% more likely to result in the delivery of this kind of advertising.
- Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”
- Transparency, reliability and trustworthiness are at the core of Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, develop new ways to ensure that AI systems perform well in real-world settings, and examine whether empathy is possible – or desirable – in AI.
- Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.
- Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.”
- And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”
- Dr Jat Singh Department of Computer Science and Technology (Computer Lab) js573@cam.ac.uk
- Dr Adrian Weller Department of Engineering, the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute adrian.weller@eng.cam.ac.uk
- What makes a city as small as Cambridge a hotbed for AI and machine learning start-ups?
- A critical mass of clever people obviously helps. But there’s more to Cambridge’s success than that.
- On any given day, some of the world’s brightest minds in the areas of AI and machine learning can be found riding the train between Cambridge and London King’s Cross.
- Five of the biggest tech companies in the world – Google, Facebook, Apple, Amazon and Microsoft – all have offices at one or both ends of the train line. Apart from the tech giants, however, both cities (and Oxford, the third corner of the UK’s so-called golden triangle) also support thriving ecosystems of start-ups. Over the past decade, start-ups based on AI and machine learning, in Cambridge and elsewhere, have seen explosive growth.
- Of course, it’s not unexpected that a cluster of high-tech companies would sprout up next to one of the world’s leading universities. But what is it that makes Cambridge, a small city on the edge of the Fens, such a good place to start a business?
- “In my experience, Silicon Valley is 10% tech and 90% hype, but Cambridge is just the opposite,” says Vishal Chatrath, CEO of PROWLER.io, a Cambridge-based AI company. “As an entrepreneur, I want to bring world-changing technology to market. The way you do that is to make something that’s never existed before and create the science behind it. Cambridge, with its rich history of mathematicians, has the kind of scientific ambition to do that.”
- “The ecosystem in Cambridge is really healthy,” says Professor Carl Edward Rasmussen from Cambridge’s Department of Engineering, and Chair of PROWLER.io. “The company has been expanding at an incredible rate, and I think this is something that can only happen in Cambridge.”
- PROWLER.io is developing what it calls the world’s first ‘principled’ AI decision-making platform, which could be used in a variety of sectors, including autonomous driving, logistics, gaming and finance. Most AI decision-making platforms tend to view the world like an old-fashioned flowchart, in which the environment is static. But in the real world, the parameters a decision must take into account shift every time a decision is made.
- “If you could take every decision-making point and treat it as an autonomous AI agent, you could understand the incentives under which the decision is made,” says Chatrath. “Every time these agents make a decision, it changes the environment, and the agents have an awareness of all the other agents. All these things work together to make the best decision.”
- For example, autonomous cars running PROWLER.io’s platform would communicate with one another to alleviate traffic jams by re-routing automatically. “Principled AI is almost an old-fashioned way of thinking about the world,” says Chatrath. “Humans are capable of making good decisions quickly, and probabilistic models like ours are able to replicate that, but with millions of data points. Data isn’t king: the model is king. And that’s what principled AI means.”
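- PROWLER.io has not published the internals of its platform, so the following is only a toy illustration of the general idea described above: each decision point modelled as an agent whose choices change a shared environment, and therefore the incentives facing every other agent. All of the classes, rewards and numbers are made-up assumptions.

```python
# Toy multi-agent sketch of 'decision points as agents' (not PROWLER.io's
# platform): cars repeatedly choose between two routes, and each choice
# changes the congestion – and so the incentives – seen by everyone else.
import random

class RouteAgent:
    """One decision point, e.g. a car choosing between routes A and B."""
    def __init__(self, name):
        self.name = name
        self.value = {"A": 0.0, "B": 0.0}   # learned estimate of each route

    def choose(self):
        # Mostly pick the route that has looked best so far, sometimes explore.
        if random.random() < 0.1:
            return random.choice(["A", "B"])
        return max(self.value, key=self.value.get)

    def learn(self, route, reward):
        # Nudge the estimate for the chosen route towards the observed reward.
        self.value[route] += 0.1 * (reward - self.value[route])

agents = [RouteAgent(f"car{i}") for i in range(5)]

for step in range(200):
    choices = {agent: agent.choose() for agent in agents}
    # The environment responds to everyone's decisions at once: the more
    # cars on a route, the lower the reward each of them receives.
    load = {r: sum(1 for c in choices.values() if c == r) for r in ("A", "B")}
    for agent, route in choices.items():
        agent.learn(route, 1.0 / load[route])

print({agent.name: agent.value for agent in agents})
```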
- Could PROWLER.io be the next big success story from the so-called ‘Cambridge cluster’ of knowledge-intensive firms? In just under two years, the company has grown to more than 60 employees, has filed multiple patents and published papers. Many of the people working at the company have deep links with the University and its research base, and many have worked for other Cambridge start-ups. Like any new company, what PROWLER.io needs to grow is talent, whether it’s coming from Cambridge or from farther afield.
- “There’s so much talent here already, but it’s also relatively easy to convince people to move to Cambridge,” says Rasmussen. “Even with the uncertainty that comes along with working for a start-up, there’s so much going on here that even if a start-up isn’t ultimately successful, there are always new opportunities for talented people because the ecosystem is so rich.”
- “Entrepreneurs in Cambridge really support one another – people often call each other up and bounce ideas around,” says Carol Cheung, an Investment Associate at Cambridge Innovation Capital (CIC). “You don’t often see that degree of collaboration in other places.”
- CIC is a builder of high-growth technology companies in the Cambridge Cluster, and has been an important addition to the Cambridge ecosystem. It provides long-term support to companies that helps to bridge the critical middle stage of commercial development – the ‘valley of death’ between when a company first receives funding and when it begins to generate steady revenue – and is a preferred investor for the University of Cambridge. One of CIC’s recent investments was to lead a £10 million funding round for PROWLER.io, and it will work with the company to understand where the best commercial applications are for their platform.
- AI and machine learning companies like PROWLER.io are clearly tapping into what could be a massive growth area for the UK economy: PwC estimates that AI could add £232 billion to the economy by 2030; and the government’s Industrial Strategy describes investments aimed at making the UK a global centre for AI and data-driven innovation. But given the big salaries that can come with a career in big tech, how can universities prevent a ‘brain drain’ in their computer science, engineering and mathematics departments?
- The University has a long tradition of entrepreneurial researchers who have built and sold multiple companies while maintaining their academic careers, running labs and teaching students. “People from academia are joining us and feeding back into academia – in Cambridge, there’s this culture of ideas going back and forth,” says Chatrath. “Of course some people will choose to pursue a career in industry, but Cambridge has this great tradition of academics choosing to pursue both paths – perhaps one will take precedence over the other for a time, but it is possible here to be both an academic and an entrepreneur.”
- “I don’t know of any other university in the world that lets you do this in terms of IP. It’s a pretty unique set-up that I can start a business, raise venture capital, and still retain a research position and do open-ended research. I feel very lucky,” says Dr Alex Kendall, who recently completed his PhD in Professor Roberto Cipolla’s group in the Department of Engineering, as well as founding Wayve, a Cambridge-based machine learning company. “A lot of other universities wouldn’t allow this, but here you can – and it’s resulted in some pretty amazing companies.”
- “I didn’t get into this field because I thought it would be useful or that I’d start lots of companies – I got into it because I thought it was really interesting,” says Professor Zoubin Ghahramani, one of Cambridge’s high-profile entrepreneurial academics, who splits his time between the Department of Engineering and his Chief Scientist role at Uber. “There were so many false starts in AI when people thought this is going to be very useful and it wasn’t. Five years ago, AI was like any other academic field, but now it’s changing so fast – and we’ve got such a tremendous concentration of the right kind of talent here in Cambridge to take advantage of it.”
In-Page Footnotes
Footnote 1: A cleaning “Sweepbot” infiltrates the ministry and ultimately explodes, killing the minister.