Official Blurbs
- Disruption, Democracy & the Global Order: An Evening with Yuval Noah Harari.
- Tuesday 27 February; 5.30pm–7.00pm (GMT). Keynes Lecture Theatre, King’s, and Zoom.
- CSER and King’s College Cambridge are pleased to host a talk by Professor Yuval Noah Harari on Disruption, Democracy & the Global Order.
- Hear Yuval Noah Harari in conversation with Provost Gillian Tett and the Director of the University's Centre for the Study of Existential Risk, as they discuss the challenges and opportunities of AI alongside disruption, democracy and the global order. With opening remarks from Lord Martin Rees (KC 1969) and contributions from student members of the Cambridge Existential Risks Initiative. In-person tickets have now sold out, but King's alumni have access to an exclusive livestream via the link and passcode below (now expired).
YouTube: The talk is now available here: Disruption, Democracy and the Global Order – Yuval Noah Harari at the University of Cambridge. Blurb:-
- Disruption, Democracy & the Global Order: An Evening with Yuval Noah Harari.
- Watch Yuval Noah Harari's presentation and panel discussions at an evening hosted by the Cambridge Centre for the Study of Existential Risk (CSER) and King's College, University of Cambridge.
- Introduced by CSER co-founder Martin Rees, Harari's presentation focuses on today's most urgent global challenges. It is followed by a panel discussion with Matthew Connelly, CSER's Director, and Gillian Tett, King's College Provost and columnist at the Financial Times – and a Q&A session with a panel of students from the Cambridge Existential Risks Initiative: Olivia Benoit, Nandini Shiralkar, Shoshana Dahdi and Giovanni Mussini.
- Filmed in Cambridge, England, on 27 February 2024.
- The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse. For more information, please visit our website: CSER: Cambridge Centre for the Study of Existential Risk.
CSER Website:
Introductory Notes
- I don’t normally attend these events but, having read a couple of Yuval Noah Harari’s (YNH’s) books, I thought I’d listen in.
- The student-participants had received a copy of YNH’s talk, but I’ve not been able to find it on-line, so have had to reconstruct it from notes I made when listening live on Zoom. There doesn’t seem to be a podcast available either, though maybe one will turn up1.
- Also, I had to leave towards the end as I was due to play bridge, so missed the detail of the Provost’s ‘3 take-aways’. One was that not all hope is lost and that we can achieve something by coordinated action.
- The early exit and the evening’s activities meant that I didn’t have the opportunity to review my notes after the session. So, my write-up will be neither fully accurate nor complete.
- Details about the major2 participating individuals and organisations can be found by following the links below:-
→ Yuval Noah Harari
→ Wikipedia: Yuval Noah Harari
→ Wikipedia: Gillian Tett
→ CSER: Matthew Connelly
→ Wikipedia: Matthew Connelly
→ CSER: Cambridge Centre for the Study of Existential Risk
→ Wikipedia: Martin Rees
→ CERI: Cambridge Existential Risks Initiative
- My own comments appear as footnotes. I just note here that the title of the talk bears precious little relation to its contents.
Introduction by Martin Rees3
- I didn’t take any notes as it wasn’t substantial.
- All a bit deferential4, given how distinguished Lord Rees is in comparison to YNH.
Yuval Noah Harari’s (YNH) Talk5
- Existential Risks: YNH sees three:-
- Ecological collapse / mass extinction6
- AI
- It’s already here – though in its early ‘amoeba’ stage.
- We have only a few years to ‘escape’ either enslavement or elimination7.
- Digital evolution is a million times faster than biological evolution.
- Global War
- We can only solve ER1 & ER2 if ER3 is solved. Otherwise, war will distract us from addressing these concerns.
- WW3 may already have started.
- Compare8 present Ukraine with Poland in 1939.
- Only in hindsight can we see that WW2 was underway even in 1941: the various regional wars weren’t seen as ‘joined up’ in a WW.
- The current situation with Hamas9, while a regional conflict, has connections across the world.
- CSER10:
- If WW3 has started then we won’t be able to invest the resources or command the international cooperation to address ER1 & ER2.
- Can we stop WW3?
- It is possible to stop the war in Ukraine but only if11 we can convince Russia that they cannot win. The GDP of the US + Europe is 20 times that of Russia. The West has $300Bn of frozen Russian assets that we could donate to Ukraine.
- But even if Ukraine12 is contained what about the other wars? Is the world a jungle? Is WW3 only a question of time?
- Wars are variable rather than constant: there are periods of peace, and since WW2 mostly cooperation rather than conflict.
- Consider State Budgets: Empires – either Roman or British – consumed 50% of budget on military expenditure. Now, only 7% (healthcare is 10%).
- This represents a major change: humans have been changing and making better choices, but this is reversible.
- Russia is spending 30% of GDP on its military.
- Understanding of History
- Predator or prey? This is dependent on our positive or negative views. It’s analogous to attitudes to AI.
- There is no historical determinism. ‘All’ wars are fought over narratives13 rather than resources.
- Gaza: lots of food resources. The narrative is the Rock14 … to whom did God give it? The worst massacre ever15 … over a rock.
- YNH’s talk ended rather abruptly, and disappointingly, at this point.
Questions and Discussion
- There were three sets of questions
- The first was essentially a discussion between Matthew Connelly and YNH.
- Those from the (graduate) student staff of CSER / CERI were next. These questioners had seen the text of the talk beforehand, so they could prepare their questions. They appeared on stage as a group – four of them, apart from MC (and GT).
- Then came a selection of questions from the student audience. Those in the audience could enter their questions into a system; these somehow appeared on a list, visible to those in the room, that was voted on in real time, and the question at the top at the time was the next chosen for discussion. Some of the questioners were anonymous, and it was hoped that they weren’t Bots.
- Reviewing my Notes, I see that I didn’t have time to jot down the actual questions, nor who posed them. Instead, I’ve just a few notes on the discussion.
- Discussion: Matthew Connelly16 and YNH
- Narratives are not tangible. Can YNH add anything positive in this regard? Say there was war between the USA and China. There would be a terrible impact on the environment and a devastating effect on the world economy: worse than Covid. (But Covid had a good effect on the climate!) And conflict would escalate the development and deployment of AI. Existential threats do not exist in isolation. Could YNH amplify or contradict?
- The 20th century saw three main narratives: Fascism versus Communism versus Liberalism. Fascism and Communism saw inevitable conflict between races (or nations) and classes, respectively. Liberalism didn’t aim for conflict but cooperation: we all have common experiences, interests, and values. There’s still a debate about whether history is about conflict.
- We need to unite against an existential threat. We would do so if there was a Martian invasion! We need to unite17 also to resist climate change and also with respect to AI, which could be classified as Alien18 Intelligence! He noted, though, that there is great positive potential in AI as well as risk.
- Only MAD (Mutually Assured Destruction) prevented WW3 in the second half of the 20th Century. Fear of nuclear war ‘evaporated’ after worries from the 50s through the 80s. We’d unite against an Alien Invasion. Discussions were had between Reagan and Gorbachev19.
- Watch out for millenarian movements. (Transhumanists20) imagine AI to be an improvement over humans – even if we go extinct. The ability to produce a perfect world becomes an excuse to do terrible things21.
- Why now? We’ve gone from ‘the End of History22’, when it made sense to issue a 100-year bond; why has this changed now? History can be described but not explained. Change arises via stories rather than material conditions. Stories are the irrational engines of history. The world was finely balanced during the Cold War. Now, no-one is in charge. We’re still in an adjustment phase of chaos23. Do we have enough time to ‘muddle through24’? Not if we make the wrong choices.
- Compare the effect of AI with the Industrial Revolution. There were terrible experiments in how to build industrial societies. Empires and terrible nonsenses25. AI might be a similar26 failed experiment.
- Deaths from war have declined27, but for how long?
- Q&A: CSER / CERI
- What’s the impact of the rise of Corporate Power on Democracy28?: Corporations are ‘legal persons’ but require human beings to run them. However, AIs might run them one day, and AIs might themselves be legal persons. What rights would AIs have29? Might they hire politicians to acquire them? This might provide a legal path for AI to take over. Are Corporations the new countries? Compare with30 the various East India Companies (Dutch / British). These background ideas aren’t new, though AI is new.
- Does Ecological Collapse require a new narrative?: Historically, societies have been more worried about wars with their neighbours than about the impact of campfires on the environment. The relevant narrative is the contrast between bodies and minds. We’re animals31 and share the environment with other creatures, for whom we should have compassion. At the level of the mind, we consider ourselves to be totally different (‘only humans have souls’); worries about what happens to our souls after death lead to a disconnect from other creatures.
- How should we control AI?: Well, we need to spot the dangers and legislate, but there are difficulties both with anticipation and coordination. Anything we can imagine now won’t protect us in 30 years’ time. What we need is a living32 regulatory institution. This will require the best minds and technology (rather than having all these in ‘Big Tech’) together with public trust and support.
- Q&A: Audience
- Isn’t Futurology highly speculative?: YNH was originally a historian. History is the study of change (partly) to learn what it tells us for today33.
- National borders were often drawn up by collapsing colonial powers in the wrong places: true, but they have to be maintained, or else34 there would be universal war.
- Individual Agency: Individuals have more agency than they have ever had, for good and ill. Ideas ‘go viral’. Greta Thunberg. There’s also a narrative of victimhood which leads people not to take responsibility. If I have no power, bad things are not my fault.
- Books / Movies / TV related to Existential Risk?: Contrast with ‘When the Wind Blows35’, which dealt with the threat of nuclear war (including futile hiding under desks36!). While there are many apocalyptic movies, these are not changing the world. We need analogues to the ‘MAD’ films for AI.
- How should AI affect schools?: Who knows whether schools will exist in 50 years’ time? Schools – as distinct from universities – are a fairly recent37 innovation.
- AI Sentience?: YNH distinguishes Consciousness from Intelligence. Computers are not sentient38 and may always remain unconscious39. It may be that the Universe will become filled with highly intelligent beings devoid of sentience and feelings of any kind. This would be terrible and the worst-case scenario40.
In-Page Footnotes
Footnote 1:
- One has turned up on YouTube, as noted above.
- I will use this to correct my notes in due course.
Footnote 2:
- I didn’t catch the names of the graduate students invited on stage for the Q & A session. I might be able to deduce them from the photos on the CSER / CERI websites, but best not to try.
Footnote 3:
- My overall impression was of how ancient Martin Rees looked – gaunt, doddery and muddled. He looked 101 but is apparently only 81. Maybe he’s unwell, though this wasn’t remarked on if so.
- While the blurb has him as ‘KC 1969’ Wikipedia doesn’t say in what capacity, though it does say that he’s an honorary fellow of King’s (and sundry other Colleges).
- Looking in the King’s Register of Admissions, it seems he was a Senior Research Fellow during 1969-72, so had departed for a Professorship at the University of Sussex the year before I went up to King’s.
- His wife (Professor Dame Caroline Humphrey: see Wikipedia: Caroline Humphrey) is an anthropologist who has been a Fellow at King’s since 1978, now a Life Fellow. She’s retained her surname from her first marriage to Nicholas Humphrey, presumably for reasons of professional recognition.
- Martin Rees made the introduction in his capacity as one of the three co-founders of CSER and a member of its Advisory Board.
- He wrote "Rees (Martin) - Our Final Century: Will the Human Race Survive the Twenty-First Century?" 20 years ago. I’ve not read it but may do so now to see how its prophecies have stood the test of time.
- I was annoyed to see that he’d won the Templeton Prize, despite being a (non-militant) atheist.
Footnote 4:
- YNH is ‘famous’ rather than ‘distinguished’. There appears to be some disagreement about his popular books – academics seem to think that where he gives the standard line, he’s uninteresting, and when he doesn’t he’s wrong (not just misguided, but factually incorrect).
Footnote 5:
- This was surprisingly brief – only 15 or 20 minutes (it ended at 17:54, but I didn’t time the introductions, and the event started a couple of minutes late waiting for Martin Rees to come on stage).
Footnote 6:
- This is passed over in silence, maybe because it’s seen as uncontroversial.
- It’s not possible to dispute this without being seen as a ‘climate change denier’.
- Yet, one can agree that climate change is anthropogenic without seeing it as an ‘existential threat’, at least not on the scale of bioterrorism or all-out nuclear war. It can be managed even in the worst-case scenario. The oceans aren’t going to boil.
- Yes – there have been – and will continue to be – anthropogenic mass extinctions. But we know of many more species these days than we do for eons past.
Footnote 7:
- I think a miracle would need to occur before this happened.
- Of course, one could see how stupidity could give ‘the machines’ power over materiel. But they aren’t conventionally embodied, and you can just turn them off as in ‘2001’. A bit quick, I know. But – as YNH says – they are at an amoeba stage and we don’t know what breakthroughs are required for the next steps.
Footnote 8:
- Yes – let’s! It seems completely absurd to compare the two.
- The dispute between Russia and Ukraine – while it is over ‘narratives’ – is really just a border dispute rather than an attempt at conquest that’s to be followed by world domination. The Russian army is a rag-tag bunch of conscripts and mercenaries rather than the most powerful and innovative army in the world.
- The crunch would come if China invaded Taiwan. It would depend on what the US would do in that circumstance. Hopefully nothing beyond ‘sanctions’, especially as they tacitly agree that Taiwan is part of China (as it is), though even that would lead to a global recession (which would be a good way of addressing ER1).
Footnote 9:
- I think that’s how he put things. I was surprised there was no pro-Palestinian lobby present. Students aren’t what they used to be.
- The situation in Gaza is a mess and will likely get worse, but in the grand scheme of things it’ll only be important if a million get killed in Gaza or Israel gets eliminated.
- Compare with various other messes – Syria, Iraq, the Iran-Iraq war, Vietnam and so on.
- The West is uncomfortable about Gaza because Israel is treated as a European power which should be held to a higher standard.
- Of course, if things get really bad, there could be oil-price shocks and global trade disruption. Inconvenient, but not the end of civilization as we know it. And good for ER1.
Footnote 10:
- I’m not sure why I noted this as the section title.
- CSER is – it seems – pronounced ‘Caesar’.
Footnote 11:
- I think he said this. He seems rather hawkish and confrontational for someone who thinks global cooperation is the way forward (as it is).
- Wars can be stopped if the weaker side sues for peace when they cannot protect their civilian population rather than relying on their enemies’ enemies to help them fight on for years.
Footnote 12:
- As noted, Ukraine is a side-show. If the West had shown Gorbachev the compassion and pragmatism it showed Germany after WW2, rather than trying to take advantage, we wouldn’t be in this mess.
Footnote 13:
- This is a good point, but I think it depends on which wars catch your eye. Mostly, I suppose, it’s both. The barbarian invasions of the Roman empire weren’t based on narratives – the invaders had been displaced from their own lands.
Footnote 14:
- This is an absurd over-simplification. The Dome of the Rock is a flash-point, no more.
- The problem is insoluble given that two peoples have claims on the same territory.
- Or at least it’s insoluble unless one side or other (or its supporters) give up.
- Neither Gaza nor Israel is self-supporting. Apparently Gaza needs 455 trucks of aid a day even in peacetime.
Footnote 15:
- What tosh. The Mongols are said to have killed a million people in the fall of Baghdad (though this is disputed – estimates vary from 200,000 to 2 million: see Wikipedia: Siege of Baghdad) when it refused to surrender. Julius Caesar killed a million Gauls. And many more.
Footnote 16:
- As noted above, Matthew Connelly is Director of CSER.
- His latest book – The Declassification Engine: What History Reveals About America's Top Secrets (2023, Pantheon) – was mentioned.
Footnote 17:
- This is clearly the case for climate change and other environmental problems. But it’s the major players that are key. It’s no use the UK footling about with electric cars when Brazil is burning the rain forest and India catching up on its development by burning coal.
- I don’t know what sort of ‘unity’ YNH has in mind. A world government leads to a single point of failure and could end up repressive.
- Unity is even more important for AI – assuming exponential development is indeed possible. Defectors might end up with a huge advantage. It would depend how expensive future developments are. If they continue to require huge datasets and computing capacity it won’t be open to teenagers in their bedrooms, though it might be open to ‘rogue states’.
Footnote 18:
- I get the point – but AIs aren’t really ‘alien’ as they depend on us both for their architecture and – especially in the case of LLMs – their data.
Footnote 19:
- This comment is in my Notes, but I can’t remember what was intended (I don’t think I knew at the time).
- It might have been strategic arms limitation (Wikipedia: START I), with unity against alien invasion as a trope.
Footnote 20:
- I discuss Transhumanism in this Note and elsewhere.
Footnote 21:
- I agree, but this is likely to apply more to religious millenarians, particularly – at the present time – to Islamists.
- There’s also ‘passive millenarianism’ whereby it is thought that things have got to get so bad that the Messiah will come to rescue us. So, resisting disasters is counter-productive as well as futile. Thankfully, this approach is rather niche.
Footnote 22:
Footnote 23:
- This is a little negative. After the fall of the Soviet Union there was – for a while – only one superpower. Now there are two – the US and China – and it’s only a matter of time (most likely) before China overtakes the USA economically and militarily. The US is therefore getting protectionist to try to stall this. The fiascos of the Vietnam and the Iraq wars have led to doubts about the ability of the world’s policeman to sort things out.
Footnote 24:
- What is the alternative to ‘muddling through’?
Footnote 25:
- No doubt there were many errors made. Empires were – I suppose – necessary for the supply of raw materials for manufactures.
- I suppose, also, that the history of the industrial revolution was distorted by it arising in a tiny island that had the energy and iron-ore requirements covered but needed cotton (and maybe other raw materials).
- However, the major disaster in the industrial revolution was the impact on the domestic population as it moved to the towns into an unregulated exploitative environment. Dickens, Factory Acts and all that. That and exploitation of the Irish navvies (see Wikipedia: Navvy) to build infrastructure (canals, then the railways).
Footnote 26:
- I suppose the ‘similarity’ might be on ‘job losses’ (as was the case when weaving was mechanized) but optimists say that – while some jobs will go – others will replace them.
- This is equally analogous to the de-industrialization within Wales and the North of England, with no real attempt to mitigate matters for the local communities.
- Are there other analogies?
Footnote 27:
- There are at least four issues to my mind:-
- The rise in terrorism.
- The shift of wars from battlefields to population centres, with the inevitable rise in civilian casualties.
- The refusal to end wars by accommodation or surrender, but to carry on using external support from ‘supporters’ willing to carry on proxy wars against their enemies.
- The rise of new technologies – in particular drones. This is tied in with AI.
Footnote 28:
- I think this question had been prepared before the draft of YNH’s speech had been issued, on the naïve assumption that the talk would relate to its title.
- YNH’s response basically ignores the likely intent of the question – presumably wanting him to support control of ‘Big Tech’ and other powerful transnational companies – and replies as though it was a question about the control of AI.
Footnote 29:
- This is likely to be a muddled question. There are two aspects:-
- If AIs were simply legal persons in the same way as Companies are, they would have the same rights (and duties) that companies have. Nothing new there.
- However – if AI became (or were deemed to have become) sentient, then human beings would have (or it might be accepted that they have) duties of care towards them, and this might then lead on to them being granted rights. This would be a step change. And, as AIs are not physical beings (though the hardware they run on is) they can be replicated without end, subject to the availability of that hardware.
Footnote 30:
- The British East India Company (Wikipedia: East India Company) might have had a role in running (parts of) India and other countries, but wasn’t itself a country or a proxy for one – much less are contemporary multinationals. Whatever their influence and exploitative practices, they don’t have their own standing armies.
Footnote 31:
- So, YNH is an Animalist!
- I suppose YNH’s reply answers the question. I agree with him, for what it’s worth.
Footnote 32:
- Presumably, what he means is one that provides rapid response. But that’s not really possible without executive powers, especially in the UK where there is a laborious parliamentary procedure and endless challenges in the courts.
- It also requires global cooperation, which won’t be easily forthcoming.
Footnote 33:
- Given that AI is such a new phenomenon, just what use is history as a precedent for what we need to do? The same goes for environmental collapse. There are lessons from previous WWs. Presumably YNH’s lesson would be ‘no appeasement’, but this can lead to the start of a WW if compromise (even if inequitable) would lead to peace. That was the point of Munich prior to WW2. The issue is knowing what the end-game is for the aggressor.
- The lesson from history should be that – in general – aggressors are usually happy with ‘tribute’ but take unreasonable punitive measures against unreasonable resistance.
- Hitler and WW2 is not a typical war, so we shouldn’t use this as our yardstick.
- As I’m always saying, Putin is no Hitler. Nor is he a Stalin. Also, despite recent repressive measures, there are still more freedoms in Russia than in many countries.
Footnote 34:
- I’m not 100% sure that this is what YNH said.
- If he did say it, I’m not sure he’s right. Something needs to be done to sort such mismatches out. In India, this was done – at great cost in human life – by moving populations. Otherwise, border disputes seem to have been addressed militarily. But – where they are obviously badly-drawn – it would be better for borders to be redrawn by negotiation, maybe with compensation (usually cheaper than war).
- No doubt YNH has Ukraine in mind. Russia’s claims weren’t absurd; Ukraine’s borders were drawn on the assumption that the USSR would continue in existence. No one has won out in this war. Whoever ‘wins’ will inherit a heap of ruins in the disputed areas.
Footnote 35:
Footnote 36:
- This was alluded to by the Provost, but it’s not germane to ‘When the Wind Blows’ but to other ‘public information’ films of the time (or earlier), though other futile precautions are mentioned.
- Gillian Tett was born in July 1967, so would have been at School when ‘Threads’ came out in September 1984 but at University (Clare College) when ‘When the Wind Blows’ came out in October 1986. Not sure whether this is germane to why she should – repeatedly – mention one film rather than the other.
- Interestingly, the storyline of ‘Threads’ involves a Soviet invasion of Iran, a prime mover behind today’s troubles (a US invasion is somewhat more likely this time round). The invasion is from Soviet Afghanistan, which is taken as the spark that ignites nuclear war in ‘When the Wind Blows’.
- Talking of films and existential threats, 12 Monkeys from 1995 (Wikipedia: 12 Monkeys) raises the much more serious existential threat of bioterrorism. Unfortunately – like the ‘Terminator’ series – its realism is undermined by the assumption that time-travel to change the past is possible.
Footnote 37:
- No doubt it depends on which country and which class you’re talking about. In England, grammar schools have been around since the 16th century – Shakespeare went to one. Royal foundations since at least Henry VIII (earlier for foundations like Eton and Winchester, then for ‘poor scholars’). Earlier still for monastery schools. But for the working class and the upper classes, schools are a comparatively recent innovation, I suppose. The working or agricultural classes had little education until the 19th century. The same goes for the English Public Schools, mostly founded or reformed in the 19th century; prior to that the rich were educated privately.
- Or so I think! Maybe a bit more complicated in the case of Public Schools. See:-
→ Wikipedia: Grammar school
→ Wikipedia: Public school (United Kingdom)
- Not sure how relevant this is.
- It’s true that AI (and the internet generally) will have an impact on education. But there’s more to education than imparting knowledge. Children being stuck at home in front of their PCs and phones isn’t much of an upbringing.
Footnote 38:
- I agree with this – as does almost everyone. It will be difficult eventually to determine from behaviour whether they are sentient or not. Decisions will have to be made on theoretical grounds. Presumably like YNH, I think that sentience is a biological phenomenon that can only be simulated by functionally equivalent digital computers.
Footnote 39:
- Given the possibility that consciousness is related to quantum phenomena, who knows whether quantum computers will eventually become conscious, and how we would know whether they are.
- Worries about sentience might have huge practical and legal consequences, as noted earlier. We are right to predicate sentience of the higher animals as they share our biological architecture and act as though they are sentient. The stakes of getting things wrong are high, but sometimes there’s nothing we can do. If flies and other pests are sentient, so much the worse for them.
- It’s important not to run consciousness and sentience together – they are related but distinct concepts (and experiences). There’s no space here to cover the matter, and it was not – I don’t think – touched on by YNH. Some theories of consciousness – such as Higher Order Thought (HOT) theories - seem to me not necessarily to involve sentience! See SEP: Carruthers - Higher-Order Theories of Consciousness.
Footnote 40:
- Well, it would be pretty bad, but the worst case scenario would be a universe devoid of ‘beings’ – whether sentient or intelligent. While there is intelligence it is presumably possible for sentience to re-develop, if the ‘hardware’ can in principle support it.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2025
- Mauve: Text by correspondent(s) or other author(s); © the author(s)