Aeon: Follow-up Boxes
Hains (Brigid) & Hains (Paul)
This Page provides (where held) the Abstract of the above Book and those of all the Papers contained in it.
Text Colour-ConventionsDisclaimerPapers in this Book




"Aftab (Awais) - What a psychiatric diagnosis means – and what it doesn’t mean"

Source: Aeon, 14 November 2024


Author's Introduction
  • Just as nearly everyone experiences physical health problems such as infections or injuries, most people will experience one or more mental health problems over the course of their lives. At present, only a fraction of individuals who have these problems come to the attention of a clinician.
  • Many live with unexplained suffering for years, feeling isolated and burdened by an invisible weight that is hard to articulate. Identifying and naming these experiences of suffering is an important step toward understanding and treatment. A diagnosis given by a trained mental health professional, such as a psychiatrist or clinical psychologist, can provide relief by offering professional recognition of someone’s distress and disability. It can guide a person towards appropriate interventions and support. And it can come with a comforting realisation that one’s problems are also experienced by others, and that they have been carefully studied by clinicians and researchers.
  • However, it’s important for people who receive a psychiatric diagnosis for a mental health problem, or who learn about someone else’s diagnosis, to have clarity on what getting a diagnosis means – and what it doesn’t mean. Some people expect a diagnosis to describe their problems perfectly. Others are tempted to dismiss a diagnosis as nothing more than a label. Some misunderstand a mental health diagnosis as the identification of an abnormality or disease in the brain. There is also considerable stigma around mental health diagnoses, and misunderstandings about their nature and outcomes are widespread. When they are seen as pejorative labels, or the person with a mental health problem is viewed as having a damaged self or an incurable brain disease, diagnoses can induce shame and pessimism.
  • Diagnoses in mental healthcare are far from perfect and can cause harm if misapplied, but they also carry more depth and significance than just applying a label. I’d like to share a few different and general ways of understanding these diagnoses – focusing on some key yet underappreciated points – to paint a more realistic picture of their meaning and use.

Author's Conclusion: A diagnosis does not describe the essence of a person
  • A psychiatric diagnosis can profoundly shape how a person sees themselves. After receiving a diagnosis, they might begin to view themselves primarily through the lens of that category, reducing their complex, multifaceted self – with all the nuances of their personality, background and experiences – to a single aspect. This over-identification with a diagnosis can be further complicated by internalised stigma that makes people feel fundamentally flawed or inadequate, eroding their sense of self-worth.
  • A diagnosis can also foster a mistaken belief that one’s condition is unchangeable. People sometimes feel trapped by their diagnosis (especially for diagnoses that evoke a sense of pessimism, such as schizophrenia or personality disorders), and they may be unable to envision a future where they feel differently or their symptoms improve. Someone who over-identifies with a diagnosis might also begin to behave in ways that align with the expectations attached to that diagnosis, even if those behaviours were not originally prominent. For example, someone diagnosed with an anxiety disorder might start avoiding situations that wouldn’t actually be anxiety-provoking for them, believing that their condition makes them predisposed to overwhelming anxiety in any uncertain or challenging scenario.
  • The key to mitigating all of these risks lies in being mindful that a diagnosis is just one piece of a much more detailed puzzle of personhood. Seeing diagnosis as a useful but fallible tool for addressing mental health problems can help the diagnosed person and those who know them keep their complex individuality in sight, including their strengths, interests, ambitions, relationships and all the other elements that make up an identity.
  • Mental health problems are not immutable by default. People can recover and symptoms can go into remission. Even in the absence of remission, people can experience substantial improvements in their functioning over time and achieve fulfilling, successful lives. It is true that some people live with chronic mental health challenges, just as some people live with chronic physical health challenges. But these chronic problems, even when they deeply influence how someone relates to themselves and others, are still only one part of a more complicated identity. The possibility of growth and transformation is almost always there.
Author Narrative
  • Awais Aftab is a psychiatrist in Cleveland, Ohio and a clinical assistant professor of psychiatry at Case Western Reserve University. He is the editor of Conversations in Critical Psychiatry (2024), and writes online at Psychiatry at the Margins.
Notes
  • I dare say I agree with most of this, though I'm in general suspicious of psychological diagnoses and of those happy to be pigeon-holed and excused personal responsibility for their actions.
  • I need to re-read the paper, though I've quoted quite a lot of it!
  • There are no Aeon Comments.
  • In addition to Psychopathology, this relates to my Note on Narrative Identity.

Paper Comment



"Aho (Kevin) - Permission to be ill"

Source: Aeon, 16 May 2025


Author's Introduction
  • It began in late September 2022. I was just recovering from a severe case of COVID-19 when Hurricane Ian hit my hometown in southwest Florida. My wife and I evacuated to Miami for a week and watched the damage unfold on various news channels. When we returned to our home in Fort Myers a week later, we were shocked by the devastation. The scenes were absurd. There were huge fishing boats suspended like toys from mangrove and palm trees, entire homes floating in the middle of San Carlos Bay, the famous causeway to Sanibel Island ripped in half, and the town of Fort Myers Beach utterly flattened. But our house, albeit without power, made it through the storm relatively unscathed.
  • One evening amid the power outage, I happened to be outside on the patio reading by headlamp when I began noticing my jaw clenching and tightening up. I was suddenly having difficulty controlling my tongue, lips and jaw. I came inside and asked my wife if I might be having a stroke. Beyond my Covid experience and the wreckage of the hurricane, this was an extremely stressful time in my life. As a philosophy professor, I had just published a new book that was generating a modest buzz, and it seemed as if I was being invited to give talks all over the place. Always an anxious traveller, I was scheduled to fly in rapid succession from Madison, Wisconsin to Birmingham, England and then to Sweden for a talk in Stockholm, then to Linköping for another talk and back to Stockholm again for a third. This, in addition to my normal work duties and some major upheavals in my personal life, appeared to short-circuit me physically and emotionally.
  • My first thought was that I had temporomandibular joint disorder (TMJ) from excessive jaw clenching. I scheduled appointments with numerous dentists, who confirmed that I was a jaw clencher and created an occlusion splint to wear at night. But the movements continued, and the pain was getting worse. My tongue was moving constantly, and I noticed it affecting my speech with slurring and a pronounced lisp. I started chewing gum to occupy my tongue. The anxiety about what was happening to my body reached such a breaking point that I cancelled all my trips and gave my talks virtually via Zoom. I scheduled an appointment with a psychiatrist, who recommended I up the dose of the antidepressant Zoloft, a selective serotonin reuptake inhibitor (SSRI), a version of which I had been taking for well over two decades. But things got only worse. The increased dose of Zoloft made me feel dissociated and dangerously impulsive. Ativan was added to help take the edge off, but the spiral of depression and anxiety deepened. I considered checking into a psychiatric hospital. What was happening to me?

Author's Conclusion
  • Alas, I’m still searching for adequate treatment and on the lookout for a speech pathologist or neuropsychologist with formal training and expertise in FND. In the meantime, I realise that the only way forward is to learn to accept my condition as it is now, to have compassion for myself and my malfunctioning mouth, and to continue moving toward the things that I am afraid of.
  • Part of this path to acceptance was to get out in front of audiences and talk again – slurred speech, flailing tongue and all. A pivotal moment came at an interdisciplinary conference at the University of Wisconsin, Madison in the fall of 2023, about a year to the day that I began suffering from symptoms. I was terrified of embarrassing myself, but I walked to the podium and, before I began, openly and honestly described my condition to the audience.
  • Appropriately titled ‘Wellness, Burnout, and the Conditions of American Work’, the talk went over beautifully, and the support from participants over the next few days was extraordinary. I made lasting friendships, and I’m convinced that my willingness to be vulnerable about my health struggles made an important difference in how both I and my work were received.
  • Another important move in the direction of self-acceptance – one that I would certainly not recommend to everyone suffering from FND – was a session of psychedelic-assisted psychotherapy in Boulder, Colorado. There is a growing body of research showing that psychedelic treatment may be effective in treating depression, PTSD and anxiety, and perhaps even diminish symptoms of FND. My intention for treatment was to dismantle the ego-driven need for perfection and control that was tormenting me, and to let go of the ruminating grief I was holding onto at the loss of my old self.
  • Taking a so-called ‘heroic’ dose of psilocybin, the five-hour journey was profound: I was flooded with indescribable feelings of love, tenderness and forgiveness, and an immediate and direct awareness of the interdependence of all things.
  • My therapist recorded me at the peak, lying with eye mask and headphones on, sobbing and writhing in ecstasy, saying over and over: ‘The glory… the glory… let me share.’ Whether or not this experience of transcendence endures remains to be seen, but there is finally light coming in through a crack in the door. Eating yoghurt and granola in the aftermath, food once again fell out of my mouth. But my reflexive response of despair and frustration was absent.
  • Instead, I found myself smiling – then laughing. I was finally seeing my disorder, and my life, from a much wider lens – and from a cosmic perspective, none of it really mattered. That makes a difference.
Author Narrative
  • Kevin Aho is a professor in the Department of Communication and Philosophy at Florida Gulf Coast University. He is the author of One Beat More: Existentialism and the Gift of Mortality (2022), Existentialism: An Introduction (2020), and the coeditor of The Routledge Handbook of Contemporary Existentialism (2024).
Notes
  • This paper - at first sight - seems to be rather self-serving and narcissistic. But the writer has been through a lot, and is doubtless writing both for therapy and to raise awareness.
  • There are rather too many acronyms in the paper. The most important one is FND ('functional neurological disorder'), which is his final diagnosis (if that's what it is).
  • The author seems to take the view that every illness deserves a cure, even if no-one knows that the problem is, nor how to fix it. I was glad that - in the end - he decided to 'pull himself together' and make the best of it.
  • There are 42 Aeon Comments, nearly half of which are replies by the Author. They seem to be supportive (a couple deleted for contravening guidelines may not have been). I suppose I should take them seriously.
  • This relates to my Notes on Brain, Mind and Psychopathology.

Paper Comment
  • Sub-Title: "It took months for my functional neurological disorder to finally be diagnosed. It’s a condition that must be recognised"
  • For the full text see Aeon: Aho - Permission to be ill.



"Andersen (Timothy) - All possible worlds"

Source: Aeon, 15 June 2023


Author's Introduction
  • When I was in my mid-30s, I was faced with a difficult decision. It had repercussions for years, and at times the choice I made filled me with regret. I had two job offers. One was to work at a very large physics experiment on the West Coast of the United States called the National Ignition Facility (NIF). Last year, they achieved a nuclear fusion breakthrough. The other offer was to take a job at a university research institute. I agonised over the choice for weeks. There were pros and cons in both directions. I reached out to a mentor from graduate school, a physicist I respected, and asked him to help me choose. He told me to take the university job, and so I did.
  • In the years to come, whenever my work seemed dull and uninspiring, or the vagaries of funding forced me down an unwelcome path, or – worse – the NIF was in the news, my mind would turn back to that moment and ask: ‘What if?’ Imagine if I were at that other job in that other state thousands of miles away. Imagine a different life that I would never live.
  • Then again, perhaps I had dodged a bullet, who knows?
Author Narrative
  • Timothy Andersen was born in Hamilton, Ontario, Canada in 1980. He received his Doctorate in Mathematics from Rensselaer Polytechnic Institute in 2007. He is a Principal Research Scientist at the Georgia Tech Research Institute. In addition, his research interests are in the foundations of quantum theory and quantum gravity. He has published several books including his most recent, The Infinite Universe: A First Principles Guide (2020). He also writes frequently for The Infinite Universe on medium.com (Andersen - The Infinite Universe (Medium.com)) primarily about science and philosophy.
Notes
  • Overall, I thought the paper was utter drivel.
  • It's a shame that it wasn't opened up for Aeon Comments - it's have been interesting to see what other readers thought.
  • However, there's an item on his blog (Andersen - The Infinite Universe (Medium.com)) inviting comments. See Andersen - All possible worlds (Medium.com). You need to sign up to media.com to get full access.
  • The thoughts on various interpretations of the many-worlds interpretations of Quantum Mechanics were interesting enough, but rather random, speculative and inconclusive.
  • There's an absurd speculation about Consciousness, and the usual confusion about observers. But it has forced me to reconsider what's going on in the usual experiments (Double Slit) and TEs (Schrodinger's Cat). I need to document these.
  • But I'm not sure what all the stuff on 'many worlds' in popular culture was all about.
  • See his earlier paper: "Andersen (Timothy) - Quantum Wittgenstein". I need to read this.
  • The author's book - The Infinite Universe: A First Principles Guide - looks like a general account of science, so this paper isn't a plug for it.

Paper Comment
  • Sub-Title: "Long a matter of philosophical speculation, the idea of multiple realities has been given new artistic licence by physics"
  • For the full text see Aeon: Andersen - All possible worlds.



"Andersen (Timothy) - Quantum Wittgenstein"

Source: Aeon, 12 May 2022


Author's Introduction
  • As a scientist and mathematician, Wittgenstein has challenged my own tendency to seek out interpretations of phenomena that have no scientific value – and to see such explanations as nothing more than narratives. He taught that all that philosophy can do is remind us of what is evidently true. It’s evidently true that the wavefunction has a multiverse interpretation, but one must assume the multiverse first, since it cannot be measured. So the interpretation is a tautology, not a discovery.
  • I have humbled myself to the fact that we can’t justify clinging to one interpretation of reality over another. In place of my early enthusiastic Platonism, I have come to think of the world not as one filled with sharply defined truths, but rather as a place containing myriad possibilities – each of which, like the possibilities within the wavefunction itself, can be simultaneously true. Likewise, mathematics and its surrounding language don’t represent reality so much as serve as a trusty tool for helping people to navigate the world. They are of human origin and for human purposes.
  • To shut up and calculate, then, recognises that there are limits to our pathways for understanding. Our only option as scientists is to look, predict and test. This might not be as glamorous an offering as the interpretations we can construct in our minds, but it is the royal road to real knowledge.
Author Narrative
  • Timothy Andersen was born in Hamilton, Ontario, Canada in 1980. He received his Doctorate in Mathematics from Rensselaer Polytechnic Institute in 2007. He is a Principal Research Scientist at the Georgia Tech Research Institute. In addition, his research interests are in the foundations of quantum theory and quantum gravity. He has published several books including his most recent, The Infinite Universe: A First Principles Guide (2020). He also writes frequently for The Infinite Universe on medium.com (Andersen - The Infinite Universe (Medium.com)) primarily about science and philosophy.
Notes
  • I've not actually read this yet - but just wanted to add it to my database as a companion piece to "Andersen (Timothy) - All possible worlds".
  • There are a lot of Aeon Comments, which I also need to review.

Paper Comment
  • Sub-Title: "Metaphysical debates in quantum physics don’t get at ‘truth’ – they’re nothing but a form of ritual, activity and culture"
  • For the full text see Aeon: Andersen - Quantum Wittgenstein.



"Andrews (Kristin) & Birch (Jonathan) - What has feelings?"

Source: Aeon, 23 February 2023


Authors' Introduction
  • We have lots of evidence that many other animals are sentient beings. It’s not that we have a single, decisive test that conclusively settles the issue, but rather that animals display many different markers of sentience. Markers are behavioural and physiological properties we can observe in scientific settings, and often in our everyday life as well. Their presence in animals can justify our seeing them as having sentient minds. Just as we often diagnose a disease by looking for lots of symptoms, all of which raise the probability of having that disease, so we can look for sentience by investigating many different markers.
  • This marker-based approach has been most intensively developed in the case of pain. Pain, though only a small part of sentience, has a special ethical significance. It matters a lot. For example, scientists need to show they have taken pain into account, and minimised it as far as possible, to get funding for animal research. So the question of what types of behaviour may indicate pain has been discussed a great deal. In recent years, the debate has concentrated on invertebrate animals like octopuses, crabs and lobsters that have traditionally been left outside the scope of animal welfare laws. The brains of invertebrates are organised very differently from our own, so behavioural markers end up carrying a lot of weight.
Authors' Conclusion
  • There are also promising lines of enquiry in the animal case that just don’t exist in the AI case. For example, we can look for evidence in sleep patterns, and in the effects of mind-altering drugs. Octopuses, for example, sleep and may even dream, and dramatically change their social behaviour when given MDMA. This is only a small part of the case for sentience in octopuses. We don’t want to suggest it carries a lot of weight. But it opens up possible ways to look for deep common features (eg, in the neurobiological activity of octopuses and humans when dreaming) that could ultimately lead to gaming-proof markers to use with AI.
  • In sum, we need better tests for AI sentience, tests that are not wrecked by the gaming problem. To get there, we need gaming-proof markers based on a secure understanding of what is really indispensable for sentience, and why. The most realistic path to these gaming-proof markers involves more research into animal cognition and behaviour, to uncover as many independently evolved instances of sentience as we possibly can. We can discover what is essential to a natural phenomenon only if we examine many different instances. Accordingly, the science of consciousness needs to move beyond research with monkeys and rats toward studies of octopuses, bees, crabs, starfish, and even the nematode worm.
  • In recent decades, governmental initiatives supporting research on particular scientific issues, such as the Human Genome Project and the BRAIN Initiative, led to breakthroughs in genetics and neuroscience. The intensive public and private investments into AI research in recent years have resulted in the very technologies that are forcing us to confront the question of AI sentience today. To answer these current questions, we need a similar degree of investment into research on animal cognition and behaviour, and a renewal of efforts to train the next generation of scientists who can study not only monkeys and apes, but also bees and worms. Without a deep understanding of the variety of animal minds on this planet, we will almost certainly fail to find answers to the question of AI sentience.
Author Narrative
  • Kristin Andrews is the York Research Chair in Animal Minds and a professor of philosophy at York University in Toronto. She is on the board of directors of the Borneo Orangutan Society Canada and a member of the College of the Royal Society of Canada. Her books include The Animal Mind (2nd ed, 2020) and How to Study Animal Minds (2020)
  • Jonathan Birch is an associate professor in philosophy at the London School of Economics and Political Science, and principal investigator on the Foundations of Animal Sentience project. He is the author of The Philosophy of Social Evolution (2017).
Notes
  • Despite the eminence of the authors, this seems a muddled and somewhat absurd paper.
  • There's a distinction to be made between sentience and full-blown consciousness, and sentience itself come in degrees.
  • Also, there's a distinction between avoidance of toxic stimuli and the consciousness of that toxicity. Suggesting that nematode worms are conscious in any way like we are (or any mammall is), is absurd.
  • The reason for the absurdity is the relative complexity of the neurology of the different species.
  • While it is true that we have no real idea of what causes consciousness, we rightly assume it's related to brains and their complexity.
  • There are sites that list the number of neurons in the brains and wider nervous systems of . See:-
    Wikipedia: List of animals by number of neurons
    DinoAnimals: Number of Neurons in the Brain of Animals
  • Bundling together octopuses, crabs and lobsters in the same group for protection is absurd. A dog has 2.25Bn neurons. An octopus has 500m. A mouse has 70m. A bee or a cockroach has 1m. A fruit fly or an ant has 250k. A lobster has 100k. I'm not sure about crabs: the coconut crab - which is very large - has over 1m neurons associated with olefaction. Anyway, the bottom line of all this is that we should worry about fruit flies and ants before lobsters. But octopuses genuinely deserve protection.
  • The paper assumes functionalism too readily. As I've said, we don't know how consciousness arises. If it relies on quantum effects, simulating a connectome in a digital computer won't cut the mustard.
  • However, the authors are right to point out 'gaming' and to debunk Blake Lemoine's claims about the sentience of LaMDA (see "Aeon - Video - Changeling").
  • More needs to be said!

Paper Comment



"Appleton (Andrea) - Insectophilia"

Source: Aeon, 04 March 2015


Author's Introduction
  • I am standing in the cottage in the south of France where the entomologist Jean-Henri Fabre was born in 1823. Cooking utensils and a rosary hang on the wall. Through the window, the whitewashed village just beyond glows in the morning light.
  • Or so it seems. We are actually in the basement of a sleek building in downtown Tokyo. The view through the window is a cleverly lit painting in a hallway that leads to the rest of the Fabre Insect Museum.
  • Fabre would have found modern Tokyo harrowing: in 36 years, he never even visited the village neighbouring the rural estate where he spent his retirement. Yet it is the Japanese who remember Fabre best. In other parts of the world, only entomologists keep his memory alive, but here most people know his name, and many have read essays from his classic series, Souvenirs entomologiques (1879-1907). He is so well-known that 7-Eleven convenience stores in Japan gave away Fabre-themed plastic figurines as part of a soft-drink promotion in 2005. One depicted the man himself. Another, a plastic dung beetle, complete with dung.
  • ‘A vain wish has often come to me in my dreams,’ Fabre wrote in The Glow Worm and Other Beetles (1919). ‘It is to be able … to see the world with the faceted eyes of a Gnat.’ In Japan, Fabre found an audience with similar aspirations. Insects have been celebrated in Japanese culture for centuries. ‘The Lady Who Loved Insects’ is a classic story of a caterpillar-collecting lady of the 12th century court; the Tamamushi, or ‘Jewel Beetle’ Shrine, is a seventh century miniature temple, once shingled with 9,000 iridescent beetle forewings.
  • Insects continue to rear their antennae in modern Japan. Consider ‘Mothra’, the giant caterpillar-moth monster who is second only to Godzilla in film appearances; the many bug-inspired characters of ‘Pokémon’, and any number of manga (including an insect-themed detective series named after Fabre). Travel agencies advertise firefly-watching tours, there are televised beetle-wrestling competitions and beetle petting zoos. Department stores and even vending machines sell live insects.
  • Nor do the Japanese merely admire insects: they eat them too. In the Chūbu region, in central Japan, villagers rear wasps at home for food, and forage for giant hornets that are eaten at all life stages, while fried grasshoppers or inago are a luxury foodstuff. Entomophagy once had a place in Western culture too: the ancient Greeks ate cicadas, the Romans ate grubs. But while modern Westerners blithely eat aquatic arthropods – lobster, shrimp, crab, crayfish – we’ve lost our taste for the terrestrial kind.

Author's Conclusion
  • In the US, ‘nature’ is epic, free of roads and power lines, and generally far away. Majestic mountains, redwoods and polar bears sell the calendars. Weevils and crane flies are not invited to the photo shoot. In his article ‘Imagining an Everyday Nature’ (2010), the US environmental literature scholar Scott Hess writes that on a series of road trips to spectacular wilderness areas with which he had no connection, he felt like ‘a bee gathering nectar without a hive’. Our vision of nature, he says, is too often based on ‘a model of Romantic imaginative escapism’. Nature is everywhere that human beings are not.
  • As the global population expands, along with its corresponding need for resources, the notion of wilderness is likely to become increasingly theoretical. And if we are to maintain any semblance of a natural world outside those constricted areas, we will need to be less dogmatic about what is human and what is natural. The first step is to become friendly with our fellow organisms. The US psychologist James Hillman suggested in his 1988 essay ‘Going Bugs’ that ‘we must start not in their splendor – the horned stag, the yellow lion and the great bear, or even old faithful “spot” – but with those we fear the worst – the bugs’. In its regard for the ephemeral and the familiar in nature, Japanese culture has got something right.
  • Research shows that children develop the strongest environmental ethos through everyday contact with nature. Insects are good candidates for this contact. They are small and abundant, and they have one other virtue besides: they’re everywhere. Even the dark corners of my bathroom, the bastards.
  • Reporting for this essay was supported by the U.S.-Japan Foundation, through the International Center for Journalists.
Author Narrative
  • Andrea Appleton is a freelance journalist whose work has appeared in Al Jazeera America, Grist and High Country News, among others. She lives in Baltimore, Maryland.
Notes
Paper Comment
  • Sub-Title: "In Japan, beetles are pets, grasshoppers a delicacy and fireflies are adored. Is the creepy-crawly a Western invention?"
  • For the full text see Aeon: Appleton - Insectophilia.



"Avigad (Jeremy) - Principia"

Source: Aeon, 24 September 2018


Author's Introduction
  • When René Descartes was 31 years old, in 1627, he began to write a manifesto on the proper methods of philosophising. He chose the title Regulae ad Directionem Ingenii, or Rules for the Direction of the Mind. It is a curious work. Descartes originally intended to present 36 rules divided evenly into three parts, but the manuscript trails off in the middle of the second part. Each rule was to be set forth in one or two sentences followed by a lengthy elaboration. The first rule tells us that ‘The end of study should be to direct the mind to an enunciation of sound and correct judgments on all matters that come before it,’ and the third rule tells us that ‘Our enquiries should be directed, not to what others have thought … but to what we can clearly and perspicuously behold and with certainty deduce.’ Rule four tells us that ‘There is a need of a method for finding out the truth.’
  • But soon the manuscript takes an unexpectedly mathematical turn. Diagrams and calculations creep in. Rule 19 informs us that proper application of the philosophical method requires us to ‘find out as many magnitudes as we have unknown terms, treated as though they were known’. This will ‘give us as many equations as there are unknowns’. Rule 20 tells us that, ‘having got our equations, we must proceed to carry out such operations as we have neglected, taking care never to multiply where we can divide’. Reading the Rules is like sitting down to read an introduction to philosophy and finding yourself, an hour later, in the midst of an algebra textbook.
  • The turning point occurs around rule 14. According to Descartes, philosophy is a matter of discovering general truths by finding properties that are shared by disparate objects, in order to understand the features that they have in common. This requires comparing the degrees to which the properties occur. A property that admits degrees is, by definition, a magnitude. And, from the time of the ancient Greeks, mathematics was understood to be neither more nor less than the science of magnitudes. (It was taken to encompass both the study of discrete magnitudes, that is, things that can be counted, as well as the study of continuous magnitudes, which are things that can be represented as lengths.) Philosophy is therefore the study of things that can be represented in mathematical terms, and the philosophical method becomes virtually indistinguishable from the mathematical method.
Author's Conclusion
  • Mathematics has therefore soldiered on for centuries in the face of intractability, uncertainty, unpredictability and complexity, crafting concepts and methods that extend the boundaries of what we can know with rigour and precision. In the 1930s, the American theologian Reinhold Niebuhr asked God to grant us the serenity to accept the things we cannot change, the courage to change the things we can, and the wisdom to know the difference. But to make sense of the world, what we really need is the serenity to accept the things we cannot understand, courage to analyse the things we can, and wisdom to know the difference. When it comes to assessing our means of acquiring knowledge and straining against the boundaries of intelligibility, we must look to philosophy for guidance.
  • Great conceptual advances in mathematics are often attributed to fits of brilliance and inspiration, about which there is not much we can say. But some of the credit goes to mathematics itself, for providing modes of thought, cognitive scaffolding and reasoning processes that make the fits of brilliance possible. This is the very method that was held in such high esteem by Descartes and Leibniz, and studying it should be a source of endless fascination. The philosophy of mathematics can help us understand what it is about mathematics that makes it such a powerful and effective means of cognition, and how it expands our capacity to know the world around us.
  • Ultimately, mathematics and the sciences can muddle along without academic philosophy, with insight, guidance and reflection coming from thoughtful practitioners. In contrast, philosophical thought doesn’t do anyone much good unless it is applied to something worth thinking about. But the philosophy of mathematics has served us well in the past, and can do so again. We should therefore pin our hopes on the next generation of philosophers, some of whom have begun to find their way back to the questions that really matter, experimenting with new methods of analysis and paying closer attention to mathematical practice. The subject still stands a chance, as long as we remember the reasons we care so much about it.
Author Narrative
  • Jeremy Avigad is a professor in the department of philosophy at Carnegie Mellon University in Pittsburgh. He is associated with Carnegie Mellon's interdisciplinary programme in pure and applied logic.
Notes
  • I found this paper a bit of a muddle. Is it talking about metaphilosophy - how philosophy is to be conducted, and should this method be modeled on that of mathematics, or is it about the philosophy of mathematics and the conduct of mathematicians?
  • It covers quite a lot of ground in a seemingly random manner, including AI, Incompleteness theorems (Kurt Gödel) and Chomskyan linguistics (Noam Chomsky).
  • There's a reference to "Aronson (Polina) & Duportail (Judith) - The quantified heart".
  • As the paper was issued in 2018, it's a bit dated with respect to AI, but is probably worth a second reading.

Paper Comment
  • Sub-Title: "Is it possible that, in the new millennium, the mathematical method is no longer fundamental to philosophy?"
  • For the full text see Aeon: Avigad - Principia.



"Baggini (Julian) - Goodbye Pixel"

Source: Aeon, 24 January 2023


Author's Conclusion
  • Our pets give us the opportunity to think about the value of life and what makes humans and other animals different.
  • We can avoid the invitation, defy their mortality for as long as possible, imagining that cats and dogs are family members alongside brothers and sisters, mothers and fathers. Or we can accept it, taking them as examples of how it is more important to improve the quality of life than its quantity, and marvelling at how the worlds of humans, cats and dogs are radically different yet capable of intermingling.
  • Our pets are in some ways very different from us while sharing our condition as finite flowerings of conscious experience that whither all too soon. We live most honestly and rewardingly with other animals when we recognise both our differences and similarities for what they are, neither imagining that they are just like us, nor pretending they have nothing to teach us.
Author Narrative
  • Scott Hershovitz is the Thomas G and Mabel Long Professor of Law and professor of philosophy at the University of Michigan. He is the author of Nasty, Brutish, and Short: Adventures in Philosophy with My Kids (2022).
Notes
  • An excellent - though controversial - Paper.
  • Having been both a dog and a cat owner, or - better - guardian, I agree with the author more with respect to cats (that their relationship with humans is one-way and exploitative) than with respect to dogs (where the relationship is reciprocal). The author is a cat owner, though he does point out that dogs give as well as take (cat's give only in the sense that their selfish behaviour gives pleasure to their guardians).
  • I suspect the difference between cats and dogs is that felines - especially the males - are solitary animals whereas dogs are pack animals.
  • The history of changes on pet ownership (particularly of Italians) is interesting; no doubt partly down to our increasing affluence but partly down to increased understanding of the sentient nature of animals, and our duties towards them.
  • He's right that - strictly-speaking - our pets aren't really 'members of the family'. However, I think that they can be treated as though they are for most purposes.
  • Probably the most interesting - and controversial - sections are those on 'end of life care'. Baggini's view is that cats and dogs 'live in the moment' so if their lives are ended maybe earlier than they might be, they are not deprived in the way that a human being would be.
  • He's right that - in the wild - most animals' lives end horribly, so provided we give our farm animals 'good lives' and terminate them 'painlessly' we do them no harm. Their lives have then been worth living, even if they are shorter than they might have been. This is another contrast with 'early termination' of human beings, who might have unfulfilled plans and responsibilities.
  • There are many difficult and important philosophical issues raised; many picked up on the numerous Comments on the paper, which I've stored away for future consideration.

Paper Comment
  • Sub-Title: "Although it felt more like bereavement for a person than the loss of a thing, the death of a pet isn’t exactly like either"
  • For the full text see Aeon: Baggini - Goodbye Pixel.



"Baggini (Julian) - The vegan carnivore?"

Source: Aeon, 03 September 2013


Author's Introduction
  • The chef Richard McGeown has faced bigger culinary challenges in his distinguished career than frying a meat patty in a little sunflower oil and butter. But this time the eyes and cameras of hundreds of journalists in the room were fixed on the 5oz (140g) pink disc sizzling in his pan, one that had been five years and €250,000 in the making. This was the world’s first proper portion of cultured meat, a beef burger created by Mark Post, professor of physiology, and his team at Maastricht University in the Netherlands.
  • Post (which rhymes with ‘lost’, not ‘ghost’) has been working on in vitro meat (IVM) since 2009. On 5 August this year he presented his cultured beef burger to the world as a ‘proof of concept’. Having shown that the technology works, Post believes that in a decade or so we could see commercial production of meat that has been grown in a lab rather than reared and slaughtered. The comforting illusion that supermarket trays of plastic-wrapped steaks are not pieces of dead animal might become a discomforting reality.
  • The IVM technique starts with a harmless procedure to remove myosatellite cells — stem cells that can only become muscle cells — from a live cow’s shoulder. They are then placed in a nutrient solution to create muscle tissue, which in turn forms tiny muscle fibres. Post’s burger contained 40 billion such cells, arranged in 20,000 muscle fibres. Add a few breadcrumbs and egg powder as binders, plus some beetroot juice and saffron to give it a redder colour, and you have your burger. I was at the suitably theatrical setting of the Riverside Studios in west London to see the synthetic burger unveiled. The TV presenter Nina Hossain was hired to provide a dose of professionalism and glamour to what was in effect a live TV show, filmed by a substantial crew for instantaneous webcast.
  • When the lights dimmed, images of gulls flying over gentle sea waves were projected onto two screens by the sides of the stage. Over some sparse, slow, rising guitar chords, Sergey Brin, the co-founder of Google and a donor of €700,000 to Post’s research, uttered the portentous words: ‘Sometimes a new technology comes along and it has the capability to transform how we view our world.’ He was right. Never before has a human eaten meat without harming or killing an animal. But in a strange way the slick presentation detracted from the truly historic nature of the moment. A scientific landmark was sold to us in the manner of a glitzy product launch, a piece of corporate puff.
  • What was most striking to me was how the presentation led, not with science, but with ethics. As the introductory film continued, the scientist Ken Cook, founder of the US public health advocacy organisation Environmental Working Group, pointed out that ‘70 per cent of the antibiotics used in the United States now are not used on people, they’re used on animals in agriculture, because we keep them in such inhumane, overcrowded conditions’. He then reeled off a list of UN-backed statistics: ‘18 per cent of our greenhouse gas emissions come from meat production. We’re also using something like 1,500 gallons of water to produce just one pound of meat. Meat takes up about 70 per cent of our arable lands.’
  • Some might be surprised by this alliance: advocates of the most radical technological fix to our food supply agreeing with the critics of the contemporary industrialised food system — especially since the production of cultured meat tackles the problem by taking an even bolder technological leap forwards, rather than a step back to an older style of agriculture.
  • Dr Frankenstein is not supposed to also be an environmentalist who agrees that we must reduce land and water use, as well as synthetic inputs such as pesticides, fungicides, fertilisers, many of which depend more or less directly on oil for their production. The idea that IVM might have a part to play in a cleaner, fairer food system runs counter to a central idea put forward by many critics of industrial agriculture: that farming needs to be based more on traditional, natural, biological and ecological systems not artificial mono-cultures. Surely in vitro meat would be the most artificial mono-culture of them all.

Author's Conclusion
  • We need to reach a point where we are neither romantically devoted to traditional, small-scale farming methods nor addicted to technological fixes. We have to be able to determine the roles of both. As John P Reganold, professor of soil science and agroecology at Washington Sate University, put it when assessing the merits of organics: ‘a blend of farming approaches is needed for future global food and ecosystem security’. There are many ways in which food can be good or bad, and we cannot afford to pretend that we can get all that we want without getting some of what we don’t. Just as a healthy body needs a balanced diet, so a healthy attitude to food production means balancing different goods and not allowing one to become the master virtue, denying the claims of all others. Like children who are told to eat up their greens, we have to accept that we sometimes have to swallow things that we find unpalatable, for our own good and that of the world.
Author Narrative
  • Julian Baggini is a writer and philosopher. His latest book is How to Think Like a Philosopher (2023).
Notes
  • This is an interesting and important - if slightly dated - Paper. The choice of IVM (In Vitro Meat) is not important - it's the ethical (and practical and cultural) issues that are important.
  • For IVM, see the long article Wikipedia: Cultured meat. It seems it's moving along, but not readily available. I've not read this article, but I'd have thought that - while the impact on animal ethics is positive, the environmental impact would depend on the sourcing of the raw materials and the manufacturing process.
  • The author is right to distinguish the 'yuck factor' from ethical objections, and that all moral stances depend on trade-offs between competing moral instincts.
  • I suppose over time the thought of eating dead animals will become repulsive and the idea of meat-eating will go away. It does depend on the economic availability of nutritious and tasty alternatives. Attempting to produce 'like for like' seems slightly misguided and technically demanding.
  • It's not clear to me whether non-existence is better for 'ethically farmed' animals than life with the inevitable deaths. The end of all animals' lives is pretty grim.
  • There are no Aeon Comments.
  • This is related to my Notes on Animal Rights, Forensic Properties and Narrative Identity.

Paper Comment
  • Sub-Title: "It’s made in a lab, no factory farms and no killing, but it’s still meat. Looks like we’ll need a whole new food ethics"
  • For the full text see Aeon: Baggini - The vegan carnivore?.



"Baggott (Jim) - Calculate but don’t shut up"

Source: Aeon, 06 December 2021


Author's Conclusion
  • So, at what point did it become fashionable to gather together all the ills of quantum mechanics – all those conundrums that arise only in realist interpretations – and bundle them into a demonised version of the antirealist Copenhagen interpretation? The motives are fairly obvious. It’s hard to criticise a vague and amorphous culture of indifference in anything other than the most general terms and, in any case, such indifference is an issue for the sociology of science, not its content. Those more inquisitive physicists and philosophers looking to develop a more realist alternative interpretation needed a better foil, a more meaningful straw man to knock down.
  • And here was Bohr’s notorious obscurity and a handy, dogmatic, orthodox interpretation, a dogma that was not inspired by Bohr, but that was nevertheless inescapably associated with him. ‘Everybody pays lip service to Bohr,’ Bohm explained in 1987, ‘but nobody knows what he says. People then get brainwashed into saying Bohr is right, but when the time comes to do their physics, they are doing something different.’ Overlook (or ignore) its fragmented nature and questionable paternity, and the ‘Copenhagen interpretation’ is a great platform on which to build your counterarguments, or deepen discontent in order to foment your revolution. Or sell a few more books.
  • One of my favourite examples of this trend is an article by the US theorist Bryce DeWitt published in 1970 in Physics Today: ‘According to the Copenhagen interpretation of quantum mechanics,’ he wrote, ‘whenever a [wave function] attains a [certain form pertaining to measurement] it immediately collapses.’ DeWitt was seeking to validate an alternative reality based on the idea of ‘many worlds’, and no doubt his contrived version of Copenhagen helped him to breed discontent with the prevailing orthodoxy.
  • The timing is about right. The work of Bohm in the late 1950s, and Bell in the ’60s, had, by the early ’70s, led to another extraordinary conclusion. A so-called ‘locally real’ interpretation of quantum mechanics in which entities like photons or electrons are assumed to have intrinsic properties all along – and not just at their point of observation or measurement – makes predictions that differ from ‘ordinary’ quantum mechanics. It was realised that these predictions could be tested experimentally. Such tests have been performed at regular intervals ever since, with ever-increasing sophistication and precision, confirming that, despite how reasonable they might seem, all locally real interpretations are quite wrong. These experiments have, nonetheless, spawned entirely new disciplines – of quantum information and quantum computing – demonstrating that exploration of seemingly pointless philosophical issues can have profound practical consequences.
  • The deliberate conflation typified by DeWitt’s article has led to a world of confusion. In a 2016 survey of physicists, conducted by Sujeevan Sivasundaram and Kristian Hvidtfelt Nielsen at Aarhus University in Denmark, it was found that just a minority of physicists truly understood the meaning of the Copenhagen interpretation or the foundational concepts of quantum mechanics – based on the idea of a probabilistic universe, in which a particle is neither here nor there until measured, described by the wave function itself. In fact, only a minority of respondents had a proper grasp of the measurement problem that launched the field.
  • Mermin should be forgiven for following a trend that, by 1989, was entrenched in the quantum cultural mindset. I did much the same in my first book on quantum mechanics, published in 1992. We have both since learned to be more circumspect; we have to acknowledge that a dogma of indifference to philosophical questions was at least as much to blame for the rejection of foundational enquiry as anything Bohr might have said. Of course, the first to give expression to a meme such as ‘Shut up and calculate’ can claim no ownership over it and cannot control how others will use it. Irrespective of the historical rights and wrongs, those who continue to use it as a term of abuse directed at the Copenhagen interpretation are perfectly at liberty to do so.
  • But there is a growing number of commentators who are both familiar with the history and prepared to call this out. The purpose of this essay is to help you do the same.
Author Narrative
  • Jim Baggott is an award-winning British science writer based in Cape Town, South Africa. He is the co-author with John Heilbron of Quantum Drama: From the Bohr-Einstein Debate to the Riddle of Entanglement (2024).
Notes
  • Doubtless a plug for the author's latest book. Looking at the reviews on Amazon, it sounds as though it's a rather detailed historical survey that assumes quite a lot of background in the subject. As such, it's better than the standard popular-science mush. But I doubt I'll have time to read it at the moment, if ever.
  • I found this paper rather rambling, and wasn't quite sure what its stance was - other than to encourage the philosophy of science.
  • There are 30 Aeon Comments, some of them useful. They demand careful study.
  • This connects to my Notes on Quantum Mechanics and Fission.

Paper Comment



"Ball (Philip) - We are not machines"

Source: Aeon, 02 July 2024


Author's Introduction
  • You could be forgiven for thinking that the turn of the millennium was a golden age for the life sciences. After the halcyon days of the 1950s and ’60s when the structure of DNA, the true nature of genes and the genetic code itself were discovered, the Human Genome Project, launched in 1990 and culminating with a preliminary announcement of the entire genome sequence in 2000, looked like – and was presented as – a comparably dramatic leap forward in our understanding of the basis of life itself. As Bill Clinton put it when the draft sequence was unveiled: ‘Today we are learning the language in which God created life.’ Portentous stuff.
  • The genome sequence reveals the order in which the chemical building blocks (of which there are four distinct types) that make up our DNA are arranged along the molecule’s double-helical strands. Our genomes each have around 3 billion of these ‘letters’; reading them all is a tremendous challenge, but the Human Genome Project (HGP) transformed genome sequencing within the space of a couple of decades from a very slow and expensive procedure into something you can get done by mail order for the price of a meal for two. Since that first sequence was unveiled in 2000, hundreds of thousands of human genomes have now been decoded, giving an indication of the person-to-person variation in sequence. This information has provided a vital resource for biomedicine, enabling us, for example, to identify which parts of the genome correlate with which diseases and traits. And all that investment in gene-sequencing technology was more than justified merely by its use for studying and tracking the SARS-CoV-2 virus during the COVID-19 pandemic.
  • Nonetheless, as with the Apollo Moon landings – with which the HGP has been routinely compared – the decades that followed the initial triumph have seemed something of an anticlimax. For all its practical value, sequencing in itself offers little advance in understanding how the genome – or life itself – works. As the veteran molecular biologist Sydney Brenner wrote in 2010, the comparison with the Apollo programme turns out to be ‘literally correct’: "because sending a man to the moon is easy; it’s getting him back that is difficult and expensive. Today the human genome sequence is, so to speak, stranded on a metaphorical moon and it is our task to bring it back to Earth and give it the life it deserves."
  • That task hasn’t turned out as expected. The copious genome databases haven’t yet produced the flood of new treatments and drugs that some had predicted from gene-based medicine, nor delivered on the promise of therapies tuned to our own individual genomes. Despite the COVID-19 vaccines, drug development as a whole has stagnated or even slowed over recent decades, becoming ever more costly. And most drugs are still found by old-fashioned trial and error, not by leveraging genetic data. The outcomes have been particularly disappointing for understanding and treating cancer, long thought to arise from changes (mutations) in the sequences in our DNA that are either inherited or accumulated through age and environmental wear and tear. Despite the genetic data glut, biology seems to have settled back into a long, slow slog.
  • But I think this story is wrong. Fixing life remains difficult – but, in terms of understanding it, the course of cell and molecular biology over the past several decades isn’t a tale of unfulfilled promise. On the contrary, we’re in one of the most exciting periods since James Watson and Francis Crick discovered DNA’s double helix in 1953. The transformative advances of the post-genomic decades are revealing nothing less than a new biology: an extraordinary and fresh picture of how life works. And ironically, those advances turn out to undermine the skewed view of life on which the HGP itself was predicated, in which the genome sequence of DNA was (in the words Watson put into Crick’s mouth) the ‘secret of life’.
  • If that’s so, why haven’t we heard more about it? Why hasn’t it been trumpeted and celebrated as loudly as the HGP was? Part of the reason is that science is inherently and necessarily conservative: slow and reluctant to change its narratives and metaphors, not least because we have all (scientists and public alike) got accustomed to the old ones. And we have yet to find compelling new stories to replace them. Talk of a genetic blueprint, of selfish genes, of instruction books and digital codes gave us a narrative we could grasp. Even though we now know this to be at best a partial and at worst a misleading picture, it’s likely to remain in place until there is something better on offer.
  • The need for a new narrative isn’t just about communicating science; it also impacts how science is done. In 2013, the cancer biologist Michael Yaffe bemoaned the paucity of clinical advances that have come from a search for cancer-linked genes. We sought those genes, he suggested, not because we knew they were the key to developing new treatments so much as because we had the techniques for looking: ‘Like data junkies, we continue to look to genome sequencing when the really clinically useful information may lie someplace else.’ But then, where? What do we now know about how life works that might lead us to a more fruitful destination?

Author's Conclusion
  • The role of metaphor and narrative, as opposed to new theories or experiments, is too little recognised in discussions of the historian of science Thomas Kuhn’s paradigm shifts, supposed (and contested) moments of dramatic change in science. All scientists know how to go about scrutinising a theory: you use it to formulate some testable hypothesis, and then do the experiment. If the theory fails the test, that’s just the scientific method at work. But metaphors aren’t the kind of thing you test at all: there are no critical tools designed to challenge them. They become regarded merely as expressions of how things are: an invisible component of the prevailing paradigm.
  • As such, they are hard to dislodge when their utility has passed – scientists will instead find ingenious ways to hold on to them. Thus, genes may still be ‘selfish’, and organisms may still be ‘machines’, brains ‘computers’, genomes ‘blueprints’, so long as we give those metaphorical words different interpretations to the everyday ones – thereby, of course, negating their value as metaphors. Keller wrote eloquently on this issue: "[T]his style or habit of chronic slippage from one set of meanings to the other has prevailed for over 50 years; it has become so deeply ensconced as to have been effectively invisible to most readers of the biological literature. This feature I suggest qualifies it as a Foucauldian discourse – by which I mean a discourse that operates by historically specific rules of exclusion, a discourse that is constituted by what can be said and thought, by what remains unsaid and unthought, and by who can speak, when, and with what authority."
  • And there you have it: under cover of being neutral tools for communication, metaphors smuggle in ideological freight. If a metaphor is a kind of mental map, the sociologists Dorothy Nelkin and M Susan Lindee point out in their book The DNA Mystique (1995), quoting the curator Lucy Fellowes, that ‘every map is someone’s way of getting you to look at the world his or her way.’ I don’t suppose anyone who either supports or rejects the idea of ‘selfish genes’ would be so disingenuous as to deny that the arguments are not just about evolutionary biology but also about the broader connotations of the metaphor. I have heard it said that biologists who cleave to the claim that organisms are ‘machines’ do so not so much because of the aptness of the analogy but because it signifies allegiance to a materialist view of matter – as though one could not reject the idea that we are ‘machines made by genes’ without capitulating to a non-physical, mystical view of life.
  • Yet one can’t reasonably expect researchers to give up their metaphors unless they have others to replace them. In a 2020 commentary on my Nature article ‘Celebrate the Unknowns’ (2013), Keller (who saw the piece as a sign that even stodgy old Nature was waking up to something afoot) wrote that: "[I]f, as I claim, recent work in genomics has finally disrupted the narratives of developmental genetics that have prevailed for over a century, geneticists will now need a new narrative to help guide them through the thickets that lie before them."
  • So how now should we be speaking about biology? Keller herself tentatively suggested that we might adopt the prescient suggestion of the Nobel laureate biologist Barbara McClintock in recognising that the genome is a responsive, reactive system, not some passive data bank: as McClintock called it, a ‘highly sensitive organ of the cell’.
  • There’s virtue in that picture, but I think it points to a wider consideration: that the best narratives and metaphors for thinking about how life works come not from our technologies (machines, computers) but from life itself. Some biologists now argue that we should think of all living systems, from single cells upwards, not as mechanical contraptions but as cognitive agents, capable of sifting and integrating information against the backdrop of their own internal states in order to achieve some self-determined goal. Our biomolecules appear to make decisions not in the manner of on/off switches but in loosely defined committees that obey a combinatorial logic, comparable to the way different combinations of just a few light-sensitive cells or olfactory receptor molecules can generate countless sensations of colour or smell. The ‘organic technology’ of language, where meaning arises through context and cannot be atomised into component parts, is a constantly useful analogy. Life must be its own metaphor.
  • And shouldn’t we have seen that all along? For what, after all, is extraordinary – and challenging to scientific description – about living matter is not its molecules but its aliveness, its agency. It seems odd to have to say this, but it’s time for a biology that is life-centric.
Author NarrativeNotes
  • This is doubtless a plug for the author's "Ball (Philip) - How Life Works: A User’s Guide to the New Biology", which is in my reading queue.
  • So, most comments must await my reading of that tome.
  • The paper's title is usually added by the Editor to pique the interest of the potential reader. No doubt many will be delighted to know that 'the latest science' believes that we are not 'machines', but I could see nothing in this paper that tried to demonstrate that we are not; only that the process of our construction may be even more complex than might have been thought.
  • I seem to have 'quoted' most of this paper as the introductory and concluding sections are rather long! The middle section is worth quoting too!
  • I'm not sure I go along with the author's debunking of the old paradigms.
  • The genes - whatever they are - can be 'responsible' for the phenotype even if the process goes through a much more complex routine than simply reading off the recipe. Some switches are flipped by other genetic indicators (and some by environmental factors). What's the problem?
  • As for 'genes': if bananas - or fruit flies - have more genes than we do, there's something wrong with the counting.
  • There are no Aeon Comments, which is probably no bad thing.

Paper Comment
  • Sub-Title: "Welcome to the new post-genomic biology: a transformative era in need of fresh metaphors to understand how life works"
  • For the full text see Aeon: Ball - We are not machines.



"Banks (Jennifer) - What awaits us?"

Source: Aeon, 29 January 2024


Author's Introduction
  • In 2003, Edward Said wrote in the wake of the terrorist attacks of 11 September 2001 and in the context of the United States’ war on terror that ‘humanism is the only, and, I would go so far as saying, the final, resistance we have against the inhuman practices and injustices that disfigure human history.’ The moment, he felt, was ‘apocalyptic’, and the end was indeed near for him; he died of leukaemia later that year.
  • So why was it humanism that he held to so tightly as war and sickness cinched time’s horizon around him? Humanism, an intellectual and cultural movement that emerged in Renaissance Europe emphasising classical learning and affirming human potential, had been subject to decades of critique by the time Said was writing this. Among its many detractors were postcolonialists who argued that humanism’s elevation of a particular kind of human – Eurocentric, rational, empiricist, self-realising, secular and universal – had provided thin cover for the exploitation of large swaths of the world’s population.
  • But Said, one of the founders of postcolonial studies, hadn’t given up on the term, despite its imperialist entanglements. He imagined a humanism abused but not exhausted, an -ism more elastic and plural, more subject to critique and revision, and more acquainted with the limits of reason than many humanisms have historically been. Humanism, he argued, was more like an ‘exigent, resistant, intransigent art’ – an art that was not, for him, particularly triumphant. His humanism was defined by a ‘tragic flaw that is constitutive to it and cannot be removed’. It refused all final solutions to the irreconcilable, dialectical oppositions that are at the heart of human life – a refusal that ironically kept the world liveable and the future open.
  • At stake in his defence was not only the survival of the humanistic fields of study he had devoted his academic career to, but the survival, freedom and thriving of actual people, including those populations that humanisms had historically excluded. Various antihumanisms had gradually been eroding humanism’s stature within the academy, but it was humanism, he believed, with its positive ideas about liberty, learning and human agency – and not antihumanist deconstructions – that inspired people to resist unjust wars, military occupations, despotism and tyranny.
Author's Conclusion
  • The word ‘colonise’ comes up a lot in the transhumanists’ writings; they dream of colonising outer space – a place that appears empty and ripe for possession. But the global surveillance regime Bostrom imagines also entails an invasion of every corner of our inner lives as well. Here we can see how far we have travelled from Said’s postcolonial humanism, or Morrison’s humanism of the displaced, both of which always prioritised the rights of individual human actors, balancing them with responsibility, care, weight and limits, but never losing sight of freedom’s constitutive role in any sane society.
  • Transhumanism may well be the wave of the future; we are surely several steps along its path already. In such a future, Bostrom’s ‘eudaemonic agents’ might read Morrison’s lecture as yet another disappointed prophecy, but one that remains strangely resonant. Her humanism of the displaced would accrue eerie relevance after the entire human species is colonised and left to linger on as a curious species of useless hobbyists, subsisting on the altruistic but reluctant patrimony of superintelligent, non-biological beings.
  • But the future remains before us, as unthinkable as the farthest reaches of our still-uncolonised galaxy, or the startling mystery of our own births and deaths. I like to believe there’s still time to salvage whatever sane humanisms we can from the wreckage of modern history, to practise Said’s ‘exigent, resistant, intransigent’ arts, and to vindicate Morrison’s prophecy. The future, I hope, will remain hospitable to our species and to our children. The year 2030, the one that Morrison said our imaginations stumbled beyond, beyond which ‘we may be regarded as monsters to the generations that follow us’, is now just six short years away.
Author Narrative
  • Jennifer Banks is senior executive editor for religion and the humanities at Yale University Press. She is the author of Natality: Toward a Philosophy of Birth (2023). She lives in New England.
Notes
Paper Comment
  • Sub-Title: "Humanity’s future remains as unthinkable as the still-uncolonised galaxy or the enduring mystery of our own births and deaths"
  • For the full text see Aeon: Banks - What awaits us?.



"Barash (David P.) - Anthropic arrogance"

Source: Aeon, 18 September 2018


Author's Introduction
  • Welcome to the ‘anthropic principle’, a kind of Goldilocks phenomenon or ‘intelligent design’ for the whole Universe. It’s easy to describe, but difficult to categorise: it might be a scientific question, a philosophical concept, a religious argument – or some combination. The anthropic principle holds that if such phenomena as the gravitational constant, the exact electric charge on the proton, the mass of electrons and neutrons, and a number of other deep characteristics of the Universe differed at all, human life would be impossible. According to its proponents, the Universe is fine-tuned for human life.
  • This raises more than a few questions. For one, who was the presumed cosmic dial-twiddler? (Obvious answer, for those so inclined: God.) Second, what’s the basis for presuming that the key physical constants in such a Universe have been fine-tuned for us and not to ultimately give rise to the hairy-nosed wombats of Australia, or maybe the bacteria and viruses that outnumber us by many orders of magnitude? In Douglas Adams’s antic novel The Hitchhiker’s Guide to the Galaxy (1979), mice are ‘hyper-intelligent pan-dimensional beings’ who are responsible for the creation of the Earth. What if the Universe isn’t so much anthropic as mouse-thropic, and the appearance and proliferation of Homo sapiens was an unanticipated side effect, a ‘collateral benefit’?
  • For a more general perspective, in The Salmon of Doubt (2002), Adams developed what has become known as the ‘puddle theory’:
      [I]magine a puddle waking up one morning and thinking, ‘This is an interesting world I find myself in – an interesting hole I find myself in – fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, must have been made to have me in it!’
  • It appears that Adams favoured a puddle-thropic principle. Or at least, the puddle did.
  • But perhaps I should be more serious about an idea that has engaged not just theologians and satirists but more than a few hard-headed physicists. The Australian astrophysicist Brandon Carter introduced the phrase ‘anthropic principle’ at a conference in Krakow, Poland in 1973 celebrating the 500th anniversary of the birth of Copernicus. Copernicus helped evict the Earth – and thus, humanity – from its prior centrality, something that the anthropic principle threatens (or promises) to re-establish. For Carter, ‘our location in the Universe is necessarily privileged to the extent of being compatible with our existence as observers’. In other words, if the Universe were not structured in such a way as to permit us to exist and, thus, to observe its particular traits, then – it should be obvious – we wouldn’t be around to marvel at its suitability for our existence!
Author's Conclusion
  • It should be clear at this point that the anthropic argument readily devolves – or dissolves – into speculative philosophy and even theology. Indeed, it is reminiscent of the ‘God of the gaps’ perspective, in which God is posited whenever science hasn’t (yet) provided an answer. Calling upon God whenever there is a gap in our scientific understanding may be tempting, but it is not even popular among theologians, because as science grows, the gaps – and thus, God – shrinks. It remains to be seen whether the anthropic principle, in whatever form, succeeds in expanding our sense of ourselves beyond that illuminated by science. I wouldn’t bet on it.
  • Yet, despite what has been called ‘Copernican mediocrity’, the deflating recognition that we aren’t the centre of the Universe (and to which I would add ‘Darwinian mediocrity’, the acknowledgment that we weren’t specially created as chips off the old divine block), all this debunking of human specialness isn’t necessarily cause for despair or for a spasm of species self-denigration. Just because the anthropic principle is shaky at best, this need not, and should not, give rise to an alternative ‘misanthropic principle’. Regardless of how special we are (or aren’t), aren’t we well-advised to treat everyone – including the other life forms with which we share this planet – as the precious beings we like to imagine us all to be?
Author Narrative
  • David P Barash is an evolutionary biologist and emeritus professor of psychology at the University of Washington in Seattle. His most recent books are Peace and Conflict Studies (5th ed, 2022), with Charles P Webel, Threats: Intimidation and Its Discontents (2020) and "Barash (David P.) - Through a Glass Brightly: Using Science to See Our Species as We Really Are" (2018), plus, with his wife, the psychiatrist Judith Eve Lipton, Strength Through Peace: How Demilitarization Led to Peace and Happiness in Costa Rica, and What the Rest of the World Can Learn from a Tiny, Tropical Nation (2018).
Notes
Paper Comment
  • Sub-Title: "Claims that the Universe is designed for humans raise far more troubling questions than they can possibly answer"
  • For the full text see Aeon: Barash - Anthropic arrogance.



"Barash (David P.) - Is God a silverback?"

Source: Aeon, 04 July 2016


Author's Conclusion
  • If, as Garcia and others suggest, we are prone as a species to imagine God as a dominant male harem-keeper, then it follows that this God – consistent with mammalian male anxieties generally – would be especially concerned about his paternity, with such concern manifesting itself not only in aggressive, threatening behaviour toward potential sexual competitors, but also in seeking reassurance that he hasn’t been cuckolded.
  • Accordingly, let’s look at one of the best-known, yet insufficiently examined Biblical passages, from Genesis 1:26. ‘Then God said, “Let us make mankind in our image, in our likeness, so that they may rule over the fish in the sea and the birds in the sky, over the livestock and all the wild animals, and over all the creatures that move along the ground.”’ This passage is often quoted by scholars critical of the damaging anti-ecological message it conveys: that human beings are granted dominion over all other life-forms. In Alpha God, Garcia notes that it also italicises an important connectedness – not between Homo sapiens and the rest of the organic world, but between God and his offspring: us. In Genesis 1:26 we thus have those offspring proclaimed by the Chief Polygynist to be legitimate Chips off the Old Divine Block. God further announces that because of this confirmed connection, His children are the legitimate inheritors of, well, everything.
  • Garcia is well aware that many Bible passages assert not only a family resemblance, but God’s direct paternity. For example: ‘Ye are the children of the Lord your God’ (Deut. 14:1); ‘Ye are the sons of the living God’ (Hosea 1:10); ‘We are the offspring of God’ (Acts 17:29). Furthermore, this relationship is strongly coloured with recognisable human emotion, notably paternal love and filial devotion: ‘See what great love the Father has lavished on us, that we should be called children of God!’ (1 John 3:1) And here I once again thank Hector Garcia, this time for pointing out that the most influential of Catholic theologians – the saints Augustine and Thomas Aquinas –argued that the body of man was made in the Imago Dei ‘adapted to look up to heaven’.
  • More accurately, Imago Dei is a case of Imago Homo, in which God was created in the image of man – or perhaps, Imago Homo polygynosus. Whatever the appropriate Latin phrase, the bio-logical reality is that religious traditions tend to emphasise precisely the continuity (genetic transubstantiated into theological) that evolutionary considerations would predict.
  • Harem-keeping males are prone to violent regime change: not only are they intensely territorial and aggressive to one another, but they are ruthless upon acquiring new females. In many species such as lions, the new conqueror will kill any young fathered by the previous alpha male. Judeo-Christian tradition conforms so closely to this sociobiological expectation that we might almost wonder if someone had ‘fudged’ the data. The Old Testament in particular brims with horrific exhortations of infanticide, consistent with biological reality if not current morality, and predictably evoked when the ancient Israelites conquer an unrelated tribe that worships other gods: ‘Now therefore kill every male among the little ones, and kill every woman who has known man by lying with him. But all the young girls who have not known man by lying with him, keep alive for yourselves.’ (Num. 31:17–18)
  • Harem-master males (regardless of species) are often at great pains to constrain the sexual ambitions of their subordinates, and not surprisingly, there are few religions in which God is portrayed as favouring sexual licence, and many in which acolytes are expected to practise abstinence, with virginity and celibacy being especially prized.
  • The big three Abrahamic religions most especially maintain that God strongly disapproves of various sexual practices, not just adultery. It is clear not only from the numerous examples adduced in Alpha God, but from the Bible and the Quran themselves, that the Abrahamic God is likely to be incensed by pretty much any kind of sexual pleasure, including homosexuality, masturbation, oral or anal sex, revealing clothing, even libidinous thoughts. Sexual restraint is a terrific way to avert jealous anger on the part of any dominant harem-keeper.
  • It seems likely that the human brain – for all its wonders – also contains a mammalian component that has evolved in an environment of male-dominated polygyny, along with more subtle, female-oriented polyandry – something I haven’t described here, but that also warrants attention. As a consequence, we are predisposed not only to overt polygyny plus covert infidelity, but also to a familiarity with and inclination to participate in systems of social deference and followership associated with an alpha-male polygynist. Not a pretty picture, but as Charles Darwin noted in The Descent of Man, and Selection in Relation to Sex (1871), ‘we are not here concerned with hopes or fears, only with truth as far as our reason permits us to discover it’.
  • An unfortunate consequence, however, of the evolutionary process that Darwin described so well is that, although Homo sapiens indeed has its vaunted reason, it has also been stuck with ossified behavioural patterns and psychological dynamics that persist in its religious values and strictures, a combination of prehistoric hopes and fears that reflect worldviews that most people would never endorse if they weren’t so used to hearing them. If that same imaginary Martian zoologist with whom we began this meditation were to interrogate an intelligent fish, asking for a description of her surroundings, probably the last thing she would say is: ‘It’s very wet down here.’ By the same token, our polygynous mindset lives on, not least in the unacknowledged deep-ocean trenches of our most prominent worship traditions.
Author NarrativeNotes
  • Time was when I'd have found this article deeply offensive, not to say tendentious ('an atheist would say that, wouldn't he').
  • When I read it recently (December 2024) I found it - on the whole - entirely convincing as an account of the character of YHVH as portrayed in the OT and as objected to by Marcion.
  • Some object that monotheism isn't best modelled by the Alpha male idea. But - it seems to me (and many others) that the OT is initially henotheistic (see Wikipedia: Henotheism) rather than monotheistic, though 'monolatry' is another useful term. Elohim is plural (see Wikipedia: Elohim); remember the division of the text of Genesis into the Elohist and the Yahwist (see Wikipedia: Documentary hypothesis) - a later attempt to bring henotheism in line with monotheism when it's thought through a bit more.
  • The author frequently cites the then recently published Alpha God: The Psychology of Religious Violence and Oppression by Hector A. Garcia. This - and some Amazon reviews thereon - may be worth following up.
  • There are quite a few Aeon Comments - polarised into two camps as one would expect. The critical ones deserve closer attention than I've given them.
  • I really need to read the article again more critically.
  • This relates to my Notes on Religion and Evolution.

Paper Comment
  • Sub-Title: "Protective, omnipotent, scary and very territorial. The monotheistic God is modelled on a harem-keeping alpha male"
  • For the full text see Aeon: Barash - Is God a silverback?.



"Barash (David P.) - Stuck with the soul"

Source: Aeon, 20 March 2023


Author's Introduction
  • Few ideas are as unsupported, ridiculous and even downright harmful as that of the ‘human soul’. And yet, few ideas are as widespread and as deeply held. What gives? Why has such a bad idea had such a tenacious hold on so many people? Although there is a large literature on the costs and benefits – psychological and economic – of traditional religion, there is a dearth of comparable research on religion’s near-universal handmaiden, the soul. As with Justice Potter Stewart’s non-definition of pornography – ‘I may not be able to define it, but I know it when I see it’ – the soul is slippery and, even though it cannot be seen (or smelled, touched, heard or tasted), soul-certain people seem to agree that they know it when they imagine it. And they imagine it in everyone.
  • Viewed historically and cross-culturally, there is immense variation regarding the soul, although some patterns can be discerned and are nearly universal. Souls are said to reside inside their associated bodies and are pretty much defined as immaterial, thereby contrasted with their fleshy habitations. Immortality is another close, but not quite universal, characteristic. Also widespread, but not invariant, is the soul’s ability to travel independent of its body, sometimes after death but often during sleep. Dreams are widely seen as demonstrating not only that the soul is ‘real’, but that it occupies its own unique plane of reality.
  • Jewish doctrine says almost nothing about the soul. ‘[T]here is no way on earth,’ wrote the influential Jewish philosopher Moses Maimonides, ‘that we can comprehend or know it.’ This agnostic attitude is consistent with Judaism’s lack of specificity regarding the afterlife generally and of heaven and hell in particular. By contrast, Christianity and Islam are clear when it comes to the soul, conceiving it as immaterial as well as immortal, the two perspectives being, as it were, soulmates. Although Islam has a variety of views when it comes to the soul, there is greater diversity within Christianity – between Protestant, Catholic, and Orthodox conceptions – and also within Protestantism, ranging from evangelical fundamentalism to the more relaxed and philosophical approaches of modern-day Quakers and Unitarians.
  • The Hindu soul resembles its Abrahamic counterpart with regard to immateriality and immortality, but with two major differences. For one, the soul (atman, or ‘self’) is conceived as a personalised part of a greater world-soul (brahman, closer to the Western ‘God’). Second, the Hindu soul is subject to regular reincarnations following the death of its body, including excursions into different kinds of animals, depending on its accumulated karma. The final desired destination of this process of repetitive birth and rebirth – oversimplified as nirvana – somewhat resembles the Western concept of heaven, although it is conceived more as a respite from the cycle of birth and rebirth than as an abode of ongoing bliss.
Author's Conclusion
  • So, where does this leave those of us who maintain in our hearts and non-existent souls that the whole business is a load of nonsense and hurtful to boot? Let’s face it: soul-belief is liable to persist about as long as souls themselves are purported to endure. Soul-sceptics can make their arguments but should probably also recognise that this concept fits so neatly into the human psyche that it will not be readily dislodged. We’re not stuck with souls, but most people are likely stuck with believing in them.
  • Parts of this Essay are based on the book Threats: Intimidation and its Discontents (Oxford University Press, 2020) by David P Barash.
Author Narrative
  • Aeon: David P Barash is an evolutionary biologist and emeritus professor of psychology at the University of Washington in Seattle. His most recent books are Peace and Conflict Studies (5th ed, 2022), with Charles P Webel, Threats: Intimidation and Its Discontents (2020) and Through a Glass Brightly: Using Science to See Our Species as We Really Are (2018), plus, with his wife, the psychiatrist Judith Eve Lipton, Strength Through Peace: How Demilitarization Led to Peace and Happiness in Costa Rica, and What the Rest of the World Can Learn from a Tiny, Tropical Nation (2018).
  • Amazon: David P. Barash is an evolutionary biologist (Ph.D. zoology, Univ. of Wisconsin) and professor of psychology emeritus at the University of Washington. He has written, co-authored or edited 41 books, dealing with various aspects of evolution, animal and human behavior, and peace studies. He is a Fellow of the American Association for the Advancement of Science and has received numerous awards. He is most proud, however, of his very personal collaboration with Judith Eve Lipton, his three children, five grandchildren, and having been named by an infamous rightwing nut in his book "The Professors" as one of the "101 most dangerous professors" in the United States. His dangerousness may or may not be apparent from his writing!
Notes
  • This is rather too dismissive of the soul (though I agree that there's no such thing, some eminent philosophers remain dualists:
    Richard Swinburne
    Dean Zimmerman
    David Chalmers (probably)
  • Anyway, Barash moves on from this to what he sees as the malign consequences of believing in souls.
  • I've forgotten what these are supposed to be - beyond political and religious control - so need to re-read the paper!
  • I have his "Barash (David P.) - Through a Glass Brightly: Using Science to See Our Species as We Really Are", but don't think I'll buy the book this paper is based on, as it's politics rather than metaphysics.
  • The 'right wing nut' referred to on Amazon is David Horowitz - The Professors: The 101 Most Dangerous Academics in America. While I might have some sympathy with Horowitz with regard to some US Professors, looking at his other recent books and support for Trump, he does seem a bit of a 'nut'.

Paper Comment
  • Sub-Title: "The idea of the soul is obviously a nonsense, yet its immaterial mysterious nature has deep hooks in the human psyche"
  • For the full text see Aeon: Barash - Stuck with the soul.



"Bayne (Tim) - Just a pale blue dot"

Source: Aeon, 25 April 2025


Author's Introduction
  • On St Valentine’s Day 1990, NASA’s engineers directed the space-probe Voyager 1 – at the time, 6 billion kilometres from home – to take a photograph of Earth. Pale Blue Dot (as the image is known) represents our planet as a barely perceptible dot serendipitously highlighted by a ray of sunlight transecting the inky-black of space – a ‘mote of dust suspended in a sunbeam’, as Carl Sagan famously put it. But to find that mote of dust, you need to know where to look. Spotting its location is so difficult that many reproductions of the image provide viewers with a helpful arrow or hint (eg, ‘Earth is the blueish-white speck almost halfway up the rightmost band of light’). Even with the arrow and the hints, I had trouble locating Earth when I first saw Pale Blue Dot – it was obscured by the smallest of smudges on my laptop screen.
  • The striking thing, of course, is that Pale Blue Dot is, astronomically speaking, a close-up. Were a comparable image to be taken from any one of the other planetary systems in the Milky Way, itself one of between 200 billion to 2 trillion galaxies in the cosmos, then we wouldn’t have appeared even as a mote of dust – we wouldn’t have been captured by the image at all.
  • Pale Blue Dot inspires a range of feelings – wonderment, vulnerability, anxiety. But perhaps the dominant response it elicits is that of cosmic insignificance. The image seems to capture in concrete form the fact that we don’t really matter. Look at Pale Blue Dot for 30 seconds and consider the crowning achievements of humanity ... Nothing we do – nothing we could ever do – seems to matter... What we seem to learn when we look in the cosmic mirror is that we are, ultimately, of no more significance than a mote of dust.
  • Contrast the feelings elicited by Pale Blue Dot with those elicited by Earthrise, the first image of Earth taken from space. Shot by the astronaut William Anders during the Apollo 8 mission in 1968, Earthrise depicts the planet as a swirl of blue, white and brown, a fertile haven in contrast to the barren moonscape that dominates the foreground of the image. Inspiring awe, reverence and concern for the planet’s health, the photographer Galen Rowell described it as perhaps the ‘most influential environmental photograph ever taken’. Pale Blue Dot is a much more ambivalent image. It speaks not to Earth’s fecundity and life-supporting powers, but to its – and, by extension, our – insignificance in the vastness of space.
  • But what, exactly, should we make of Pale Blue Dot? Does it really teach us something profound about ourselves and our place in the cosmic order? Or are the feelings of insignificance that it engenders a kind of cognitive illusion – no more trustworthy than the brief shiver of fear you might feel on spotting a plastic snake? To answer that question, we need to ask why Pale Blue Dot generates feelings of cosmic insignificance.

Author's Conclusion
  • Return to the contrast between Pale Blue Dot and Earthrise. Pale Bue Dot reveals something (albeit only a little) of the vastness of the cosmos in which Earth is located; Earthrise conceals this fact. But Earthrise reveals features that are concealed by Pale Blue Dot: Earth’s life-supporting capacities. Neither provide the ‘complete image’ of Earth from outer space – there is no such image.
  • Once we appreciate this fact, we can start to consider new perspectives on the question of cosmic significance.
  • Here’s one. Suppose that Voyager 1 had been equipped with a device designed to detect consciousness-supporting planets. And suppose that the images produced by this device marked the presence of such planets with bright red pixels. Had Voyager 1 directed its ‘consciousness camera’ Earthwards, we would have been as attention-grabbing as the scrape of a chair in a performance of John Cage’s 4’33”. The feelings generated by Bright Red Dot (as we might call this image) would surely be very different from those elicited by Pale Blue Dot. ‘Small’, the image might seem to say, ‘but enormously significant.’
  • Does that mean we are significant? Maybe not. Suppose that we used our ‘consciousness camera’ to map not just our corner of the solar system but the entire Universe. What kind of image might it produce?
  • One possibility is that Earth would emerge as the sole red dot in a vast expanse of blackness. (‘Nothing like us anywhere,’ we might say to ourselves with justifiable pride.) But the odds of that are surely low – perhaps vanishingly so. Astronomers suggest that there may be as many as 50 quintillion (50,000,000,000,000,000,000) habitable planets in the cosmos. What percentage of those planets actually sustain life? And, of those that sustain life, what percentage sustain conscious life? We don’t know. But let’s suppose that consciousness is found in only one of every billion or so life-supporting planets. Even on that relatively conservative assumption, there may be as many as 50 billion consciousness-supporting planets. Earth, as viewed through our consciousness camera, would be just one more red dot among a vast cloud of such dots.
  • Human creativity might be unmatched on this planet; it may even be without peer in the Orion arm of the Milky Way. But, given the numbers, we’re unlikely to be eye-catching from a cosmic point of view.
Author Narrative
  • Tim Bayne is professor of philosophy at Monash University in Melbourne, Australia and co-director of the Brain, Mind, and Consciousness programme of the Canadian Institute for Advanced Research (CIFAR). He is the author of The Unity of Consciousness (2010), Thought: A Very Short Introduction (2013), Philosophy of Religion: A Very Short Introduction (2018), and Philosophy of Mind: An Introduction (2022).
Notes
  • I suppose it's worth considering issues of cosmic value, but it's easy to fall into Nihilism.
  • Is there really any difference between the worries in this paper and those of the individual who thinks of the insignificance his own brief life in the sea of 8bn people on Earth? The author himself cites Blaise Pascal in this regard.
  • Each individual is significant in himself; insignificance only arises by comparison with others.
  • The problem with significance in the bean-counting sense is that we really have no idea how many civilisations - if any - are 'out there'.
  • There are no Aeon Comments, which - I suppose - is a relief; no end of drivel would have been talked.
  • I suppose this relates - vaguely - to my Notes on Narrative identity, Naturalism, Religion and Transhumanism.

Paper Comment
  • Sub-Title: "When we see the Earth as ‘a mote of dust suspended in a sunbeam’ what do we learn about human significance?"
  • For the full text see Aeon: Bayne - Just a pale blue dot.



"Bayne (Tim) - The stories of Daniel Dennett"

Source: Aeon, 13 December 2024


Author's Introduction
  • As the contemporary Oxford philosopher Anil Gomes observed in the London Review of Books in 2023, the key to understanding Dennett lies with another 20th-century American philosopher, Wilfrid Sellars – something of a philosopher’s philosopher. Sellars distinguished between two images of reality, the manifest image and the scientific image. The manifest image is the ordinary, everyday conception of reality – the conception of reality that human beings have prior to science. The scientific image, of course, is the conception of reality delivered by science.
  • The manifest image of the mind is also known as ‘folk psychology’. Folk psychology is, in effect, the mind’s natural, naive conception of itself. Unlike scientific psychology, folk psychology is independent of formal education. You grasp the essential features of folk psychology and have done so since early childhood. Folk psychology assumes the existence of a self – an ‘I’ that is the subject of thought and action. It assumes the existence of thoughts of various kinds – beliefs, desires and intentions. It assumes that we have voluntary control (‘free will’) over our actions, and that we are accountable for much of what we do. And it assumes that we are subjects of consciousness – creatures who undergo perceptual, emotional and bodily experiences.
  • What place can science find for these phenomena? Can we locate the self within a scientific account of the human being? What about beliefs, desires or intentions? What about free will or consciousness? Can science give an account of these phenomena in the way that it has accounted for other aspects of the manifest image (temperature, respiration, lightning) or will (some of) these phenomena go the way of the unicorns, gryphons and dragons of medieval bestiaries – entities that have no place within a scientifically informed catalogue of what there is? It is this question, more than any other, that lay at the heart of Dennett’s project.

Author's Conclusion
  • ‘The aim of philosophy, abstractly formulated,’ Sellars said, ‘is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term.’ Paraphrasing Sellars, we might characterise Dennett’s own thought as the attempt to understand how the folk image of the mind ‘hangs together’ with the scientific image of the mind. There are, of course, two ways of tackling that question. One can work from the science end of things, asking what exactly the scientific image of the mind involves. Alternatively, one can work from the folk end of things, asking what the ordinary, everyday conception of the mind involves.
  • Dennett had important things to say about the scientific image of the mind, but that’s not why he matters. Dennett matters because of what he had to say about the commitments of folk psychology – about what our everyday thought and talk about persons requires for its legitimacy. Dennett was quite willing to reject those aspects of folk psychology that he took to be at odds with science; as he once remarked in connection with free will, his aim was ‘saving everything that mattered about the everyday concept of free will, while jettisoning the impediments’. But in Dennett’s view, relatively little of folk psychology is beyond salvation. What needs to go is not so much folk psychology (whose commitments are relatively shallow), but the gloss on folk psychology that philosophers have imposed on it. In that regard, Dennett doesn’t mark a radical rupture in the aims or methods of philosophy of mind, but instead belongs firmly in the tradition of his mid-century heroes, Ryle and Wittgenstein.
Author Narrative
  • Tim Bayne is professor of philosophy at Monash University in Melbourne, Australia and co-director of the Brain, Mind, and Consciousness programme of the Canadian Institute for Advanced Research (CIFAR). He is the author of The Unity of Consciousness (2010), Thought: A Very Short Introduction (2013), Philosophy of Religion: A Very Short Introduction (2018), and Philosophy of Mind: An Introduction (2022).
Notes
Paper Comment



"Bernhardt-Radu (Stefan) - The eugenicist of UNESCO"

Source: Aeon, 02 December 2024


Author's Introduction
  • ... As the horrors of the concentration camps came to light, those who had advocated for utopian human possibilities through eugenics suddenly seemed naive. Galton and others appeared, at best, horribly misguided. Criticism of the Nazis’ racial hygiene project ran high immediately after the Second World War, with many equating eugenics itself with the Holocaust. Not only were the Nazis condemned, but the word ‘eugenics’ was, too. It was, as the American historian Nancy L Stepan wrote in 1991, eventually ‘purged from the vocabulary of science and public debate’.
  • In response to the devastation of the war, including the catastrophic outcomes of racial science, global efforts were made to establish a new humanitarian vision for Earth’s future. And central to this vision was the founding, in 1945, of the United Nations Educational, Scientific and Cultural Organization (UNESCO). From the beginning, UNESCO aimed to overcome the ignorance and prejudice that contributed to the Second World War. The agency’s founders sought to promote cosmopolitanism and interculturalism to ameliorate fraying international relations. UNESCO, they hoped, would become a beacon of peace, cooperation and human dignity. The only problem? The man hired to direct the fledgling institution was a eugenicist.

Author's Conclusion
  • Huxley’s tenure as the first director of UNESCO was challenged from the very beginning. The US did not support him, partly because Huxley had visited the USSR twice before his appointment, and was outspoken about ‘cooperation’, which meant US officials smelled the red threat on him. In the end, the US did vote for Huxley’s appointment, but on the condition that he served for two years instead of the normal six. During that time, he tempered his writing on eugenics, but it would be a mistake to view this as a withdrawal from his grand vision for humanity.
  • Huxley didn’t just settle for promoting education, fighting poverty and publicising the dangers of overpopulation, as Malthus and Galton had done. For Huxley, UNESCO was part of a much broader evolutionary vision aimed at increasing harmonisation between members of a world society and with nature. He envisioned global interculturalism, cosmopolitanism and world coordination. Just as cells in the body functioned better when coordinated by a brain, Huxley thought that world cultures could be better interconnected and harmonised by UNESCO. This coordination was not a short-term project. Even if birth control and eugenics were unpalatable in the mid-20th century, Huxley believed a time would come when more educated people would become aware of their advantages. For Huxley, eugenics was always future-oriented. After his talk to the Eugenics Society in 1936, one review in the journal Nature claimed that his views on eugenics were ‘destined to become part of the religion of the future, or of whatever complex of sentiments which may then take the place of organised religion.’
  • It would also be an error to think Huxley’s views are outdated. After all, UNESCO still strives to equalise the environment by addressing poverty and education. And, in many parts of the world, aspects of Huxley’s version of eugenics live on. Fetal diagnoses are still performed, and, upon seeing a fetus ‘unfit’, some might perform a legal abortion. In vitro fertilisation, a dream of Huxley’s, now allows some people to select the semen, eggs and embryos more likely to produce genetically superior children. We may baulk at any talk of eugenics, or think of Huxley as naive for his view of scientific world humanism. Yet many of us make decisions related to genetics or reproduction that Huxley would likely have agreed with.
  • As historians have realised, ‘eugenics’ now usually happens through individual choices, not the guidance of a central brain. For that reason, most of us do not think of our personal choices as cobblestones in the road of evolutionary progress and harmonisation – a road that was, at one time, planned and designed by UNESCO’s first director.
Author Narrative
  • Stefan Bernhardt-Radu is a postgraduate researcher in the School of Philosophy, Religion and History of Science at the University of Leeds, UK, and a 2024-2025 Ri Freer Fellow.
Notes
  • An interesting account of Julian Huxley's views and a fairly non-polemical non-condemnatory account of the eugenics movement.
  • Much of what the author praises Huxley for has little to do with his eugenics.
  • The author is right to point out that - given the choice - prospective parents are closet eugenicists.
  • But the whole eugenics movement was muddled in thinking that the welfare state would slow human evolution. Biological evolution proceeds at a much more leisurely pace.
  • In any case, society needs all sorts of 'economically active' people to function, and how many people it can support who are economically inactive depends both on the wealth of the society, the need for support by these people, and the availability of supporters. One suspects a crunch point is on the horizon (in the absence of a breakthrough in robotics). That, ... unless war, famine and pestilence perform their Malthusian functions.
  • There are no Aeon Comments.
  • This relates to my Note on Evolution, though it has nothing to do with evolution by natural selection but relates to selection by breeding. Otherwise, I suppose it relates to PID as a Forensic Property. I suppose eugenics should be covered in my Note on Genetics.

Paper Comment



"Boesch (Brandon) - More-than-human science"

Source: Aeon, 24 April 2025


Author's Introduction
  • The science of our age is computational. Without models, simulations, statistical analysis, data storage and so on, our knowledge of the world would grow far more slowly. For decades, our fundamental human curiosity has been sated, in part, by silicon and software.
  • The late philosopher Paul Humphreys called this the ‘hybrid scenario’ of science: where parts of the scientific process are outsourced to computers. However, he also suggested that this could change. Even though he began writing about these ideas more than a decade ago, long before the rise of generative artificial intelligence (AI), Humphreys had the foresight to recognise that the days of humans leading the scientific process may be numbered. He identified a later phase of science – what he called the ‘automated scenario’, where computers take over science completely. In this future, the computational capacities for scientific reasoning, data processing, model-making and theorising would far surpass our own abilities to the point that we humans are no longer needed. The machines would carry on the scientific work we once started, taking our theories to new and unforeseen heights.
  • According to some sources, the end of human epistemic dominance over science is on the horizon. A recent survey of AI researchers offered a 50 per cent chance that, within a century, AI could feasibly replace us in every job (even if there are some we’d rather reserve for ourselves, like being a jury member). You may have a different view about whether or when such a world is possible, but I’d ask you to suspend these views for a moment and imagine that such artificial superintelligences could be possible eventually. Their development would mean that we could pass over the work of science to our epistemically superior artificial progeny who would do it faster and better than we could ever dream.
  • This would be a strange world indeed. For one thing, AI may decide to explore scientific interests that human scientists are unincentivised or unmotivated to pursue, creating whole new avenues of discovery. They might even gain knowledge about the world that lies beyond what our brains are capable of understanding. Where will that leave us humans, and how should we respond? I believe we need to start asking these questions now, because within a matter of decades, science as we know it could transform profoundly.

Author's Conclusion
  • So, what will we do? In his original presentation of the automated scenario, Humphreys suggested that the automated scenario would replace human science. I disagree. Since our desires for understanding, explanation, knowledge and control will remain, we cannot help but take actions to address those desires – to continue to do science. We humans create beautiful things, pursue interhuman connection in friendship and romance, and find and construct meaning in life. The same holds true for our motivations for science. We will be stuck with our curiosity to understand and explain the natural world around us.
  • If the automated scenario comes to pass, it seems that it will have to be as some new, alternative, secondary path – not a replacement, but an addition. Two species, pursuing science side by side, with different motivations, interests, frameworks and theories. Perhaps there will also be parts of science that artificial superintelligence is simply less interested in, such as the human quest to better understand our own minds, choices, relationships and health.
  • Indeed, if we are to remain human (and I cannot but hope that we will), we must continue to pursue science. What are we, really, if we are not beauty-seeking, friendship-making, meaning-constructing, hopelessly curious animals? Perhaps it is my limited powers of imagination that prevent me from conceiving of a future world in which we have abandoned these human desires. There are plenty of transhumanists who may think so. But I do not count it as a lack of creativity to see the goodness in beauty, in love, in meaning, and in science. Quite the contrary. I, for one, take hope in our hopeless curiosity.
Author Narrative
  • Brandon Boesch is associate professor of philosophy at Morningside University, Iowa. He lives in Omaha, Nebraska.
Notes
  • Interesting but probably inaccurate? The paper probably deserves a second reading.
  • It’s true that – at least ultimately – the computing resources available to an AI will vastly exceed that of a human brain. Maybe even exceeding the billions of human brains acting in cooperation (given that the cooperation is imperfect and most of the brains aren’t up to much in the scientific sphere).
  • The author is probably right that the AI would be able to make connections that are beyond the human brain because of the amount of data to hand. But, at least currently, the information the AI has is derivative and often not even true (and likely to get worse).
  • Also, the AI is fundamentally cut off from the world and – in the absence of Functionalism (and maybe even assuming functionalism if the isomorphism is imperfect) – may never be sentient. The AI won’t know what it’s like to be one of us (though it’ll know what we write about ourselves). Unless, of course, it solved the Hard Problem of Consciousness.
  • I suppose – ultimately – it would be able to design investigative tools for us to build that would provide it with new data relevant to its investigations.
  • There would need to be some constraints on its investigations on the grounds of power-consumption.
  • I might add that some of the fundamental problems of physics are forever beyond our – or the AI’s – grasp because of the energies, timescales or distances – not to mention costs – involved. This was raised in "de Sutter (Adrien) - The stagnation of physics".
  • There are numerous references (some of which are not listed below as I’ve not had time to follow them up):-
    → "Humphreys (Paul) - The Philosophical Novelty of Computer Simulation Methods"
    Aeon: Video - On Wittgenstein
    Grace, Etc - Thousands of AI Authors on the Future of AI
    Aeon - AI Co-ordination Page (a useful summary)
  • There are no Aeon Comments.
  • This relates – just about – to my notes on Naturalism, Functionalism, Fictionalism, Intelligence, Computers, Wittgenstein, Consciousness and Transhumanism.

Paper Comment
  • Sub-Title: "When AI takes over the practice of science we will likely find the results strange and incomprehensible. Should we worry?"
  • For the full text see Aeon: Boesch - More-than-human science.



"Borkenhagen (David) - Octopus time"

Source: Aeon, 20 April 2023


Author's Conclusion
  • The semelparous octopus presents a relationship with death, and time, that is profoundly different from ours. As humans, we do not approach mating or childcare with the expectation that death will soon follow. We imagine our lives continuing past those events. The octopus sacrifices its own future for the future of its offspring. It becomes part of a process of intergenerational labour, which required the death of its parents and will someday require the death of its children. Through its death, the octopus submits itself to this labour, which it will never see completed. As a poetic interpretation, it’s as if death for the octopus is conceptualised as less rigid. Death is not a ‘dead end’, as we imagine it, but part of a more fluid process that stretches across generations.
  • Unlike our speculations about time, we will never really know how the octopus conceptualises death. But its physical reality, one of profound fluidity in life and death, can still be used to ground new human metaphors. Speaking of time as cyclical, where the past repeats itself in the future, has been shown to reduce estimations of the length of grief following a death in the family. Similarly, some patients in end-of-life care speak of shifting from ego-moving to time-moving metaphors, allowing them to conceptualise the passage of time as less fixed and more fluid. After this shift, they report a newfound ability to receive the help that is offered to them by their caretakers.
  • In many ways, the octopus represents a challenge, or a profound limit, to our conventional ways of thinking about time and death. But it’s more than a challenge. It’s also an invitation. With its unconstrained movements and semelparous lifecycle, the octopus offers a radically different perspective on the fluidity and flexibility of existence. Could we learn to move through time as an octopus moves through space? With equal access to the past, present and future – viewed wide or with sharp focus – we might better navigate the challenges of living and dying on Earth. The octopus invites us to think in a way that dissolves the boundaries between the present and the future, understanding our ‘ending’ less as a fixed point and more as a fluid process stretching across generations. As the boundary between life and death dissolves and becomes more porous, so do the boundaries between ourselves and others. The metaphors we used to inhabit our time here may seem impoverished, but there’s another way. It’s in the unconstrained movements of an octopus travelling through space – fluid, flexible and free.
Author Narrative
  • David Borkenhagen is a postdoctoral fellow working at the Mathison Centre for Mental Health Research & Education at the University of Calgary in Canada.
Notes
  • This is an interesting but rather absurd piece, as some of the Comments (which I've reserved for future analysis) point out.
  • It's not really about time, but about our psychological experience of time.
  • That there's an arrow of time has nothing to do with the fact that humans walk forwards.
  • We can remember the past but not the future. The same goes for octopuses.
  • Still, it's worth rereading the paper and commenting on it in detail.
  • In particular its distinction between 'ego moving' and 'time moving' aspects of the passage of time.

Paper Comment
  • Sub-Title: "We humans are forward-facing, gravity-bound plodders. Can the liquid motion of the octopus radicalise our ideas about time?"
  • For the full text see Aeon: Borkenhagen - Octopus time.



"Butterworth (Brian) - A basic sense of numbers is shared by countless creatures"

Source: Aeon, 12 October 2022


Author's Conclusion
  • One thing that my colleagues and I discovered in working with fish was that some individual fish seem to be much worse on numerical tasks than others of the same species. We are now investigating whether there is a genetic basis to these individual differences. This could help explain why around 5 per cent of people – those with dyscalculia – have serious trouble with even simple number tasks. It is possible that this learning disability, rather like colour vision deficiency, is the result of one or more genetic variants that makes the number mechanism less efficient in representing numbers, which in turn makes learning arithmetic more difficult.
  • Of course, even if we have inherited a basic number sense from distant ancestors, there are some big differences between humans and other creatures. First, we have languages, and counting words can be useful for careful and accurate enumeration – even if counting does not fundamentally depend on counting words. Second, we have very elaborate methods of social learning that depend in part on language, including formal and informal education. This means that numerical methods can be passed down and improved from generation to generation. The most commonly used system of symbolic numbers – the Hindu-Arabic digits with zero – is a relatively recent invention in the history of humanity. These digits make calculation much easier, which is why Leonardo Fibonacci’s book Liber Abaci (1202), which explained how to use them, became a bestseller, and why 13th-century merchants sent their sons to scuole d’abaco (schools of calculation) in Italy to learn them.
  • Nevertheless, modern, sophisticated numerical skills seem to be founded on a mechanism we have inherited from our nonhuman ancestors. When that mechanism isn’t working properly, then it is very difficult to acquire these skills.
Author Narrative
  • Brian Butterworth is emeritus professor of cognitive neuropsychology in the Institute of Cognitive Neuroscience at University College London. He is the author of Can Fish Count? What Animals Reveal About Our Uniquely Mathematical Minds (2022).
Notes
  • An interesting Paper, though I wasn't convinced that remote evolutionary connections had anything to do with dyscalculia, if there is any such thing, though it might be worth reading Butterworth - Developmental Dyscalculia in order to find out!
  • Despite the sub-title, dyscalculia in humans is not the focus of this paper, which is probably intended to get us to buy the author's latest book.
  • I'd not known - or had forgotten - that bird's brains have no cerebral cortex: 'Unlike mammals, birds have a reptilian brain, inherited from their dinosaur ancestors, that lacks a cerebral cortex. Nevertheless, Nieder discovered neurons in the crow’s pallium that act like the number neurons in a primate cortex.'. Whatever Parrots do, linguistically - and numerically - must make use of radically different neural structures to our own. Unsurprising, of course.
  • The idea of 'counter neurons' makes some sense of having a feel for small numbers, but being overwhelmed by larger numbers (ie. we can 'just see' that there are 5 beans on the table, but not that there are 109. But certain savants (if I remember correctly) have a precise 'intuitive' grip on much larger numbers.
  • I need to re-read this paper!

Paper Comment



"Cacciafoco (Francesco Perono) - The lonely life of a glyph-breaker"

Source: Aeon, 07 April 2025


Author's Introduction
  • I am a glyph-breaker. I confess. Guilty as charged. A glyph-breaker who didn’t break anything, and that is quite paradoxical, because, to be a true glyph-breaker, you should have deciphered an undeciphered script, like Jean-François Champollion (the founder of Egyptology, who decoded the Ancient Egyptian hieroglyphs), Henry Rawlinson (who gave us the key to cuneiform) or Michael Ventris (who deciphered Linear B). Well, I didn’t. But I tried. I still try, in a way. And, in our times of devolution, that probably qualifies a guy to be called a glyph-breaker. The age of the great decipherments is, in all likelihood, over. What remains: a considerable amount of poorly documented, extremely elusive writing systems and ‘inscribed relics’, like Linear A, the Indus Valley Script, Rongorongo, and the Singapore Stone. Puzzles. Possibly unsolvable. Headache-generators. Nasty stuff.
  • Despite this, a glyph-breaker cannot be scared. A glyph-breaker doesn’t surrender. Theoretically. I started a quarter of century ago, in 1999, when I was at the University of Pisa in Italy, with Linear A, the undeciphered writing system from Bronze Age Crete, ‘hiding’ the so-called (unknown) Minoan language. I probably studied everything there was to study on that script, I reproduced many of the previous (and unsuccessful) decipherment attempts, and I tried to decipher the writing system by myself. I failed. Obviously. I am not Ventris, the genius who deciphered the ‘grammatological daughter’ of Linear A, which already transcribed Ancient Greek in its oldest known form. And Linear A is not Linear B. I didn’t completely give up, nonetheless. I started, with my research teams – originally at Nanyang Technological University (NTU) in Singapore, and now at Xi’an Jiaotong-Liverpool University in Suzhou (Jiangsu) in China – to develop cryptanalytic and computational approaches and tools to better understand Linear A. I am analysing it in a comprehensive and unbiased fashion, trying to provide my team and scholars from around the world with insights that, one day, could perhaps lead to the decipherment of that writing system.
  • I don’t believe much in computational approaches. But I believe in cryptanalysis – breaking secret codes to reveal hidden information. And, in today’s world, cryptanalysis is computational by definition. Technology cannot replace human ingenuity and cannot lead automatically to the decipherment of an undeciphered script. But it can save a lot of work and pain to scholars. My team developed a Python program that feeds a computer we called ‘The Machine’ that can develop, in a reasonable amount of time, an exhaustive cryptanalytic brute force attack on the Linear A signs.

Author's Conclusion
  • Today, many think that everything can be solved by AI (and quantum computing). We cannot teach anymore without that, we cannot validate our findings, we cannot produce our original ideas, we cannot say two words without having to mention some artificial intelligence’s wonders. AI is definitely a prodigious tool, and it’s not impossible it will provide us with keys to decipher long-undeciphered scripts. That would be ideal, and I would be the first to rejoice if AI was able to complete a full decipherment, but I doubt it will. Champollion and Ventris weren’t able to avail themselves of AI – yet, thanks to their brilliance and ingenuity, they got where AI probably won’t lead us, at the end of the day.
  • Rather than trying to amass oceans of scientific papers listing self-evident results and futile findings, instead of inserting AI and unnecessary technologies into the most human of activities, like teaching, universities and research institutions should value and support their people (something that, nowadays, happens more and more rarely) and be especially inclusive of the ones among them who are able to think outside the box, to go the extra mile towards the achievement of what is believed to be unattainable or, simply, impossible – like the decipherment of undeciphered writing systems.
  • There’s no question that challenges remain. Linear A, the Phaistos Disc, the Indus Valley script, the Rongorongo writing system, the Singapore Stone and many other mysteries still await their codebreakers. Their decipherment seems unlikely – especially for scripts with very few surviving texts – but so too did the breakthroughs in Ancient Egyptian hieroglyphs and Linear B.
  • It will take dedicated and brilliant scholars to push these frontiers forward. But, for that to happen, universities must recognise the value of their work and provide them with the resources, collaboration and stability needed to continue. Without this support, we risk not only the end of decipherment, but also a profound loss of human curiosity and discovery – an intellectual silence that no AI, nor the short-sightedness of those in power, should be allowed to impose.
Author Narrative
  • Francesco Perono Cacciafoco is associate professor in linguistics at Xi’an Jiaotong-Liverpool University in Suzhou, China.
Notes
Paper Comment



"Cain (Susan) - The immortalists have got it wrong – here’s why we need death"

Source: Aeon, 14 December 2022


Authors' Conclusion
  • It’s a nice idea, but solving toxicity and conflict is unlikely to be this simple. Indeed, our true challenge may not be death at all (or not only death), but rather the sorrows and longings of being alive. We think we long for eternal life, but maybe what we’re really longing for is perfect and unconditional love; a world in which lions actually do lay down with lambs; a world free of famines and floods, concentration camps and Gulag archipelagos; a world in which we grow up to love others in the same helplessly exuberant way as we once loved our parents; a world in which we’re forever adored like a precious baby; a world built on an entirely different logic from our own, one in which life needn’t eat life in order to survive. Even if our limbs were metallic and unbreakable, and our souls uploaded to a hard drive in the sky, even if we colonised a galaxy of hospitable planets as glorious as Earth, even then we would face disappointment and heartbreak, strife and separation. And these are conditions for which a deathless existence has no remedy.
  • Maybe this is why the prize, in Buddhism and Hinduism, is not immortality, but freedom from rebirth. Maybe this is why, in Christianity, the dream is not to cure death but to enter heaven. We’re longing, as mystics might say, to reunite with the source of love itself. We’re yearning for the perfect and beautiful world, for ‘somewhere over the rainbow’, for C S Lewis’s ‘place where all the beauty came from’. And this longing for Eden, as Lewis’s friend J R R Tolkien told his son, is ‘our whole nature at its best and least corrupted, its gentlest and most human.’ Perhaps the immortalists, in their quest to live forever and to ‘end the separation between people’ are longing for these things, too; they’re just doing it in a different language.
  • But they’re also, I think, pointing in a different direction. Sure, I’d love to live long enough to meet my great-great-grandchildren, and if I can’t, I hope that my children live to meet theirs. Yet I also hope that this won’t cause them – cause us – to deny the bittersweet nature of the human condition. The immortalists believe that beating death will reveal the road to peace and harmony. And I believe exactly the opposite: that sorrow, longing and maybe even mortality itself are a unifying force, a pathway to love. We don’t welcome them. We certainly don’t enjoy them. But, in the end, it’s life’s very fragility that has the power to connect us all.
Author Narrative
  • Susan Cain is the bestselling author of Quiet: The Power of Introverts in a World That Can’t Stop Talking (2012). Her latest book is Bittersweet: How Sorrow and Longing Make Us Whole (2022). She lives in New York.
Notes
  • This is an extract from the author's latest book, for which it is doubtless a plug. Maybe it's worth buying (and reading!)? Maybe not!!
  • It's really a statement of a point of view, rather than an argument (though alternative views are critiqued, so maybe I'm being unfair).
  • It's a direct critique of the hopes of Transhumanists.
  • There's an allusion to the ideas in the Makropulos Case, though no actual citation.
  • There are lots of comments, some rather intemperate, which I've saved for later; no replies by the author, though.
  • I think this is an area where Intuitions differ.
  • It deserves closer attention!

Paper Comment



"Callcut (Daniel) - Wrestling with relativism"

Source: Aeon, 20 October 2023


Author's Conclusion
  • Truthfulness, conceptual genealogy, comparative ethical study: these ingredients give Williams’s philosophy of value its critical bite. There are many resources left for ethical and political criticism after moral philosophy fully emerges from what Williams called ‘the shadow of universalism’ – or so he endeavoured to show. His aim was to hold on to the vital distinction between what is and what ought to be while maintaining that norms about what ought to be are themselves ultimately cultural creations. His position, in this respect, is akin to the view that human beings create the norms about what counts as good and bad art rather than discover mind-independent and timeless truths about beauty.
  • Williams never thought that moral philosophy could make ethical life any easier than it is. Nonetheless, he offers a vision of how philosophy, allied with other disciplines such as history, can provide both criticism and support for one’s ethical orientation in the world. And in his engagement with moral relativism, he doesn’t just point to a middle way between his contemporaries Richard Rorty and Derek Parfit. He offers an example of how to make one’s way through the culture wars.
Author Narrative
  • Daniel Callcut is a freelance writer and philosopher. He is a former SIAS Fellow at Yale Law School. He has taught and published on a wide range of topics including the philosophy of love, the nature of value, media ethics, and the philosophy of psychiatry. He is the editor of "Callcut (Daniel), Ed. - Reading Bernard Williams" (2008). He lives in Lincolnshire, United Kingdom.
Notes
  • A fascinating paper, worthy of more effort commenting on it than I have time for at the moment.
  • Interesting to see Bernard Williams placed between the extreme postmodernist Richard Rorty and the ethical realist Derek Parfit.
  • Also interesting is the comment that "Williams (Bernard) - Truth and Truthfulness: An Essay in Genealogy" was poorly received; that’s what my supervisor at Birkbeck said at the time it came out.
  • Other works by Williams referenced are:-
    → "Williams (Bernard) - Morality"
    → "Williams (Bernard) - Ethics and the Limits of Philosophy"
    In the Beginning Was the Deed: Realism and Moralism in Political Argument
    Shame and Necessity
  • Also worth remembering Williams pointing out that relativists fall victim to the accusation of inconsistency if they say the tolerance is a universal virtue. The same kind of objection was made against the logical positivists’ demarcation principle.
  • It strikes me that moral relativism gets most of its purchase from the obvious differences of sexual ethics across societies, not to mention the move for ‘sexual freedom for all’ stemming from the 1960s. There’s less enthusiasm for other relativisms – say attitudes to slavery or the position of women.
  • There are numerous germane – and sometimes rather stroppy – comments which I’ve saved along with the pdf of the paper itself.

Paper Comment



"Carroll (Sean M.) - Splitting the Universe"

Source: Aeon, 11 September 2019


Author's Introduction
  • One of the most radical and important ideas in the history of physics came from an unknown graduate student who wrote only one paper, got into arguments with physicists across the Atlantic as well as his own advisor, and left academia after graduating without even applying for a job as a professor. Hugh Everett’s story is one of many fascinating tales that add up to the astonishing history of quantum mechanics, the most fundamental physical theory we know of.
  • Everett’s work happened at Princeton in the 1950s, under the mentorship of John Archibald Wheeler, who in turn had been mentored by Niels Bohr, the godfather of quantum mechanics. More than 20 years earlier, Bohr and his compatriots had established what came to be called the ‘Copenhagen Interpretation’ of quantum theory. It was never a satisfying set of ideas, but Bohr’s personal charisma and the desire on the part of scientists to get on with the fun of understanding atoms and particles quickly established Copenhagen as the only way for right-thinking physicists to understand quantum theory.
  • In the Copenhagen view, we distinguish between microscopic quantum systems and macroscopic observers. Quantum systems exist in superpositions of different possible measurement outcomes, called ‘wave functions’. A spinning electron, for example, has a wave function describing a superposition of ‘spin-up’ and ‘spin-down’. It’s not merely that we don’t know the spin of the electron, but that the value of the spin does not exist until it is measured. An observer, by contrast, obeys all the rules of familiar classical physics. At the moment that an observer measures a quantum system, that system’s wave function suddenly and unpredictably collapses, revealing some definite spin or whatever has been measured.
  • There are apparently, therefore, two completely different ways in which quantum systems evolve. When we’re not looking at them, wave functions change smoothly according to the Schrödinger equation, written down by Erwin Schrödinger in 1926. But when we do look at them, wave functions act in a totally different way, collapsing onto some particular outcome.
  • If this seems unsatisfying, you’re not alone. What exactly counts as a measurement? And what makes observers so special? If I’m made up of atoms that obey the rules of quantum mechanics, shouldn’t I obey the rules of quantum mechanics myself? Nevertheless, the Copenhagen approach became enshrined as conventional wisdom, and by the 1950s it was considered somewhat ill-mannered to question it.

Author's Conclusion
  • But physics hasn’t forgotten Everett; if anything, Everett’s ideas are more relevant than ever. His attempts to understand quantum cosmology were ahead of their time, but modern physics has made slow but steady progress on appreciating how to reconcile gravity with quantum theory. And Everett was right; once the whole Universe is your subject of study, it doesn’t make much sense to carve out a special place for a classical observer.
  • In my own research, I’ve gone even farther, arguing that the quest for quantum gravity is being held back by physicists’ traditional strategy of taking a classical theory (such as Albert Einstein’s general relativity) and ‘quantising’ it. Presumably nature doesn’t work like that; it’s just quantum from the start. What we should do, instead, is start from a purely quantum wave function, and ask whether we can pinpoint individual ‘worlds’ within it that look like the curved spacetime of general relativity. Preliminary results are promising, with emergent geometry being defined by the amount of quantum entanglement between different parts of the wave function. Don’t quantise gravity; find gravity within quantum mechanics.
  • That approach fits very naturally into the Many-Worlds perspective, while not making much sense in other approaches to quantum foundations. Niels Bohr might have won the public-relations race in the 20th century, but Hugh Everett appears ready to pull ahead in the 21st.
  • This is an edited extract from "Carroll (Sean M.) - Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime". Copyright © 2019 by Sean Carroll.
Author NarrativeNotes
  • Doubtless a plug for the author's book.
  • As such, detailed comments must wait for my reading of that book, recently purchased (March 2025).
  • However, I think reference to the wave function of the entire universe obeying Schrodinger's equation are a bit glib. We're not even able to 'shut up and calculate' with that. What values would its variables take?
  • Otherwise, for now I just note that there's a useful comment on 'the observer' which I think clarifies what I've always thought - that the observer is any macroscopic object (the screen, not the person looking at the screen; there's much careless talk in the popular science literature) though I wouldn't want this notion to be tied in with the MW interpretation of QM.
      The Many-Worlds formulation of quantum mechanics removes once and for all any mystery about the measurement process and collapse of the wave function. We don’t need special rules about making an observation: all that happens is that the wave function keeps chugging along in accordance with the Schrödinger equation. And there’s nothing special about what constitutes ‘a measurement’ or ‘an observer’ – a measurement is any interaction that causes a quantum system to become entangled with the environment, creating a branching into separate worlds, and an observer is any system that brings about such an interaction. Consciousness, in particular, has nothing to do with it. The ‘observer’ could be an earthworm, a microscope or a rock. There’s not even anything special about macroscopic systems, other than the fact that they can’t help but interact and become entangled with the environment. The price we pay for such a powerful and simple unification of quantum dynamics is a large number of separate worlds.
  • There are no Aeon Comments.
  • This connects to my Notes on Quantum Mechanics and Fission.

Paper Comment



"Cheek (Nathan) - Many of us have the wrong idea about poverty and toughness"

Source: Aeon, 11 April 2024


Author's Introduction
  • Imagine this upsetting scenario: two women are both suffering physical abuse from their partners. One woman is relatively affluent, while the other struggles to make ends meet. In this kind of situation, where two people of different means are experiencing similar harm, who will receive help? Are the neighbours of both women, upon learning of the violence, equally likely to reach out or call the authorities? Or might they decide in one of these cases that getting help isn’t so urgent?
  • One might guess that people in poverty are generally seen as needing more help, given their more precarious circumstances, whereas more affluent people might be seen as less vulnerable due to their greater financial resources. But when I and other researchers conducted a study asking people about the appropriate bystander intervention for intimate partner abuse, we found that participants thought a higher level of intervention would be necessary for a higher-income woman, compared with a lower-income woman.
  • This finding is just one example of a well-documented pattern of neglect and mistreatment of lower-income individuals, especially people in poverty. Students from lower-income families receive less positive attention from their teachers. Lower-income customers receive worse treatment while shopping. Lower-income patients receive less care from their physical and mental healthcare providers. And lower-income defendants receive harsher punishments in the courtroom. More generally, people in poverty receive less help and less support interpersonally and institutionally across many domains of everyday life.
  • Why are people in poverty, who have fewer resources at their disposal and are, if anything, in need of greater support, so often ignored and sidelined, compared with their higher-income counterparts? Behavioural scientists studying this question have already pointed to some important causes of class-based discrimination, ranging from structural barriers (eg, cost and insurance-related barriers in healthcare) to biased beliefs. When researchers study biased beliefs about low-income individuals, they typically focus on stereotypes about the supposed incompetence or laziness of people in poverty.
  • Recently, my collaborators and I have been trying to understand how a different set of biased beliefs might further explain many different social class disparities. In particular, we have found evidence that people think lower-income individuals are less affected by negative events – and, therefore, less in need of help – than higher-income individuals are. We call this the ‘thick skin bias’: the idea being that lower-income people are seen as having a ‘thicker skin’.

Author's Conclusion
  • People’s failure to accurately perceive the pain, distress and suffering of individuals in poverty could have profound consequences in terms of inequality and injustice. Healthcare professionals may mistakenly think that patients in poverty are less sensitive to harm, causing them to spend less time in diagnosis or to prescribe weaker treatments. Teachers may think less affluent students do not need as much emotional support or encouragement in the classroom, disproportionately shifting their focus to more affluent students. And across everyday interactions, people in poverty may broadly encounter a lack of care, attention, support and resources due partly to faulty assumptions about their needs.
  • What this all means is that we need to question our assumptions about other people’s experiences. Why might we think one person is suffering intolerably, while the other person is fine? What biased intuitions are misleading us? How do we know whether or not someone needs help? We might try to imagine ourselves in others’ shoes or to err on the side of offering help even when it might not seem absolutely necessary. More research is needed on what specific strategies can effectively fight the thick skin bias. But I hope that the findings to date at least spur some reflection on the ways in which flawed intuitions about adversity might lead us astray, and on how to ensure that, when people in poverty experience harm, others grasp the extent of their pain.
Author Narrative
  • Nathan Cheek is assistant professor in the Department of Psychological Sciences at Purdue University in Indiana. He is a social psychologist and studies prejudice, inequality and decision-making.
Notes
  • I need to read this paper again with more attention. I'd need to follow up on precisely what situations and questions were put to people in the surveys undertaken.
  • It hails from the US, so may not apply directly to the UK.
  • The main point seems to be that people are led astray by the 'acclimatisation effect' to assume that 'the poor' get used to being poor - and the deprivations that brings with it - and so 'being poor' isn't as bad for them as it would be for a rich person suddenly reduced to that state.
  • Well, I don't know whether the author puts things that way, as it certainly would be worse for the erstwhile rich person, who would be unprepared psychologically as well as practically to survive in poverty, though they would have to acclimatise over time (as displaced persons have to).
  • But - in the rich West - we're not - except in unusual circumstances - talking about destitution or absolute poverty as experienced in developing countries. We're talking about relative poverty. The destitute in developing countries, war zones or refugee camps suffer terribly. But is this degree of suffering the case for the teeming millions of poor people in India, say, who live in conditions that wouldn't be tolerable in the affluent West? Given that so many of their neighbours are in the same boat, don't they just get on with things? Or they accept life as it is? You don't complain about not being able to put the air conditioning on if neither you nor your neighbours have ever had air conditioning. Similarly with central heating in the UK, going back 60 years.
  • In the West, it's not having as much as you'd like or have been led to expect (or as much as you see others have) that - as much as poverty itself - causes much of the pain.
  • I might add that it's not only the poor who suffer financial stress. As Mr Micawber said: "Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds nought and six, result misery." (Wikipedia: Wilkins Micawber).
  • The paper wasn't open to Aeon Comments. It deserved some.

Paper Comment



"Cleary (Anne) - Déjà vu"

Source: Aeon, 23 October 2023


Author's Introduction
  • Déjà vu, the eerie sense that something new has been experienced before, has confounded us for hundreds of years. Along with the public, philosophers, physicians, intellectuals and, more recently, scientists have tried to get to the bottom of the phenomenon. Potential explanations have ranged from double perception (the idea that an initial glance at something was only partially taken in, leading to déjà vu upon a second, fuller glance) to dissolution of perceptual boundaries (a brief blurring of boundaries between the self and the environment) to seizure activity to memory-based explanations (the idea that déjà vu results from a buried memory).
  • Now, research emerging from my lab and others suggests that déjà vu is not just a spooky experience, but a possible mechanism for focusing attention – perhaps an adaptive mechanism for survival shaped by evolution itself.

Author's Conclusion
  • Déjà vu may be an eerie shadow of the mind at work, and a window into the mind’s evolutionary past. Most of the time, our cognitive processing takes place smoothly and effortlessly – we just process the world around us and retrieve relevant information rapidly, without introspective access to how that occurs. It just does.
  • Déjà vu occurs when there is a hiccup in the system, and we notice the pull on our attention; it grabs hold of our focus, allowing us to catch a quick glimpse of our memory’s operation occurring in slow motion.
  • What would ordinarily take place quickly beneath the surface – the unfolding process of familiarity-detection followed by inward-directed attention and retrieval search effort leading to retrieval of relevant information – suddenly has a light shining on the spot where the halt occurred, where the retrieval piece was not successful, and we find ourselves in a heightened state of searching our memory, trying to find out why the situation feels so familiar. But rather than being an odd quirk of memory, this cognitive mechanism could be forcing us to retrieve the very memories we need to survive – and could be evolution’s way of forcing the mind inward, when it needs that insight most.
Author Narrative
  • Anne Cleary is professor of psychology at Colorado State University where she leads the Human Memory Lab. She is the co-editor of Memory Quirks (2020) and the co-author of The Déjà Vu Experience (2nd ed, 2021).
Notes
  • Naturally, this paper is a subtle plug for the author's book on the topic (written jointly with the person - Alan S Brown - who first published on the subject, as introduced early in the paper). Unfortunately, the book is far too expensive.
  • This is probably important to get to grips with as it links to Memory.
  • Naturally, the phenomenon has no parapsychological implications and does not imply anything about the structure of Time.
  • It is a psychological quirk, but not necessarily Psychopathological.
  • I intend to comment on the paper in due course.
  • Enough to say here that I thought that the author's conclusion was rather more confident than the evidence presented warranted.

Paper Comment
  • Sub-Title: "Have you been here before? The eerie sensation is the shadow of your mind searching inward for clues to its own survival"
  • For the full text see Aeon: Cleary - Déjà vu.



"Cohn-Gordon (Reuben) - Cathedrals of convention"

Source: Aeon, 04 March 2024


Author's Introduction
  • Cratylus and Hermogenes disagree about language. As only the format of a fictional debate will allow they hold opposing and extreme positions. Cratylus believes that the sound of each word is a reflection of what it describes in the world. The sliding sound of the /l/ in liparon, for instance, is there precisely because the word means ‘sleek’ or ‘slippery’ in Cratylus’ native Greek. If he spoke English, he might argue in the same vein that the word ‘wind’ acquires its meaning from its sound, which resembles what it describes. Nothing is arbitrary.
  • Everything is arbitrary, counters Hermogenes. The relationship between the sound and meaning of a word is the product of a wildly stochastic process that plays out differently every time, to which the variety of languages is a testament. The flow of air happens to be called ‘wind’ in English, and ‘viento’ in Spanish, but neither betrays a special connection between form and meaning. They both could have been otherwise.
  • The positions represented by these two characters, appearing in Plato’s Cratylus, go well beyond language. Astrology, in its Western incarnation at least, is premised on the idea that the time you are born - an apparently incidental fact of your life - profoundly shapes who you are. That is, your zodiac sign is linked to who you love, what you achieve, and so on. This has the flavour of Cratylus’ naturalism, with the similar implication that, if a person’s life played out again from birth, it would tend inexorably towards the same paths.
    Then there is gender, an arena where this tug-of-war between what is natural and what is arbitrary persists today. The ‘Cratylus’ view is that gender, a smorgasbord of behaviours, preferences and ways of being in the world, is a direct manifestation of a biological characteristic. ‘An essence defined with as much certainty as the sedative quality of a poppy,’ as Simone de Beauvoir describes this view (which she rejects). Dressing in floral colours, passivity, and compassion? Consequences of being biologically female. Answering questions with unearned confidence, the potential for powerful and singular genius, and ambition? Consequences of being biologically male.
  • What is the source of this impulse to naturalise, to perceive an underlying natural essence in what is fundamentally arbitrary? And what, if anything, does the answer have to do with language?
  • The sociologists Judith Irvine and Susan Gal offer an insight into these questions. They explore the racist colonial pseudoscience of relating grammatical features of Senegalese languages such as Fula to purported differences in the character of their speakers ...
Author's Conclusion
  • What Judith Butler is suggesting is that the rules of gender are so effectively followed that it appears they must come from some innate source. Each time someone conforms to the social norms expected of men, say by resisting the urge to cry, it provides evidence for the view that those propensities must be intrinsic to being a man. The appearance of naturalness in the relationship between sex and gender is a byproduct of the success of the convention.
  • The same insight gives an answer to why language is so often naturalised: it is because we act as if the convention is natural and, in so doing, make that hypothesis plausible. We approach language as if it has a true form to be determined and, in the act of excavating that form, create it.
  • This perception of naturalness is evident not only in the iconisations and etymythologies but in the idea that there is a true form of a language, one that is slowly weathering and decaying in the hands of the younger generation, and which ought to be saved by efforts to control sloppy usage - a complaint made as early as Cicero, and non-stop since then.
  • In the end, it all comes back to the manifest and scientific images. Conventions live in the manifest image, but owe their existence to the collective pretence that they are in the scientific image. But perhaps the best way to understand the intuitive appeal of Cratylus’ view is with the story about the Englishman who is trying to demonstrate the intrinsic superiority of his native language. In French, he argues, a spoon is called a cuillere, while in Spanish it is a cuchara, and in Hebrew a qo /kaf/. But in English, it is called nothing other than what it truly is: a spoon!
Author Narrative
  • Reuben Cohn-Gordon is a writer and AI researcher who applies machine learning techniques to Bayesian models of language and vision. He lives in Amsterdam.
Notes
  • This is an important and interesting topic, and the author has some sensible things to say. But - to my mind - the essay is tainted by the casual wokeism so popular today - in particular the references to gender and racism / colonialism. Even there, the comments aren't entirely off course, but need to be clarified and tidied up. No-one is going to resist these arguments for fear of being called sexist or racist, so - it seems - you can be as sloppy as you like.
  • I don't have time to address these and other issues at the moment, but intend to do so in due course.

Paper Comment



"Curry (Devin Sanchez) - Why academia should embrace ‘Grandma’s metaphysics’"

Source: Aeon, 08 September 2022


Author's Conclusion
  • The idea that intelligence comes in many forms suggests a better rationale for enjoining more (and more different) voices to join the intellectual choir. Diverse scholars don’t just sing the same standards differently; they grow up learning different songbooks. Philosophy ought to nurture Grandma’s variety of smarts alongside Socrates’, neither of which is well captured by IQ (Thrasymachus, Socrates’ arch-nemesis, would undoubtedly have qualified for Mensa). Making the cohort of professional philosophers less white and less male is a good idea largely because it would make philosophy itself less parochial, by expanding the range of questions we care to ask and by taking seriously diverse perspectives on what constitutes a successful answer to those questions.
  • Considerations of reparative justice also speak in favour of promoting diversity. But calls for reparative justice first need to make plain why increased access to the groves of academe would be good for structurally disadvantaged people. The plain fact is that any human life lived well involves developing ways of being smart that are tailored to the idiosyncratic contours of one’s own life. It’s good for everyone in academia to be exposed to the many varieties of intelligence recognised in different (sub)cultures. It’s also good for wise – and potentially wise – people to have access to ways of living, such as being a philosopher, that allow them to dedicate themselves to reflecting and expounding on what they learned from their grandmothers – or whatever else their own, personal intelligence enables them to see especially clearly.
  • To the extent I heeded it, my dad’s mantra kept me out of trouble as a young man. But as Aristotle might have said (if he hadn’t himself been a gatekeeping old coot), the best life for a human being consists not only in avoiding stupidity, but also in helping diverse intelligences flourish in conversation with each other.
Author Narrative
  • Devin Sanchez Curry is an assistant professor of philosophy at West Virginia University, specialising in the history and philosophy of psychology. His research focuses on the interplay between common-sense and scientific approaches to understanding minds.
Notes
  • This is - it seems to me - rather a jumble. It's not really anything to do with Metaphysics but with Race and Intelligence.
  • The author is right that there are many aspects to intelligence apart from IQ. Also that different perspectives are helpful.
  • However, analytic philosophy is a very precise discipline and we don't just savour different points of view, but try to resolve contradictions.
  • For the author, see:-
    Devin Sanchez Curry: Home Page
    Devin Sanchez Curry: Writing
    West Virginia University: Devin Sanchez Curry
  • From reading this Paper, and being led astray by the cover photo, I'd assumed that the author was Black. But - based on the above link - it seems not!
  • Some of his papers might be worth following up, especially those on Intelligence, since the drafts are freely available.
  • I need to re-read this Paper with more attention.

Paper Comment



"de Bres (Helena) - Both one and yet distinct"

Source: Aeon, 21 November 2023


Author's Introduction
  • In Washington state in 2002, Lydia Fairchild nearly lost custody of her three children, when a test revealed that none of them shared her DNA. It turned out that Fairchild’s body was populated with cells from a non-identical twin she’d unknowingly had before birth, making her, in effect, the biological aunt of her own children.
  • The technical term for Fairchild is a ‘human chimera’: a human being composed of cells that are genetically distinct. The phenomenon can happen artificially, through a transfusion or transplant, or naturally, as in Fairchild’s case, through the early absorption of a twin zygote. Only 100 cases of natural chimerism are documented, but there may be many more. Scientists estimate that 36 per cent of twin pregnancies involve a vanishing twin. Most such twins likely disappear without a trace, but some get partly absorbed into their neighbour. The survivor is unlikely to learn of their lost sibling’s genetic presence, unless an unrelated test or procedure inadvertently reveals it. Go in for a routine cheek swab, come out with a twin.
  • Many find the idea of unknowingly carrying the vestiges of their twin unsettling. One person I told about Fairchild instantly burst into tears. I’m less perturbed by it, likely because I’ve known I have a twin for decades. My own twin Julia survived our joint gestation (rather than me, what, eating her? Gross!) If I find out I’ve got another one in there somewhere, it won’t be my first rodeo.
  • What mainly interests me about human chimeras are the philosophical, not the personal, implications. What should we say, metaphysically, about Fairchild and her ilk?
  • Journalists reporting on Fairchild’s case didn’t quite know what to make of it. ‘She’s her own twin,’ proclaimed ABC News. ‘The many yous in you,’ intoned Ed Yong in National Geographic. ‘A Guide to Becoming Two People at Once,’ wrote Maia Mulko in Interesting Engineering in 2021. Such headlines are clickbait because they challenge a standard presumption of modern Western culture, so basic as to go unstated. Westerners generally think that each person is physically discrete, cleanly distinguished from all other people by their location, solo, within an unbroken continuum of skin.
  • Actually, though, human chimeras leave this assumption intact. Fairchild isn’t two people in one, because the mere presence of human DNA doesn’t indicate the presence of a person. Any stray hair you leave on your pillow overnight is biologically human, but that doesn’t mean that, every time you shed hair, you’re multiplying the number of people in the room. Personhood requires something more than a particular type of genetic material: it arises only with the larger-scale structural organisation of that material, which permits capacities like consciousness, thought and moral agency. At the macro level that matters for personhood, Fairchild is a singleton.
  • Still, the one-person-per-body assumption is worth questioning, and there’s a much more convincing example of its violation at hand. Conjoined twins, unlike chimeras, contain only one genetic cell line. But (when two heads are present) they overwhelmingly consider themselves to be two unique, distinct beings, despite sharing a body. It’s typical for them to speak of themselves as individuals, and to develop a personality and tastes different from each other’s. Their families and friends, too, think of them as two people who just happen to be physically attached.
  • The case of conjoined twins reveals the falsity of the assumption that bodies correlate one-to-one with people. Recognising this has large implications. If one body can contain two people, why couldn’t one person be spread across two bodies? Why couldn’t that person be me, or you?
Author's Conclusion
  • We don’t really need chimeras or twins to reveal the deeply relational nature of our species. The experience of merged personhood is common in many other types of close couples. New parents speak of the startling sensation of having a part of themselves exist outside their body: their infant, a piece of their actual heart, sleeping quietly in the next room. Frank Sinatra croons to his lover: ‘I’ve got you under my skin … so deep in my heart that you’re really a part of me.’ Michel de Montaigne wrote, after his best friend’s death, that he’d become ‘so formed and accustomed to being a second self everywhere that only half of me seems to be alive now.’ We can take all of this figuratively – as a poetic expression of strong feeling – or we can treat it as a literal and defensible metaphysical stance. After all, once we twins have embraced breaking the body barrier, what’s stopping singletons from doing it, too? What makes you so sure that all of you is contained within that single envelope of skin?
Amazon Book Description for How to Be Multiple: The Philosophy of Twins
  • Philosopher Helena de Bres uses the curious experience of being a twin as a lens for reconsidering our place in the world, with illustrations by her identical twin Julia.
  • Wait, are you you or the other one? Which is the evil twin? Have you ever switched partners? Can you read each other's mind? Twins get asked the weirdest questions by strangers, loved ones, even themselves. For Helena de Bres, a twin and philosophy professor, these questions are closely tied to some of philosophy's most unnerving unknowns. What makes someone themself rather than someone else? Can one person be housed in two bodies? What does perfect love look like? Can we really act freely? At what point does wonder morph into objectification?
  • Accompanied by her twin Julia's drawings, Helena uses twinhood to rethink the limits of personhood, consciousness, love, freedom, and justice. With her inimitably candid, wry voice, she explores the long tradition of twin representations in art, myth, and popular culture; twins' peculiar social standing; and what it's really like to be one of two. With insight, hope, and humor, she argues that our reactions to twins reveal our broader desires and fears about selfhood, fate, and human connection, and that reflecting on twinhood can help each of us-twins and singletons alike-recognize our own multiplicity, and approach life with greater curiosity, imagination, and courage.
Author Narrative
  • Helena de Bres is professor of philosophy at Wellesley College in Massachusetts. She is the author of Artful Truths: The Philosophy of Memoir (2021) and How to Be Multiple: The Philosophy of Twins (2023)
Notes
  • This is quite a rich essay, despite it being a plug for the author's latest book (written with her twin sister, Julia, but who seems only to have supplied the illustrations).
  • I won't be buying the book any time soon, it being a tad expensive new in hardback, so I've appended its Abstract from Amazon to this paper's Abstract.
  • The paper covers much else apart from Twinning and Chimeras, including what a Person is, and (in passing) whether there can be Degrees of Personhood.
  • I'll return to the paper shortly, I hope.

Paper Comment



"de Sutter (Adrien) - The stagnation of physics"

Source: Aeon, 31 March 2025


Author's Introduction
  • Browse a shelf of popular science books in physics and you’ll often find a similar theme. Whether offering insights into "Greene (Brian) - The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos" (2011), "Carroll (Sean M.) - Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime" (2019) or Our Mathematical Universe (Max Tegmark, 2014; see "Tegmark (Max) - The Mathematical Universe"), these books hint at an underlying, secret world waiting to be unravelled by physicists – a domain beyond our sensory perception that remains their special purview.
  • Over its history, physics has delivered elegant and accurate descriptions of the physical Universe. Today, however, the reality physicists work to uncover appears increasingly removed from the one they inhabit. Despite its experimental successes, physics has repeatedly failed to live up to the expectation of delivering a deeper, ‘final’ physics – a reality to unify all others. As such, physicists appear forced to entertain increasingly speculative propositions.
  • Yet, with no obvious avenues to verify such speculations, physicists are left with little option but to repeat similar approaches and experiments – only bigger and at greater cost – in the hope that something new may be found. Seemingly beset with a sense of anxiety that nothing new will be found or that future experiments will reveal only further ignorance, the field of fundamental physics is incentivised to pursue ever more fanciful ideas.
  • I argue that the pursuit of unity and dominance of a more fundamental reality presents itself not as physicists’ unique prerogative, but instead as an impossible burden placed on their shoulders by the modern world. I suggest that we should embrace a more pluralist and nuanced understanding of what comprises the cosmos, an understanding that not only accepts but invites criticism from other practices, disciplines and realities into its current predicament.
  • My time spent in physics, both as an aspiring theoretical physicist and later as a sociologist studying the practices of fundamental physics, has left me to wonder to what extent narratives of unity and finality continue to serve the communities that proliferate them. And, further, to what extent does achieving greater fidelity towards what comprises existence, to what reality is, and to the constituents of the cosmos require that physics give up the mantle as reality’s primary purveyor?

Author's Conclusion
  • I suggest that we must take seriously the possibility of other worlds. By this I do not mean the familiar speculations of the multiverse, or the Many-Worlds hypothesis, introduced by physics to come to terms with the Universe’s ongoing indeterminacy. Rather, it is to take seriously those worlds that physics and modern realism have otherwise dismissed. That is, worlds in which, for instance, the Earth beings of Indigenous peoples are real, the ghosts of Japanese family members are cared for, and where God talks back to evangelical believers who speak with him.
  • This is also to suggest a return to openly questioning the claims of physics. Where once large public debates took place between physicists and philosophers on the nature of time and the extent to which physics can be said to speak for reality, today, public debates between physics and philosophy are seldom serious. Instead, when similar questions are debated in philosophy journals, they are largely settled in deference to the claims of physics.
  • Now, there is no doubting the immense achievements of fundamental physics. Further, its speculative wagers may yet be rewarded with time. However, such wagers have left the field unclear where to look for alternatives should it fall short of its own high expectations, and its practitioners, who are believed to be closest to reality, the furthest away from it.
  • In response, we need a humbler physics that is no less radical in its speculative ambitions but invites contradictory visions into its propositions, not only visions that are subservient to physics’ claims. In this, we would have a more adventurous physics, one that accepts and invites criticism from other practices and disciplines into its current predicament.
  • More pragmatically, it would allow for the possibility of engaging physics with other practices. Biophysics and climate physics are good examples of this. Unlike cosmology and fundamental physics, they are not premised on the supremacy of one field over the other but understand their limitations, and respective and restricted areas of application.
  • But we might go even more radical than this, departing from a purely physicalist approach to embrace modes dismissed as mere fantasy, story or ‘immaterial’, to re-engage physics with alternative debates, visions and configurations of reality. This may require that we abandon physics’ privileged place as science’s standard bearer, like we did the philosophers and high priests of old, as the practice with unique access to a deeper reality more fundamental than others.
  • It may even require that we abandon doing physics altogether, in the attainment of an expanded reality that not only accepts but encourages the possibility of difference and more. Or, as the speculative fiction writer Ursula Le Guin once put it, what we require are ‘the realists of a larger reality’.
Author Narrative
  • Adrien De Sutter completed his PhD in sociology at Goldsmiths, University of London, and is a visiting fellow at the Max Planck Institute for the History of Science in Germany. An interdisciplinary researcher specialising in science and technology studies and the history and philosophy of science, he focuses on the philosophical, sociological and political implications of fundamental physics research.
Notes
  • This is a silly paper, but incited by some silly claims put about by some theoretical physicists in some popular science books.
  • Our world is fundamentally physical, so fundamental physics is basic to its understanding and constrains whatever else we might like to believe about it. Other disciplines (starting with chemistry and biology) build on physical foundations.
  • However, it has become increasingly difficult - and expensive - to make further progress in the physics of the very large and very small. This has little to do with reality in itself but with our limitations. It may be that it's not worth pursuing these ultimate questions because the energies, distances and timescales are beyond us. But that doesn't mean that we can get just as satisfactory answers by retreating to our armchairs and scratching our heads. Nor does it mean that any answer will do. Some things we will never know, not because of our cognitive limitations but because we can't get our hands on the data.
  • The author's conclusion is just so much sociological blather.
  • There are no Aeon Comments.
  • This relates to my Note on Naturalism.

Paper Comment



"Emery (Nina) - Desperate remedies"

Source: Aeon, 05 September 2024


Extracts
  • What these examples, taken together, show is that leaving an observed pattern in the data without an explanation is something that physicists are unwilling to do, and this is true even if the only way to explain the pattern is to introduce entities that are novel, weird or poorly understood. This core commitment of scientific methodology doesn’t have a standard name, much less an evocative one like Occam’s razor, but in other work I’ve called this the pattern explanation principle or PEP for short. PEP, on my view, is perhaps the least contentious of the extra-empirical principles that play a role in scientific reasoning. And, indeed, PEP is the way in which odd entities often end up getting introduced into our overall picture of what the world is like. When scientists find themselves in this kind of position, they are more committed to there being something that explains the pattern in question than they are to any particular account of what that entity is or to any previous scruples that they might have had about entities of that type.
  • Insofar as one is a methodological naturalist, then, one should take PEP seriously when doing philosophy as well as when doing physics. Introducing odd, unverifiable entities in philosophy is in good standing as long as we need those entities in order to explain some sort of pattern in the data that we observe about the world.
  • Here’s an example of a philosophical debate where PEP has the potential to play an important role. Contemporary metaphysical views about laws of nature tend to break down into two broad camps.
    1. On the one hand, there are descriptive accounts of laws, which understand laws as just descriptions of patterns in the data.... Views in this camp are often labelled ‘Humean’ after the 18th-century Scottish philosopher David Hume, who argued against metaphysical speculation in general, and against necessary connections – including those underwritten by natural laws – in particular. The current most popular descriptive account of laws is David Lewis’s best system analysis, according to which laws are the set of propositions describing the world that best balance simplicity and descriptive strength.
    2. On the other hand, there are governing accounts of laws. On governing accounts, laws don’t just describe the way things are, they make things that way.... he trick for these accounts is to say exactly what kinds of things laws are, and how exactly they do their explanatory work. Recent work on these questions has highlighted an important distinction between views on which laws produce later states of affairs from earlier ones, and views on which laws are constraints on spacetime as a whole. But either way, proponents of these views often end up saying that laws are primitive, sui generis entities, which cannot be given any further analysis or understood in terms of more familiar things to which we are already committed.
  • While our best scientific theories certainly refer to laws, they don’t say much about what kinds of entities laws are and, in particular, they don’t take any stand at all on whether laws are merely descriptive, or if they play a full-blown governing role. But the methodological naturalist isn’t just interested in whether a particular metaphysical view is compatible with the content of our best scientific theories. They also want to know how that metaphysical view stands with respect to the methodology that produces our best science.
  • And, if I’m correct that the pattern explanation principle is one of the core commitments of scientific methodology, then this looks like a pretty clear win for the governing account of laws. After all, we have a number of patterns in our data, for instance the pattern that net force is always equal to mass times acceleration. According to PEP, this pattern requires an explanation. On the governing account of laws, it has one. Maybe we don’t know what laws are. Maybe they are weird or even wholly sui generis entities. But so, too, were the neutrino and the electromagnetic field and dark energy. Physicists still accepted them, because they were needed in order to play an important explanatory role. Philosophers, at least insofar as they are methodological naturalists, should take the same attitude toward governing laws.
  • Of course, there are things that the defender of a descriptive account of laws can say in response. They might argue that the kinds of comprehensive patterns that laws are supposed to describe are not the kinds of patterns to which PEP applies.... Or fans of the descriptive account might argue that there is something about the mysterious nature of governing laws that blocks them from playing the kind of explanatory role identified by PEP. Or they might try to claim that there is a sense in which merely descriptive laws can play the relevant explanatory role after all.
  • It’s hard to say whether either of these moves are very promising, but I’m open to seeing how they play out. What I don’t think we should be willing to do is to give up methodological naturalism. After all, what are the alternatives? If you don’t think the methodology of science is a good guide to philosophical theorising, then why do you care about conflicts with our best scientific theories? You might as well let philosophical speculation float free from any grounding in science at all. And if science isn’t relevant to metaphysical questions, then how can philosophers claim to be putting forward theories about what the world is like? Perhaps we can reconceptualise metaphysical theorising as something more like an artistic practice, which expands our imaginative capabilities, or as a purely hypothetical exercise in which merely possible concepts are explored and developed. But both of these moves involve a radical reinterpretation of what many philosophers working on metaphysical questions, both historical and contemporary, think they are doing when they put forward their views.
  • Of course, as the nuance of the above discussion shows, there’s no way around the fact that methodological naturalism is going to be difficult. Teasing apart the various aspects of scientific methodology, especially the extra-empirical aspects, is something that philosophers of science have long been trying to do without reaching any clear consensus. Indeed, these days much of the emphasis in philosophy of science is on the way in which scientific methodology varies from context to context, and some philosophers have given up on the idea of science having any unified methodology at all. The methodological naturalist will need to wrestle with difficult questions about how these contexts interact, among many others. But no one ever promised that philosophy – and metaphysical theorising in particular – would be easy. And, in any case, as Pauli might say: nothing ventured, nothing gained.
Author Narrative
  • Nina Emery is professor of philosophy at Mount Holyoke College in South Hadley, Massachusetts. She is the author of Naturalism Beyond the Limits of Science: How Scientific Methodology Can and Should Shape Philosophical Theorizing (2023).
Notes
  • This is doubtless a plug for the author's book, which is far too expensive.
  • However, I'm attracted to Naturalism both in the scientific and ethical sphere.
  • I wasn't quite clear on the precise application of the author's ideas to Metaphilosophy.
  • I need to read it again. There are no Aeon Comments.

Paper Comment
  • Sub-Title: "In order to make headway on knotty metaphysical problems, philosophers should look to the methods used by scientists"
  • For the full text see Aeon: Emery - Desperate remedies.



"Evans (Gavin) - The myth of mirrored twins"

Source: Aeon, 27 June 2023


Author's Introduction
  • Thirteen days before the start of the Second World War, a 35-year-old unmarried immigrant woman gave birth slightly prematurely to identical twins at the Memorial Hospital in Piqua, Ohio and immediately put them up for adoption. The boys spent their first month together in a children’s home before Ernest and Sarah Springer adopted one – and would have adopted both had they not been told, incorrectly, that the other twin had died. Two weeks later, Jess and Lucille Lewis adopted the other baby and, when they signed the papers at the local courthouse, calling their boy James, the clerk remarked: ‘That’s what [the Springers] named their son.’ Until then they hadn’t known he was a twin.
  • The boys grew up 40 miles apart in middle-class Ohioan families. Although James Lewis was six when he learnt he’d been adopted, it was only in his late 30s that he began searching for his birth family at the Ohio courthouse. In 1979, the adoption agency wrote to James Springer, who was astonished by the news, because as a teenager he’d been told his twin had died at birth. He phoned Lewis and four days later they met – a nervous handshake and then beaming smiles. Reports on their case prompted a Minneapolis-based psychologist, Thomas Bouchard, to contact them, and a series of interviews and tests began. The Jim Twins, as they were known, became Bouchard’s star turn.
  • Both Jims, it transpired, had worked as deputy sheriffs, and had done stints at McDonald’s and at petrol stations; they’d both taken holidays at Pass-a-Grille beach in Florida, driving there in their light-blue Chevrolets. Each had dogs called Toy and brothers called Larry, and they’d married and divorced women called Linda, then married Bettys. They’d called their first sons James Alan/Allan. Both were good at maths and bad at spelling, loved carpentry, chewed their nails, chain-smoked Salem and drank Miller Lite beer. Both had haemorrhoids, started experiencing migraines at 18, gained 10 lb in their early 30s, and had similar heart problems and sleep patterns.

Author's Conclusion
  • Even the Jim Twins, raised by similar families, in the same part of the same state, have their own stories to tell because of their unique upbringings. Focus on these, and a different picture emerges. When they first met, they had distinct hairstyles and facial hair (one a bit Elvis, the other more Beatles) and different kinds of jobs. Their children were of different ages and most had different names. Springer stayed with his second wife, Betty, while Lewis married a third time. More significantly, they displayed marked character differences, noticeable to anyone who met them: Springer, the more loquacious of the brothers, called himself ‘more easy-going’ and said Lewis was ‘more uptight’. Lewis was reticent in public and, in private, he preferred to write down his thoughts.
  • Much of the magic evaporates when we lift the lid on the sensational tales of parallel lives. What emerges in place of this seductive mirror myth of the hidden double are more mundane tales of everyday difference, revealing the unique selfhood that is part of the inheritance of all people – including those with genetic doppelgängers.
Author Narrative
  • Gavin Evans is a writer whose work has been published in The Guardian, Die Zeit, The Conversation and The New Internationalist, among others. His books include Mapreaders and Multitaskers: Men, Women, Nature, Nurture (2016), The Story of Colour (2017) and "Evans (Gavin) - Skin Deep: Dispelling the Science of Race" (2019). He lives in London.
Notes
  • This is useful, but the author is on a moral crusade and is arguing a case rather than providing a balanced account. He's trying to redress the balance, and so is as biased on the 'nurture' side as are the 'naturists' on theirs.
  • I'll add comments later. This paper has incited me to buy one of his books as it's going really cheap second hand!
  • See "Evans (Gavin) - Skin Deep: Dispelling the Science of Race".
  • His other books on the topic of 'nature versus nurture' may also be worth reading - however annoying they may be - but will have to wait.

Paper Comment



"Evans (Gavin) - There was no Jesus"

Source: Aeon, 15 February 2024


Author's Introduction
  • Most New Testament scholars agree that some 2,000 years ago a peripatetic Jewish preacher from Galilee was executed by the Romans, after a year or more of telling his followers about this world and the world to come. Most scholars – though not all.
  • But let’s stick with the mainstream for now: the Bible historians who harbour no doubt that the sandals of Yeshua ben Yosef really did leave imprints between Nazareth and Jerusalem early in the common era. They divide loosely into three groups, the largest of which includes Christian theologians who conflate the Jesus of faith with the historical figure, which usually means they accept the virgin birth, the miracles and the resurrection; although a few, such as Simon Gathercole, a professor at the University of Cambridge and a conservative evangelical, grapple seriously with the historical evidence.
  • Next are the liberal Christians who separate faith from history, and are prepared to go wherever the evidence leads, even if it contradicts traditional belief. Their most vocal representative is John Barton, an Anglican clergyman and Oxford scholar, who accepts that most Bible books were written by multiple authors, often over centuries, and that they diverge from history.
  • A third group, with views not far from Barton’s, are secular scholars who dismiss the miracle-rich parts of the New Testament while accepting that Jesus was, nonetheless, a figure rooted in history: the gospels, they contend, offer evidence of the main thrusts of his preaching life. A number of this group, including their most prolific member, Bart Ehrman, a Biblical historian at the University of North Carolina, are atheists who emerged from evangelical Christianity. In the spirit of full declaration, I should add that my own vantage point is similar to Ehrman’s: I was raised in an evangelical Christian family, the son of a ‘born-again’, tongues-talking, Jewish-born, Anglican bishop; but, from the age of 17, I came to doubt all that I once believed. Though I remained fascinated by the Abrahamic religions, my interest in them was not enough to prevent my drifting, via agnosticism, into atheism.
  • There is also a smaller, fourth group who threaten the largely peaceable disagreements between atheists, deists and more orthodox Christians by insisting that evidence for a historical Jesus is so flimsy as to cast doubt on his earthly existence altogether. This group – which includes its share of lapsed Christians – suggests that Jesus may have been a mythological figure who, like Romulus, of Roman legend, was later historicised.
  • But what is the evidence for Jesus’ existence? And how robust is it by the standards historians might deploy – which is to say: how much of the gospel story can be relied upon as truth?
Author's Conclusion
  • Paul frequently refers to the crucifixion and says Jesus was ‘born of a woman’ and ‘made from the sperm of David, according to the flesh’. He also refers to James, ‘the brother of Christ’. Using these examples, Ehrman says there’s ‘good evidence that Paul understood Jesus to be a historical figure’. Which was certainly the view of the writer/s of Mark, a gospel begun less than two decades after Paul’s letters were written.
  • If we accept this conclusion, but also accept that the gospels are unreliable biographies, then what we are left with is a dimly discernible historical husk. If Jesus did live at the time generally accepted (from 7-3 BCE to 26-30 CE) rather than a century earlier as some of the earliest Christians seemed to believe, then we might assume that he started life in Galilee, attracted a following as a preacher and was executed. Everything else is invention or uncertain. In other words, if Jesus did exist, we know next to nothing about him.
  • One way of looking at it is to think of a pearl, which starts as a grain of sand around which calcium carbonate layers form as an immune response to the irritant until the pearl no longer resembles the speck that started it. Many legends have developed in this way, from the tale of the blind bard Homer onwards.
  • The outlaw and thief Robert Hod was fined for failing to appear in court in York in 1225 and a year later he reappeared in the court record, still at large. This could be the grain of sand that begat Robin Hood, whom many people assumed to have been a historical figure whose legend grew over the centuries. Robin started as a forest yeoman but morphed into a nobleman. He was later inserted into 12th-century history with King Richard the Lionheart and Prince John (earlier versions had Edward I), along with his ever-expanding band of outlaws. By the 16th century, he and his Merry Men had mutated from lovable rascals to rebels with a cause who ‘tooke from rich to give the poor’.
  • The Jesus story likewise developed fresh layers over time. At the start of the common era, there may well have been several iconoclastic Jewish preachers, and one of them got up the noses of the Romans, who killed him. Soon his legend grew. New attributes and views were ascribed to him until, eventually, he became the heroic figure of the Messiah and son of God with his band of 12 not-so-merry men. The original grain of sand is less significant than most assume. The interesting bit is how it grew.
Author Narrative
  • Gavin Evans is a writer whose work has been published in The Guardian, Die Zeit, The Conversation and The New Internationalist, among others. His books include Mapreaders and Multitaskers: Men, Women, Nature, Nurture (2016), The Story of Colour (2017) and "Evans (Gavin) - Skin Deep: Dispelling the Science of Race" (2019). He lives in London.
Notes
  • I wrote the diatribe below as a 'Comment' for Aeon, but it was (way too) long to be published, and I couldn't be bothered to whittle it down.
  • The paper requires a closer reading - as do the Comments - some of which are informed, but most by ignoramuses already convinced of the evils of religion.
  • So ...
  • There’s so much that could be said in response to this piece.
  • Firstly, given how much that was written in antiquity has been lost – including half of Tacitus’s Histories / Annals, for instance – it’s hardly surprising that the official accounts – if any – of the actions and execution of an obscure Palestinian preacher should fail to survive. All we really have is the New Testament (NT) itself. But then all we have for the detail of Caesar’s Gallic Wars is Caesar’s doubtless slightly self-serving account.
  • Secondly, we have to ask how the Christian communities could have arisen as quickly as they did – given they were ‘Christ centered’ – in the complete absence of its supposed founder. Admittedly, explaining precisely how the NT documents came into existence is difficult, but explaining this in the complete absence of their star character is even more difficult.
  • No doubt there are many internal inconsistencies within the NT and many supposed happenings are difficult to believe. But I would say that – compared to the unhinged Apocryphal Gospels and Gnostic texts – the documents that make up the NT are wholesome and not utterly fantastical so shouldn’t be written off as completely ahistorical whatever embellishments they may be assumed to contain.
  • Then there’s the question of when the NT documents were written. The Acts of the Apostles – carrying on from Luke’s Gospel – is an account of early missionary activity, though often into territory with existing Christian communities, which stops dead with Paul stuck in prison in Rome, so was presumably written then rather than later when it might have told us about Paul’s fate, the persecution of Christians after Nero’s fire of Rome, the destruction of the Jewish state somewhat later, and all that. Bishop John Robinson (the ‘Honest to God’ man) wrote a book arguing that all the NT documents were written before AD 70. Maybe this is a stretch, but they weren’t written when the Christianity was the official Roman religion.
  • To contextualise these writings: they are full of squabbles between Jews with slightly different beliefs, starting with Jesus and the Pharisees. The prophesies of the destruction of Jerusalem are vague. Wouldn’t a forgery be more precise? Jesus is said to have stated that some standing before him would not taste death until they saw him return in glory. Last I heard, he hasn’t, and this fact has been rather awkwardly explained (it was recognised as a problem in the last writings of the NT). Why invent such a promise, especially if you’re inventing the promiser?
  • The messianic ‘prophesies’ hardly leap out of the Old Testament text as requiring their Christian interpretation but seem to represent the Jewish exegetical practices of the day. That’s why the early Christians used them – to try to convince their fellow Jews, not the Romans. If you wanted to invent a story about a ‘cosmic Christ’, why invent one with that background?
  • This essay is in the context of the very existence of Jesus. Now, I agree that there seem to be three ‘Jesuses’ in the NT – the eschatological prophet of the Synoptic Gospels who delivers pithy sayings, the Word of God in John who delivers long speeches (surely reconstructed and embellished in the manner of the times, whatever your views on their basic historicity) and the saviour-redeemer of Paul, who had no great interest in the earthly life of Jesus, only on the significance of his death (and resurrection) which he takes to be what the substitutionary atonement rituals of the Jewish temple cultus were intended to point towards.
  • But there’s a lot of detail in the Gospel accounts that – despite difficulties and contradictions – hangs together and gives an account of an individual that impressed himself – positively or negatively – on the minds of contemporaries. It’s all too complicated and interwoven for pure invention. If they had been written hundreds of years after the event and betrayed the interests of the then Church one might be suspicious, but they weren’t and they don’t.
  • Then there are episodes that are awkward but have an air of verisimilitude and which have to be given a Christian gloss. The Jewish High Priest says ‘it is good for one man to die for the people’ (to avoid Roman reprisals in case of a disturbance); there’s the reference to Jesus as ‘that deceiver’ (hence the placing of the guards at the tomb) and the story that the disciples stole the body; that some disciples ‘doubted’ that Jesus had risen from the dead; Jesus’ cry from the cross – ‘my God, my God, why have you forsaken me’. If we were arguing about the truth of the gospel, we could discuss whether the High Priest was prophesying, whether Jesus did or did not rise from the dead, whether he died in despair or was quoting a Psalm, and so on. But we aren’t. Another commentator has pointed out the Nazareth versus Bethlehem difficulty. Why dig yourself into that hole? There’s Peter’s denial. We are only considering whether Jesus existed. Why stuff your invention with difficulties?
  • An inference to the best explanation would be that Jesus did exist, even though much of what he said and did is hard to recover, and the significance of his life and death is a matter of faith.

Paper Comment
  • Sub-Title: "How could a cult leader draw crowds, inspire devotion and die by crucifixion, yet leave no mark in contemporary records?"
  • For the full text see Aeon: Evans - There was no Jesus.



"Fine (Cordelia) & Hooven (Carole) - Does testosterone make men?"

Source: Aeon, 08 April 2025


Editor's Astract
  • Does biology determine destiny, or is society the dominant cause of masculine and feminine traits? In this spirited exchange, the psychologist Cordelia Fine and the evolutionary biologist Carole Hooven unpack the complex relationship between testosterone and human behaviour.
  • Fine emphasises variability, flexibility and context – seeing gender as shaped by social forces as much as it is by hormones. By contrast, Hooven stresses consistent patterns; while acknowledging the influence of culture and the differences between individuals, she maintains that biology explains why certain sex-linked behaviours persist across cultures.
  • At stake in this debate is how we understand ourselves and organise our communities. Can we achieve equality by changing cultural norms, or must we accommodate biological realities that evolution has inscribed in our brains? As you read, notice how these scholars interpret the same evidence through fundamentally different frameworks – revealing why discussions about sex differences remain both scientifically complex and politically charged.

Cordelia Fine’s Conclusion
  • Not all species have evolved ‘traditional’ sex roles; environment and culture shape human behaviour; and not all men behave in stereotypically masculine ways. The fact that Carole and I agree on these things isn’t surprising. Understanding our disagreements requires digging deeper.
  • In this dialogue, Carole has denied that she thinks that testosterone ‘makes men what they are’. Yet in her recent TED talk, she stated that prenatal testosterone ‘made my son who he is today’.
  • Carole says that no ‘serious biologist’ thinks that maleness determines the predispositions of the male sex role. Yet in an opinion piece for The Boston Globe this February she wrote that: ‘Despite the vast natural variation among individuals, one constant stands out: sperm producers compete – often fiercely – for access to egg producers, while egg producers invest more in parenting …’ In this dialogue, what she calls ‘“sex-reversed” species’ are exceptions that prove the rule.
  • Carole says that testosterone ‘drives higher rates of aggressive competition in male mammals’ – including humans. As she puts it, sex differences ‘in traits like aggression (or anything else), originate in our differences in inherited biology’. Yet, according to her, this causally potent and ubiquitous hormonal force leaves only some men with a predisposition to aggression (setting aside whether or not it is expressed).
  • It seems clear that T-Rex essentialism is no straw man, even if he is sometimes shy about showing himself.
  • Sex does indeed help us understand why, across species, some sex differences are more common than others. But the diversity of sex roles is an equally salient fact. This diversity reflects the many innovations evolution has found to help species reproduce – innovations that, in humans, include our capacities for cooperation, social learning and cultural transmission across generations.
  • Testosterone and other hormones do help us adapt to conditions and contexts. But human exceptionalism goes well beyond ‘cultural norms [that] can modify how behavioural predispositions are expressed’, or self-reflectiveness about ‘our biological impulses’. Any scientific explanation of sex differences in behaviour must take seriously the idea that, unlike any other mammal, we have evolved to socially construct gender roles, as my most recent book [Patriarchy Inc.: What We Get Wrong About Gender Equality and Why Men Still Win at Work (2025)] explores.
  • Consider: why is male violence against women rare among Aka Pygmies? Why is same-sex adolescent fighting more than eight times more common in boys in some lower-income countries (such as Tunisia and Suriname) but equally common among boys and girls in others (such as Tonga and Ghana)? Why do only 4 per cent of Swedish males follow a developmental trajectory leading to violent crime, while the remaining 96 per cent do not?
  • These are the kinds of questions we should be asking if we are serious about addressing male violence. T-Rex will not help us find the answers.

Carole Hooven’s Conclusion
  • What explains differences in male and female behaviour? The answer involves a complex mix of environmental and biological factors, including gender socialisation, genes and hormones. On this, Cordelia and I largely agree, and I’ve been grateful for the opportunity to discuss it all with her.
  • Cordelia opened this dialogue with a case study about men on an oil rig, illustrating the kind of evidence that’s made her ‘sceptical that [testosterone] is the root cause of the many gendered differences in behaviour that we see in humans.’ Such evidence, she claims, reveals that testosterone is ‘just … one of many factors that feeds into an animal’s decision-making,’ rather than being ‘the essence … of masculinity’. She says that I ‘take quite a different line’.
  • But I don’t take a different line. Of course testosterone isn’t the male essence. And of course the hormone isn’t the only factor guiding behavioural decisions in humans or any other animals. Countless interacting factors – like health, marital status, local laws – affect our decisions, like whether to throw a punch in response to an insult. Yet across the diverse influences – physical, social and psychological – men are more likely to behave violently. Culture certainly plays a role here; but there is no denying that humans are far from the only animals in which males are more aggressive than females, and that testosterone is, at a minimum, strongly associated with those patterns. All of this, especially couched within the framework of sexual selection, implicates testosterone as a key player in human sex differences. Cordelia has offered no strong hypothesis that explains the fact that, across time and place, men are more likely than women to decide to throw that punch.
  • What role does testosterone play, then? It acts on the male body and brain during critical developmental periods – in utero, around birth, and during puberty – with effects on behaviour that often show up down the road. Boys tend to play more roughly than girls, even though young children’s testosterone levels don’t differ very much. Scientists believe that higher testosterone in fetal males drives this preference, as it does in other mammals. Violent crime, which is overwhelmingly committed by men, doesn’t peak when testosterone peaks in the late teens; instead, it peaks in men’s 20s, the phase of life when size, strength and competition for mates are at their highest. The existence of the paternal California mouse, as well as emotionally sensitive, high-testosterone human roughnecks, is entirely compatible with the hormone driving sex differences in aggression. An evolutionary perspective can help to make sense of these patterns.
  • If, as I believe, testosterone drives some important sex differences, this shouldn’t deter us from pursuing a safer, more just society. The solution lies in harnessing the power of culture, rather than altering our genes and hormones. An openness to the strongest evidence, and to learning all we can about how genes and environment interact to produce behaviour, can only help.
  • Killing off T-Rex entirely serves only to shoot ourselves in the foot. Behavioural endocrinology and evolutionary theory provide powerful, time-tested frameworks for understanding sex differences in humans and other animals. Keep a sane T-Rex alive!
Author Narrative
  • Cordelia Fine is a psychologist, writer and professor in the history and philosophy of science programme at the University of Melbourne. She is the author of Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference (2010), Testosterone Rex: Myths of Sex, Science, and Society (2017) and Patriarchy Inc.: What We Get Wrong About Gender Equality – and Why Men Still Win at Work (2025). She lives in Melbourne, Australia.
  • Carole Hooven is a human evolutionary biologist with a focus on behavioural endocrinology. She is a nonresident senior fellow at the American Enterprise Institute, an associate in Harvard’s Department of Psychology, and the author of T: The Story of Testosterone, the Hormone That Dominates and Divides Us (2021). She lives in Cambridge, Massachusetts.
Notes
  • Our authors seem to be talking past one another.
  • Both seem to agree that there's part nature and part nurture.
  • Also, all agree that there's a spectrum, with overlap between males and females.
  • Culture plays a much more important role in modern times because of the rise of the machines. The extra strength and aggressiveness (on average) of males is no longer required, even in war.
  • Even so, there's still the tendency for 'the strong to do what they can and the weak to suffer what they must' (The Melian Dialogue (Thucydides 6)). This breaks out without the rule of law - which (of course) can only be achieved by force where not by consensus. Unless we're going to arm the police and start shooting people routinely, force remains a male preserve. No doubt everyone feels rage and frustration from time to time. The fact that most violence is male isn't just a cultural phenomenon, it's because it comes more naturally to males, and is more likely to be successful.
  • The focus should be on large mammals - particularly great apes - rather than small rodents - the 'exceptions that prove the rule'.
  • That said, the females of large predators can be very fierce, both in hunting and defence of their cubs, presumably without the benefit of raging testosterone. Female dogs can be rather nasty too. So, the situation is complex.
  • There are 24 Aeon Comments, including replies by both authors. I need to study them.
  • This is related to my Notes on Evolution, Animals, Homo Sapiens, Society and Narrative Identity.

Paper Comment



"Forbes (Graeme A.) - How to think about time"

Source: Aeon, 27 March 2024


Key points – How to think about time
  1. Change and variation. Whether or not change is variation across time or something more, you experience change – indeed, you can’t live without changing.
  2. Past and future. You’re stuck with the past. What happened happened. But because the future is open, the meaning and significance of the past is itself subject to change.
  3. The nature of time and your experience of it. When thinking about time, we’re trying to say something general about it, freed from your subjective experience of time, shaped by your particular moods, hopes, fears, interests and so on.
  4. The God’s eye view. In the Middle Ages, debates about time were largely concerned with the nature of God. Even if today we are more interested in an objective, scientific understanding of the world, the Middle Ages remain a useful way of framing views about time.
  5. Science versus subjective experience. The classic, modern formulation of the debate about time pits Einstein, representing the scientific approach to time, against Bergson, who argued in defence of ‘common sense’.
  6. Relativity says there is no way of specifying a universal now. The theory – well tested and universally endorsed by scientists – supports the view that our scientific theories have no role for change distinct from variation across time.
  7. We need change to make sense of doing things. A lot of the time, when you do something, you are aiming to bring about a change. Much of the rest of the time, you are trying to stop something from changing.
  8. We do things to change the future, not the past. Any theory of time must account for two puzzling things. First that, time is fundamentally different from space and, second, that the future is fundamentally different from the past.
  9. Our emotions are directed at what was, what might be, and what wasn’t/won’t be. The past is unchangeable and the future is always ahead of you. But the meaning of both are constantly in flux and change as the story of your own life changes.

Reading List
  1. The episode ‘Does Time Exist?’ (BBC - The Infinite Monkey Cage - Does Time Exist?, 2020) of the BBC Radio 4 show The Infinite Monkey Cage discusses whether time passes, according to physics.
  2. The episode ‘Time for Philosophers’ (Philosophers Zone - Time for philosophers, 2008) of the Australian radio show The Philosopher’s Zone features the philosopher of time David Braddon-Mitchell.
  3. The novel "Vonnegut (Kurt) - Slaughterhouse 5, or The Children's Crusade - A Duty-dance with Death" (1969) is an anti-war novella with time-travel as a metaphor for PSTD, but with a brilliant discussion of how our emotions relate to our experiences of time in literature.
  4. The book What Makes Time Special? (2017) by Craig Callender won the Lakatos Award for an outstanding contribution to the philosophy of science; it argues for the view that our experience of time is ‘more or less rubbish’ as a guide to what time is really like.
  5. The book Out of Time (2022) by Samuel Baron, Kristie Miller and Jonathan Tallant provocatively argues that, for all we know, our best theories of physics don’t have any role for time at all.
  6. The book The Physicist and the Philosopher (2015) by Jimena Canales takes a historical approach to the philosophy of time through the debate between Einstein and Bergson.
  7. The book Philosophy of Physics: Space and Time (2012) by Tim Maudlin offers an introduction to the physics of spacetime, with a focus on the philosophical questions rather than the mathematical formulas.
  8. My book Philosophy of Time: The Basics (2024) is a textbook, forthcoming in May, that expands on these themes.
Author Narrative
  • Graeme A Forbes spent 10 years as an academic philosopher, and is now a freelance philosophical consultant. He has two books on philosophy of time in press: Philosophy of Time: The Basics (2024) and The Growing Block View (forthcoming, Bloomsbury). He lives in Canterbury, UK.
Notes
  • Interesting - with a focus more on psychology than physics.
  • Some of the books in the reading list are rather expensive.
  • Kurt Vonnegut gets a mention in "Bennett (Jonathan) - Time in Human Experience", so I have bought and intend to read his book, which is important in its own right.
  • I'll review this again in due course, along with some interesting Aeon Comments which I need to study.
  • The philosophies of Time1 – and of Change2 – are fairly central topics for my Thesis on PID.

Paper Comment
  • Sub-Title: "This philosopher’s introduction to the nature of time could radically alter how you see your past and imagine your future"
  • For the full text see Aeon: Forbes - How to think about time.



"Francione (Gary) - We must not own animals"

Source: Aeon, 01 March 2022


Author's Conclusion
  • And it’s not just meat that is a problem; there is no morally significant difference between meat on the one hand, and dairy and eggs on the other. All of these products involve suffering and death. Veganism is not an extreme position; what is extreme is claiming to believe that animals matter morally and then inflicting suffering on them for no reason other than culinary pleasure or convenience. It is also extreme to continue to ignore that, if we adopted a vegan diet, we could substantially reduce, if not end, world hunger, and take the single most significant step we can take as individuals to address the climate crisis.
  • Although I am sure that many readers will have various objections to what I have said here, I want to anticipate the one that I think will be the most prevalent: where do we draw the line? What about insects? What about plants? Are they all sentient? The answer is that lines in ethics are almost always hard to draw but, in this case, we can say with confidence that just about all of the animals we exploit as a matter of institutionalised practice – the mammals, the birds, the fish, the lobsters, the crabs, the octopi , etc – are sentient. We can start there and worry about refining the line later on.
  • As for plants, there is absolutely no evidence to date that they have any sort of minds that prefer, want or desire anything. Yes, they have certainly evolved biological processes that seek to assure their flourishing but no, they are not subjectively aware. If it turns out that plants are sentient, given that it takes many more units of plants to produce one unit of an animal product, we would still be obligated to choose to consume the plants directly if we did not conclude that we have an obligation to starve.
Author Narrative
  • Gary L Francione is Board of Governors Professor of Law at Rutgers University School of Law in New Jersey, US; visiting professor of philosophy at the University of Lincoln, UK; and honorary professor of philosophy at the University of East Anglia, UK. His most recent book is Why Veganism Matters: The Moral Value of Animals (2021).
Notes
  • This is a very important paper that deserves careful consideration, which I intend to supply shortly!
  • It is – like many papers on Aeon – something of a plug for the author’s latest book, but it is very clearly argued.
  • I agree with much of what the author has the say, but don’t believe the implications have been sufficiently thought through (maybe they are in the books).
  • The author disagrees with both Peter Singer (utilitarian) and Tom Regan (rights theorist) in arguing that while it is a moral advance to consider the suffering of other sentient beings in our dealings with them, the key issue is with us treating them as property with no interest in their own survival because they have ‘no plans for the future’.
  • He also argues that almost all our use of animals is ‘frivolous’ – in that it is unnecessary and arises merely for economic reasons because we consider animals as property.
  • I agree with the author that animals – while they may not have ‘plans’ like we do – do have an interest in their own survival and suffer loss when deprived of life. I also agree that they should not be treated like inanimate ‘property’.
  • However, I don’t agree that we should count them as Persons on that account. The author’s intuition is diametrically opposed to that of Lynne Rudder Baker, which is also mistaken, in my opinion.
  • There is an interesting and challenging discussion of humans with dementia and normal non-human mammals that asks the question why the former have absolute moral priority and the latter don’t.
  • Also, the author doesn’t really seem to take into account the fact that human and non-human animals are in competition for resources. Even were we to all to become vegans, we’d have to protect our food-sources from predation by rodents, insects, birds, ruminants, … and ourselves from large predators.
  • The author seems to think there is objectivity in ethics. While an ethical position can be clearly argued, if it’s not accepted things seem to degenerate into shouting.
  • Finally, I don’t like the current fad to link red-meat eating to climate-change. While the production of methane by ruminants does lead to climate change, this would be the case whether they were bred for human use or were wild. Should they not exist?
  • I note that the comments are worth reading as the author has made cogent replies to objections. I’ve saved them for future reference.
  • That’s all I’ve time for at the moment.

Paper Comment



"Francione (Gary) & Charlton (Anna E.) - The case against pets"

Source: Aeon, 08 September 2016


Authors' Conclusion
  • Some critics have claimed that our position concerns only the negative right not to be used as property, and does not address what positive rights animals might have. This observation is correct, but all domestication would end if we recognised this one right – the right not to be property. We would be obliged to care for those domesticated animals who presently exist, but we would bring no more into existence.
  • If we all embraced the personhood of non-humans, we would still need to think about the rights of non-domesticated animals who live among us and in undeveloped areas. But if we cared enough not to eat, wear or otherwise use domesticated non-humans, we would undoubtedly be able to determine what those positive rights should be. The most important thing is that we recognise the negative right of animals not to be used as property. That would commit us to the abolition of all institutionalised exploitation that results in the commodification and control of them by humans.
  • We love our dogs, but recognise that, if the world were more just and fair, there would be no pets at all, no fields full of sheep, and no barns full of pigs, cows and egg-laying hens. There would be no aquaria and no zoos.
  • If animals matter morally, we must recalibrate all aspects of our relationship with them. The issue we must confront is not whether our exploitation of them is ‘humane’ – with all of the concomitant tinkering with the practices of animal-use industries – but rather whether we can justify using them at all.
Author Narrative
  • Gary L Francione is Board of Governors Professor of Law at Rutgers University School of Law in New Jersey, US; visiting professor of philosophy at the University of Lincoln, UK; and honorary professor of philosophy at the University of East Anglia, UK. His most recent book is Why Veganism Matters: The Moral Value of Animals (2021).
  • Anna E Charlton is adjunct professor of law at Rutgers University and the co-founder of the Rutgers Animal Rights Law Clinic. Her latest book, together with Gary L Francione, is Animal Rights: The Abolitionist Approach (2015).
Notes
  • This is an important - if misguided - paper.
  • It dates from 2016 and has to be read in the light of the "Francione (Gary) - We must not own animals", published in 2022.
  • I'll write up my annotations in due course.

Paper Comment



"Frankish (Keith) - The mind isn’t locked in the brain but extends far beyond it"

Source: Aeon, 07 July 2016


Author's Introduction
  • Where is your mind? Where does your thinking occur? Where are your beliefs? René Descartes thought that the mind was an immaterial soul, housed in the pineal gland near the centre of the brain. Nowadays, by contrast, we tend to identify the mind with the brain. We know that mental processes depend on brain processes, and that different brain regions are responsible for different functions. However, we still agree with Descartes on one thing: we still think of the mind as (in a phrase coined by the philosopher of mind Andy Clark) brainbound, locked away in the head, communicating with the body and wider world but separate from them. And this might be quite wrong. I’m not suggesting that the mind is non-physical or doubting that the brain is central to it; but it could be that (as Clark and others argue) the mind extends beyond the brain.

Author's Conclusion
  • You might want to ask why we should think of minds extending into bodies and artefacts, rather than merely interacting with them. Does it make any difference? One answer is that, in the cases described, brain, body and world are not acting as separate interacting systems, but as a coupled system, tightly meshed by complex feedback relations, and that we need to look at the whole in order to understand how the process unfolds. (It’s worth noting, too, that the brain itself is a collection of coupled subsystems.)
  • Of course, we think of ourselves as being situated in our heads. But that is because of how our perceptual systems model the world and our location in it (reflecting the location of our eyes and ears), not because our brains happen to be in there. Imagine (if it isn’t too gruesome) having your living brain temporarily removed from your skull, nerve connections intact, so that you could hold it and look at it. You would still seem to be in your head, even though your brain was in your hand.
  • If the mind is not bounded by the brain or the skin, where does it stop? What is the boundary line? The short answer is that there isn’t one – not a stable one, at any rate. The mind expands and shrinks. Sometimes (in silent thought, for example) mental activity is confined to the brain, but often it loops out into the body and external world. It’s a slippery thing, which can’t be contained.
Author Narrative
  • Keith Frankish is a philosopher and writer. He is an honorary reader in philosophy at the University of Sheffield, a visiting research fellow with the Open University, and an adjunct professor with the Brain and Mind programme at the University of Crete.
Notes
  • A very interesting - though rather brief and dated (2016) paper.
  • I have no time to address it at the moment.
  • For now, I just note that Mind (as distinct from Brain, CNS or PNS) is a somewhat flexible term. Consider how much easier it is to answer the question 'which animals have a brain or CNS' than 'which animals have minds'.
  • I agree that information processing takes place in animals in places other than in their brains. It probably also takes place outside their bodies - calculations on pieces of paper - but we may be arguing over words to say the the mind is 'extended'.
  • Still, I need to address the thoughts - and arguments - on the subject by Andy Clark, David Chalmers, Daniel Dennett, and - doubtless - Keith Frankish.
  • All this has an impact on BIV TEs.
  • I've saved the numerous Aeon Comments for future perusal and will write my own thoughts in due course.

Paper Comment



"Friston (Karl) - Karl Friston: Embodied cognition"

Source: Aeon, 16 December 2021


Editors' Abstract
  • That our brains exist in the context of a body might seem obvious, but for many thinkers and researchers working at the intersection of neuroscience and philosophy, this notion has become increasingly vital to understanding the human mind. The body and, crucially, movement give the brain access to our physical environments so that we can navigate the outside world. In this way, the brain and the body are partnered – one is essential to the other, and each informs the other.
  • This framing is central to what’s known as ‘embodied cognition’, a concept with intellectual roots dating back to the early 20th century. This radical and relatively recent approach to cognition emphasises the importance of the body and rejects the once-common view of the brain as the body’s sole director.
  • In this interview with Serious Science, Karl Friston, a neuroscientist at University College London, explores the ‘different flavours’ – some common sense, others controversial – tethered together by the idea of embodied cognition, as well as their implications for the field of neurophilosophy, and beyond.

Notes
  • This is an important talk to get to grips with as it brings together the mind1, brain2 and body3.
  • I intend to write up notes of the talk and comment thereon, so won't write any more for now.

Paper Comment



"Frohlich (Joel) - When does the first spark of human consciousness ignite?"

Source: Aeon, 29 October 2024


Author's Introduction
  • As recently as the 1980s, many infants were operated on without anaesthesia – a practice that would be unthinkable today. Recounting this history in his book "Birch (Jonathan) - The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI" (2024), the philosopher Jonathan Birch notes his surprise at the moral arrogance of anaesthesiologists of the time. What could have justified this practice? While general anaesthesia in infants does carry a risk of breathing problems, either that risk was over-weighed, or the mental life and suffering of an infant was entirely neglected.
  • Maybe you don’t need much convincing that infants are conscious – that there is something it’s like to be an infant. In my experience telling people about my research, this is a commonly held intuition. And in academia, more than three-quarters of researchers surveyed at an annual conference on consciousness viewed babies as conscious, or at least probably so.
  • Today there is considerable evidence that infants are more than bundles of reflexes. I, along with my co-authors Tim Bayne, Rhodri Cusack, Julia Moser and Lorina Naci, recently published a scholarly review outlining the case for consciousness in early infancy. But the debate isn’t over. Since infants can’t tell us about their experiences – and adults typically cannot remember their own first year of life – firm evidence of consciousness is hard to establish. And according to that survey I cited, a small fraction of academics still doubt that babies are conscious. Conversely, if babies are indeed conscious, does consciousness begin at birth? Or even earlier?
  • The question of when consciousness first emerges, then, is an open one. Let’s consider three possible answers, each based on milestones that occur after, during, and before birth.

Author's Conclusion
  • If consciousness does emerge before 24 weeks, it is not likely consciousness as we generally know it, but rather more likely the contentless consciousness described by philosophers such as Thomas Metzinger: with no incoming sensations to divide up the passage of time or to delineate space, all is timeless, spaceless and empty. Even stranger still, given the immature integration of fetal brain networks, compared with older brains (think of the US passenger railway system compared with Europe’s), a fetal mind might be fragmented into many parallel minds, or ‘islands of awareness’ (to continue the analogy, imagine the US Northeast Corridor as having one mind, while the railway network centred around Chicago has a separate mind).
  • Of course, any discussion of fetal consciousness remains highly speculative. Based on the evidence so far, I predict that the light of consciousness shines from sometime near birth – if not a bit earlier – until death. It’s also unlikely that consciousness emerges all at once. Yet, even if its emergence is gradual, there must be a moment when its first spark ignites, like the first photon of light radiating from a lightbulb as its analogue dial is turned upward. Perhaps, one day, we will know just when this happens.

Author Narrative
  • Joel Frohlich is a neuroscientist studying fetal development at the University of Tübingen fMEG Center in Germany, focusing on the origins of consciousness and complexity in the developing brain. His work has appeared in The Atlantic and Nautilus, and he writes a Substack newsletter called Something It’s Like.

Notes
Paper Comment



"Gadsby (Stephen) & Van de Cruys (Sander) - The surprising role of deep thinking in conspiracy theories"

Source: Aeon, 12 September 2024


Authors' Introduction
  • Conspiracy theories are seemingly everywhere. To explain their prevalence, many commentators point to the gullibility of conspiracy theorists. According to this view, believers in conspiracy theories accept evidence without bothering to scrutinise its credibility, making them vulnerable to the misinformation that pervades online ecosystems. But while it’s tempting to take this view, we believe it relies on an unrealistic picture of misinformation and the people who consume it – which is likely undermining attempts to deal with the problem.
  • Far from passively accepting the truth of conspiracy theories, conspiracy theorists enthusiastically participate in generating, discussing and dissecting them. They also appear genuine in their attempts to get to the bottom of things. They develop sophisticated arguments, go to considerable lengths to find the ‘right’ sources of information, and preach the importance of rigorous and independent research. Conspiracy theorists don’t fall for conspiracy theories. They discover them.

Authors' Conclusion
  • One approach to delivering insight-based interventions could be via developments in AI, which hold the promise of countering conspiracy thinking via personalised interactions. For example, in a recent study, interactions with ChatGPT-4 Turbo substantially and durably shifted the beliefs of even the staunchest of believers in conspiracies ranging from the causes of COVID-19 to the Moon landings to the death of Princess Diana. Analysing these conversations showed that the model didn’t simply present counterevidence but questioned and reasoned with users. Although users were not explicitly asked to report insight experiences, we would venture that such experiences were a crucial mechanism underlying their shift in beliefs.
  • The mind-changing potential of these systems is bound to continue to grow. They already have more ‘patience’ than any human for gaining our idiosyncratic perspective through dialogue. As they can ingest more information about us, they will be able to simulate our positions more effectively. That will allow them to challenge us at the just-right level – eventually coming up with new, resonating metaphors and pointed questions. We are hopeful that such personal(ised) AI assistants will help to prevent and counter harmful beliefs at scale (but they will need to be implemented carefully, given the risk of misuse).
  • Conspiracy believers are not unintelligent and gullible. They are driven by their hunger for insight. Whether via AI or other media, recognising and respecting this hunger is the way towards more effective interventions.
Author Narrative
  • Stephen Gadsby is a Fonds Wetenschappelijk Onderzoek (FWO) postdoctoral fellow at the Centre for Philosophical Psychology at the University of Antwerp in Belgium.
  • Sander Van de Cruys is a researcher in psychology based in Belgium. In his work, he tries to understand the internal rationality of supposedly irrational thinking and behaviour.
Notes
  • I think the authors are too kind to conspiracy theorists.
  • I think there are two sorts of people who fall into this category: leaders and followers.
  • Followers are - I suspect - often uncritical, lazy thinkers who support ideas that go along with their world view or social needs.
  • Leaders may well be highly intelligent and innovative, but they suffer from the same defect as followers, insofar as they are reading the world so it aligns with their prejudices.
  • I suspect both camps are lazy in another respect. Being an expert in any technical discipline takes years of training in addition to intelligence and innovation. Conspiracy theorists may take the easy way out and start from scratch. They ignore the experts who have earned the right to have an informed opinion.
  • Contrast with "Cassam (Quassim) - Bad thinkers" which is referenced (probably by the editor) and which takes a more negative stance.
  • There are no Aeon Comments, but a few more links to be followed up.

Paper Comment



"Garson (Justin) - Targeted"

Source: Aeon, 02 September 2024


Extracts
  • Luca agreed to take the drugs, not because he thought he was mad, but because he hoped they would disrupt the signal between his brain and The Team. But the drugs made things worse. They made The Team angry – so they tortured him more. He thought constantly about suicide. Sometimes the voices were so overwhelming he had to lie down wherever he was. One day he found himself on the floor of a grocery store. He typed ‘mind rape’ into his phone. That’s when he discovered the targeted individual community.
  • Before about 2000, people with experiences like Luca’s had few options. They could turn to a psychiatrist, or spiral further into isolation, fear and paranoia. But the advent of the personal computer and the availability of the internet changed that. People like Luca were now communicating with each other, finding parallels between their experiences, and trying to track down who was doing this to them.
  • Hence was born the targeted individual (TI) community: a group of people who openly shared their experiences of high-tech harassment and organised stalking.
  • ... Today the TI community is a loosely organised global network with regional and local support groups. In 2016, The New York Times estimated there were at least 10,000 people who identify as TIs.
  • TIs have a diverse range of experiences. Some involve electronic harassment. Voices projected into the mind. Crackling or popping sounds in the ears. Burning or pricking sensations on the skin. Migraines. Sleeplessness. Others centre around gang-stalking. Being followed in the streets. Several people wearing the same-coloured shirt, or driving the same-coloured car, as a coded threat. Strangers in public commenting on the TI’s private life. People breaking into their homes and damaging things. Some TIs have sought refuge abroad. Many have the belief they’ve been microchipped. Some implore surgeons to remove the implanted devices.
  • Journalists often depict the TI community as a postmodern tragedy – a byproduct of unregulated social media. Here are thousands of very sick people, we’re told, who are just reinforcing each other’s delusions and making each other sicker because they refuse to see psychiatrists. These reports dismiss TIs as promoting one more dangerous conspiracy theory (though I doubt that TIs will ever become a major driver of conspiracy theories, partly because they have unusual experiences that run-of-the-mill conspiracy theorists lack).
  • However, as I talked with TIs and some of the mental health professionals who have taken a sympathetic interest in them, I began to see a different story emerge. What if the TI community is an inevitable reaction to the shortcomings of medical psychiatry itself?
  • Put differently, what if medical psychiatry is inadvertently pushing people like Luca deeper into the TI community?
  • I began to see TIs like Luca as a group of individuals who are caught between two competing narratives.
    1. By all accounts, these are people who are hearing terrifying voices, experiencing painful sensations or seeing themselves as being stalked. Over the years, the main narrative that has emerged to explain their experiences belongs to medical psychiatry. It holds that these voices and beliefs – or ‘hallucinations’ and ‘delusions’ – are symptoms of a disorder such as schizophrenia, delusional disorder or schizoaffective disorder. These disorders likely stem from brain dysfunctions or defective genes. In the best of cases, these symptoms can be managed with drugs, or some combination of drugs and therapy. Though the medical narrative helps some, research is showing just how stigmatising and disempowering it is for others. Some get the message that their brain is broken and that they can never trust their thoughts and perceptions. Many, like Luca, are put on antipsychotic drugs with debilitating side-effects.
    2. The second narrative – the TI narrative – is that if you’re having these sorts of experiences, nothing is wrong with your mind. Your perceptual and reasoning abilities are functioning exactly the way they’re designed to. Unfortunately, you are the victim of gang-stalking or electronic harassment. Despite your suffering, however, there is hope: you can band together with other TIs in a global movement to expose your attackers and dismantle their techniques.
  • Trapped between these two narratives, many opt for the TI narrative. It validates their basic ability to perceive the world and reason about it – precisely what psychiatry’s medical narrative denies. It infuses their frightening experiences with a powerful sense of purpose and coherence. It gives TIs the most precious resource of all: community, belonging, even love.
  • But if psychiatry’s broken-brain narrative is pushing people deeper into the TI community, how did this happen? How is it possible that the very profession that was designed to support people like Luca might be making them worse off? And are there any alternatives to these two narratives – the idea that your brain is broken, and the idea that you’re actually being persecuted?
  • ... Historians of psychiatry agree that a momentous paradigm shift took place in the 1980s. They often call this shift the ‘second biological revolution’, after a similar revolution that took place in Germany in the 1850s. It was popularised by books like Nancy Andreasen’s The Broken Brain (1984), Solomon Snyder’s Drugs and the Brain (1986) and Jon Franklin’s Molecules of the Mind (1987). It coincided with an explosive growth in the pharmaceutical industry that promised to heal the mind with drugs.
  • In this vision, mental disorders were best understood as brain dysfunctions that could be traced to abnormal genes. Schizophrenia stemmed from an imbalance of the neurotransmitter dopamine. Depression was a serotonin imbalance. Bipolar disorder was a lithium imbalance. ADHD a deficiency of norepinephrine. These conditions, of course, could be triggered by life events, but they were ultimately biological. It was a reincarnation of the ancient Greek humoral theory, which saw the various forms of madness as imbalances in the four humours: blood, phlegm, yellow bile, black bile.
  • .... It would be one thing if the medical vision were true. The fact that TIs rebel against the broken-brain narrative would be no more surprising than that someone can be in denial about diabetes or cancer. But the past three decades have shown that psychiatry’s medical vision is neither scientifically credible nor morally sound.
  • The scientific problems came first. Even 20 years after Snyder’s breakthrough paper, nobody could find the ‘smoking gun’ – the supposed dopamine abnormalities in patients who’d never taken antipsychotic drugs. Moreover, a newer generation of drugs, the so-called ‘atypicals’, took a much larger share of the psychiatric drug market. They seemed to manage symptoms equally well with fewer side-effects – even though they targeted a much wider profile of chemicals than dopamine. Dopamine began to look like one small piece of a much larger puzzle. Psychiatrists became equally disenchanted with the serotonin theory of depression. Today, the idea that these drugs work by reversing chemical imbalances appears increasingly groundless.

Author's Conclusion
  • The problem is this: unless people like Luca have access to this whole range of meaning-making frameworks, they’ll either understand their voices in terms of a hypothetical brain disease, or in terms of hostile intruders running an experiment on them. We urgently need to expand the range of scientifically plausible alternatives.
  • This isn’t to say that drugs have no place in the new frameworks. Drugs can be valuable tools to help manage distressing experiences. That has been well known for millennia with drugs like psylocibin, hashish, opium, tobacco and alcohol. But they don’t seem to work by reversing chemical imbalances – just as alcohol doesn’t make you feel good by ‘reversing an alcohol deficiency’. Rather, they help us temporarily blunt painful thoughts and feelings so we can better address their root causes.
  • The goal of these frameworks, as I see it, is to break the grip of the forced choice between two grim alternatives – not to impose any particular narrative onto suffering people. Ideally, people would explore these alternative paradigms in a therapeutic community that emphasises shared decision-making and personal autonomy. If someone chooses to embrace the medical narrative, it should be because they’ve been exposed to multiple paradigms and feel that this narrative makes the most sense for them. And if others, after becoming aware of these paradigms, choose to accept the TI narrative, then the best path forward is for them to connect with a community like PACTS that can give them the non-stigmatising support they need. But mental health professionals ought to put more time and effort into raising awareness of the neglected third path: we can embrace voices and visions as part of ordinary human experience, without accepting the literal truth of everything they say.
  • I recently reconnected with Luca. His own journey, which began so abruptly in November 2020, has now taken on new contours. He still hears the voices, and he holds the belief that he’s the subject of a cruel experiment. He explicitly identifies as a TI. But he’s also placing those horrific experiences into a broader spiritual perspective, much like that advanced by the International Spiritual Emergence Network.
  • Luca now sees himself as intensely receptive to spiritual influences. The same psychic openness that allowed The Team to gain a foothold into his mind is also a fountain of powerful revelations – massive ‘information downloads’ from the cosmic mind. These revelations centre around the broken state of humanity, the interweaving of the spiritual and material worlds, and the need for love – insights he’s compiling into a book. He is no longer preoccupied with victimhood, but with survivorship, and with his greater mission to promote ‘unprecedented change, healing and love’.
  • The real tragedy here is not that psychiatrists failed to stamp out Luca’s voices and strange beliefs. The tragedy is that he had to discover this spiritual path on his own.

Author Narrative
  • Justin Garson is professor of philosophy at Hunter College and the Graduate Center, City University of New York. He is the author of Madness: A Philosophical Exploration (2022) and The Madness Pill: The Quest to Create Insanity and One Doctor’s Discovery that Transformed Psychiatry (St Martin’s Press, forthcoming). He also writes for Psychology Today on different paradigms of mental illness.

Notes
  • I'm unsympathetic to some of what the author has to say, though I agree that treating mental illness pharmacologically without therapy is insufficient, and also that the currently available medication doesn't seem to work very well. Often, this is on account of trying to suppress symptoms rather than get to grips with root causes.
  • I hadn't been aware of the TI movement. I thought the author - in his Conclusions - was rather too sympathetic towards it.
  • I've cited rather too much of the paper in the extracts. I need to read it again more carefully. I thought a couple of the analogies rather glib and misguided.
    → There's no parallel at all between the four 'humours' and neurotransmitter imbalances. There's reason and experiment in support of the latter,
    → While it's true that sundry natural products are psychoactive, this isn't parallel to medication. Escaping your problems with drugs never leads to any good. Medication isn't intended as escapism.
  • And, pace the author, psychopathologies - when severe enough to adversely affect the sufferer's well-being and ability to function - just are symptoms of a damaged brain. If this is a stigma, it's society that needs to change (just as it has for physical disabilities).
  • The author's books look interesting but are too expensive.
  • There are a lot of Aeon Comments that look worth following up when I have time.

Paper Comment
  • Sub-Title: "For those who hear voices, the ‘broken brain’ explanation is harmful. Psychiatry must embrace new meaning-making frameworks"
  • For the full text see Aeon: Garson - Targeted.



"Gilbert (Bennett) - All that we are"

Source: Aeon, 23 July 2024


Extract
  • Though personalism continues to be a field of robust philosophical research, in American academic philosophy after the Second World War it faded under the hegemony of analytic philosophy. But in (Martin Luther) King’s hands it became forceful as a practice for justice and other moral ends. Its resources have not been exhausted. Careful revision and updating can make it a source of illumination and hope in the circumstances we face a half-century after King.
  • Why should we update personalism, and what useful purpose will this serve? Our ideas about the nature of human beings are today undergoing a severe challenge by the new philosophies of transhumanism. Through personalism, we can understand and appreciate our purposes and obligations, as well as the dangers posed by transhumanism.
  • The best known of these transhumanist philosophies is effective altruism (EA). The Centre for Effective Altruism was founded at the University of Oxford in 2012 by Toby Ord and William MacAskill; largely inspired by Peter Singer’s utilitarianism, EA has been an influential movement of our time. As MacAskill defines it in Doing Good Better (2015): Effective altruism is about asking, ‘How can I make the biggest difference I can?’ And using evidence and careful reasoning to try to find an answer. It takes a scientific approach to doing good.
  • This is not as clear cut as it might seem, and it has often led to the uncomfortable conclusion that the accumulation of capital by the wealthy is morally necessary in order to affect the world for the better in the future, largely regardless of the consequences for living persons. Its proponents argue that society does not sufficiently plan for the distant future and fails to store up the wealth that our successors will need to solve social and existential challenges.
  • Other transhumanist theories include longtermism, the idea that we have a moral obligation to provide for the flourishing of successor bioforms and machinic entities in the very distant future, at times regardless of consequences for those now living and their proximate next generations. There is also a kind of rationalism that justifies the moral calculations on which provision for the future instead of for the living is based; cosmism, the vision for exploration and colonisation of other worlds; and transhumanism, which aspires to assemble technologies for the evolution of humankind into successor species or for our replacement by other entities as an inevitable and thereby moral duty. All of these, including the various versions, are sometimes named by the acronym TESCREAL (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, longtermism). Here I refer to these as ‘transhumanism’. The core argument common to these lines of thinking, according to the philosopher Emile Torres writing in 2021, is that: [W]hen one takes the cosmic view, it becomes clear that our civilisation could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years.
  • From this point of view, human suffering today matters little by the numbers. Nuclear war, environmental collapse, injustice and oppression, tyranny, and oppression by intelligent technology are mere ripples on the surface of the ocean of history.
  • Each element of these transhumanist ideologies regards human personhood as a thing that is expiring and therefore to be replaced. As the longtermist Richard Sutton told the World Artificial Intelligence Conference in 2023: ‘it behooves us [humans]... to bow out... We should not resist succession.’ Their proponents argue for the factual truth of their predictions as a way to try to ensure the realisations of their prophecies. According to the theorist Eliezer Yudkowsky, by ‘internalising the lessons of probability theory’ to become ‘perfect Bayesians’, we will have ‘reason in the face of uncertainty’. Such calculations will open a ‘vastly greater space of possibilities than does the term “Homo sapiens”.’
  • A personalist approach deflates these transhumanist claims. As the historian of science Jessica Riskin has argued, a close examination of the science of artificial intelligence demonstrates that the only intelligence in machines is what people put into them. It is really a sleight-of-hand; there is always a human behind the curtain turning the wizard wheels.

Author Narrative
  • Bennett Gilbert is adjunct assistant professor of history and philosophy at Portland State University, US. He is the author of A Personalist Philosophy of History (2019) and Power and Compassion: On Moral Force Ethics and Historical Change (forthcoming from Amsterdam University Press), as well as numerous papers, and is co-editor of Ethics and Time in the Philosophy of History: A Cross-Cultural Approach (2023).

Notes
  • The author contrasts Personalism with Analytic Philosophy and it shows.
  • While there's doubtless much of value in what the author says, the argument - if there is one - is all over the place.
  • He also seems to bundle together all sorts of trends he doesn't like into one big Transhumanist1 bucket. I don't see what Effective Altruism has to do with Transhumanism.
  • The author seems supportive of panpsychism. Not a good step, in my view. I need a Note on Panpsychism too.
  • I do agree that some of the wilder claims and aspirations of Transhumanism deserve to be debunked, but you don't need Personalism for that.
  • I need a Note on Personalism, but I don't think it's the right foundation for philosophy.
  • There are a few comments of similar quality to the paper (or worse).
  • I hope to write more shortly.

Paper Comment
  • Sub-Title: "The philosophy of personalism inspired Martin Luther King’s dream of a better world. We still need its hopeful ideas today"
  • For the full text see Aeon: Gilbert - All that we are.



"Gismundi (Antonella) - Speaking a different language can change how you act and feel"

Source: Aeon, 31 October 2024


Extract
  • For many multilinguals who feel like or seem like a different person depending on which language they are using, language and cultural cues might be priming different self-perceptions, triggering shifts in personality trait expression in ways that align with the corresponding linguistic and social environment. For someone who is working in a Taiwanese cultural context and trying to fit in with the way others speak and act, ideals such as loyalty and hard work might become especially salient and something to emphasise in one’s self; whereas, in the US, it might be qualities like assertiveness and initiative. Even as a multilingual person’s core self remains a constant, their present context might change the lens through which they perceive their own identity – including which aspects become amplified or toned down in their mind – as well as how they interact with others.
  • In social psychology, tweaking your behaviour based on contextual cues in order to suit community norms has been described as ‘cultural frame switching’. Interacting in a particular language can serve as one of these cues. The cultural frame-switching model also suggests that people with a higher degree of cultural awareness are more susceptible to cultural priming.
  • How is this cultural awareness developed? Speakers with a higher degree of immersion – those who live abroad or have strong community ties – are more likely to develop ‘pragmatic competence’ in their target language. This kind of competence goes beyond the accurate use of vocabulary and grammar: it involves understanding and using language in socially appropriate ways. It means knowing not just what to say, but also when and how to say it, and being able to predict how it will be received. This is essential for social functioning in the society where a language is spoken, and it is hard to acquire in a classroom context, without authentic interactions with other speakers. Motivation and adaptability are also crucial in this process of internalising new language conventions.
Author Narrative
  • Antonella Gismundi is a data analyst with a background in linguistics and psychology. She lives in Taipei, Taiwan.
Notes
  • The author is an Italian who has spent the 10 years since graduating in Taiwan, obtaining a Masters in Chinese and where Chinese is her language of business and everyday life. She seems to be almost as comfortable in English as Italian, but less so in Chinese. It's not explained where her competence in English comes from and how she maintains it in the context of her daily environment.
  • I had difficulty extracting an 'Abstract' of this paper as it's all interesting.
  • There are a few Aeon Comments with a couple of responses by the author.
  • It's obvious that one's facility in a language will impact how you come across to others and how you can operate. Also - as she notes - one's situation and environment affects how you act and come across to others.
  • More could be said. The paper is brief so I should re-read it and extract other lessons from it.
  • It relates to both Narrative Identity and Language.

Paper Comment



"Glaser (Eliane) - Our narrative prison"

Source: Aeon, 13 May 2025


Author's Introduction
  • How is it that we live in an era of apparently unprecedented choice and yet almost every film and TV series, as well as a good many plays and novels, have exactly the same plot? We meet the protagonist in their ordinary world, plodding along, not living their best life. And then an inciting incident changes everything, making it impossible for the protagonist to carry on as normal. They are pulled into a new quest. On the way, they meet someone who shows them a completely different way of being. They ask themselves: have I been living a lie?
  • This is the mid-point, the point of no return. Life can never be the same. But there’s a double wobble since the protagonist’s quest is opposed by a powerful antagonist who frustrates the hero at every turn. At their lowest point, the protagonist realises their old mode of being is redundant, but the new one is too daunting. The story is resolved either in the protagonist’s favour or against them: they triumph or else fail tragically. The important thing is that their life philosophy has been turned upside down. When they return home, everything is the same, but everything is also completely transformed.
  • The formula is particularly repetitive in cinema. As it happens, aspiring screenwriters in 21st-century Hollywood are following a rubric set out in the 4th century BCE. In his Poetics, Aristotle defines a well-constructed plot as having three main acts, and names other essential elements such as the ‘reversal of the Situation’, which is ‘a change by which the action veers round to its opposite’ – eg, the moment in The Sixth Sense (1999) when the therapist realises he is dead – and ‘recognition’, which he defines as a ‘change from ignorance to knowledge’ (Oedipus’ recognition is a big one). Aristotle’s schema was developed by later thinkers from Terence and Seneca to the 19th-century German novelist and playwright Gustav Freytag, who distilled stories into his pyramid diagram of exposition, rising action, climax and resolution. A philosophical parallel might arguably be found in Hegel’s dialectic, from thesis to antithesis and finally to synthesis. As the US historian Hayden White observed, even historians tend to shape their accounts of the past using narrative tropes.

Author's Conclusion
  • Early 20th-century literary modernism rejected the smooth illusions of 19th-century fiction, grappling instead with the dislocations of postwar modernity. Likewise, these attempts, in Le Guin’s words, ‘to describe what is in fact going on, what people actually do and feel’ are aesthetically and politically bracing. They defamiliarise what is naturalised, making the world strange so we can see it, challenge it, and potentially change it. Traditional story structure may resonate deeply, but it does not give us that jolt. Paradoxically, the monomyth dramatises change, but also embodies continuity.
  • ‘Stories allow us to progress, and they allow us to stay the same,’ Yorke said. ‘That’s probably quite a healthy balance … We tell stories to define ourselves, our families, our nations.’ The monomyth was modelled on stories told by traditional societies governed by cyclical time and generational renewal: the hero’s journey is a rite of passage. But now that real-world Bond villains like Elon Musk are threatening geopolitical stability and ecological survival, that final-act reset is surely coming under strain.
  • Ironically, the monomyth is now being stretched out of shape by commercial forces, too. Franchises, sequels and box-set formats are extending stories in multiple directions to eke out ever more revenue, bringing to mind Musk’s intergalactic ambitions, which imply there’s a franchise option for human life: late capitalism, it would seem, respects neither narrative nor planetary boundaries. ‘It’s outrageous, really,’ Yorke says of endless sequels. ‘If you think of it in basic terms, a story is a question and answer, dramatised. And when the question is answered, there is nowhere else to go.’ Not surprisingly, Hollywood is working hard to combine narrative boundlessness with satisfying, self-contained stories: the Marvel ‘Multiverse’ is a kind of vast conglomerate of autonomous (super)heroes’ journeys.
  • In contrast to the spectacle of an individual saving the world, we once had idealism and the sense of a common goal: it was called ideology. But grand narratives are a relic of the past century. Khalid links the relative absence of creative experimentation with a narrowing of our ideological horizons. ‘Our collective imaginations … are being stifled,’ he told me. Like Le Guin, he believes that we must ‘fundamentally think about how we rebuild structures’, because those we are living under ‘are literally killing us … whether that be patriarchy or racism or the climate catastrophe.’ Ambitious art can ‘shape what we think it is possible for us to do.’
  • Vogler and Yorke, while supportive of experimental fictions, note that they are of course harder to finance. And although it may be counterproductive to reassure people that everything is fine when the planet is burning, Yorke insists on being realistic about audience choices in a free market. ‘Stories have to make you alarmed,’ he told me, ‘but they also have to offer you hope … You’ve got to have a model of what’s worth living for.’
  • Even art-house films that self-consciously depart from the three-act structure nonetheless define themselves against it. Charlie Kaufman’s brilliantly metafictional Adaptation (2002) dramatises a deconstruction of the formula (the protagonist, a struggling screenwriter, even attends a seminar by Robert McKee); but still he ends up (albeit knowingly) following it. Abandoning the structure altogether, it seems, is neither desirable nor possible. Even so, whether we regard it as a palace or a prison, we need now more than ever to understand how it is built.
Author Narrative
  • Eliane Glaser is a writer and radio producer. Her books include Elitism: A Progressive Defence (2020) and Motherhood: Feminism’s Unfinished Business (2022), and her writing has appeared in The Guardian, Prospect and the London Review of Books, among others.
Notes
  • I had high hopes of this Paper - as it mentions Gustav Freytag (Wikipedia: Gustav Freytag) and his pyramid, also referenced in P W Mansell: Acts Website Appendix A (though described as 'Freytag's ride' for comedic purposes).
  • There are 20-odd Aeon Comments with the occasional fairly worthless response by the author to the appreciative ones. Some of the critical comments are cogent, in my view. But all need a close reading.
  • One Comment refers us to YouTube: Kurt Vonnegut - Shape of Stories, which looks amusing and informative (I'm a fan of Kurt Vonnegut) but I've not had time to view it yet.
  • Some of the Comments seek to point out that the Paper itself might follow the despised structure and that the denouement is more to do with content than structure. Also, that the structure is followed by story-tellers from all societies - including indigenous ones. The author's axe that she's grinding is to base all of society's ills on 'patriarchy or racism or the climate catastrophe', the current concerns of today's academics. It might be argued that all the good things in modern society have arisen from these same evils (admittedly, this requires some argument: the basic idea is that all the goods of today's advanced societies - massive improvements in the standard of living, healthcare and life-expectancy - arose in their historical context of stable 'going places' societies that enabled the leisure required for scientific investigation and the capital for industrial development.
  • This relates to my Notes on Narrative Identity and Language.

Paper Comment
  • Sub-Title: "The three-act ‘hero’s journey’ has long been the most prominent kind of story. What other tales are there to tell?"
  • For the full text see Aeon: Glaser - Our narrative prison.



"Goddu (Mariel) - Suffused with causality"

Source: Aeon, 21 March 2025


Author's Introduction
  • Causal understanding is the cognitive capacity that enables you to think about how things affect and influence each other. It is your concept of making, doing, generating and producing – of causing – that allows you to grasp how the Moon causes the tides, how a virus makes you sick, why tariffs change international trade, the social consequences of a faux pas, and the way each event in a story leads to what happens next. Causal understanding is the foundation of all thoughts why, how, because, and what if. When you plan for tomorrow, wonder how things could have turned out differently, or imagine something impossible (What would it be like to fly?), your causal understanding is at work.
  • In daily life, causal understanding imbues your observations of changes in the world with a kind of generativity and necessity. If you hear a sound, you assume something made it. If there’s a dent on the car, you know that something – or someone – must have done it. You know that the downpour will make you wet, so you push the umbrella handle to open it and avoid getting soaked. You watch as an acorn falls from a tree, producing ripples in a puddle.
  • The human power to view cause-and-effect as part of ‘objective reality’ (a philosophically fraught idea, but for now: the mind-independent world ‘out there’) is so basic, so automatic, that it’s difficult to imagine our experience without it. Just as it’s nearly impossible to see letters and words as mere shapes on a page or a screen (try it!), it is terrifically challenging to observe changes in the world as not involving causation. We do not see: a key disappearing into a keyhole; hands moving; door swinging open. We see someone unlocking the door. We don’t see the puddle, then the puddle with ripples-plus-acorn. We see the acorn making a splash.
  • Most people don’t realise that any of this is a cognitive achievement. But, in fact, it is highly unusual. No other animal thinks about causation in the hyper-objective, hyper-general way that we do. Only we – adult humans – see the world suffused with causality. As a result, we have unparalleled power to change and control it. Our causal understanding is a superpower.
  • The scientific story of how our causal minds develop features another superpower: human sociality. It’s our unique sensitivity to other people that lets us acquire our special causal understanding. The story also raises questions about ‘other minds’. If our causal understanding is the exception, rather than the rule, then how does the world show up for other animals? If we try to suspend the causal necessity that structures so much of our experience, what’s left over?
  • I’m going to suggest that what remains is our experience of doing – a value-laden, first-personal and inherently interactive perspective. It is in this involved, participatory ‘point of do’ – as opposed to a detached, objective point of view – that the seeds of higher cognition take root.
  • Appreciating that our original perspective is action-oriented and goal-directed can also help us understand our own shortcomings – and how to change them.

Author's Conclusion
  • Our collective capacity to make new choices about what to do with all our power will determine the fate of our species. The thing that’s so scary and frustrating and hard is that it seems out of our control. It’s bigger than any one of us – far beyond the scale of goal-directed action we evolved to consider.
  • Here’s why I’m hopeful. I think we can use our causal understanding to intervene in our own behaviour. For one, we know that it’s highly flexible. Even primary school children can learn about the complex causal relations involved in ecosystems, food chains and structural inequality – this can provide guidance for education, storybooks and children’s media. We also know about the power of sociality – the power of highlighting variables for one another. Friends and family are an influential source for developing habits around causal factors that affect our own health (like exercise, diet and microplastics) and the planet’s (like eating meat, composting and sustainable consumer practices). The more we talk to each other about these difference-makers, the more these actions can echo and amplify, species-wide.
  • Finally, causal understanding is rooted, originally, in our values – things we want. The most primal causal learning happens by aiming at things we want to make happen. This means that optimistic, action-oriented suggestions are probably more effective than doom and gloom. My favourite recent instance of human causal imagination is the book What If We Get It Right? (2024) by the marine biologist and climate activist Ayana Elizabeth Johnson. In it, Johnson invites us to imagine the future we want to live in, and shoot for it – each in our own way, in our own communities. We already have a lot of solutions, she says; we just need to scale, spread and use them.
  • With that in mind, I think there’s hope. It’s only a small step from What if we get it right? to What can I do?
  • Let’s get doing!
Author Narrative
  • Mariel Goddu is a doctoral student in philosophy at Stanford University in California. From 2012-22, she was a practising cognitive scientist, focusing on causal reasoning in early childhood. She earned her first PhD in developmental psychology from the University of California, Berkeley in 2020. Her philosophical work lies at the intersection of philosophy of action, biology, and mind.
Notes
  • An interesting Paper, but one I found a little confusing. As the author points out in reply to a Comment, 'The essay is concerned with the *psychological* development of human causal understanding –– not with causation, itself'.
  • There's rather a negative view of animals' powers of causal reasoning.
  • One the Paper moves on to our control of the environment, it diverts into something of a final environmentalist rant.
  • There are several sensible Aeon Comments - with some extensive replies by the author with links to further material.
  • I need to read this Paper again; it was too complex to absorb while walking Bertie.
  • It relates to my Notes on Causality, Psychology, Animals

Paper Comment
  • Sub-Title: "Humans have a superpower that makes us uniquely capable of controlling the world: our ability to understand cause and effect"
  • For the full text see Aeon: Goddu - Suffused with causality.



"Godfrey-Smith (Peter) - If not vegan, then what?"

Source: Aeon, 10 February 2023


Author's Introduction
  • Suppose a person is very concerned about the ethical issues around food and farming, especially animal welfare, but for whatever reason finds that a wholly plant-based diet does not work for them. What is the most defensible step away from veganism – the best compromise to make, if it is a compromise at all?
The Options
  1. Humanely farmed meat (especially beef)
  2. Wild-caught fish
  3. Dairy products (conventionally farmed)
Author's Conclusion
  • Finally, I realise that at least some of the options I am considering here do not ‘scale up’ to yield a solution to questions about diet for humanity as a whole, especially in the long term. These reflections are intended for people right now, in situations where all three of the options discussed are feasible everyday choices, given a person’s economic situation and what is available to them. The future will probably be different, including not just advances in plant-based foods but, if the technology works out, a lot of cultured or lab-grown meat. The fact that, at some time in the future, our food choices will look very different does not change the fact that we do have these choices now. And at least for people whose constitution resists veganism, the choice is vivid. I am not left, at the end of all this, with a definite conclusion. What do you think?
Author NarrativeNotes
  • This is a useful paper, but unfortunately rather misguided.
  • I think it was a mistake to motivate the discussion by citing the author's personal difficulties in adopting veganism. This provoked an avalanche of approbrium from the self-righteous who had successfully made the transition, together with some helpful advice.
  • His approach is basically utilitarian, and I agree with his thought that if we didn't eat beef cattle, there would not be lots of 'happy cows' - there would just be no more cows. A commentator points out that this may be no bad thing (given their greenhouse gas emissions). But there are still lots of 'happy lives' (overall) that will no longer be had (and no spin-off benefits, of course).
  • With respect to his 'third option', dairy products, he thinks the lives of dairy cattle aren't up to much, but we get a lot of milk per cow. He hopes their lives can be ameliorated, and cites a farm where cows are allowed to share their milk with their calves.
  • The only problem with 'humanely raised' beef cattle is that they have to be killed, but 'they would die anyway'. Death for any animal isn't pleasant, and for most is worse than that for beef cattle, but even so their lives are cut short. He doesn't mention sheep, for some reason - maybe their lives are even shorter. Does the length of life matter? Was Macbeth right to say, resignedly, of his dead wife, 'she would have died hereafter'? See Macbeth: Act 5, scene 5, lines 16–27.
  • A point I think is often missed is that all food production involves the mass extirmination of sentient - or possibly sentient - life. In order to maximise yields, farmers have to protect their crops from predation by rodents and insects. Also, countless numbers of both are destroyed during harvesting or readying the fields for the next crop.
  • I'm in agreement with all parties that (at the very least) the consumption of animal products should be minimised where it causes animal suffering, that farming methods should be improved to ameliorate the lots of farm animals, and that industrial-scale factory farming of chickens, pigs and the like should be stopped as soon as is practical.
  • I've retained the - often rather nasty - comments for future consideration.
  • More should be said ... including connecting to other 'non-vegan' postings on Aeon, which have received similar approbrium.
  • We might also connect it to arguments that all lives are bad, when taken in the round, and we are always wrong to create it.

Paper Comment



"Goettlich (Kerry) - Could conquest return?"

Source: Aeon, 13 March 2025


Author's Introduction
  • We live in a world where less and less seems to be universally agreed on, but there is one important exception. Virtually all national governments, either implicitly or explicitly, agree that respect for the ‘sovereignty and territorial integrity’ of other nation-states is a fundamental principle of the international community. According to the United Nations Charter ratified in 1945, states are committed to refraining ‘from the threat or use of force against the territorial integrity or political independence of any state.’ (Note that in this essay I use the term ‘state’ rather than the more ambiguous terms ‘nation’ or ‘country’. This does not refer to the subordinate political units such individual states within the United States). It is rare to find anyone who will openly support the idea that annexing territory from another state, after forcibly conquering it, could be legitimate. Conquest exists, of course, but it is almost always disguised as something else, whether it is Russia’s technique of promoting the secession of neighbouring regions, and then annexing them after holding a referendum, or Israel’s technique of calling it an occupation rather than a conquest.
  • Political leaders today take pride in rejecting conquest as illegitimate, which makes our current international order seem civilised and peace-loving. What could possibly justify taking by force territory that is not one’s own? But the idea that conquest is never legitimate and acceptable in international affairs is relatively new. As the 17th-century Dutch jurist Hugo Grotius argued, treaties that end wars should be honoured, even if they forcibly impose unjust conditions, for example by taking away part of a state’s territory. Such treaties, even if unjust, may sometimes be the only way to end wars, and rejecting them on principle would merely make it impossible for wars to end. Moreover, as the 19th-century American jurist Henry Wheaton observed: ‘The title of almost all the nations of Europe to the territory now possessed by them … was originally derived from conquest, which has been subsequently confirmed by long possession.’ The very existence of almost any state, from this perspective, seems to depend, inevitably, on the legitimacy of conquest.
  • But instead of Grotius’s law of nations, which attempts to limit conquest by allowing it a regulated path to legality, we have an international order that guarantees as an absolute right the territory of each state as it currently exists. What is banned is not profiting from conquest as such, but only profiting from conquest that took place after about 1945, or even more recently in the case of conquests against colonial empires by emerging independent states. Apparently, the conquests that happened before a certain point in history are completely legitimate, but now conquest is one of the worst crimes imaginable.
  • How did we come to have an international order that is so radically protective of the status quo?

Author's Conclusion
  • This narrow interpretation of conquest has been revived many times since, but perhaps most importantly during the 2003 invasion of Iraq by the US and its coalition partners. In the lead-up to the invasion and its aftermath, the US president George W Bush and the UK prime minister Tony Blair emphasised their respect for Iraq’s territorial integrity, meaning not that they wouldn’t use conquest to gain control over the country – this they did with a spectacular show of force – but that they would not alter its borders. Upholding the sanctity of territorial integrity and interstate borders, they hoped, would signal a regard for international order that would make up for their refusal to act multilaterally and through consultation with the UN Security Council in invading Iraq.
  • The narrow interpretation of territorial integrity strictly as a rule against annexation of conquered territory is the one that stands to lose the most from recent annexations by Russia and Israel. It is not that every contested border around the world will suddenly be up for grabs, with military aggression breaking out in every corner of the globe. Almost all states frequently repeat support for the principle of territorial integrity in response to events where it appears threatened. Even if Iran and China did not participate in the UN General Assembly vote to condemn Russia for its invasion, they still affirmed the territorial integrity of all the relevant states. While the US may have been at the forefront of the abolition of conquest, there are too many other states in the world that are keen to protect their own borders for territorial integrity to be simply forgotten. But according to the analyst Bonny Lin writing in Foreign Affairs in 2023: ‘Some Chinese scholars have suggested that sovereignty and territorial integrity should be viewed as only one of 12 core principles for China to balance – in other words, not the most important one, or a value that needs to be respected completely.’ Annexation-by-conquest may remain illegal but may also decline in its perceived gravity, relative to other kinds of violations of a country’s territory.
  • International orders come and go. Before the territorial integrity principle, there was a different system, in European states and their colonies, in which conquest was regulated by norms and principles but not explicitly illegal. And change will come again to the international order. Shifts in relative power between different social and political forces in the world, with different cultural and moral practices, make this all but inevitable. What comes next is impossible to predict with certainty, but it is becoming less and less likely that attitudes towards conquest will be shaped by the ideological constructs of the US.
Author Narrative
  • Kerry Goettlich is a lecturer in international security in the Department of Politics and International Relations at the University of Reading, UK.
Notes
  • This is a useful Paper, especially in the present geopolitical climate.
  • It's useful to have the background on just why the retention of international borders is deemed to be so important. It's also good to see the US getting the blame for something.
  • Whether the 'Open Doors' policy of the US - to provide free access for the US to trade in relatively stable countries - is correct is - I suppose - moot. But I'm sympathetic to the idea. The US was against 'hard' empires because they got in the way of their 'soft' empire.
  • The US master plan is supposed to be to interfere in countries - by military and other means - as much as is needed to keep the doors open. All the while keeping borders fixed even though the governments may be replaced.
  • It seems to me that most wars these days arise from badly-drawn 'lines in the sand' that collapsing empires came up with in their haste to disengage. These act like the fault-lines between techtonic plates. Where they have been drawn wrongly, one side wants to keep the territory they have unfairly gained and wants to get back the territory it has lost. Or, one 'people' wants to be united across artificial boundaries.
  • There are some useful Aeon Comments, with replies by the author.
  • I suppose this Paper is rather losely related to my Note on Narrative Identity.

Paper Comment
  • Sub-Title: "It’s only a century since US diplomats first persuaded the world that it’s wrong for countries to annex their neighbours"
  • For the full text see Aeon: Goettlich - Could conquest return?.



"Goff (Philip) - My leap across the chasm"

Source: Aeon, 01 October 2024


Author's Introduction
  • I rejected Christianity at the age of 14, upsetting my grandmother by refusing to get confirmed in the Catholic faith of my upbringing. Partly it was intellectual issues: why would a loving and all-powerful God create a world with so much suffering? Partly it was ethical issues. It was a time when I was questioning my sexuality, and it seemed to me wrong not to allow a gay person to flourish through a loving relationship with a partner they are attracted to. But, most of all, Christianity just seemed very unspiritual. I got very little out of boring church services, and it seemed to be all about pleasing the old guy in the sky so you get to heaven. Science and philosophy seemed a more rational way to make sense of life, which ultimately led me to become a philosophy professor.
  • Despite rejecting religion, I always had a spiritual sense, a sense of a greater reality at the core of things, what William James called ‘The More’. But I would connect to ‘The More’ in my own way, through meditation and engagement with nature. In other words, I was a signed-up member of the ‘spiritual but not religious’ grouping.
  • And thus I remained for a couple of decades. I was happy in this club. There was no ‘God-shaped hole’ in my life. But, more recently, a few things have changed. The first was intellectual. Most of my fellow philosophers are persuaded either by the arguments for the very traditional idea of God, or by the case for Richard Dawkins-style atheism. I’ve come to think that both sides of this debate have something right.

Author's Conclusion
  • Faith is not about certainty. It is fundamentally a decision to trust your spiritual experiences, and to trust a certain framework for interpreting and acting upon those experiences. Hindus interpret their spiritual experiences as awareness of Ultimate Reality at the core of one’s being, and respond by meditating to realise their identity with Ultimate Reality. Christians interpret their spiritual experiences as awareness of a loving creator, and pray to deepen their relationship with God. These decisions to trust certain experiences influence how you see the world, how you respond to other people, and how you engage with nature. For a person of faith, each moment of daily life is permeated with meaning and significance.
  • This openness to uncertainty allows for pluralism. If faith requires certainty, then people of faith must be certain that their religion is right, and hence certain that other religions are wrong. But for trust to be rational, it’s only required that we’re not putting our trust in something wildly improbable. If there’s a 30 per cent chance that my loved one will make it, then it’s rational to have faith that they’ll pull through. But if the doctors tell me the chances of survival are sadly less than 1 per cent, then my loved one and I should enjoy our last moments and prepare to say goodbye.
  • This doesn’t mean faith gets a free pass. If Dawkins is right, there’s less than a 1 per cent chance that God exists, in which case it’s irrational to trust in the tenets of a theistic religion. However, if Dawkins is wrong, it might turn out that more than one religion is probable enough to have faith in. I have come to think that Christianity, in a certain form, is a credible possibility. But I have no problem with the idea that other religions may also be probable enough for it to be rational to have faith in them. If it is highly uncertain which religion is true, it may be rational to bring in pragmatic considerations, such as which religion you feel culturally comfortable in, to select a faith to follow.
  • Finally, I want to bring in one crucial element I haven’t mentioned so far: the extraordinary teaching of Yeshua. His focus on the poor and the weak, his talk of loving your enemies and turning the other cheek, his attacks on those who overvalue tradition or social status, were light years ahead of their time, and have played a crucial role in shaping the modern ethical ideals that we still struggle to live up to. This in itself proves nothing. But, for me, it’s a vital element in the mix, giving credibility to the possibility that the events depicted in the New Testament describe some profound moment in the evolution of reality.
  • Life is short and much is uncertain. We all have to take our leap of faith, whether that’s for secular humanism, one of the religions, or simply a vague conviction that there is some greater reality. In deciding, it’s important to reflect on what’s likely to be true but also what’s likely to bring happiness and fulfilment. For my own part, I have found a faith that is certain to bring me happiness, and which is, in my judgment, probable enough to be worth taking a bet on.
Author NarrativeNotes
  • This essay shows that Goff has moved on a bit since "Goff (Philip) - Why? The Purpose of the Universe".
  • His version of Christianity is tailored to his panentheist and panpsychist views.
  • He thinks that the problem of evil means that God is not omnipotent.
  • He - correctly - will have nothing to do with the fires of Hell. He thinks of Jesus role not as a cosmic redeemer, but as bringing the human and the divine closer together. Incarnation but not atonement.
  • He things that the resurrection experiences are real but are visions. He thinks Jesus did 'rise' - in that his body was transformed into an immaterial 'spiritual body' - but that he didn't wander about and then fire off to heaven.
  • The attraction of Christianity over alternative spirituality seems to be of Jesus moral teaching and being able to fit comfortably into a community of worshippers.
  • His conclusion involves a weaker version of Pascal's Wager.
  • This relates to my Notes on Religion and Resurrection.
  • There are around 100 Aeon Comments, which I've hardly looked at and need to follow up.

Paper Comment
  • Sub-Title: "After years of debate and contemplation, I’ve come to think a heretical form of Christianity might be true. Here’s why"
  • For the full text see Aeon: Goff - My leap across the chasm.



"Goff (Philip) - Why religion without belief can still make perfect sense"

Source: Aeon, 01 August 2022


Author's Introduction
  • It is common to assume that religion is all about belief. Religious people are ‘believers’. Muslims believe that God revealed the Quran to Muhammad; Christians believe Jesus rose from the dead; Buddhists believe in cyclical rebirth and the non-existence of the self.
  • But there is more to a religion than a cold set of doctrines. Religions involve spiritual practices, traditions that bind a community together across space and time, and rituals that mark the seasons and the big moments of life: birth, coming of age, marriage, death. This is not to deny that there are specific metaphysical views associated with each religion, nor that there is a place for assessing how plausible those views are. But it is myopic to obsess about the ‘belief-y’ aspects of religion at the expense of all the other aspects of the lived religious life.
  • Some people become religious because they become convinced on intellectual grounds that the specific doctrines of a particular faith are highly likely to be true. That’s all well and good. But I want to suggest that there are fruitful ways of engaging with religion that don’t involve belief. Perhaps the best way to do this is to sketch some possibilities.
Excerpts - Fictional Case Studies
  • Faiza is what’s called a practising agnostic. She was raised a British Muslim and believes, on the basis of personal experience, that there is a spiritual dimension to existence, a ‘higher power’ as she calls it. But she’s not sure whether that higher power is a personal God. Faiza studied philosophy at university, and was somewhat impressed by arguments for the existence of God, although she didn’t find any of them conclusive. As a young child, Faiza was taught to read the Quran in Arabic: she has some feel for the great beauty of its verses, and finds it plausible that this wondrous text had a divine origin. On the other hand, when she reflects on the plurality of religions around the world, each with their insights and great books, she feels she cannot be too confident that her own religion is the correct one. If she had to give odds, Faiza would say there’s a 50/50 chance of Islam being true. In other words, Faiza is a perfect agnostic regarding the truth of Islam. Does Faiza believe in Islam? The answer of course depends on what we mean by ‘belief’. According to one standard definition, to believe something is to feel confident that it’s true. Belief, in this sense, doesn’t imply 100 per cent certainty, but it does imply confidence significantly greater than 50 per cent.
  • Pete is what is called a religious fictionalist. He was raised a Christian in the US. Like Faiza, he has spiritual convictions. Experiences with psychedelics in his early 20s led Pete to believe that there is a reality greater than what we can perceive with our senses. He finds it hard to pin down exactly what this ‘greater reality’ is but likes to refer to it with William James’s term ‘the “more”’. However, in contrast to Faiza, Pete is a resolute atheist, at least about the ‘Omni-God’ of traditional Western religion: all-knowing, all-powerful, and perfectly good. In his personal investigations of the philosophical arguments for/against God’s existence, Pete struggled to find any merit in the arguments for, but was overwhelmingly persuaded by the arguments against. Whereas Faiza is 50/50 on the truth of Islam, Pete finds it deeply implausible that an all-powerful and loving God would create a universe with so much suffering, and concludes on this basis that there is, at best, a 5 per cent chance of Christianity being true. We standardly use the phrase ‘don’t believe’ to cover both the situation of Faiza and the situation of Pete, but they are not the same. While Faiza merely lacks belief in the religion of her birth, Pete positively dis-believes in his.
Author's Conclusion
  • Faiza and Pete are not ‘believers’ in the traditional sense, but they do have spiritual beliefs in a greater reality underlying the world we perceive with our senses. I personally find it harder to see the motivation for engagement with religion in the absence of any kind of belief in a transcendent reality (although there are some such religious fictionalists). However, even in the highly secular United Kingdom, belief in a transcendent reality is not a fringe position. In a recent survey, 46 per cent of UK adults agreed that ‘all religions have some element of truth in them’, and 49 per cent that ‘humans are at heart spiritual beings’. Some of these, of course, will be traditional religious believers. Other will identify as ‘spiritual but not religious’. The purpose of this article is simply to point out that there is a third option that many are not aware of, and that some may find attractive: religion without belief.
Author Narrative
  • Philip Goff is professor in philosophy at Durham University, UK. He blogs at Conscience and Consciousness, and his work has been published in The Guardian and Philosophy Now, among others. He is the author of Consciousness and Fundamental Reality (2017), Galileo’s Error: Foundations for a New Science of Consciousness (2019) and Why? The Purpose of the Universe (2023), and the co-editor of Is Consciousness Everywhere? Essays on Panpsychism (2022).
Notes
  • I think the author's approach is fundamentally misguided. If you want to become more ethical, connect to your community or engage with the universe beyond shallow materialism there are very many ways of doing this without believing in fictions.
  • Naturally, there's a plethora of Comments on this paper; I've reserved them for future reading. I'll add further comments of my own in due course.
  • For Religious Fictionalism, Robin Le Poidevin - Religious Fictionalism

Paper Comment



"Goldin-Meadow (Susan) - Expert tips on using gestures to think and talk more effectively"

Source: Aeon, 25 September 2024


Author's Introduction
  • The gesticulations that accompany your speech are so much more than mere hand-waving – they contain and convey meaning
    You’ve probably noticed yourself moving your hands when you talk, and seen other people doing it too. The more excited you are about what you’re saying, I bet the more likely you are to move your hands – to gesture. Everybody gestures, although some of us do it much more than others; I’ve even met people who believe they can’t talk without moving their hands. Most of us know on some level that our hands and mouths work together to communicate, yet few people give gesturing the attention it deserves.
  • The first time I realised that I was ignoring gesture was when I watched a certain video for the umpteenth time. It showed children trying to explain that the number of checkers in a row changes when the checkers are spread out (of course it doesn’t – but the children didn’t know that yet). I’d watched that video so many times because for years I’d played it to my developmental psychology class. But then one day it dawned on me that all the children gestured as they gave their explanations. Once I paid attention to their gestures, I discovered something surprising – the children’s hand movements conveyed meaning and, at times, they expressed a correct idea about the checkers with their gestures (pointing back and forth between the corresponding checkers in the two rows) and a different, incorrect idea with their speech.
  • Looking closely at this video changed my research life. I began seeing gesture everywhere, even where we least expect it. For instance, people who are blind from birth move their hands when they talk – even though they have never seen anyone gesture. I also began thinking hard about what we can learn about our minds from gesture, and I discovered that gesture isn’t just hand-waving. Your waving hands reflect, and change, thinking – yours and other people’s. You have a powerful tool for thinking at your fingertips. Why not put it to good use?

Sections
  • Use gestures to boost your memory
  • Use gestures to help yourself think
  • Use gestures to communicate more effectively
  • Use other people’s gestures to be a better teacher and listener
  • Don’t forget to gesture during online meetings
Author Narrative
  • Susan Goldin-Meadow is the Beardsley Ruml Distinguished Service Professor at the University of Chicago. She is a member of the American Academy of Arts and Sciences and the National Academy of Sciences.
Notes
  • Interesting - especially the relation between gesticulation and memory retention.
  • Very few - and very brief - Aeon Comments, but useful for once.
  • One claims that dogs can be trained to understand gestures - such as pointing - whereas Chimpanzees have difficulty. Worth following up. I'm suspicious, though, as they can be taught sign-language.
  • Another points out (as in the cover photo) how much italians use gesture. So do people from Ipswich. My wife Julie gestures even when on the phone. I'm always telling her off, but maybe there's reason behind it. I don't think I gesture much.
  • This relates to Language, Memory and Narrative Identity.

Paper Comment



"Gotlib (Anna) - Main character syndrome"

Source: Aeon, 27 September 2024


Author's Introduction
  • Driving on one of New York’s poorly maintained and crowded roads, I found myself in a situation one can more safely observe through numerous YouTube ‘bad driver’ videos: a driver for whom all other traffic apparently ceased to exist confidently pulled into a lane I already happened to occupy. After a quick manoeuvre that probably spared me a role in one of the aforementioned videos, I said, perhaps louder than was necessary, to nobody in particular: ‘Is this [deleted] aware of anything but his own [deleted]?’
  • Because I am a philosopher, and thus tend to be rather bad at letting go of ideas – especially ideas without good answers – my close brush with YouTube fame led me to consider other instances of what has come to be called ‘main character syndrome’ (MCS) or, perhaps more annoyingly, ‘main character energy’. Not a clinical diagnosis but more a way of locating oneself in relation to others, and popularised by a number of social media platforms, MCS is a tendency to view one’s life as a story in which one stars in the central role, with everyone else a side character at best. Only the star’s perspectives, desires, loves, hatreds and opinions matter, while those of others in supporting roles are relegated to the periphery of awareness. Main characters act while everyone else reacts. Main characters demand attention and the rest of us had better obey.
  • You have probably heard of MC behaviour – or perhaps even witnessed it online or in person. A TikToker and her followers physically push aside those inconvenient extras ‘ruining’ their selfies – and then post their grievances on social media. A man on a crowded subway watches a loud sports broadcast without headphones while ignoring other commuters’ requests to turn it down a bit. This is no mere rudeness: in the narrowly circumscribed world of main characters, the rest of us are merely the insignificant ghosts who happen to intrude on their spaces. Akin to chess pieces, or perhaps to animatronic figures, we have agency only in the development of the MC’s story. In current parlance, we are non-player characters (or NPCs) – a term that originated in traditional tabletop games to describe characters not controlled by a player but rather by the ‘dungeon master’. In video games, NPCs are characters with a predetermined (or algorithmically determined) set of behaviours controlled by the computer. Rather than agents with a will and intent, NPCs are there to help the MC in his quest, to intersect with the MC in preset ways, or to simply remain silent – a kind of prop, or perhaps human-shaped furniture, a part of the scenery. Another way to view NPCs is to imagine what the philosopher David Chalmers calls a philosophical zombie, or p-zombie, a being that, while physically identical to a normal human being, does not have conscious experience. If a p-zombie laughs, it’s not because it finds anything funny – its behaviour is purely imitative of the real (main character!) individual. For someone convinced of their MC identity, the rest of us are, perhaps, just so many zombies.

Author's Conclusion
  • Where does this leave us? MCS is not a puzzle to be solved via a ‘do and don’t’ listicle. It is not a social problem against which laws can be passed. Instead, it calls on us to engage in what Joseph Campbell, among others, called a ‘dark night of the soul’. This might mean sitting with our anonymity, solitude, boredom and lostness; pushing back on the equivocation between performance and authentic connections; making ourselves vulnerable to others, and thus to failure. It might mean seeing ourselves as always incomplete – and recognising that fulfilment might not be in the cards, that life is not a triumphant monomyth, and others are not here to be cast in supporting roles. Myself, I tend to turn to Samuel Beckett’s play Endgame (1957), where a character reminds us: ‘You’re on earth, there’s no cure for that!’ Sounds about right – let’s begin there.
Author Narrative
  • Anna Gotlib is an associate professor of philosophy at Brooklyn College in New York. She teaches and writes in the areas of feminist bioethics, neuroethics, social and political philosophy, and moral psychology. She is currently a co-editor of a book about the ethical, social, psychological and political implications of the Barbie phenomenon, as well as a book addressing migration and healthcare.
Notes
  • Interesting, well written but mostly unsurprising (with one exception ... read on). It's probably not a new 'thing' - only a new term.
  • From a quick on-line search it seems to be an ambiguous term, sometimes referring to a person's artificial on-line persona. See (there are many more):-
    PsychCentral: Beyond the Role of Main Character Syndrome
    WebMD: Main Character Syndrome
    Psychology Today: The Trouble with 'Main Character Syndrome'
  • I need to re-read the paper as while most of it is simply warning against selfishness and self-absorption, there are some odd swipes at what I'd thought of as 'good causes'.
  • The first is painting 'effective altruism', wherein we should direct our giving where it's most efficient in the utilitarian sense, in a bad light. "Singer (Peter) - Famine, Affluence, and Morality" is cited.
  • Then there's a critique of 'Longtermism', again a utilitarian idea whereby people in the remote future 'count' as much as those around us now, so pain today is justified when set against goods for uncountable multitudes in the future. This is a driver for effective action against climate change, though there are imminent dangers (which can only be mitigated) and proximate dangers (which can only be avoided by extreme action); a longer-term view historically might have avoided all this (though not without cost).
  • Effective altruism - in the sense of 'getting the most bang for the buck' just seems like common sense, while not ignoring those either temporally or geographically remote is a good corrective against only doing good to your friends and family. But "Singer (Peter) - The Life You Can Save: How to Do Your Part to End World Poverty" points out that there's a tension. We can't focus on the remote to the exclusion of the proximate either. My 'solution' is to provide some sort of discount rate in any utilitarian calculations.
  • That's it for now ... it deserves further thought.
  • There are no Aeon Comments.
  • This relates to my Notes on Narrative Identity, Forensic Property, Self, Fiction, Zombies and Psychopathology.

Paper Comment
  • Sub-Title: "Why romanticising your own life is philosophically dubious, setting up toxic narratives and an inability to truly love"
  • For the full text see Aeon: Gotlib - Main character syndrome.



"Grant (Colin) - My blackness"

Source: Aeon, 27 January 2023


Author's Introduction
  • In late-1940s Jamaica, when my mother, Ethlyn Adams, was a teenager and began to think about courtships, she was summoned to the dining room by her father. He’d recently been promoted to inspector of police in the Jamaican constabulary, allegedly (in family lore) the island’s first black man to be appointed to such a senior position. Inspector Vivian Wellington Adams tapped his finger on the sandalwood dining table and warned his daughter: ‘I don’t want you bringing anyone into the house darker than this.’ My grandfather would brook no questioning of his version of the brown paper-bag test. He was a black man but he would not have answered to that description; he’d have considered it an insult.
  • Jamaica was a pigmentocracy – it still is – and a premium was placed on fair skin, a societal code that Frantz Fanon in Black Skin, White Masks (1952) calls an ‘epidermal schema’. Inspector Adams was a cinnamon-coloured brown man. His daughter Ethlyn, by her own estimation, as well as her family’s, ‘had good colour’, and no right-thinking person would allow their family-line to be contaminated by those of a darker hue. This, after all, was colonial Jamaica, at a time when the journalist and historian Vivian Durham observed cogently: ‘It was the ambition of every black Jamaican to be white.’
Author's Conclusion
  • But hold on. Institutionalised racism has been overstated, a UK government-sponsored inquiry suggested in 2021. Claiming that it was a dominant feature in British society succeeded only in ‘alienating the decent centre ground’, according to the Commission on Race and Ethnic Disparities, set up after the Black Lives Matter protests. The government has since rebuffed criticism that its conclusions were wrong, highlighting that all but one of the commissioners of the report were people of colour. Clearly colour does not privilege compassion or understanding; the black and brown experts on the panel might as well have been green; it didn’t matter.
  • What mattered was that these decision-making people of colour had found another tribe – like those black and brown members of Sunak’s cabinet – that trumped blackness; it’s a tribe coalesced around class; it’s always class, stupid. Hall would have told you so, and Baldwin also.
  • Never mind shoving those awful black people off a pier with a big broom; pack them on to a plane bound for Rwanda. Maybe you think you’re the right kind of black – a Suella Braverman, Nadhim Zahawi, Priti Patel, Kemi Badenoch kind of black, whose signalling allegiance to whiteness in the UK’s purposefully opaque race/colour system can be ‘read at a glance’. Having willingly paid the price of the ticket, you probably feel you’re safe. But let me remind you of brother Jimmy Baldwin’s warning. It was true more than 50 years ago when he wrote in support of a demonised Angela Davis, and it’s true today: ‘If they take you in the morning, they will be coming for us that night.’ You may think otherwise – that you are third-generation Caribbean, Irish, Pole, Pakistani – but, come crunch time, you may just find that you’re ‘Black’ or at best, black like me.
Author Narrative
  • Colin Grant is a writer, a fellow of the Royal Society of Literature, and the director of WritersMosaic, a division of the Royal Literary Fund. His books include Colin Grantis a writer, a fellow of the Royal Society of Literature, and the director of WritersMosaic, a division of the Royal Literary Fund. His books include Negro with a Hat: The Rise and Fall of Marcus Garvey (2008); a group biography of the Wailers, I&I: The Natural Mystics (2011); the memoir Bageye at the Wheel (2013); a history of epilepsy, A Smell of Burning (2017); and his latest, I’m Black So You Don’t Have to Be (2023). He lives in Brighton, UK.
Notes
  • Lots of useful background, though I doubt I'll ever have the time or inclination to follow up the links. Also much psychological material, but I thought the Paper - a plug for the author's latest book - a bit of a jumble. It seems as muddled as the author.
  • Also, I didn't like the author's attack on the Conservative 'ministers of colour'. No doubt they have their faults. However, three of the four mentioned are not 'Black' (admittedly he doesn't capitalise the word in his Conclusion) - and they are not token 'minorities' but are - or were - senior Ministers of State, as - of course - is the Prime Minister. And the Ruanda scheme is not intended to ship off black people, but is supposed to undermine the 'small boats' business model, most of whose occupants aren't Black either.
  • With respect to 'successful blacks' - how is progress to be made if those who succeed are taken to be compromisers? And it's no surprise that Conservative ministers - of whatever hue - have had relatively privileged backgrounds.
  • At one point the Author says that he doesn't want to be defined by his Blackness but by his other qualities. So, why can't Conservative ministers act that way? Can't we move on?
  • A lot more needs to be said - and more nuance added to the above.
  • For the Author, see:-
    Colin Grant: Home Page
    Wikipedia: Colin Grant

Paper Comment
  • Sub-Title: "At times I’ve tried to escape it. Other times I’ve embraced it. But at all times, people have attempted to define me by it"
  • For the full text see Aeon: Grant - My blackness.



"Greve (Sebastian Sunday) - AI’s first philosopher"

Source: Aeon, 21 April 2022


Author's Introduction
  • When Alan Turing turned his attention to artificial intelligence, there was probably no one in the world better equipped for the task. His paper ‘Computing Machinery and Intelligence’ (1950) is still one of the most frequently cited in the field. Turing died young, however, and for a long time most of his work remained either classified or otherwise inaccessible. So it is perhaps not surprising that there are important lessons left to learn from him, including about the philosophical foundations of AI.
  • Turing’s thinking on this topic was far ahead of everyone else’s, partly because he had discovered the fundamental principle of modern computing machinery – the stored-program design – as early as 1936 (a full 12 years before the first modern computer was actually engineered). Turing had only just (in 1934) completed a first degree in mathematics at King’s College, Cambridge, when his article ‘On Computable Numbers’ (1936) was published – one of the most important mathematical papers in history – in which he described an abstract digital computing machine, known today as a universal Turing machine.
  • Virtually all modern computers are modelled on Turing’s idea. However, he originally conceived these machines merely because he saw that a human engaged in the process of computing could be compared to one, in a way that was useful for mathematics. His aim was to define the subset of real numbers that are computable in principle, independently of time and space. For this reason, he needed his imaginary computing machine to be maximally powerful.
Author's Conclusion
  • Lastly, Turing points out the importance of the question for the study of human cognition:
  • The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.
    Today, we can confidently say that he was right; the attempt to make a thinking machine certainly has helped us in this way. Moreover, he also correctly predicted in his 1950 paper that ‘at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted’. He did not mean, of course, that the problem of minds and machines would have been solved. In fact, the problem has become only more urgent. Ongoing advances in affective computing and bioengineering will move more and more people to believe that not only can machines think, they can also feel, perhaps deserve certain legal rights, etc. Yet others (such as Roger Penrose) may still reasonably deny that computers can even compute.
  • It was Turing’s fundamental conceptual work in combination with his practical, experimental approach that allowed him not only to conceive the fundamental principle of modern computing in 1935-36 but also, in 1947-48, to anticipate what are now, more than 70 years later, some of the most successful theoretical approaches in the field of AI and machine learning. Arguably, it was this combination that allowed him to progress from a schoolboy whose work in mathematics was judged promising yet untidy, and whose ideas were considered ‘vague’ and ‘grandiose’ by his teachers, to being one of the most innovative minds of the 20th century.
Author Narrative
  • Sebastian Sunday Grève is a German philosopher, who was educated in Oxford and is living in Beijing, where he works as an assistant professor at Peking University.
Notes
Paper Comment
  • Sub-Title: "Alan Turing was a pioneer of machine learning, whose work continues to shape the crucial question: can machines think?"
  • For the full text see Aeon: Greve - AI’s first philosopher.



"Hains (Brigid) & Hains (Paul) - Aeon: Follow-up Boxes"

Source: Hains (Brigid) & Hains (Paul) - Aeon: Follow-up Boxes



"Ham (Paul) - Censoring offensive language threatens our freedom to think"

Source: Aeon, 08 April 2024


Author's Introduction
  • ‘The Karen buried her hatchet and submitted to the straight, fat hillbilly’s rule of thumb that gay ladies and gentlemen of colour should be blackballed from the powwow.’
  • This sentence offends almost everyone, according to the inclusive language guidelines being drawn up by universities, corporations and public bodies in the Western world. Their guidelines would have struck a red line through every word.
  • What I should have written was: ‘The entitled white woman, in the interests of peace, accepted the default ruling of the obese, heterosexual person from the Ozarks that LGBTQ+ and BIPOC should not be invited to the get-together.’
  • Obviously, this is meant satirically. No writer worth his or her (or their) salt would write such a sentence (for aesthetic reasons, hopefully, and not because it offends). But the fact that I feel the need to explain myself at all suggests the presence of an intimidating new force in society, a kind of thought virus that has infected most organisations and political parties, on the Right and Left, the key symptom of which is an obsession with textual ‘purity’, that is, language stripped of words and phrases they deem offensive.
  • Yet, in trying to create a language that offends no one, they offend almost everyone.
  • Why are we so afraid to use words freely, to offend with impunity? Whence arose this fetish for the ‘purity’ of the text? I trace the origins of this obsession with textual purity to the triumph of linguistic philosophy in the early 20th century. Let’s alight on a few key moments in that story to understand how we got here.

Author's Conclusion
  • For most of us, words are valves that relieve the pressure cooker of the brain, or simply the sounds that we lazily deploy to please or fill a void. Words are the ghosts of our thoughts, mere echoes of our ‘selves’. Only the finest poets seem able to marry their minds and their words.
  • And that is why banning words won’t make the world a better place. It’ll simply stifle the expression of thoughts that exist anyway. Words that offend should be argued with and met with words that counter-offend. That’s education, not cancellation, and it is the point of liberty. In sum, cancelling language won’t create a ‘safe space’. It’ll create a dystopia of textual fetishists: thought police mumbling in self-edited confusion.
Author Narrative
  • Paul Ham is a historian and lecturer in narrative history at Sciences Po in France. He is the author of the book The Soul: A History of the Human Mind (forthcoming, July 2024), and will soon launch his Substack column called Who Made Our Minds?
Notes
  • Well, I think this paper is a missed opportunity. At one point I thought it might be an 'April Fool', but it was issued on 8th April.
  • Something needs to be said about the - every shifting - set of words that people take offence to. However, I don't think this has anything to do with the rise of linguistic philosophy either Analytic or Continental. I don't understand Derrida enough to comment on what his aims were, but Wittgenstein wasn't trying to limit the freedom of speech, but he and the Logical Positivists were claiming that lots of philosophy as then practised was nonsense and that philosophers were 'bewitched by language'. They were probably right, but went too far.
  • Also, the issue isn't just with words but with ideas. Some ideas aren't allowed to be put forth in any form of words. It's true that 'hate speech' should not go unchallenged, but the meaning of 'hate' has changed and the bar is set so low that any opinion dissenting from a perceived orthodoxy now seems to count as 'hate'. Hitler hated the Jews for being Jews, but no Labour politician has such a view.
  • I was surprised that some of the Aeon Comments weren't expunged because of breaking 'community guidelines'. Some of them are rather abusive.
  • The author cites The Canceling of the American Mind: How Cancel Culture Undermines Trust, Destroys Institutions, and Threatens Us All by Greg Lukianoff and Rikki Schlott. It sounds - as an Amazon reviewer initially thought - like a right-wing diatribe, but it may be more even-handed. There aren't many reviews, but the most negative one is by some American neo-faschist.
  • I'd like to know - other than on points of style (too many cliches) and because of its meaning - why the 'objectionable' sentense is so objectionable. Word by word. What's wrong with 'Blackballed' (Wikipedia: Blackballing)? Is it likely to be misunderstood by the ignorant (like 'niggardly': Wikipedia: Controversies about the word 'niggardly')? Are 'Powwow' (Wikipedia: Powwow) and 'Burying the Hatchet' (Wikipedia: Burying the hatchet) cultural appropriations? Wikipedia doesn't warn against the use of 'Burying the Hatchet' but remarks that 'In mainstream American culture, such as 20th-century Western movies or by military personnel, the term 'powwow' was used to refer to any type of meeting. This usage is now considered by Indigenous Americans to be an offensive case of appropriation because of the cultural significance powwows hold'.

Paper Comment



"Hancock (Zachary B.) - Beanbag genetics"

Source: Aeon, 15 April 2025


Author's Introduction
  • Life is immensely complex. At every level, from molecules and cells to entire organisms and the intricate ecological balance between species, biologists are amazed by the staggering complexity and interconnectedness of life.
  • Historically, biologists have approached the complexity of life in two ways. Peering into the tapestry of complexity, one way is to search for general threads of order that weave together seemingly disparate facts. For example, while life is incredibly diverse, all living things are well fit to their environment. Charles Darwin, the father of evolution, discovered that a single process explained how this happened – he called it natural selection. All species have variation in their traits – some have thicker fur or longer beaks – and if any of those differences help the organism survive and reproduce, it will pass on that trait to future generations.
  • Over time, this moulds organisms to become well adapted to their environment. In this view, evolutionary theory becomes a unifying idea in biology. All organisms, from microbes to jellyfish, ferns to elephants, are connected by the common thread of evolution. If there are truly universal laws governing how organisms evolve, then we should be able to represent these rules mathematically in the same way a physicist expresses universal concepts like the laws of motion. This approach has been taken by population genetics – since its beginnings in the 1910s, population geneticists have derived hundreds of mathematical equations to describe how evolution occurs under a myriad of conditions. Often, these equations are agnostic to whether the organism is a bird or a shrimp, they are concerned only with transmission rules (that is, how genes are passed on to the next generation), and the processes that bias those rules (ie, evolutionary forces).
  • The other way of approaching complexity is to look upon the tapestry and conclude that no general rule or law can unite these threads. Biology is not like physics – there are too many variables in the real world to represent life as a mathematical expression. A single concept like ‘natural selection acting on variation’ cannot possibly help us understand why some microorganisms, like Paramecium, have scrambled DNA that has to be rewired every time they need to build a new cellular feature. Or how genes get turned off or on by interacting with an intricate array of molecules that are themselves modulated by different environmental cues. To understand life is to study each molecule, cell, individual and species as a unique feature of the world. Evolution, in this view, becomes the study of a mountain of individual facts, each of which tell us something new about evolution as a process. Each new discovery is a paradigm shift, a cause to update the textbooks, and promotes the view that evolutionary theory is in a constant state of extension and revision.
  • While these two views of biology should be complementary, they have often been bitterly at odds. As the controversies oscillate in intensity, the past decade in biology has represented a heightened period of contention. Some have claimed that there is a ‘struggle for the soul of the discipline’ being played out – new discoveries in molecular, cellular and developmental biology are heralding a revolution in evolutionary thinking. Instead of a few fundamental principles, evolutionary biology, they contend, should be viewed through a pluralistic lens. We should scrap any attempt at finding analogues to the laws of motion for biology, and embrace the complexity – the only law in biology is that there are no laws in biology. These biologists and philosophers are united by little more than their scepticism of the first way of viewing life, and so have variously called for ‘extending’ evolutionary theory to downright replacing it with a new paradigm. Among the causes of evolution missing, according to this view, are concepts like niche construction, developmental bias, epigenetics, biological agency, etc. But to understand the current debate, it’s perhaps instructive to return to a previous peak in the wave function of biological disputes.

Author's Conclusion
  • So, to return to Mayr’s question: what is the contribution of beanbag genetics to evolutionary theory? A fundamental goal of any scientific theory should be to uncover the processes that govern the natural world. In this way, beanbag genetics has provided us with the forces that have given rise to biological diversity. Without it, several of these forces would have languished in obscurity. Furthermore, without the mathematics of beanbag genetics, many concepts that are now mainstays in textbooks might have never been discovered by verbal models alone. It’s not intuitive that a trait that increases fitness has an exceptionally low probability of spreading through the population.
  • Since Haldane’s defence in 1964, the ‘mathematical school’ has continued to make impressive contributions to evolutionary theory. Among these include phylogenetics, which has allowed us to reconstruct the relationships of organisms across the Tree of Life. Coalescent theory has permitted us to pinpoint the origins of humans in Africa, as well as trace the evolution of diseases in real time, including HIV and SARS-CoV-2. Furthermore, insights from quantitative genetics, which focuses on traits shaped by dozens of genes, has enabled us to dramatically increase crop yields to feed the world’s burgeoning population.
  • To study evolution is to engage with it as a process. Any exposition of evolution should be cognisant of the basic facts of beanbag genetics. While the field has progressed far beyond Mayr’s simplistic caricature, we’d do well to remember the strength of even elementary algebraic models. After all, the basic forces of evolution – the unifying theory of all biology – can be taught with a simple bag full of beans.
Author Narrative
  • Zachary B Hancock is an evolutionary biologist who specialises in population genetics and phylogenetics of shovel bugs. He is a postdoctoral fellow in the department of ecology and evolutionary biology at the University of Michigan, US.
Notes
Paper Comment
  • Sub-Title: "Today a bitter dispute about the nature of biology is underway. A simple bag of beans may be what tips the balance"
  • For the full text see Aeon: Hancock - Beanbag genetics.



"Hart (David Bentley) - The myth of machine consciousness makes Narcissus of us all"

Source: Aeon, 22 May 2023


Author's Introduction (Excerpt)
  • ... in recent years, I have come to find (the Narcissus myth) particularly apt to our culture’s relation to computers, especially in regard to those who believe that there is so close an analogy between mechanical computation and mental functions that one day, perhaps, artificial intelligence will become conscious, or that we will be able to upload our minds on to a digital platform. Neither will ever happen; these are mere category errors. But computers produce so enchanting a simulacrum of mental agency that sometimes we fall under their spell, and begin to think there must be someone there.
Author's Conclusion
  • The functionalist notion that thought arises from semantics, and semantics from syntax, and syntax from purely physiological system of stimulus and response, is absolutely backwards. When one decomposes intentionality and consciousness into their supposed semiotic constituents, and signs into their syntax, and syntax into physical functions, one is not reducing the phenomena of mind to their causal basis; rather, one is dissipating those phenomena into their ever more diffuse effects. Meaning is a top-down hierarchy of dependent relations, unified at its apex by intentional mind. This is the sole ontological ground of all those mental operations that a computer’s functions can reflect, but never produce. Mind cannot arise from its own contingent consequences.
  • And, yet, we should not necessarily take much comfort from this. Yes, as I said, AI is no more capable of living, conscious intelligence than is a book on a shelf. But, then again, even the technology of books has known epochs of sudden advancement, with what were, at the time, unimaginable effects. Texts that had, in earlier centuries, been produced in scriptoria were seen by very few eyes and had only a very circumscribed effect on culture at large. Once movable type had been invented, books became vehicles of changes so immense and consequential that they radically altered the world – social, political, natural, what have you.
  • The absence of mental agency in AI does nothing to diminish the power of the algorithm. If one is disposed to fear this technology, one should do so not because it is becoming conscious, but because it never can. The algorithm can endow it with a kind of ‘liberty’ of operation that mimics human intentionality, even though there is no consciousness there – and hence no conscience – to which we could appeal if it ‘decided’ to do us harm. The danger is not that the functions of our machines might become more like us, but rather that we might be progressively reduced to functions in a machine we can no longer control. There was no mental agency in the lovely shadow that so captivated Narcissus, after all; but it destroyed him all the same.
Author Narrative
  • David Bentley Hart is a scholar of religion, and a philosopher, writer, author of fiction, and cultural commentator. He is currently a research associate at the University of Notre Dame, Indiana, and his latest books are You Are Gods: On Nature and Supernature (2022) and Tradition and Apocalypse: An Essay on the Future of Christian Belief (2022). His Substack publication is Leaves in the Wind.
Notes
  • I ought to be happy with this paper as it argues against a couple of my betes noire - Functionalism and Uploading.
  • Yet it isn't really an argument, but a rant. The author just shouts over the top of authors and researchers much more distinguished than him (eg. Daniel Dennett) without really engaging with their arguments or appreciating what they are trying to do - which is to explain mental phenomena in the nuts and bolts sense of 'how, exactly, does it work?'.
  • Also, his ideas on computers seems to be somewhat out of date. When programmers wrote the algorithms it was obvious that whatever computers could do was down to what their programmers programmed them to do. But with machine learning, the computer derives its own concepts from patterns in the data in ways that are not even transparent to the programmers who build the overall architecture. This is one of the complaints against AI; we often don't know how it comes out with its decisions.
  • Of course, I agree with the author that we are not data, and that there may well be more to consciousnes than is allowed for in a digital computer. But arguments are required.
  • The editor has included links to several other papers that take a very different line to the author:-
    Aeon: Lande - Do you compute?
    Aeon: Simon - Machine in the ghost
    Aeon: Greve - AI’s first philosopher
    → "Frankish (Keith) - The Consciousness Illusion"
    Aeon: Levin & Dennett - Cognition all the way down
  • I need to re-read this paper and see if I can provide arguments for his assertions where I agree with them.
  • I also need to read, or re-read, the Aeon papers above.
  • Thankfully, it wasn't opened up for Comments.

Paper Comment



"Hassett (Brenna) - How to grow a human"

Source: Aeon, 10 July 2023


Author's Introduction
  • The average human spends at least one quarter of their life growing up. In the careful calculus of the animal kingdom, this is patently ridiculous. Even most whales, the longest of the long-lived mammals, spend a mere 10 per cent or so of their time growing into leviathans. In no other primate has the mathematics gone this wrong but, then again, no other primate has been as successful as we are in dominating the planet.
  • Could the secret to our species’ success be our slowness in growing up? And if so, what possible evolutionary benefit could there be to delaying adulthood – and what does it mean for where our species is going?
Authors' Conclusion
  • A long childhood is our greatest evolutionary adaptation. It means that we have created needy offspring, and this has surprising knock-on effects in every single aspect of our lives, from our pair bonds to our dads to our boring genitals to our dangerous pregnancies and births and our fat-cheeked babies and even that unlikely creature, the grandmother. The amount of time and energy required to grow a human child, and to let it learn the things it needs to learn, is so great that we have stopped the clock: we have given ourselves longer to do it, and critically, made sure there are more and more investors ready to contribute to each of our fantastically expensive children.
  • The average human spends at least one quarter of their life growing up. In the careful calculus of the animal kingdom, this is patently ridiculous. Even most whales, the longest of the long-lived mammals, spend a mere 10 per cent or so of their time growing into leviathans. In no other primate has the mathematics gone this wrong but, then again, no other primate has been as successful as we are in dominating the planet.
    What’s more, as humans, our cultures not only scaffold our evolution, but act as bore-drills to open up new paths for biology to follow, and we find ourselves in a position where the long childhood our ancestors took millions of years to develop is being stretched yet further. In many societies, the markers of adulthood are increasingly stretched out – for the most privileged among us, formal education and financial dependence are making 40 the new 20. Meanwhile, we are taking time away from the most desperate among us, placing that same education out of reach for those foolish enough to be born poor or the wrong colour or gender or in the wrong part of the world. A human child is a rather miraculous thing, representing a huge amount of targeted investment, from mating to matriculation. But given the gulfs in opportunity we are opening up between those that have and those that do not, it would benefit us all to consider more closely the childhoods we are investing in, and who we are allowing to stay forever young.

Author Narrative
  • Brenna Hassett is a lecturer in forensic osteology and archaeology at the University of Central Lancashire. She is the author of Built on Bones (2017) and Growing Up Human (2022).

Notes
  • This is a plug for the author's book.
  • It's mostly fine - if hardly revelatory.
  • The final paragraph of the conclusion is slightly absurd and betrays some of the author's undercurrents. Things have never been better for most people in most places, even though 'equality' is still a long way off (as it always will be).
  • The Aeon Comments seem fairly equally divided between those - like me - who object to the political and feminist undercurrents - and those of like mind with the author who applaud them.
  • More needs to be said ...

Paper Comment
  • Sub-Title: "Our childhood is preposterously long compared to other animals. Is it the secret to our evolutionary success?"
  • For the full text see Aeon: Hassett - How to grow a human.



"Hershovitz (Scott) - How to do philosophy with kids"

Source: Aeon, 21 December 2022


Key points – How to do philosophy with kids
  1. Children are natural philosophers. They ask about everything, and they’re not afraid of getting things wrong.
  2. It’s worth supporting their philosophical instincts. It helps them to think deeply about weighty issues. And, without your encouragement, kids’ spontaneous excursions into philosophy start to slow down around age eight or nine.
  3. Follow the leader. The first step is simply to notice when a kid has a philosophical question.
  4. Ask questions, and question answers. The goal is to get kids to make arguments – to defend their views – and to question them, too.
  5. Do philosophy, don’t teach. Grownups and kids bring different talents to these conversations, and many philosophical conversations will present you with opportunities for collaboration.
  6. Push, but not too hard. If you want to sustain kids’ interest in philosophy, it has to be fun. So pick your topics and your moments carefully.
  7. Keep going. As your kids get older, you’ll need to come at things a little obliquely. There are some terrific games and activities to help with this.
Author Narrative
  • Scott Hershovitz is the Thomas G and Mabel Long Professor of Law and professor of philosophy at the University of Michigan. He is the author of Nasty, Brutish, and Short: Adventures in Philosophy with My Kids (2022).
Notes
  • Clearly a plug for the author's book. Interesting though.
  • It's applicable to my grandchildren - who are all under the age of 6 - but too late for my children!
  • I don't remember asking philosophical questions when young, not do I remember my children doing so, though maybe we all did.
  • Fundamental questions are often given a 'religious' rather than philosophical answer.
  • I agree that philosophy should be 'done' not 'taught' in this context.
  • The paper reads as though it's directed to philosophers with children rather than to parent generally.
  • I wonder how the author's book compares to "Kazez (Jean) - The Philosophical Parent: Asking the Hard Questions About Having and Raising Children"? The latter is both wider in scope and aimed at a less restricted audience, I presume.

Paper Comment



"Hillier-Smith (Bradley) - Moral refuge"

Source: Aeon, 20 May 2025


Author's Introduction
  • There are currently 43.7 million refugees worldwide. These are people who have been forced to flee their home countries due to severe threats to their lives, human rights and basic needs. Yet, having fled in search of safety, they have not always found it. Instead, the vast majority live in squalid and dangerous camps or face destitution in urban areas in regions neighbouring their own states in the Global South. In these conditions, refugees continue to face severe human rights violations. A small minority undertake perilous journeys to find adequate safety in the Global North. Thousands lose their lives on the way, every year.
  • How should states in the Global North respond to this situation? This question polarises debate. Some philosophers, like Peter Singer, argue that states must admit refugees until the point of societal collapse; others argue that states are not necessarily obligated to admit a single refugee. Some politicians advocate for expansive resettlement, others seek to prevent refugees from seeking asylum at the border, or even deport them. Some citizens march the streets proclaiming ‘refugees welcome here’, others attempt to burn down a hotel with refugees inside. Some states have welcomed more than a million refugees, others build concrete walls and barbed wire fences.
  • In the face of such volatile disagreement, there is an urgent need for an understanding and agreement on what an ethical response to refugees would be.

Author's Conclusion
  • What would implementing this ethical response look like? The negative obligations would rule out harmful practices. State policies and oversight would prohibit tactics of violence and abuse at the border, and ensure respect for refugees’ rights. The interception, return and detention of refugees, as in Libya, would end as a practice, with currently incarcerated refugees evacuated to safety. Forced encampment would also cease. And containment would be prohibited as a practice: refugees would not be forcibly prevented from escaping severe threats; instead, their human rights to seek asylum would be respected.
  • The positive obligations would also rule out particular practices as inadequate. Camps may cover subsistence needs in the short term, but long-term encampment fails to provide an autonomous or dignified existence, or human rights protection, and does not treat refugees as moral equals but as undesirable elements to be herded away. Forced relocation schemes, which deport refugees against their will to states they do not wish to be in, similarly fail to provide an autonomous and dignified existence or adequate human rights protection, and do not treat refugees as moral equals, but akin to toxic waste to be exported away.
  • Instead, the positive obligations would require alternative responses. Active resettlement of refugees into Northern states would provide an autonomous existence where refugees could live and rebuild their lives according to their wishes, a dignified existence away from squalid camps and urban destitution, as well as human rights protection. Resettlement further respects refugees as worthy of moral consideration and of inclusion as moral, social and political equals. Providing safe and regular routes would also be required. At present, the only way a refugee can claim asylum in Northern states is by physically setting foot on those territories. This incentivises often-lethal journeys. For an alternative safe route, issuing humanitarian visas as valid documentation to travel safely on airlines and trains would enable refugees’ autonomy and dignity in travel to safety, safeguard their rights, and respect them as moral equals worthy of protection.
  • The above proposals might be dismissed as a naive wishlist that ignores the political realities of the contemporary world. The 44 million refugees worldwide are roughly 0.5 per cent of the global population. This is often framed as an unprecedented ‘crisis’, but this invites alarmism, fatalism or harmful, knee-jerk responses. Larger displacements have been addressed in our history (there were around 200 million refugees after the Second World War, or 10 per cent of global population), and we have a wide variety of policy tools available. Feasibly expanding resettlement across Northern states through a multilaterally coordinated agreement could save millions of the most vulnerable. Providing safe routes would remove the desperation exploited by trafficking gangs and the huge state expenditure on combatting them. And many refugees can be protected in regions closer to their states of origin through modest increases in foreign assistance in grants to host states and fulfilling the (chronically underfunded) budget of the UNHCR. An ethical response is therefore not only urgently required but also within our reach.
  • In fact, it is already a reality – at least for Ukrainian refugees. There are no Ukrainian refugees beaten up or killed at Northern state borders, nor drowning en route. There are no Ukrainians imprisoned indefinitely and abused in detention centres, or enclosed into camps, or forcibly contained in regions facing severe threats to their human rights. Instead, immediate protection has been provided to millions. The EU Council activated its Temporary Protection Directive (available since 2001, but triggered only in 2022), granting all Ukrainian refugees the right to travel freely to, and access employment, housing, healthcare, education and social welfare in, any and all EU states. The UK provided visa schemes allowing hundreds of thousands of Ukrainian refugees to apply online for free, and then board commercial flights or other transport to the UK and stay for at least three years with immediate rights to work and access to public services. Eurostar even offered free rail travel for Ukrainian refugees to the UK.
  • This response should be rightly celebrated as one where the negative and positive obligations towards refugees were recognised and acted upon. An ethical response was achieved with millions protected. This response also proved that the supposed costs of protecting refugees were agreed to be morally irrelevant in the face of urgent obligations towards them, and that expansive protection measures were not idealistic, infeasible or too demanding, but immediately implementable.
  • Since all human beings are moral equals, there can be no moral difference between Ukrainian refugees and non-Ukrainian refugees facing equal threats that could justify such divergent responses towards them. An ethical response, demonstrated to be possible towards Ukrainian refugees, can and ought to be expanded to all.
Author Narrative
  • Bradley Hillier-Smith is associate lecturer in moral, legal and political philosophy at the University of St Andrews in Scotland. He is the author of The Ethics of State Responses to Refugees (2024).
Notes
  • This is an important but rather smug paper. It's easy to place obligations on the shoulders of you society that you yourself do nothing to bear. Also, the paper completely ignores the 'pull effect' of providing too ready a welcome.
  • I don't believe in 'rights' but do believe in duties. Refugees have no rights to our help but we do have a duty to 'process' them better than we do.
  • I note that our acceptance of Ukrainians had various factors that make them different to Syrians and sub-Saharan Africans: their culture is closer to ours, accepting them is a snub to the Russians - our perennial enemies - and they are expected to go back once the crisis is over. They are genuine refugees, not 'migrants'. So are Syrians, of course, but they are unlikely to go back home.
  • There are a good many thoughtful Aeon Comments - most with equally thoughtful replies by the Author. They deserve careful consideration.
  • I've recently read "McPherson (Stephanie Sammartino) - The Global Refugee Crisis: Fleeing Conflict and Violence", a somewhat older - pre Ukraine - consideration of the same topic at a somewhat 'lower' intellectual level.
  • This is a plug for the Author's book, of course. Thankfully it's absurdly expensive, at £120, so I'm not tempted to buy it.
  • This relates to my Notes on Narrative Identity and Forensic Property.

Paper Comment
  • Sub-Title: "You can believe in border control yet protect those fleeing to safety. So what is our ethical obligation to refugees?"
  • For the full text see Aeon: Hillier-Smith - Moral refuge.



"Hoeg (Mette Leonard) - Aphantasia can be a gift to philosophers and critics like me"

Source: Aeon, 20 March 2023


Author's Conclusion
  • Aphantasia could certainly explain why Parfit’s theory resonates so strongly with me – as well as my sympathy for Eastern contemplative and monist theories such as Buddhism that advocate self-abandonment, detachment and renunciation. While many people find such non-essential ideas of personhood and existence disturbing and estranging, to me they’re not only obviously plausible, but also highly relatable – and easy to practise. When I introspect, I literally see nothing – and on that basis, it is likely more difficult to create and uphold an idea of a centred, essential and continuous self.
  • While the aphantasic imagination and memory have fewer dimensions and likely less richness, the mind seems to compensate for this with a stronger absorption in, and greater intimacy with, the present moment. My sense of connection to my past is weak. I have very few memories like the one of flying through the anemone-lighted forest on my father’s bike, and they rarely come to my mind. The colours and light of the present are always capturing my attention.
Author Narrative
  • Mette Leonard Høeg is a hosted research fellow in philosophy at the Oxford Uehiro Centre for Practical Ethics. She is also a literary critic, and the author of Uncertainty and Undecidability in Twentieth-Century Literature and Literary Theory (2022) and anthology editor of Literary Theories of Uncertainty (2023).
Notes
  • To be supplied.

Paper Comment



"Hubert (Mario) - The nature of natural laws"

Source: Aeon, 14 November 2024


Excerpts
  • ... we assume that, thanks to science, there is a recipe of sorts for how the laws of nature work. You take the state of the Universe at a given moment – every single fact about every single aspect of it – and combine it with the laws of nature, then assume that these will reveal, or at least determine, the state of the Universe in the moment that comes next. I refer to this as the layer-cake model of the Universe ...
  • ... In order to mitigate (the) problem of how electrons are able to obey the laws, another conception of laws was proposed by the philosopher David Lewis, which has been dubbed Humeanism about laws, in reminiscence of David Hume ... We, as human beings, cannot directly observe this causal binding ... Lewis took this epistemic conclusion and turned it into an ontological one. Not only do we not experience how exactly laws influence physical objects, now it is said that the laws of nature do not influence or produce anything in the world. The layer-cake model is utter fiction. Instead, laws of nature effectively describe what is happening in the world. They describe the facts in the world, like a newspaper article reports facts in the world. Therefore, to emphasise the main idea of this proposal, I will call it the newspaper model of laws of nature.
  • The newspaper model is probably the most popular theory of laws of nature among professional philosophers, and it attracts a lot of active research right now. It is so attractive because it is metaphysically thin: there are no mysterious, unexplained relations of production as demanded in the layer-cake model. Laws merely summarise the history of physical objects.

Author's Conclusion
  • As a reaction to this narrow scope of the layer-cake model, the philosopher Eddy Keming Chen and the mathematician Sheldon Goldstein, at the University of California, San Diego and Rutgers University respectively, as well as the philosopher Emily Adlam, at Chapman University, have suggested an alternative. Laws may be primitive, but they nonetheless ‘merely’ constrain the physical possibilities in the world. Call this the straitjacket model of laws of nature. No notion of production and no flow of time is required. All that laws do is to constrain what can happen in the world. In this way, we combine the advantages of the newspaper model with the advantages of the layer-cake model, because we acquire the generality of the newspaper model and a reason for stable regular behaviour from the layer-cake model. Now we have a metaphysical underpinning for retrocausal laws and the laws of special relativity because laws, in the straitjacket model, are primitive and govern the world by constraining what can happen.
  • Still, the straitjacket model suffers from the same metaphysical issue that plagued the layer-cake model. The layer-cake model was not able to account for how laws produce new states. In a similar vein, the straitjacket model does not specify how laws can constrain what happens in the world. It seems again that abstract laws have to latch on to the real world to tell physical objects how to behave. How laws are able to do so remains unanswered.
  • The possible implications for any form of law of nature are profound. The layer-cake model seems to be intuitively plausible – the present is determined by the past – but we found out that it requires that laws somehow affect the objects in space and time without being themselves located in space and time.
  • Since the layer-cake model is too restrictive to capture other formulations of physical laws, like retrocausality and general relativity, the straitjacket model was developed. This model does provide a framework for retrocausality and general relativity, yet it suffers from the same metaphysical problem as the layer-cake model. The newspaper model, on the other hand, tries to introduce laws without any metaphysical baggage, and this seems to be a promising approach. Yet we seem to need a metaphysical glue to secure the stable behaviour of our world.
  • Given all this, which theory of laws best explains the regularities in our world? If the newspaper model were true, it would be a constant coincidence that the Sun rises every day or that the water in your kettle boils at 100°C, as there is no metaphysical constraint on how objects can behave. In contrast to many of my colleagues, I therefore find the newspaper model pretty unconvincing for explaining stable regularities. The layer-cake model and the straitjacket model fare better in this respect. The advantage of the straitjacket model is that it is general enough to capture unfamiliar laws of nature, like those describing retrocausality. But this virtue comes with a vice: the straitjacket model is so general that any law of nature would fit in.
  • The metaphysically interesting aspect of nature’s laws is not that they constrain physical possibilities, but how they do that. Even if it is up for debate, the layer-cake model broadly addresses that question best. This works wonderfully with billiard balls. There are conditions where the model just can’t explain how laws of nature produce the future, like retrocausality; but instead of seeking a single new overarching model, perhaps we’d be better off sticking with the layer-cake, after all, and developing a separate tailored account for each type of situation where that model does not fit.
Author Narrative
  • Mario Hubert is assistant professor of philosophy at Ludwig-Maximilians-University Munich.
Notes
Paper Comment



"Huenemann (Charlie) - If I teleport from Mars, does the original me get destroyed?"

Source: Aeon, 01 August, 2017


Author's Conclusion
  • Maybe there is something to be learned from this. Perhaps what seems to me an extremely obvious truth – namely, that there should be some fact to the matter of what I experience once I step in and press the button – is really not a truth at all. Maybe the notion that I am an enduring self over time is some sort of stubborn illusion. By analogy, I once joined a poker club that had been in existence for more than 50 years, with a complete change in its membership over that time. Suppose someone were to ask whether it was the same club. ‘It is and it isn’t,’ would be the sensible reply. Yes: the group has met continuously each month over 50 years. But no: none of the original members are still in it. There is no single, objective answer to the poker-identity question, since there is no inner, substantive soul to the club that has both remained the same and changed over time.
  • The same goes, perhaps, for me. I think I have been the same thing, a person, over my life. But if there is no inner, substantive me, then there is no fact to the matter about what my experience will be when ‘I’ press the button. It is just as the observer says: first there was one, and then there were two (with the toggle set to ‘save’), each thinking himself to be the one. There is no fact about what ‘the one’ really experienced, because ‘the one’ wasn’t there to begin with. There was only a complex arrangement of members, analogous to my poker club, thinking of themselves as belonging to the same ‘one’ over time.
  • Small comfort that is. I went into this problem wondering whether I could survive – only to find out that I am not and never was! And yet the decision still lies before me: do I – do we? – press the button?
  • Note: I make no claim to originality in this thought experiment.
    1. A very similar sort of question was raised in 1775 by the Scottish philosopher Thomas Reid, in a letter to Lord Kames1 referencing Joseph Priestley’s materialism: ‘whether when my brain has lost its original structure, and when some hundred years after the same materials are again fabricated so curiously as to become an intelligent being, whether, I say, that being will be me; or, if two or three such beings should be formed out of my brain, whether they will all be me’.
    2. I first encountered it, with the Martian setting, in the preface to2 the essay collection "Hofstadter (Douglas) & Dennett (Daniel), Eds. - The Mind's I - Fantasies and Reflections on Self and Soul" (1981), edited by Douglas Hofstadter and Daniel Dennett.
    3. The British philosopher Derek Parfit made much hay out of the idea in his book3 "Parfit (Derek) - Reasons and Persons" (1984).
    4. And the podcaster C G P Grey provides an insightful illustration of the problem in his video "YouTube - Video - The Trouble with Transporters" (2016).
Author Narrative
  • Charlie Huenemann is professor of philosophy at Utah State University. He is the author of several books and essays on the history of philosophy, as well as some fun stuff, such as How You Play the Game: A Philosopher Plays Minecraft (2014).

Notes
  • This is a short - and rather old - Paper on a familiar topic. As the author points out in his Conclusion, he makes no claims to originality.
  • There are 42 Aeon Comments, including some responses by the Author. I've not read these in detail yet and should do so.
  • Enough to describe the proposed situation here:-
    1. Transfer is of an information blueprint obtained by scanning.
    2. Reconstruction uses new material.
    3. There's a toggle switch that allows me to destroy the 'original me' or not.
  • The author adopts a rather uncritical PV. I am my connectome which can be instantiated elsewhere without loss to me.
  • He claims that from either a first or third person perspective, no-one can tell the difference between the original and the teletransportee. He's a bit quick here, as - while no tests - internal or external - on the latter could reveal the difference, third parties know the history - and the teletransportee can be told - so the non-identity is clear.
  • The author does have a very sensible passage that distinguishes forward from backward psychological continuity (see my Note):
      Still: I – the one who steps into the teleporter and presses the button – would not subsequently have this new guy’s experience of walking out onto Earth. My next experience after pressing the button would be – well, it would be no experience at all, as I would be dead.
  • The author's conclusion is we do not persist - strictly speaking - from one moment to the next. This strikes me as an unserviceable and unnecessary conclusion.
  • The references are unsurprising. I’ve commented on them in detail: follow the links.
  • This connects additionally to my Notes on Teletransportation and Fission. Also Thought Experiments and the Self.

Paper Comment

For the full text, follow this link (Local website only): PDF File4.




In-Page Footnotes ("Huenemann (Charlie) - If I teleport from Mars, does the original me get destroyed?")

Footnote 1: Footnote 2: Footnote 3:



"Humphrey (Nicholas) - Seeing and somethingness"

Source: Aeon, 03 October 2022


Author's Conclusion
  • Bringing these ideas into the field of animal behaviour, I’ve looked at a range of diagnostic criteria that might apply. Does the animal:
    1. Have a robust sense of self, centred on sensations?
    2. Engage in self-pleasuring activities – be it listening to music or masturbation?
    3. Have notions of ‘I’ and ‘you’?
    4. Carry their sense of their own identity forward?
    5. Attribute selfhood to others?
    6. Lend out their minds so as to understand others’ feelings?
  • Broadly, these tests confirm my hunch that it’s only mammals and birds who make the cut. Chimpanzees, dogs, parrots we can be sure of. Lobsters, lizards, frogs we can rule out.
  • Octopuses? They are everybody’s favourite candidate for an outlying species that is sentient. But the behavioural evidence belies this. On the face of it, octopuses don’t find pleasure in sensation-seeking for sensation’s sake; they don’t have a strong sense of themselves as individuals; they don’t attribute selfhood to others; nor do they care.
  • What about man-made machines? There are, of course, already machines in existence that see and hear and smell at their own level. But, as with lobsters and frogs, it’s presumably blind-seeing, blind-hearing, blind-smelling. Given the life-tasks that nonsentient animals and machines have been designed to accomplish, we can assume that phenomenal blindness leaves them none the worse off.
  • Nevertheless, let’s suppose that, not so far in the future, human engineers were to want to build robots to undertake a task where sentience and selfhood really might play a role: namely, to survive as significant individuals in a world of other phenomenal selves. Then I can imagine that one day the engineers could in fact take a leaf from Nature’s book and, by duplicating the specialised brain circuits responsible for phenomenal consciousness in humans, build sentience into a machine.
Author Narrative
  • Nicholas Humphrey is emeritus professor of psychology at the London School of Economics. He is the author of many books on the evolution of human intelligence and consciousness, the latest being Sentience: The Invention of Consciousness (UK 2022; US 2023). He lives in Cambridge, UK.
Notes
  • This is a plug for the author’s latest book: "Humphrey (Nicholas) - Sentience: The Invention of Consciousness". Any serious discussion will have to await my reading this book.
  • The author’s theory looks very interesting, though it sounds a little like ‘blinding with science’ in this short paper.
  • I’m ‘happy’ with his conclusions. It seems sensible – and not as radical as the book’s blurb suggests – to restrict phenomenal consciousness to warm-blooded animals. But this doesn’t mean the theory is correct. I’m not sure about octopuses!
  • The author (or maybe his editor) sites several other Aeon papers:-
    → "Frankish (Keith) - The Consciousness Illusion",
    → "Pigliucci (Massimo) - Consciousness is real",
    → "Hanlon (Michael) - The mental block",
    → "Seth (Anil Kumar) - The real problem", and
  • Also, one of his own:-
    → "Humphrey (Nicholas) - The society of selves".
  • There are some useful comments, which I've preserved in PDF form lest they disappear.
  • One commentator refused to read further than the introductory account of the 'blind-sighting' of the monkey which Humphrey recounts without any hint that there might be a moral dilemma in (ab-)using an intelligent and sentient being in this way. While I sympathise, I can't see the point in ignoring the results of (by most people's sensibilities today, even if not those of Humphreys) immoral research. Otherwise, animals - and sometimes human animals - have suffered in vain.

Paper Comment
  • Sub-Title: "An evolutionary approach to consciousness can resolve the ‘hard problem’ – with radical implications for animal sentience"
  • For the full text see Aeon: Humphrey - Seeing and somethingness.



"Jaarsma (Ada) - Choose your own birth"

Source: Aeon, 10 March 2020


Author Narrative
  • Ada Jaarsma is professor of philosophy at Mount Royal University in Calgary, Canada. Her latest book is Kierkegaard After the Genome: Science, Existence and Belief in This World (2018).
Notes
  • To be supplied.

Paper Comment
  • Sub-Title: "Every human is both an animal with a deep evolutionary history and an individual who must bring their existence into being"
  • For the full text see Aeon: Jaarsma - Choose your own birth.



"Kensinger (Elizabeth) & Budson (Andrew) - How to get better at remembering"

Source: Aeon, 27 November 2024


Key points – How to get better at remembering
  1. Understand the memory cycle. Successfully remembering something involves a three-part process beginning with encoding, then storage, and finally retrieval.
  2. Do FOUR things to encode more effectively. To encode information better, go through these steps: focus on it, organise it in relation to your other knowledge, understand it, and relate it to something you already know.
  3. Test yourself. One of the best ways to create a durable memory is to refresh that memory every so often, and one of the best ways to refresh the memory is to quiz yourself.
  4. Adopt a lifestyle that benefits long-term memory. To maximise your chances of storing the information you’ve encoded, aim to sleep well, eat well and get plenty of exercise.
  5. Relax to increase your chances of remembering. Struggling to retrieve a memory can be stressful, which only serves to make recall even more elusive. Use deep breathing and other relaxation strategies to reduce your stress and aid your memory.
  6. If you get stuck, use retrieval cues. A common mistake when struggling to remember something is to try generating all the possibilities for what it could be. A better approach is to think about contextual details from when you first stored the information, such as where you were and what else was going on at the time.
  7. Remember faces by making them distinctive. Remembering the identity of a face can be especially tricky. To help, find a stable facial feature (ie, not a hairstyle or glasses) and connect it to something else about the person, such as their name – the more bizarre or silly the connection, the more likely you will remember who the face belongs to.
  8. Use a memory palace to remember lists of information. For when you have a long list of information to remember, use your imagination to peg each item to a location along a familiar route.
Author Narrative
  • Elizabeth Kensinger is professor of psychology and neuroscience at Boston College. She has published more than 200 articles and received several awards for her research on human memory, including from the Association for Psychological Science and the Cognitive Neuroscience Society.
  • Andrew Budson is chief of cognitive neurology at the Boston VAMC and professor of neurology at Boston University. He has published nine books and more than 150 articles, and has received awards for research in ageing and dementia from the American Academy of Neurology.
Notes
  • A fairly standard guide to improving your memory.
  • I liked the section of not overloading your memory with transient junk. To-do-lists are fine!
  • I've always found remembering names of people I meet difficult - I often don't really listen, being generally distracted by social awkwardness.
  • I presume memory palaces are no use for people without mental imagery. While I can walk through places in my head, I've not tried pasting items on the walls - I'm not sure they'd stick.
  • I was slightly confused about the advice on remembering faces. I'm excellent at recognising faces, but sometimes the name escapes me. I have made the occasional slip though. It's re-recognising people you've only just met and not payed much attention to that's the issue.
  • I should probably follow up on this - and look up the other books I've got on Memory-improvement.
  • We're referred to:-
    Aeon: Penn - How to study effectively
    → The authors' book: Why We Forget and How to Remember Better: The Science Behind Memory (2023)
    → Seven Steps to Managing Your Aging Memory: What’s Normal, What’s Not, and What to Do About It (2nd ed, 2023)
    YouTube: Daniel Schacter - Human Memory
    YouTube: Cambridge Forum - Forgetting & Remembering - what can we do about it?

Paper Comment



"Kent (Adrian) - Our quantum problem"

Source: Aeon, 28 January, 2014


Author's Introduction - Excerpts
  • It was clear from the start that quantum theory challenged all our previous preconceptions about the nature of matter and how it behaves, and indeed about what science can possibly – even in principle – say about these questions. Over the years, this very slipperiness has made it irresistible to hucksters of various descriptions. I regularly receive ads offering to teach me how to make quantum jumps into alternate universes, tap into my infinite quantum self-energy, and make other exciting-sounding excursions from the plane of reason and meaning. It’s worth stressing, then, that the theory itself is both mathematically precise and extremely well confirmed by experiment.
  • Quantum mechanics has correctly predicted the outcomes of a vast range of investigations, from the scattering of X-rays by crystals to the discovery of the Higgs boson at the Large Hadron Collider. It successfully explains a vast range of natural phenomena, including the structure of atoms and molecules, nuclear fission and fusion, the way light interacts with matter, how stars evolve and shine, and how the elements forming the world around us were originally created.
  • Yet it puzzled many of its founders, including Einstein and Erwin Schrödinger, and it continues to puzzle physicists today. Einstein in particular never quite accepted it. ‘It seems hard to sneak a look at God’s cards,’ he wrote to a colleague, ‘but that he plays dice and uses “telepathic” methods (as the present quantum theory requires of him) is something that I cannot believe for a single moment.’ In a 1935 paper co-written with Boris Podolsky and Nathan Rosen, Einstein asked: ‘Can [the] Quantum-Mechanical Description of Physical Reality Be Considered Complete?’ He concluded that it could not. Given apparently sensible demands on what a description of physical reality must entail, it seemed that something must be missing. We needed a deeper theory to understand physical reality fully.
  • Einstein never found the deeper theory he sought. Indeed, later theoretical work by the Irish physicist John Bell and subsequent experiments suggested that the apparently reasonable demands of that 1935 paper could never be satisfied. Had Einstein lived to see this work, he would surely have agreed that his own search for a deeper theory of reality needed to follow a different path from the one he sketched in 1935.
  • Even so, I believe that Einstein would have remained convinced that a deeper theory was needed. None of the ways we have so far found of looking at quantum theory are entirely believable. In fact, it’s worse than that. To be ruthlessly honest, none of them even quite makes sense. But that might be about to change.
  • Here’s the basic problem. While the mathematics of quantum theory works very well in telling us what to expect at the end of an experiment, it seems peculiarly conceptually confusing when we try to understand what was happening during the experiment. To calculate what outcomes we might expect when we fire protons at one another in the Large Hadron Collider, we need to analyse what – at first sight – look like many different stories. The same final set of particles detected after a collision might have been generated by lots of different possible sequences of energy exchanges involving lots of different possible collections of particles. We can’t tell which particles were involved from the final set of detected particles.
  • Now, if the trouble was only that we have a list of possible ways that things could have gone in a given experiment and we can’t tell which way they actually went just by looking at the results, that wouldn’t be so puzzling. If you find some flowers at your front door and you’re not sure which of your friends left them there, you don’t start worrying that there are inconsistencies in your understanding of physical reality. You just reason that, of all the people who could have brought them, one of them presumably did. You don’t have a logical or conceptual problem, just a patchy record of events.
  • Quantum theory isn’t like this, as far as we presently understand it. We don’t get a list of possible explanations for what happened, of which one (although we don’t know which) must be the correct one. We get a mathematical recipe that tells us to combine, in an elegant but conceptually mysterious way, numbers attached to each possible explanation. Then we use the result of this calculation to work out the likelihood of any given final result. But here’s the twist. Unlike the mathematical theory of probability, this quantum recipe requires us to make different possible stories cancel each other out, or fully or partially reinforce each other. This means that the net chance of an outcome arising from several possible stories can be more or less than the sum of the chances associated with each.

Author's Conclusion
  • With luck, if the ideas I have outlined are on the right lines, we might have a good chance of detecting the limits of quantum theory in the next decade or two. At the same time we can hope for some insight into the nature and structure of physical reality. Anyone who expects it to look like Newtonian billiard-balls bouncing around in space and time, or anything remotely akin to pre-quantum physical ideas, will surely be disappointed. Quantum theory might not be fundamentally correct, but it would not have worked so well for so long if its strange and beautiful mathematics did not form an important part of the deep structure of nature. Whatever underlies it might well seem weirder still, more remote from everyday human intuitions, and perhaps even richer and more challenging mathematically. To borrow a phrase from John Bell, trying to speculate further would only be to share my confusion. No one in 1899 could have dreamed of anything like quantum theory as a fundamental description of physics: we would never have arrived at quantum theory without compelling hints from a wide range of experiments.
  • The best present ideas for addressing the quantum reality problem are at least as crude and problematic as Bohr’s model of the atom. Nature is far richer than our imaginations, and we will almost certainly need new experimental data to take our understanding of quantum reality further. If the past is any guide, it should be an extraordinarily interesting scientific journey.
Author Narrative
  • Adrian Kent is a reader in quantum physics at the University of Cambridge. His latest book is Many Worlds? (2010), co-edited with Simon Saunders, Jonathan Barrett and David Wallace.

Notes
  • I've now read this rather long Paper twice – once a long time ago and recently in March 2025 – but I'm not much the wiser for it.
  • However, it gave me a new angle on the Bohm 'hidden variable' models, linking them to de Broglie (see Wikipedia: De Broglie-Bohm theory). I'd not heard of 'beables' before (see Wiktionary: Beable).
  • The whole area is very complicated. Also, the Paper is now over 10 years old and was doubtless produced to publicise the author's book, which is far too expensive (£75). Things may have moved on.
  • The Paper connects the Bohr 'shut up and calculate' approach to the then prevalence of logical positivism. But – while rejecting LP as an account of meaning is valid – the LP idea is still a sensible methodological approach. If no experimental evidence can settle a claim, it's best to keep quiet about it and move on to soluble problems. Also, I think the author mis-states the implications of this approach. It doesn't deny that we can have no evidence for things we can't see. The Higgs Boson had just then (allegedly) been confirmed by experiments in the LHC. I do have some doubts about this, as the experiments were difficult and I imagine the experimental results - earnestly looked for, given the huge costs – may well be consistent with other theoretical models (yet to be developed). I doubt the results have confirmed the model to 10 decimal places as is often trumpeted for QM. I may be wrong, of course. See Wikipedia: Higgs boson.
  • I thought the following passage ludicrously rhetorical:
    1. Quantum theory is supposed to describe the behaviour of elementary particles, atoms, molecules and every other form of matter in the universe. This includes us, our planet and, of course, the Large Hadron Collider. In that sense, everything since the Big Bang has been one giant quantum experiment, in which all the particles in the universe, including those we think of as making up the Earth and our own bodies, are involved. But if theory tells us we’re among the sets of particles involved a giant quantum experiment, the position I’ve just outlined tells us we can’t justify any statement about what has happened or is happening until the experiment is over. Only at the end, when we might perhaps imagine some technologically advanced alien experimenters in the future looking at the final state of the universe, can any meaningful statement be made.
    2. Of course, this final observation will never happen. By definition, no one is sitting outside the universe waiting to observe the final outcome at the end of time. And even if the idea of observers waiting outside the universe made sense – which it doesn’t – on this view their final observations still wouldn’t allow them to say anything about what happened between the Big Bang and the end of time. We end up concluding that quantum theory doesn’t allow us to justify making any scientific statement at all about the past, present or future. Our most fundamental scientific theory turns out to be a threat to the whole enterprise of science. For these and related reasons, the Copenhagen interpretation gradually fell out of general favour.
  • This is – presumably – the justification of the Paper’s subtitle: “When the deepest theory we have seems to undermine science itself …”.
  • I’m not convinced. It’s associated with the ‘Many Worlds’ interpretation of QM as a solution to the ‘measurement problem’ by denying the distinction between the quantum and macro-worlds. I’m wondering whether some form of supervaluationism in the logic of vagueness might help. See, maybe, "Eklund (Matti) - Supervaluationism, Vagueifiers, and Semantic Overdetermination". Very hand-wavy, but – in the context of a QM experiment – what’s going on in the rest of the universe is irrelevant (all possibilities are of equal value for anywhere to any degree ‘remote’).
  • There are no Aeon Comments to review, but I need to think about these issues some more in the context of more recent Aeon Papers.
  • This connects to my Notes on Quantum Mechanics and Fission.

Paper Comment

For the full text, follow this link (Local website only): PDF File1.
  • Sub-title: "When the deepest theory we have seems to undermine science itself, some kind of collapse looks inevitable."
  • For the full text, see Aeon: Kent - Our quantum problem



"Khaliq (Namir) - Why I’ll never forget the day I met Daniel Kahneman for lunch"

Source: Aeon, 25 July 2024


Excerpts
  • There was a certain cinematic quality to many of the thought experiments that Kahneman and Tversky generated during the course of their partnership. Here’s a version of one, first described in 1986, designed to illustrate how we experience regret: imagine Amos and Danny flying out of Jerusalem Airport on two different planes that are departing at the same time. They travel to the airport together in the same taxi, get caught in the same traffic, and arrive at the airport 30 minutes after their scheduled departures. But Danny is told his flight left on time, while Amos is told his flight was delayed and left only three minutes ago. Objectively, rationally, their reactions should be the same because their experiences were akin – yet Amos is far more upset. The pair chalk up the difference to the ease with which Amos could envision things having gone differently – it’s not hard to imagine saving three minutes somewhere in your day.
  • That work ultimately challenged the long-standing belief in economics that humans are logical, value-maximising creatures. Through the roster of creative and ingenious research scenarios the duo designed in the early 1970s, they showed that this assumption is often untrue. Here is one: ask someone which they would rather have – $500 guaranteed; or a 50/50 shot at winning $1,000. Most of us would go with the security of the first option. But then ask the same question in a slightly different way: would you rather lose $500 for sure; or take a 50/50 bet of losing $1,000. Suddenly, for most people, their answers flip. A guaranteed loss doesn’t feel too good. And yet, in economics terms, it’s the same question asked twice, just in a slightly different way. Kahneman and Tversky revealed that it is the framing of a scenario that affects our response to it, rather than an objective analysis of the risks. This simple insight set the field of economics ablaze for the next half a century.
  • Later in life, Kahneman turned his attention away from fallibility to the question of happiness. His work produced another canon of insights, destabilising our assumptions about the kind of experiences that lead to lasting satisfaction. For instance, imagine plunging your hand into a bucket of ice-cold water for as long as you can bear it. The more time you keep it submerged, the more pain you endure. But think back on the experience once it’s over, and it turns out that your memory of the discomfort will have little to do with the length of time you braved the pain, and much more to do with the peak moments of your suffering and whatever you were feeling in the final seconds of the experience. Together with others, Kahneman synthesised this idea into ‘the peak-end rule’: we end up remembering only what is most extreme and most recent. He built upon this evidence to develop a distinction between the experiencing self and the remembering self. Kahneman believed that humans often sacrifice the momentary pleasure felt by the former in order to maximise the satisfaction borne by the latter. We don’t want to live a good life. We want to remember having lived a good life.
  • Danny’s had been a life of collaboration and camaraderie: with Tversky, whom he described as his intellectual soulmate; with a young researcher named Matthew Killingsworth, with whom Danny partnered in the final three years of his life to overturn many of his own seminal findings regarding the correlation between income and life satisfaction (it turns out that perhaps money does buy happiness: Killingsworth, Kahneman & Mellers - Income and emotional well-being: A conflict resolved); with his wife Anne alongside whom he co-authored multiple papers; and with countless others. I offer that as the reason he seemed to be in better cardiovascular shape than me.
Author Narrative
  • Namir Khaliq is a film and television producer and the founder of Protostellar Media, a science communication production company in Los Angeles, California.
Notes
  • This is a brief article with some nice biographical commentary on Daniel Kahneman. This aspect is not worth commenting on.
  • The encounter is interesting - Kahneman was 88 at the time - two years before his death - and the film was never made.
  • This has reminded me to finish (and probably restart) reading "Kahneman (Daniel) - Thinking, Fast and Slow".
  • There are a few sections of philosophical interest in the paper and I've extracted these for 'footnoting' in the future.
  • There was also a link to another paper, which I've downloaded.
  • I'll comment further in due course.

Paper Comment



"Kind (Amy) - How to think about consciousness"

Source: Aeon, 04 September 2024


Author's Introduction
  • ...
  • But it’s consciousness in the experience sense – what philosophers refer to as phenomenal consciousness – that I’ll be focusing on in the remainder of this Guide. This kind of consciousness serves as a fundamental part of our existence, perhaps even the most fundamental part of our existence. But despite its fundamentality, and though we are intimately aware of our own conscious experience, the notion of consciousness is a perplexing one. Philosophers have long found its nature surprisingly hard to understand and explain, and considerable philosophical attention has been devoted to various puzzles that it presents us with.
  • In this Guide, I will walk you through some of these puzzles and help you to think about them in a philosophically informed way. Whether you realise it or not, consciousness is at the very centre of your experience. It plays a key role in identity formation and your sense of self – underpinning your career preferences, your hobbies and interests, and your goals and aspirations. It plays a key role in your relationship with others – your romantic entanglements, your familial bonds and your friendships. And it plays a key role in the development of your moral outlook. Ultimately, coming to a better understanding of the nature of conscious experience will help you not only to better understand yourself but also to understand the place of humanity in our world.

Key points – How to think about consciousness
  1. Privacy is baked into consciousness. The way you know about your own conscious experience, by introspection, is different from the way that you know about anyone else’s conscious experience. In general, the conscious experiences that you have are private to you and cannot be shared directly with anyone else.
  2. Imagine being a bat. Conscious experience is subjective and, thus, an understanding of some types of conscious experiences may be out of our reach. The privacy of conscious experience means that it is connected with only a single point of view. This creates a difficulty for us to understand conscious experiences that are radically different from our own.
  3. What colour do you see? Even radical differences in conscious experience may be undetectable. The fact that consciousness is both private and subjective seems to allow for the possibility that one person’s conscious experiences might be inverted compared with another’s and, moreover, that this inversion would be completely undetectable.
  4. Science might not have all the answers. Consciousness poses a special challenge for understanding our place in the natural world. Though consciousness clearly has something to do with the brain, it may well be that it is something over and above the brain. If this is right, then it is not clear that we can achieve a scientific explanation of consciousness.
  5. How we might test for consciousness is a thorny question. We seem to be nearing the day when sophisticated machines will behave and talk in such a way that suggests that they are conscious. But hard questions arise about whether this behaviour and talk is enough to justify attributions of consciousness, and that makes testing for consciousness a difficult prospect.

Author's Conclusion - Why it matters
  • Thinking about consciousness matters because it helps you confront a number of deep questions about your place in the Universe and your interaction with others. Consciousness is central to what makes you who you are, and it is central to the way you live your life. Yet given the privacy and subjectivity of consciousness, you cannot have direct access to conscious experiences that are vastly different from our own, and this poses various challenges for your ability to achieve understanding across experiential divides. The privacy and subjectivity of consciousness also pose challenges to attempts to account for it within the bounds of contemporary science, and we may well have to develop new ways of theorising in order to truly understand how it fits into the natural world.
  • Conceptions of consciousness also play a foundational role in our moral judgments. Creatures who have conscious experiences – who can feel pleasure and pain – are generally deserving of moral consideration. The fact that a creature can feel pain, for example, means that it can be harmed, and that in turn suggests that we have an obligation to avoid causing it harm.
  • When we think about many of the grievous moral failings of the past, they often trace at least in part to a failure to adequately recognise the moral standing of other beings. And this in turn often traces to a failure to adequately recognise that these beings have conscious experiences just like ours. (To give just one example, slaveowners often denied that their Black slaves felt pain the same way that white people did.) Just as our history shows grave mistreatment of moral agents due to racial or sexist bias, we might now be in danger of grave mistreatment of moral agents due to mechanical bias.
  • Do you say thank you to Siri and Alexa or ChatGPT when they perform a task for you? Do you compensate them for their labours? Do you expect them to be at your beck and call at all hours of the day? Granted, insofar as these artificial agents almost surely lack consciousness, it’s highly unlikely that this kind of treatment of them constitutes a moral failing. Were they conscious, though, the moral stakes would be quite different. Thus, as increasingly sophisticated AI devices come onto the scene, if we want to avoid the analogous kinds of moral mistakes that were made by our forebears, we will need to know whether and when to grant such devices moral standing, and that in turn means that we will need to know whether such devices have conscious experiences.
Author Narrative
  • Amy Kind is Russell K Pitzer Professor of Philosophy at Claremont McKenna College in California. She is the author of Persons and Personal Identity (2015), Philosophy of Mind: The Basics (2020) and Imagination and Creative Thinking (2022), and the co-author of What Is Consciousness?: A Debate (2023) and Philosophy of Mind: 50 Puzzles, Paradoxes, and Thought Experiments (2024). She serves as the editor of the scholarly blog on imagination, The Junkyard.
Notes
  • This is a very disappointing paper. It's an elementary introduction and as such skates over the issues and makes several errors (it seems to me).
  • The treatment of philosophical zombies was particularly irritating. As David Papineau points out, if you think you can imagine philosophical zombies then you are mistaken. If physicalism is true, as he, I and the author believe, then they are impossible. It is possible for beings that look like us to lack consciousness, but not those that are molecule-by-molecule identical to us (if physicalism is true).
  • I'm also not impressed by 'inverted spectrum' arguments - colours are 'lighter' or 'darker' (as is demonstrated by printing in grey-scale). So, swapping yellow for blue could have behavioural consequences. But variants whereby differences in experience really do lead to no behavioural difference are possible, I suppose. I'm not sure where this leads.
  • Suggesting that pain could feel nice to some people is completely inept. It's true that someone - a leper say - might not feel pain as they have no nerve endings - but saying that someone who steps on a tack and shrieks 'with pain' is experiencing a pleasant sensation seems to be absurd. If it was pleasant you want to step on another one, and then others would know that their inner feeling differed from normal.
  • There's a connection to "Birch (Jonathan) - The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI" with respect to the potential sentience of AIs. I though the ethical comments were a bit flippant. Can people in the past be said to have acted unethically if the then common wisdom said they weren't? If we went back in time, we would be acting unethically if we acted as they did, but not they themselves.
  • Unfortunately, I don't have time to analyse the paper further at the moment, and have added it to the list!
  • There are some interesting comments, and I've extracted them for a closer reading.
  • Like a commentator, I'd not been aware of the shared consciousness of 'Krista and Tatiana Hogan, the Canadian craniopagus twins who are fused at the skull'. See Wikipedia: Krista and Tatiana Hogan. For me, this is by far the most important 'lead' from the paper. I need to update my Note on Dicephalus, to reflect the fact that as well as sharing of body parts there can be sharing of brain parts.

References
Paper Comment




In-Page Footnotes ("Kind (Amy) - How to think about consciousness")

Footnote 1:



"King (Barbara J.) - Human exceptionalism imposes horrible costs on other animals"

Source: Aeon, 01 November 2022


Author's Conclusion
  • I began this piece with a eulogy of sorts for Kayley, a cat whose expressions of thoughts and feelings I tried to understand and honour. Accepting these inner lives are a first step towards dismantling human exceptionalism. But animals who think and feel in totally opaque ways, or perhaps not at all, deserve our care, too. We owe it to all beings with whom we share Earth to construct a world that moves beyond human exceptionalism. We owe it to ourselves as well.

Author Narrative
  • Barbara J King is emerita professor of anthropology at William & Mary in Williamsburg, Virginia. She is the author of seven books, most recently How Animals Grieve (2013), Personalities on the Plate: The Lives and Minds of Animals We Eat (2017) and Animals’ Best Friends: Putting Compassion to Work for Animals in Captivity and in the Wild (2021).
Notes
  • I'm in agreement with most of what the author has to say. I suppose it's a plug for her latest book.
  • The introduction is a bit of a put-off. It's only by actually interacting with your pets - rather than hearing the accounts of others (especially badly written, as here) - that you 'get' that other animals have inner lives.
  • The section towards the end about animal dreaming - and their nightmares in many cases - is important.
  • I need to re-read this paper!

Paper Comment



"King (Rachael Scarborough) & Rudy (Seth) - The ends of knowledge"

Source: Aeon, 29 September 2023


Authors' Introduction
  • Right now, many forms of knowledge production seem to be facing their end. The crisis of the humanities has reached a tipping point of financial and popular disinvestment, while technological advances such as new artificial intelligence programmes may outstrip human ingenuity. As news outlets disappear, extreme political movements question the concept of objectivity and the scientific process. Many of our systems for producing and certifying knowledge have ended or are ending.
  • We want to offer a new perspective by arguing that it is salutary – or even desirable – for knowledge projects to confront their ends. With humanities scholars, social scientists and natural scientists all forced to defend their work, from accusations of the ‘hoax’ of climate change to assumptions of the ‘uselessness’ of a humanities degree, knowledge producers within and without academia are challenged to articulate why they do what they do and, we suggest, when they might be done. The prospect of an artificially or externally imposed end can help clarify both the purpose and endpoint of our scholarship.
  • We believe the time has come for scholars across fields to reorient their work around the question of ‘ends’. This need not mean acquiescence to the logics of either economic utilitarianism or partisan fealty that have already proved so damaging to 21st-century institutions. But avoiding the question will not solve the problem. If we want the university to remain a viable space for knowledge production, then scholars across disciplines must be able to identify the goal of their work – in part to advance the Enlightenment project of ‘useful knowledge’ and in part to defend themselves from public and political mischaracterisation.
  • Our volume The Ends of Knowledge: Outcomes and Endpoints Across the Arts and Sciences (2023) asks how we should understand the ends of knowledge today. What is the relationship between an individual knowledge project – say, an experiment on a fruit fly, a reading of a poem, or the creation of a Large Language Model – and the aim of a discipline or field? In areas ranging from physics to literary studies to activism to climate science, we asked practitioners to consider the ends of their work – its purpose – as well as its end: the point at which it might be complete. The responses showed surprising points of commonality in identifying the ends of knowledge, as well as the value of having the end in sight.
Author Narrative
  • Rachael Scarborough King is associate professor of English at the University of California, Santa Barbara. She is the author of Writing to the World: Letters and the Origins of Modern Print Genres (2018).
  • Seth Rudy is associate professor of English at Rhodes College in Tennessee, US. He is the editor, with Rachael Scarborough King, of The Ends of Knowledge: Outcomes and Endpoints Across the Arts and Sciences (2023).
Notes
  • To be supplied.

Paper Comment



"Klaas (Brian) - The forces of chance"

Source: Aeon, 29 October 2024


Author's Introduction
  • The social world doesn’t work how we pretend it does. Too often, we are led to believe it is a structured, ordered system defined by clear rules and patterns. The economy, apparently, runs on supply-and-demand curves. Politics is a science. Even human beliefs can be charted, plotted, graphed. And using the right regression we can tame even the most baffling elements of the human condition. Within this dominant, hubristic paradigm of social science, our world is treated as one that can be understood, controlled and bent to our whims. It can’t.
  • Our history has been an endless but futile struggle to impose order, certainty and rationality onto a Universe defined by disorder, chance and chaos. And, in the 21st century, this tendency seems to be only increasing as calamities in the social world become more unpredictable. From 9/11 to the financial crisis, the Arab Spring to the rise of populism, and from a global pandemic to devastating wars, our modern world feels more prone to disastrous ‘shocks’ than ever before. Though we’ve got mountains of data and sophisticated models, we haven’t gotten much better at figuring out what looms around the corner. Social science has utterly failed to anticipate these bolts from the blue. In fact, most rigorous attempts to understand the social world simply ignore its chaotic quality – writing it off as ‘noise’ – so we can cram our complex reality into neater, tidier models. But when you peer closer at the underlying nature of causality, it becomes impossible to ignore the role of flukes and chance events. Shouldn’t our social models take chaos more seriously?
  • The problem is that social scientists don’t seem to know how to incorporate the nonlinearity of chaos. For how can disciplines such as psychology, sociology, economics and political science anticipate the world-changing effects of something as small as one consequential day of sightseeing or as ephemeral as passing clouds?

Author's Conclusion
  • The theory of self-organised criticality was based on the sandpile model, which could be used to evaluate how and why cascades or avalanches occur within systems. If you add grains of sand, one at a time, to a sandpile, eventually, a single grain of sand can cause an avalanche. But that collapse becomes more likely as the sandpile soars to its limit. A social sandpile model could provide a useful intellectual framework for analysing the resilience of complex social systems. Someone lighting themselves on fire, God forbid, in Norway is unlikely to spark a civil war or regime collapse. That is because the Norwegian sandpile is lower, less stretched to its limit, and therefore less prone to unexpected cascades and tipping points than the towering sandpile that led to the Arab Spring.
  • There are other lessons for social research to be learned from nonlinear evaluations of ecological breakdown. In biology, for instance, the theory of ‘critical slowing down’ predicts that systems near a tipping point – like a struggling coral reef that is being overrun with algae – will take longer to recover from small disturbances. This response seems to act as an early warning system for ecosystems on the brink of collapse.
  • Social scientists should be drawing on these innovations from complex systems and related fields of research rather than ignoring them. Better efforts to study resilience and fragility in nonlinear systems would drastically improve our ability to avert avoidable catastrophes. And yet, so much social research still chases the outdated dream of distilling the chaotic complexity of our world into a straightforward equation, a simple, ordered representation of a fundamentally disordered world.
  • When we try to explain our social world, we foolishly ignore the flukes. We imagine that the levers of social change and the gears of history are constrained, not chaotic. We cling to a stripped-down, storybook version of reality, hoping to discover stable patterns. When given the choice between complex uncertainty and comforting – but wrong – certainty, we too often choose comfort.
  • In truth, we live in an unruly world often governed by chaos. And in that world, the trajectory of our lives, our societies and our histories can forever be diverted by something as small as stepping off a steam train for a beautiful day of sightseeing, or as ephemeral as passing clouds.
Author Narrative
  • Brian Klaas is an associate professor in global politics at University College London, an affiliate researcher at the University of Oxford, and a contributing writer for The Atlantic. His most recent book is Fluke: Chance, Chaos, and Why Everything We Do Matters (2024). He writes The Garden of Forking Paths Substack and created the Power Corrupts podcast.
Notes
  • This is a plug for the author's new book, on which it is largely reliant. This book is cheap, so I could buy it, but doubt I'd read it.
  • I think the focus - in the first half of the paper - on uncertainty via Chaos and (especially, though briefly) Quantum Mechanics is a bit of a blind alley.
  • I also thought the author may have misunderstood chaos as it applies to multiple events. Initial conditions may make a single trial impossible to predict but multiple trials can still show a statistical regularity. Think of coin tossing or dice rolling - the paradigm cases of probability. The same for QM - diffraction patterns are statistical even though we can't know where a single photon goes.
  • As the author points out in the second half of the paper, sometimes a minor - and unpredictable - incident leads to enormous consequences. But not on its own. It's part of a much wider picture and - in particular - an unstable situation. This has nothing to do with nonlinearity. Other events could as easily have lit the blue touch-paper.
  • The Trump non-assassination was a sliding-doors incident. Again, nothing to do with non-linearity. Pivotal figures are always subject to the random risks of life - and the more pivotal, and the more critical the situation, the more at risk.
  • What we need to do is watch out for these unstable and critical situations and prepare for them better. Of course, working out what was the 'number one priority' is always easier in retrospect. Our resources are limited and require focus. Again, sliding-doors moments only get their significance in retrospect. Most occasions of narrowly missing a train are rather dull and predictable unless - for some reason - that journey was critical.
  • I was surprised there was no mention of Malcolm Gladwell, though he features in Brian P Klaas - Fluke.
  • There are 34 Aeon Comments - rather equally divided between supportive and antagonistic ones. The latter are the more useful. I need to read them carefully.

Paper Comment
  • Sub-Title: "Social scientists cling to simple models of reality – with disastrous results. Instead they must embrace chaos theory"
  • For the full text see Aeon: Klaas - The forces of chance.



"Knight (Chris) - The two Chomskys"

Source: Aeon, 04 December 2023


Author's Introduction
  • Noam Chomsky rose to fame in the 1960s and even now, in the 21st century, he is still considered one of the greatest intellectuals of all time. His prominence as a political analyst on the one hand, and theoretical linguist on the other, simply has no parallel. What remains unclear is quite how the two sides of the great thinker’s work connect up.
  • When I first came across Chomsky’s linguistic work, my reactions resembled those of an anthropologist attempting to fathom the beliefs of a previously uncontacted tribe. For anyone in that position, the first rule is to put aside one’s own cultural prejudices and assumptions in order to avoid dismissing every unfamiliar belief. The doctrines encountered may seem unusual, but there are always compelling reasons why those particular doctrines are the ones people adhere to. The task of the anthropologist is to delve into the local context, history, politics and culture of the people under study – in the hope that this may shed light on the logic of those ideas.
  • The tribe shaping Chomsky’s linguistics, I quickly discovered, was a community of computer scientists during the early years of the Cold War, employed to enhance electronic systems of command and control for nuclear war and other military operations. My book Decoding Chomsky (2016) was an attempt to explain the ever-changing intricacies of Chomskyan linguistics within this specific cultural and historical setting.
  • I took it for granted that the ideas people entertain are likely to be shaped by the kind of life they lead. In other words, I assumed that Chomsky’s linguistic theories must have been influenced by the fact that he developed them while working for the US military – an institution he openly despised.
  • This was Chomsky’s impossible dilemma. Somehow, he needed to ensure: a) that the research he was conducting for the US military did not interfere with his conscience; and b) that he could criticise the US military without inducing them to cease funding his research. His solution was to make sure that the two Noam Chomskys – one working for the US military and the other against it – shared no common ground.
Authors' Conclusion
  • Many of Chomsky’s activist supporters have been shocked to discover that their hero has been on friendly terms not only with the former head of the CIA, John Deutch, but also with the sex offender Jeffrey Epstein. But it would have been impossible for Chomsky to maintain his position at MIT for so long without associating with all sorts of dubious establishment figures. As Chomsky told The Harvard Crimson in 2023 of his meetings with Epstein: ‘I’ve met [all] sorts of people, including major war criminals. I don’t regret having met any of them.’ For me, Chomsky’s association with Epstein was a serious error. I also believe, however, that had Chomsky been so principled and pure as to refuse to work at MIT, then he might never have gained the platform he needed to inspire so many of us to oppose both militarism and the even greater threat of climate catastrophe.
  • There are times when we all have to make compromises, some more costly than others. In Chomsky’s case, it was his attempt at a new understanding of language that suffered most from the institutional contradictions he faced. Despite the failure of his attempted revolution in linguistics, Chomsky’s political activism remains an inspiration.
Author Narrative
  • Chris Knight is a senior research associate in the department of anthropology at University College London. He is the author of Blood Relations: Menstruation and the Origins of Culture (1991) and Decoding Chomsky: Science and Revolutionary Politics (2016). His website is Science and Revolution: Chris Knight's Blog on Noam Chomsky.
Notes
  • This is easily the most noxious paper I've read on Aeon. It strikes me as an entirely ad hominem argument. However, his book on the topic was published by Yale and has received some positive reviews.
  • Politically, Knight and Chomsky are pretty much of a mind (and very far from my own stance). Knight is basically accusing Chomsky - whose political radicalism and activism he seems to admire - of hypocrisy for working for MIT which has accepted major funding from the Pentagon. I don't care about any of this. What I do care about is Knight's suggestion that - what he considers to be Chomsky's ludicrous linguistic theories - are somehow tied in with what he was paid to do and also that Chomsky avoids reference to the social function of language because he is conflicted over tensions between his job and his activism, which is entered into as a response to feelings of guilt.
  • This paper makes some surprising claims about Chomsky's theories that are hard to credit. I assume he's misunderstood him, not himself being a linguist.
  • Knight's Wikipedia page (Wikipedia: Chris Knight (anthropologist)) looks like he wrote it himself. It has many - to my mind - non-standard features.
  • His own webste is Chris Knight: Home Page.
  • Another enemy of Chomsky is Daniel Everett, who's written a long and appreciative review of Knight's book. See Everett - An Anthropologist Contemplates Chomsky.
  • On MIT, see Wikipedia: Massachusetts Institute of Technology. One of the world's great institutions, but just a tool of the military according to Marxists.
  • For Noam Chomsky see Wikipedia: Noam Chomsky.
  • I'll look into the author's linguistic claims shortly.
  • I also need to read carefully the - often critical but sometimes supportive - Comments on Aeon. There are some interesting negative reviews of the the author's book on Amazon.
  • One comment I take to be misguided is the thought that ChatGPT has disproved Chomsky's theories. The argument from the Poverty of Stimulus is relevant here. Human babies learn their mother tongue (and sometimes two mother tongues) with no formal training. See "Todman (Theo) - Poverty of Stimulus Arguments". ChatGPT has been trained using the entire WWW.
  • That the claim attributed to Chomsky that most of our language use is internal seems reasonable - we do our thinking and general musings in language. I've long thought that various non-human animals have a language of thought and that human language developed from this.
  • These comments are rambling and need sorting out!

Paper Comment
  • Sub-Title: "The US military’s greatest enemy worked in an institution saturated with military funding. How did it shape his thought?"
  • For the full text see Aeon: Knight - The two Chomskys.



"Kopparapu (Ravi) & Misra (Jacob Haqq) - Have they been here?"

Source: Aeon, 18 April 2025


Authors' Introduction
  • Could extraterrestrial technology be lurking in our backyard – on the Moon, Mars or in the asteroid belt? We think it’s worth a look.
  • The idea that a planet like Mars might host cities and technology gained traction in scientific circles when Percival Lowell popularised his theory of ‘canals’ on Mars between 1894 and 1908. He claimed that these were artificial irrigation channels built by Martians to transport water from the poles to equatorial cities. Not all astronomers agreed with Lowell, but the possibility of an inhabited Mars couldn’t be immediately dismissed. The excitement generated by his hypothesis inspired works of science fiction like H G Wells’s War of the Worlds (1898), in which Martian invaders come to colonise Earth and exploit its resources. It also fuelled a growing public belief that we might not be too far from another population of advanced extraterrestrial beings.
  • What would it mean to find alien technology close to Earth? The implications would be shocking and transformative. Such a discovery would instantly upend humanity’s sense of itself in the hierarchy of the cosmos, suggesting that we are not alone, but that visitors have, in fact, been here before. The difference between an extraterrestrial civilisation light years away and one that has touched down or hovered so close to home could be the difference between unproven theory and documented reality. In a Universe where distances render most forms of contact nearly impossible, proximity changes everything.
  • Lowell’s hypothesis about canals on Mars did not last, of course. By the 1930s, improved instruments revealed that the ‘canals’ were optical illusions. And by the 1960s, scientific evidence had shut the door on the idea of alien civilisations within the solar system. Venus was too hot. Mars was barren. Science fiction turned to the stars – and, with the discovery of exoplanets (Earth-like worlds that could be habitable), so did science.
  • Yet we can’t rule out the possibility that aliens have paid a visit, or that alien technology is already here. After all, our own space-exploration activities have included missions on trajectories beyond the boundaries of the solar system. The Voyager 1 and Voyager 2 spacecraft, launched by NASA in 1977, have already exited the solar system, while other spacecraft such as Pioneer 10, Pioneer 11 and New Horizons are all on course to reach interstellar space. Studies are currently underway by the Breakthrough Starshot project to develop a tiny ‘nanocraft’ that would be propelled with a laser toward the nearby Alpha Centauri star system at a fraction of the speed of light. These examples illustrate the possibility that human technology is at least conceptually capable of sending spacecraft to nearby planetary systems. And if this is something within our reach, then isn’t it possible that an extraterrestrial civilisation (if it exists) would also be capable of such a technological feat?

Authors' Conclusion
  • The barriers to searching for technosignatures within our solar system are not technological – they’re cultural. While interest in life elsewhere in the Universe has surged, the idea of nearby technosignatures has lagged behind, often dismissed or overshadowed by speculation rooted in science fiction and pseudoscience. This vacuum, filled by fringe narratives, has made scientists wary of engaging with the topic for fear of damaging their reputations. The notion that a local technosignature could pose a security threat adds to the reluctance. Cultural stigma continues to discourage researchers from even low-risk efforts, such as examining existing planetary data for unexplained anomalies. When these pressures ripple through the scientific community, they can create an informal but powerful taboo – one that sidelines legitimate lines of investigation.
  • This isn’t how science should work. The search for technosignatures is grounded in a simple truth: technology has emerged at least once – here on Earth. Beyond that, the possibilities are wide open. We shouldn’t limit ourselves to searches of planets light years away when extraterrestrial technology might exist within our own solar system. To explore this seriously, we need to prioritise data collection and accessibility, enabling interdisciplinary teams – engineers, scientists and data analysts – to investigate potential signals, rule out false positives, and establish clear criteria for distinguishing genuine anomalies from noise. Scientists must be willing to take intellectual risks: to mine existing datasets or gather new ones, and to examine places in the solar system that might harbour such evidence. Rigorous enquiry into solar system technosignatures won’t just expand scientific understanding – it will help us prepare. And preparation, after all, is better than ignorance.
Author Narrative
  • Ravi Kopparapu is a planetary scientist.
  • Jacob Haqq Misra is an astrobiologist at the Blue Marble Space Institute of Science in Seattle. He is the author of Sovereign Mars (2022).
Notes
Paper Comment



"Krakauer (David C.) & Kempes (Chris) - Problem-solving matter"

Source: Aeon, 17 September 2024


Authors' Introduction
  • What makes computation possible? Seeking answers to that question, a hardware engineer from another planet travels to Earth in the 21st century. After descending through our atmosphere, this extraterrestrial explorer heads to one of our planet’s largest data centres, the China Telecom-Inner Mongolia Information Park, 470 kilometres west of Beijing. But computation is not easily discovered in this sprawling mini-city of server farms. Scanning the almost-uncountable transistors inside the Information Park, the visiting engineer might be excused for thinking that the answer to their question lies in the primary materials driving computational processes: silicon and metal oxides. After all, since the 1960s, most computational devices have relied on transistors and semiconductors made from these metalloid materials.
  • If the off-world engineer had visited Earth several decades earlier, before the arrival of metal-oxide transistors and silicon semiconductors, they might have found entirely different answers to their question. In the 1940s, before silicon semiconductors, computation might appear as a property of thermionic valves made from tungsten, molybdenum, quartz and silica – the most important materials used in vacuum tube computers.
  • And visiting a century earlier, long before the age of modern computing, an alien observer might come to even stranger conclusions. If they had arrived in 1804, the year the Jacquard loom was patented, they might have concluded that early forms of computation emerged from the plant matter and insect excreta used to make the wooden frames, punch cards and silk threads involved in fabric-weaving looms, the analogue precursors to modern programmable machines.
  • But if the visiting engineer did come to these conclusions, they would be wrong. Computation does not emerge from silicon, tungsten, insect excreta or other materials. It emerges from procedures of reason or logic.
  • This speculative tale is not only about the struggles of an off-world engineer. It is also an analogy for humanity’s attempts to answer one of our most difficult problems: life. For, just as an alien engineer would struggle to understand computation through materials, so it is with humans studying our distant origins.
  • Today, doubts about conventional explanations of life are growing and a wave of new general theories has emerged to better define our origins. These suggest that life doesn’t only depend on amino acids, DNA, proteins and other forms of matter. Today, it can be digitally simulated, biologically synthesised or made from entirely different materials to those that allowed our evolutionary ancestors to flourish. These and other possibilities are inviting researchers to ask more fundamental questions: if the materials for life can radically change – like the materials for computation – what stays the same? Are there deeper laws or principles that make life possible?

Authors' Conclusion
  • Is life problem-solving matter? When thinking about our biotic origins, it is important to remember that most chemical reactions are not connected to life, whether they take place here or elsewhere in the Universe. Chemistry alone is not enough to identify life. Instead, researchers use adaptive function – a capacity for solving problems – as the primary evidence and filter for identifying the right kinds of biotic chemistry. If life is problem-solving matter, our origins were not a miraculous or rare event governed by chemical constraints but, instead, the outcome of far more universal principles of information and computation. And if life is understood through these principles, then perhaps it has come into existence more often than we previously thought, driven by problems as big as the bang that started our abiotic universe moving 13.8 billion years ago.
  • The physical account of the origin and evolution of the Universe is a purely mechanical affair, explained through events such as the Big Bang, the formation of light elements, the condensation of stars and galaxies, and the formation of heavy elements. This account doesn’t involve objectives, purposes, or problems. But the physics and chemistry that gave rise to life appear to have been doing more than simply obeying the fundamental laws. At some point in the Universe’s history, matter became purposeful. It became organised in a way that allowed it to adapt to its immediate environment. It evolved from a Babbage-like Difference Engine into a Turing-like Analytical Engine. This is the threshold for the origin of life.
  • In the abiotic universe, physical laws, such as the law of gravitation, are like ‘calculations’ that can be performed everywhere in space and time through the same basic input-output operations. For living organisms, however, the rules of life can be modified or ‘programmed’ to solve unique biological problems – these organisms can adapt themselves and their environments. That’s why, if the abiotic universe is a Difference Engine, life is an Analytical Engine. This shift from one to the other marks the moment when matter became defined by computation and problem-solving. Certainly, specialised chemistry was required for this transition, but the fundamental revolution was not in matter but in logic.
  • In that moment, there emerged for the first time in the history of the Universe a big problem to give the Big Bang a run for its money. To discover this big problem – to understand how matter has been able to adapt to a seemingly endless range of environments – many new theories and abstractions for measuring, discovering, defining and synthesising life have emerged in the past century. Some researchers have synthesised life in silico. Others have experimented with new forms of matter. And others have discovered new laws that may make life as inescapable as physics.
  • It remains to be seen which will allow us to transcend the history of our planet.
  • For more information about the ideas in this essay, see Chris Kempes and David Krakauer’s research paper ‘The Multiple Paths to Multiple Life’ (2021, Kempes & Krakauer - The Multiple Paths to Multiple Life), and Sara Imari Walker’s book Life as No One Knows It: The Physics of Life’s Emergence (2024).

Highly Critical Amazon Review - By Dr Hector Zenil (KCL, KCL - Hector Zenil) - of Sara Imari Walker’s book Life as No One Knows It
  • I was looking forward to reading this book given the extensive media coverage - a masterclass of marketing - that turned it into an instant Bestseller.
  • I was particularly interested in the author's original ideas on algorithmic probability and open-endedness, a topic on which we had briefly collaborated. However, this new book surprised me for the wrong reasons. Rather than focusing on her own work, the book devotes most of the book to discussing the ideas of another person, a hypothesis about life that has been disproven multiple times by multiple groups.
  • This book introduces Assembly Theory or AT, a hypothesis that suggests that the only feature of life to characterise it is its ability to make numerous copies of itself or utilise numerous copies of the resources it requires, copies which can be quantified by the assembly index within this theory, according to the authors.
  • The book spends nearly 250 pages explaining how life self-replicates. While this is a well-established fact known for about a century, making the book trivially accurate, it fails to deliver on its promise to explain life 'as no one knows it'. Instead, all across the book, the strategy is to present how things introduced centuries or even millennia ago - as if representing modern science - differ from how Assembly Theory defines them today.
  • For example, when it comes to defining 'objects', Assembly Theory would propose them to be finite and distinguishable, breakable, and to be able to exist 'more than once', as opposed to immutable, unbreakable and indistinguishable as (Greek) science would do, conflating the primitive concept of atom and its etymological root with modern science.
  • According to the author, this 'new' account of what an 'object' is is the unique and revolutionary new way in which Assembly Theory promises to unify biology and physics, just as James Maxwell unified magnetic and electric forces and Albert Einstein unified space and time, placing herself and Cronin at the same level.
  • The book gives the impression that Assembly Theory (AT) offers something new in applying this simple idea of counting copies to biology, selection, or evolution. Unfortunately, it seems that either the authors of AT were unaware of or chose to overlook the fact that this idea had already been explored or decades. In 2005, for example, computer scientists reported in the journal IEEE Transactions on Information Theory that by counting copies of nucleotides using a compression algorithm they could accurately reconstruct an evolutionary tree from mammalian mtDNA sequences.
  • Indeed, there are already several established statistical tools that can and have been used to count copies in biological and chemical data, such as Shannon Entropy and compression algorithms like LZW (for Lempel-Ziv-Welch who introduced it in the 1970s), which underpin formats like ZIP and PNG, designed expressly for the purpose.
  • That compression or counting copies works does not come as a surprise. In molecular biology it is well established that counting for GC nucleotides, which are molecules, discloses the relationship between species. This is because two species that are evolutionarily related to each other will have about the same number of GC nucleotide copies. This is referred to as GC content.
  • This landmark result obtained by Cilibrasi and Vitanyi in 2005, represented in the above figure, was completely omitted from the book's background discussion, alongside pretty much every other relevant piece of work necessary to properly introduce the reader to a long-studied field. Instead, the book gives the uninformed reader the impression that the author(s) (Cronin/Walker) originated all these ideas themselves.
  • While the book does not address multiple criticisms of AT, simply sweeping them under the carpet, one might at least expect that the assembly index proposed counts the number of copies in data in a novel fashion, for example, in a manner different from compression algorithms or a simple application of Shannon Entropy. However, it does not.
  • In another paper recently published in Nature's npj Systems Biology journal, it was demonstrated that the results touted as unique by the authors of AT can be replicated, and even surpassed, using very simple and widely used statistical tools. This provides evidence that their assembly index offers no advantage over existing methods, especially considering that they converge to the same values, Shannon Entropy.
  • Moreover, other research groups have shown that AT's idea of a copy-number threshold is flawed. In fact, geology and planetary researchers, including some from NASA, recently published a paper in the Journal of the Royal Society Interface demonstrating that they were mistaken. The authors of Assembly Theory responded by claiming that their threshold was only a guesstimation for Earth and that any molecular data would need to be filtered and pre- or post-processed. This significantly undermines the claim that papers on AT and Walker's book have advanced that their tool is an agnostic life detection tool for Earth and beyond (such as aliens on other planets).
  • Life is highly modular, but we have known this for nearly two centuries, since the discovery of the cell, DNA, the genetic code, and more. From a computational perspective, the earliest questions about how to characterise life were explored by pioneers like Claude Shannon (his thesis was on Genetics), Alan Turing, John von Neumann, and Nils Barricelli - founders of digital computation - mostly related to pattern formation and self-replication (i.e., making copies). This was also a topic explored by authors like John Conway, Chris Langton and Stephen Wolfram, and not only with Turing machines but with a range of tools, from genetic algorithms to P systems to membrane computing, all exploiting hierarchical modularity based on having an abundance of 'Lego copies' (e.g. proteins) with which to build things, from genetic sequences to cellular compartments. This is not to mention those from whom the authors of AT have 'borrowed' (to be generous) their ideas, such as Andrey Kolmogorov, Gregory Chaitin, Ray Solomonoff, Leonid Levin, and Charles Bennett. Dozens of books and countless articles have been published on connecting information theory and life. Does the author cite any of this prior literature or any literature that has laid the foundations for what AT aims to do - but cannot deliver? No.
  • When the book does reference some of this work, it is misrepresented, for example, the work of Alan Turing. Indeed, the book's treatment of the topic of computation illustrates a deep misunderstanding of the subjects the author covers. The book claims, for example, that the concept of Turing universality is based on a peculiar and abstract type of 'machine,' which completely misses the point of computational universality. What Alan Turing did was to assume the most basic model of computation to prove that all machines and models are equivalent. Turing machines are invoked only to prove mathematical theorems, but they are by no means fundamental to the phenomena that computation explains.
  • Turing was interested in capturing the concept of mechanical inference or manual derivation - a strong form of causality that he believed could be executed by humans with a pencil and paper. (Early in the last century, human calculators were called computers, before the advent of digital or electronic computers.) The author confuses the peculiarities of a specific model with the broader picture. The big picture is Turing's result, which proves that the underlying substrate is irrelevant: every mechanical procedure can be translated into a Turing machine or any physical procedure, and vice versa. Walker is very proud of Assembly Theory's step-by-step approach, which she believes can provide a definitive measure. However, the book author is essentially describing a Turing machine - a computer program that she mistakenly thinks is fundamentally different from Turing machines.
  • When the book briefly mentions Kolmogorov-Chaitin complexity, referring to it as 'algorithmic compression', she fails to distance AT from it. According to her, trying to find computer programs to explain objects like the mathematical constant pi is very difficult. However, when the authors of AT defend their assembly index, they do so by claiming that its calculation is intractable (very difficult to calculate) and that they can only propose heuristics to do so. This is correct, because limiting the space of computer programs to those that produce an object by finding its repetitions or assembly blocks is still a large region impossible to explore in full, but the approach falls into a proper subset of algorithmic complexity itself, making the assembly index an algorithmic compressor. This brings them full circle, back to what they tried to distance themselves from in the first place: algorithmic compression.
  • The other argument is that the assembly index is worse than algorithms like LZW at compressing because it may take longer to explore the set of shortest assembly paths, that they don't exhaust anyway, so their defence amounts to saying that they are better because they are worse. Regardless, the author correctly defines the assembly index as the shortest number of elements that build the object, which is exactly the definition of algorithmic (Kolmogorov) complexity or K, and they approximate it by means of counting number of copies, which is a Shannon Entropy estimation by way of an LZ compression algorithm (an upper bound on K). In other words, almost every page frustratingly advances an argument that logically contradicts another argument on the same page or the pager right before or right after. In this case, while they try to convince the reader that they have nothing to do with compression or algorithmic complexity, everything they advance has to do with it.
  • We know that many physical processes can produce many copies of the same objects without being alive, such as basaltic rock formations (like the Giant's Causeway in Northern Ireland), snowflakes, and more, yet they are not related to living systems in any way. The claim that AT can find the separation between life and matter by counting the number of copies and hence gauging abundance in an object is blatantly incorrect. This is why simply making copies, even in the physical world, is not sufficient in any way to define life. This has been understood since the times of Mendel, Turing, Darwin, Schrödinger, and almost any other time in history since the Greeks proposed that atoms formed matter and later physicists and chemists found that all atoms were of a limited number of types and that everything else emerged from them through copying and combination. While copying is a property of self-replication and a requirement for life and evolution, Assembly Theory incorrectly proposes that it is the only feature that counts (pun intended), in order to define life. That this fabrication is inserted and repeated a hundred times, surrounded by sometimes trivially correct assertions, does not make it true.
  • Unfortunately, I was left with a feeling that this book was a marketing exercise rather than a scholarly contribution to the topic.
Author Narrative
  • David C Krakauer is the president and William H Miller Professor of Complex Systems at the Santa Fe Institute in New Mexico. He works on the evolution of intelligence and stupidity on Earth. Whereas the first is admired but rare, the second is feared but common. He is the founder of the InterPlanetary Project at SFI and the publisher/editor-in-chief of the SFI Press.
  • Chris Kempes is a professor at the Santa Fe Institute, working at the intersection of physics, biology, and the earth sciences.
Notes
  • I read this paper a couple of months ago and now can't remember the detailed argument. So. I need to re-read the paper before commenting in detail.
  • However, I remember disliking the overall thesis. I'm happy with the idea that 'life' could arise in some other infrastructure - silicon-based, or whatever. But I still don't think it reduces to 'information'. A living thing is what instantiates the information, and two initially-indistinguishable instantiations are separate individuals (that will grow apart).
  • I've downloaded the recommended paper (but not yet logged it to my database).
  • The book is available on Kindle for £3, but is probably not worth reading if the damning review is anything to go by.
  • There are no Aeon Comments.
  • This relates to my Notes on Life and Information.

Paper Comment



"Lacaux (Celia) - The brain’s twilight zone: when you’re neither awake nor asleep"

Source: Aeon, 26 September 2024


Author's Introduction
  • Each night as you lay down to sleep, you embark on an extraordinary journey – not through space, but through the shifting terrain of your own consciousness. This transition, known as the sleep-onset period, is not a simple flick of a switch from wakefulness to slumber, but a gradual, nuanced shift that suspends you between two worlds. Long regarded as a mere prelude to sleep, recent studies suggest there is far more to this fascinating twilight period.
  • Your brain doesn’t simply power off as you fall asleep; instead, it enters a captivating liminal state, hovering between wakefulness and sleep. Imagine your brain as a metropolis at twilight, where different neighbourhoods dim their lights and quieten at staggered times. The journey begins in the subcortical regions, the deeper, hidden parts of the brain. From there, like ripples spreading across a pond, drowsiness progresses to the cortex (the brain’s outer layer), moving gradually from front to back. This entire process of sequential brain region shutdown can take up to 20 minutes.
  • The phased descent into sleep explains why you might not recall the last few moments before dosing off while watching TV or reading. The parts of your brain responsible for processing scenes or flipping pages remain active, even after other areas, particularly deeper ones, including the thalamus and hippocampus, have already slipped into slumber, disrupting your ability to form new memories of those final minutes.
  • During this process, the brain oscillates between various short-lived states within seconds, resembling a swing that moves back and forth – sometimes nearing wakefulness, at other times leaning towards sleep. These fluctuations are unique to the sleep-onset period. If we were to take a snapshot of the brain at this time, we would observe not only an in-between state, with parts of the brain awake and others asleep, but also a constantly changing landscape of activity. Indeed, each person’s process of falling asleep is as unique as their fingerprint.

Author's Conclusion
  • You too could enjoy the creative benefits of sleep onset – all it takes is a short nap! Indeed, in another recent study that my colleagues and I conducted, we showed that participants who snoozed for around one minute were subsequently three times more likely to discover a sudden solution to a problem compared with those who stayed awake or went into deeper sleep. We were even able to predict whether a participant would experience an ‘aha’ moment simply based on their brain activity during the resting period. The ideal creative cocktail was found in that liminal state between wakefulness and sleep, consisting of a moderate level of alpha brain waves (a marker of drowsiness) and a low level of delta waves (a marker of sleep depth).
  • Building on these insights, other researchers are developing tools to explore and harness the creative potential of the sleep onset period. One such innovation, developed by a team at the Massachusetts Institute of Technology, is a glove they call Dormio that’s equipped with sensors to detect the exact moment a person falls asleep by monitoring heart rate, muscle tone and skin conductance. When the glove detects the onset of sleep, it triggers an alarm to rouse the participant. In one experiment, the team used the device to prompt participants to think about a specific word – ‘tree’ – as they drifted off to sleep. The participants subsequently mentioned seeing trees in their hypnagogic experiences, and they demonstrated more creativity in tree-related creative tasks. In exploiting the journey into sleep, it seems that Edison and Dalí were on to something. If you want to explore the hypnagogic landscape for yourself, why not try their method of holding an object when you take a nap?
  • Recent research is casting new light on the wake-to-sleep transition, revealing a unique liminal space during which both the body and mind undergo a series of dynamic and profound changes. Brain activity slows down, muscles relax, heart rate lowers, consciousness and responsiveness to the environment fluctuate, and rich dreamlike experiences emerge. This period represents a window into critical cognitive functions, such as memory but also into the emergence of creative sparks.
  • The next time you find yourself lying awake, remember that you are not just waiting to fall asleep – you are standing at the threshold of an extraordinary journey.
Author Narrative
  • Célia Lacaux completed her PhD at the Paris Brain Institute. She now holds a postdoctoral position at the University of Geneva, Switzerland, where she continues to investigate the impact of sleep on creativity.
Notes
  • This Paper is interesting - if brief. Indeed, so brief that I've quoted most of it.
  • As I have a long nap each day, usually for 90 minutes, I thought I'd read this and see what it had to say.
  • It suggests that a brief doze is good for creativity, and this may well be so. But so is any moving away from a problem and then returning to it later. The problem is that we often don't get that luxury - we're put on the spot. But if a problem is difficult, you're stuck, and it's not urgent, then it's best not to keep trying to force it. Just common sense.
  • This sounds like a quicker 'reset button'.
  • It doesn't fit well with my napping programme. My intention is always to catch up on sleep (I only have 6 hours overnight). I fall asleep really quickly (or if I don't, I never remember any of the pre-sleep events). Usually, it's 'head hits pillow and gone'. Maybe I could try napping when I'm not tired? Sounds like a frustrating waste of time.
  • Given how many people have sleep problems, and my regime works just fine for me, I'm not going to fix what's not broke.
  • There are a few reflective Aeon Comments.
  • This relates to my Note on Sleep and - maybe - Consciousness, Memory and Intelligence.

Paper Comment



"Lachmann (Michael) & Walker (Sara) - Life ≠ alive"

Source: Aeon, 24 June, 2019


For the full text, follow this link (Local website only): PDF File1.
  • Sub-Title: "A cat is alive, a sofa is not: that much we know. But a sofa is also part of life. Information theory tells us why."
  • Authors:
    • Michael Lachmann is a professor at the Santa Fe Institute in New Mexico. He is interested in the interface between evolution2 and information, and in particular the origins of life.
    • Sara Walker is an astrobiologist and theoretical physicist at Arizona State University, where she is deputy director of the Beyond Center for Fundamental Concepts in Science, associate director of the ASU-Santa Fe Institute Center for Biosocial Complex Systems, and assistant professor in the School of Earth and Space Exploration.
  • For the full text, see Aeon: Lachmann & Walker - Life ≠ alive

Notes
  • This is an important paper and deserves more careful consideration than I can give it at the moment. For now, just a few jottings.
  • The authors connects connect Life3 with Information4 that has been accumulated by Evolution5, though it’s not clear to me whether this is necessary.
  • However, ‘Living’ is something that something that is Life6 does – in particular by metabolising and reproducing. They don’t think that Earth-style biology (or any biology) is necessary for either Life7 or living.
  • Early in the paper mules – which troubled Aristotle – are mentioned: while they are sterile, you’d hardly call them dead. I wasn’t clear how the authors’ proposal resolved this conundrum.
  • However, they are happy to have explained viruses – which they say are Life8 but not alive, as they don’t metabolise independently.
  • I sometimes wondered whether they were just arguing over Semantics9. I think that both Life10 and ‘alive’ are natural kind terms for which there may be a correct – if currently unknown – definition. But while I agree there is no mysterious ‘life-force’, I don’t think defining Life11 as evolved Information12 will do.
  • Dead things are – according to the authors – Life13, rather than merely evidence of past Life14.
  • They describe a TE15 towards the end of the paper whereby self-replicating 3D printers that mine their raw materials from the environment, suggesting that they are both Life16 - if they have developed via Evolution17 form simpler models – and living.
  • They count sofas as part of the web of Life18, as they have evolved from simpler chairs, but not as alive. Personally, I think it’s an abuse of language. Life19 has to be capable of – or have once been – living.
  • "Schrodinger (Erwin) - What is Life?" receives a positive mention.
  • There’s a mention of Schrodinger’s cat, though I doubted that the conundrum was solved – while it remains as Life20 - according to the authors – it’s still a superposition of alive and dead states.



"Lande (Kevin) - Do you compute?"

Source: Aeon, 11 April 2019


Author's Introduction
  • ‘The brain is a computer’ – this claim is as central to our scientific understanding of the mind as it is baffling to anyone who hears it. We are either told that this claim is just a metaphor or that it is in fact a precise, well-understood hypothesis. But it’s neither. We have clear reasons to think that it’s literally true that the brain is a computer, yet we don’t have any clear understanding of what this means. That’s a common story in science.
  • To get the obvious out of the way: your brain is not made up of silicon chips and transistors. It doesn’t have separable components that are analogous to your computer’s hard drive, random-access memory (RAM) and central processing unit (CPU). But none of these things are essential to something’s being a computer. In fact, we don’t know what is essential for the brain to be a computer. Still, it is almost certainly true that it is one.
  • I expect that most who have heard the claim ‘the brain is a computer’ assume it is a metaphor. The mind is a computer just as the world is an oyster, love is a battlefield, and this shit is bananas (which has a metaphor inside a metaphor). Typically, metaphors are literally false. The world isn’t – as a matter of hard, humourless, scientific fact – an oyster. We don’t value metaphors because they are true; we value them, roughly, because they provide very suggestive ways of looking at things. Metaphors bring certain things to your attention (bring them ‘to light’), they encourage certain associations (‘trains of thought’), and they can help coordinate and unify people (they are ‘rallying cries’). But it is nearly impossible to ever say in a complete and literally true way what it is that someone is trying to convey with a literally false metaphor. To what, exactly, is the metaphor supposed to turn our attention? What associations is the metaphor supposed to encourage? What are we all agreeing on when we all embrace a metaphor?
  • If it were a metaphor to say that the brain is a computer, then we would expect the claim to be literally false. This checks out with the point that our brains aren’t organised, like PCs, into silicon-based hard drives, RAMs and CPUs. We would also expect it to be difficult to flesh out exactly what we mean when we say that the brain is a computer. The value of the claim, were it a metaphor, would have to lie in whether it suggests the right things to attend to, whether it calls to mind fruitful associations, and whether it succeeds in bringing some coordination to the cognitive sciences. Some think that the supposed metaphor succeeds on these counts, while others think it fails and has poisoned the well of cognitive-science research.
Author's Conclusion
  • Throughout the history of science, we have often started by knowing just enough to be able to point to an important thing, though we didn’t grok it well enough to paint the full picture. We do not need to start with a fixed, fully fleshed-out conception of some deep feature of the world and then see whether it fits our observations. Often, we get our hooks into some general phenomenon as we try to solve specific problems, and we have to drag ourselves closer and closer to the phenomenon underlying those problems, developing our understanding over the course of generations, sometimes withstanding fierce swings and great setbacks as we hold on for dear science. We’re on to something when we say the brain is a computer, but it might not be clear for a while what exactly it is that we’re on to.
  • How will we come to make sense of the general claim that the brain is a computer? I think we have to resist talking solely in abstractions. We have to keep ourselves firmly rooted in the specific hypotheses that the brain computes X, Y and Z. What do these specific hypotheses have in common? What makes them tick? How do they give rise to systematic predictions and why do they seem to provide insightful explanations? What would we lose if, instead of saying that the brain computes depth from binocular disparity, we adopted a totally undefined term, ‘jorgmute’ (I know not what it is to ‘jorgmute’), and said that the brain jorgmutes depth from binocular disparity?
  • These questions are, in fact, just the sorts of questions you find philosophers of science regularly asking. To its credit, cognitive science (unlike other sciences, recently) has always welcomed and acknowledged the contributions of philosophers. As the late American philosopher Jerry Fodor wrote in "Fodor (Jerry) - The Language of Thought" (1975):
      One wants to say: ‘If our psychology is, in general, right then the nature of the mind must be, roughly, this …’ and then fill in the blank. … [T]he experimentalist can work the other way around: ‘If the nature of the mind is roughly …, then our psychology ought henceforth to look like this: …’, where this blank is filled by new first-order theories. We ascend, in science, by tugging one another’s bootstraps.
  • The brain is almost certainly partly a computer. We still have to uncover what that means about our brains and about ourselves.
Author Narrative
  • Kevin Lande is a postdoctoral researcher at the Centre for Philosophical Psychology at the University of Antwerp. His work has appeared in The Journal of Philosophy. He lives in Belgium.
Notes
Paper Comment
  • Sub-Title: "We’re certainly on to something when we say the brain is a computer – even if we don’t yet know what exactly we’re on to"
  • For the full text see Aeon: Lande - Do you compute?.



"Lau (Graham) - The ‘panzoic effect’: the benefits of thinking about alien life"

Source: Aeon, 25 March 2025


Author's Conclusion
  • Most of all, though, when I reflect on the possible abundance of alien life, it fills me with wonder and awe. In recent years, psychologists have shown that these are powerful, perspective-changing emotions. Wonderment at the nature of the world – from being curious about the workings of everyday things to wandering in the world around us – inspires us and helps us to develop new ideas and perspectives. Awe, meanwhile, is the feeling of being in the presence of something that transcends your current understanding of yourself and your place in the cosmos.
  • The psychologist Dacher Keltner writes in his book Awe: The New Science of Everyday Wonder and How It Can Transform Your Life (2023) that:
      From our first breath to our last, awe moves us to deepen our relations with the wonders of life and to marvel at the vast mysteries that are part of our fleeting time here, guided by this most human of emotions.
  • As Keltner suggests, there are many forms through which awe can come into our lives: from experiencing depth in music and art to feeling the grandness of nature or seeing people act in morally impactful ways.
  • Indeed, this is what may be going on when astronauts change their perspective following spaceflight. In the journal article ‘The Overview Effect: Awe and Self-Transcendent Experience in Space Flight’ (2016), David B Yaden and his colleagues conclude: ‘Awe and self-transcendence are among the deepest and most powerful aspects of the human experience; it should come as no surprise that they emerge as we gaze upon our home planet and our whole world comes into view.’
  • When I feel the panzoic effect, it encourages me to envision a hopeful future, one where our explorations inspire unity, and our shared wonder leads to greater care for one another and the planet we call home. Whether we ever encounter extraterrestrial life or not, I believe that the journey of seeking it can help us rediscover and improve ourselves. Much like the overview effect, the panzoic effect suggests that the wonder and awe we experience in this cosmic mirror – by looking out and, in turn, looking back in – has the potential to alter how we view ourselves and our place in the cosmos. And as White himself told me: ‘I think that’s the big question.’
  • Are we alone? We don’t yet know, but asking the question forces us to appreciate our existence here on Earth, while offering us a glimpse into our possible cosmic futures. Considering alien life is a means for considering ourselves.
Author Narrative
  • Graham Lau is an astrobiologist and communicator of science. He is a senior research investigator with the Blue Marble Space Institute of Science (Blue Marble Space Institute of Science (BMSIS)), and director of communications and marketing for Blue Marble Space in Seattle.
Notes
Paper Comment



"Lazar (Seth) - Frontier AI ethics"

Source: Aeon, 13 February 2024


Author's Introduction
  • Around a year ago, generative AI took the world by storm, as extraordinarily powerful large language models (LLMs) enabled unprecedented performance at a wider range of tasks than ever before feasible. Though best known for generating convincing text and images, LLMs like OpenAI’s GPT-4 and Google’s Gemini are likely to have greater social impacts as the executive centre for complex systems that integrate additional tools for both learning about the world and acting on it. These generative agents will power companions that introduce new categories of social relationship, and change old ones. They may well radically change the attention economy. And they will revolutionise personal computing, enabling everyone to control digital technologies with language alone.
  • Much of the attention being paid to generative AI systems has focused on how they replicate the pathologies of already widely deployed AI systems, arguing that they centralise power and wealth, ignore copyright protections, depend on exploitative labour practices, and use excessive resources. Other critics highlight how they foreshadow vastly more powerful future systems that might threaten humanity’s survival. The first group says there is nothing new here; the other looks through the present to a perhaps distant horizon.
  • I want instead to pay attention to what makes these particular systems distinctive: both their remarkable scientific achievement, and the most likely and consequential ways in which they will change society over the next five to 10 years.
Author's Conclusion
  • Lastly, LLMs might enable us to design universal intermediaries, generative agents sitting between us and our digital technologies, enabling us to simply voice an intention and see it effectively actualised by those systems. Everyone could have a digital butler, research assistant, personal assistant, and so on. The hierophantic coder class could be toppled, as everyone could conjure any program into existence with only natural-language instructions.
  • At present, universal intermediaries are disbarred by LLMs’ vulnerability to being hijacked by prompt injection. Because they do not clearly distinguish between commands and data, the data in their context window can be poisoned with commands directing them to behave in ways unintended by the person using them. This is a deep problem – the more capabilities we delegate to generative agents, the more damage they could do if compromised. Imagine an assistant that triages your email – if hijacked, it could forward all your private mail to a third party; but if we require user authorisation before the agent can act, then we lose much of the benefit of automation.
  • But suppose these security hurdles can be overcome. Should we welcome universal intermediaries? I have written elsewhere that algorithmic intermediaries govern those who use them – they constitute the social relations that they mediate, making some things possible and others impossible, some things easy and others hard, in the service of implementing and enforcing norms. Universal intermediaries would be the apotheosis of this form, and would potentially grant extraordinary power to the entities that shape those intermediaries’ behaviours, and so govern their users. This would definitely be a worry!
  • Conversely, if research on LLMs continues to make significant progress, so that highly capable generative agents can be run and operated locally, fully within the control of their users, these universal intermediaries could enable us to autonomously govern our own interactions with digital technologies in ways that the centralising affordances of existing digital technologies render impossible. Of course, self-governance alone is not enough (we must also coordinate). But excising the currently ineliminable role of private companies would be significant moral progress.
  • Existing generative AI systems are already causing real harms in the ways highlighted by the critics above. And future generative agents – perhaps not the next generation, but before too long – may be dangerous enough to warrant at least some of the fears of looming AI catastrophe. But, between these two extremes, the novel capabilities of the most advanced AI systems will enable a genre of generative agents that is either literally unprecedented, or else has been achieved only in a piecemeal, inadequate way before. These new kinds of agents bring new urgency to previously neglected philosophical questions. Their societal impacts may be unambiguously bad, or there may be some good mixed in – in many respects, it is too early to say for sure, not only because we are uncertain about the nature of those effects, but because we lack adequate moral and political theories with which to evaluate them. It is now commonplace to talk about the design and regulation of ‘frontier’ AI models. If we’re going to do either wisely, and build generative agents that we can trust (or else decide to abandon them entirely), then we also need some frontier AI ethics.
Author Narrative
  • Seth Lazar is professor of philosophy at the Australian National University, an Australian Research Council Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, risk, and AI, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of computing, funded by the ARC, the Templeton World Charity Foundation, AI2050, and Insurance Australia Group. His book Connected by Code: How AI Structures, and Governs, the Ways We Relate, based on his 2023 Tanner Lecture on AI and Human Values, is forthcoming with Oxford University Press.
Notes
Paper Comment
  • Sub-Title: "Generative agents will change our society in weird, wonderful and worrying ways. Can philosophy help us get a grip on them?"
  • For the full text see Aeon: Lazar - Frontier AI ethics.



"Lenharo (Mariana) - Do insects have an inner life? Animal consciousness needs a rethink"

Source: Nature News, 19 April 2024


Full Text
  1. Introduction
    • A declaration signed by dozens of scientists says there is “a realistic possibility” for elements of consciousness in reptiles, insects and molluscs.
    • Growing evidence indicates that insects such as bees show some forms of consciousness, according to a new scientific statement.
    • Crows, chimps and elephants: these and many other birds and mammals behave in ways that suggest they might be conscious. And the list does not end with vertebrates. Researchers are expanding their investigations of consciousness to a wider range of animals, including octopuses and even bees and flies.
    • Armed with such research, a coalition of scientists is calling for a rethink in the animal-human relationship. If there’s “a realistic possibility” of “conscious experience in an animal, it is irresponsible to ignore that possibility in decisions affecting that animal”, the researchers write in a document they call The New York Declaration on Animal Consciousness. Issued today during a meeting in New York City, the declaration also says that there is a “realistic possibility of conscious experience” in reptiles, fish, insects and other animals that have not always been considered to have inner lives, and “strong scientific support” for aspects of consciousness in birds and mammals.
    • As the evidence has accumulated, scientists are “taking the topic seriously, not dismissing it out of hand as a crazy idea in the way they might have in the past,” says Jonathan Birch, a philosopher at the London School of Economics and Political Science and one of the authors of the declaration.
    • The document, which had around 40 signatories early today, doesn’t state that there are definitive answers about which species are conscious. “What it says is there is sufficient evidence out there such that there’s a realistic possibility of some kinds of conscious experiences in species even quite distinct from humans,” says Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK, and one of the signatories. The authors hope that others will sign the declaration and that it will stimulate both more research into animal consciousness and more funding for the field.
  2. Blurry line
    • The definition of consciousness is complex, but the group focuses on an aspect of consciousness called sentience, often defi ned as the capacity to have subjective experiences, says Birch. For an animal, such experiences would include smelling, tasting, hearing or touching the world around itself, as well as feeling fear, pleasure or pain — in essence, what it is like to be that animal. But subjective experience does not require the capacity to think about one’s experiences.
    • Non-human animals cannot use words to communicate their inner states. To assess consciousness in these animals, scientists often rely on indirect evidence, looking for certain behaviours that are associated with conscious experiences, Birch says.
    • One classic experiment is the mirror test, which investigates an animal’s ability to recognize itself in a mirror. In this experiment, scientists apply a sticker or other visual mark on an animal’s body and place the animal in front of a mirror. Some animals — including chimpanzees (Pan troglodytes), Asian elephants (Elephas maximus) and cleaner fishes (Labroides dimidiatus) — exhibit curiosity about the mark and even try to remove it. This behaviour suggests the possibility of self-awareness, which might be a sign of consciousness.
    • In an experiment with crows (Corvus corone), the birds were trained to make a specific head gesture whenever they saw a coloured square on a screen, a task they carried out with high accuracy. While the birds performed the task, scientists measured the activity in a region of their brain associated with high-level cognition. The birds’ brain activity correlated with what the birds were reporting, not with what they were actually shown. This suggests that they were aware of what they were perceiving, another potential marker of consciousness.
    • The consciousness wars1: can scientists ever agree on how the mind works?
  3. Invertebrate inner lives?
    • Another experiment showed that octopuses
      (Octopus bocki), when picking between two chambers, avoided one where they had previously received a painful stimulus in favour of one where they were given an anaesthetic. This suggests that they experience and actively avoid pain, which some researchers think indicates conscious experience.
    • Research shows that octopuses avoid pain, which some scientists take as a sign ofconsciousness.
    • Investigations of fruit flies (Drosophila melanogaster) show that they engage in both deep sleep and ‘active sleep’, in which their brain activity is the same as when they’re awake. “This is perhaps similar to what we call rapid eye movement sleep in humans, which is when we have our most vivid dreams, which we interpret as conscious experiences,” says Bruno van Swinderen, a biologist at the University of Queensland in Brisbane, Australia, who studies fruit flies’ behaviour and who also signed the declaration.
    • Some suggest that dreams are key components of being conscious, he notes. If flies and other invertebrates have active sleep, “then maybe this is as good a clue as any that they are perhaps conscious”.
  4. Animal minds
    • Other researchers are more sceptical about the available evidence on animal consciousness. “I don’t think there is basically any decisive evidence so far,” says Hakwan Lau, a neuroscientist at the Riken Center for Brain Science in Wako, Japan.
    • Lau acknowledges that there is a growing body of work showing sophisticated perceptual behaviour in animals, but he contends that that’s not necessarily indicative of consciousness. In humans, for example, there is both conscious and unconscious perception. The challenge now is to develop methods that can adequately distinguish between the two in non-humans.
    • Seth responds that, even in the absence of definitive answers, the declaration might still have a positive influence in shaping policies relating to animal ethics and welfare.
    • For van Swinderen, the time is right to consider whether most animals might be conscious. “We are experiencing an artificial-intelligence revolution where similar questions are being asked about machines. So it behoves us to ask if and how this adaptive quality of the brain might have evolved in nature.”
  5. References
    • There are 6 of these, but not hyperlinked, so I’ve omitted them here.

Paper Comment




In-Page Footnotes ("Lenharo (Mariana) - Do insects have an inner life? Animal consciousness needs a rethink")

Footnote 1:
  • This is a link to another article.



"Levin (Michael) & Dennett (Daniel) - Cognition all the way down"

Source: Aeon, 13 October 2020


Authors' Introduction
  • Biologists like to think of themselves as properly scientific behaviourists, explaining and predicting the ways that proteins, organelles, cells, plants, animals and whole biota behave under various conditions, thanks to the smaller parts of which they are composed. They identify causal mechanisms that reliably execute various functions such as copying DNA, attacking antigens, photosynthesising, discerning temperature gradients, capturing prey, finding their way back to their nests and so forth, but they don’t think that this acknowledgment of functions implicates them in any discredited teleology or imputation of reasons and purposes or understanding to the cells and other parts of the mechanisms they investigate.
  • But when cognitive science turned its back on behaviourism more than 50 years ago and began dealing with signals and internal maps, goals and expectations, beliefs and desires, biologists were tom. All right, they conceded, people and some animals have minds; their brains are physical minds - not mysterious dualistic minds - processing information and guiding purposeful behaviour; animals without brains, such as sea squirts, don’t have minds, nor do plants or fungi or microbes. They resisted introducing intentional idioms into their theoretical work, except as useful metaphors when teaching or explaining to lay audiences. Genes weren’t really selfish, antibodies weren’t really seeking, cells weren’t really figuring out where they were. These little biological mechanisms weren’t really agents with agendas, even though thinking of them as if they were often led to insights.
  • We think that this commendable scientific caution has gone too far, putting biologists into a straitjacket that prevents them from exploring the most promising hypotheses, just as behaviourism prevented psychologists from seeing how their subjects’ measurable behaviour could be interpreted as effects of hopes, beliefs, plans, fears, intentions, distractions and so forth. The witty philosopher Sidney Morgenbesser once asked B F Skinner: ‘You think we shouldn’t anthropomorphise people?’- and we’re saying that biologists should chill out and see the virtues of anthropomorphising all sorts of living things. After all, isn’t biology really a kind of reverse engineering of all the parts and processes of living things? Ever since the cybernetics advances of the 1940s and ’50s, engineers have had a robust, practical science of mechanisms with purpose and goal- directedness - without mysticism. We suggest that biologists catch up.
Authors' Conclusion
  • Humans, of course, have very large cognitive horizons, sometimes working hard for things that will happen long after they are gone, in places far away. Worms work only for very local, immediate goals. Other agents, natural and artificial, can be anywhere in between. This way of plotting any system’s cognitive horizon is a kind of space-time diagram analogous to the ways in which relativistic physics represents an observer’s light cone - fundamental limits on what any observer can interact with, via influence or information. Examples are shown in Figure 2 below. It’s all about goals: single cells’ homeostatic goals are roughly the size of one cell, and have limited memory and anticipation capacity. Tissues, organs, brains, animals and swarms (like anthills) form various kinds of minds that can represent, remember and reach for bigger goals. This conceptual scheme enables us to look past irrelevant details of the materials or backstory of their construction, and to focus on what’s important for being a cognitive agent with some degree of sophistication: the scale of its goals. Agents can combine into networks, scaling their tiny, local goals into more grandiose ones belonging to a larger, unified self. And of course, any cognitive agent can be made up of smaller agents, each with their own limits on the size and complexity of what they’re working towards.
  • From this perspective, we can visualise the tiny cognitive contribution of a single cell to the cognitive projects and talents of a lone human scout exploring new territory, but also to the scout’s tribe, which provided much education and support, thanks to language, and eventually to a team of scientists and other thinkers who pool their knowhow to explore, thanks to new tools, the whole cosmos and even the abstract spaces of mathematics, poetry and music. Instead of treating human ‘genius’ as a sort of black box made of magical smartstuff, we can reinterpret it as an explosive expansion of the bag of mechanical-but- cognitive tricks discovered by natural selection over billions of years. By distributing the intelligence over time - aeons of evolution, and years of learning and development, and milliseconds of computation - and space - not just smart brains and smart neurons but smart tissues and cells and proofreading enzymes and ribosomes - the mysteries of life can be unified in a single breathtaking vision.
Author Narrative
  • Michael Levin is the Vannevar Bush Chair and Distinguished Professor of Biology at Tufts University in Massachusetts, where he directs the Allen Discovery Center and the Tufts Center for Regenerative and Developmental Biology.
  • Daniel C Dennett is the Austin B Fletcher professor of philosophy and co-director of the Center for Cognitive Studies at Tufts University. He is the author of more than a dozen books, the latest of which is From Bacteria to Bach and Back: The Evolution of Minds (2017). He lives in Massachusetts.
Notes
  • A rather complex paper, that deserves a second reading.
  • Ultimately, it's just a further application of Dennett's 'Intentional Stance', as exemplified in "Dennett (Daniel) - The Intentional Stance".
  • The authors suggest that the 'bottom up' approach can only go so far, else things just get too complicated to be of any use as explanations (however important and useful this may be at the bottom level).
  • The authors are also at pains to stress that they are not suggesting panpsychism or that organic subsystems have minds, but they seem to have disquieted a few Aeon Commentators, whose remarks I need to read; I've downloaded them for future analysis.
  • The reference to the Prisoner's Dilemma (and to "Dennett (Daniel) - Darwin's Dangerous Idea: Evolution and the Meanings of Life", from which quotation is made) in the context of inter-cellular communication was interesting and enlightening.

Paper Comment



"Liggins (David) - This essay isn’t true"

Source: Aeon, 17 November 2022


Author's Conclusion
  • My conjecture is that truth-talk arose like this. We had a device for expressing agreement that was transformed into a way of describing things. The transformation was beneficial because it enabled us to say more. But it came with a price. We had to believe that some things are true. And we had to believe that something counts as true if what it says is the case. In other words, we had to believe the reality-to-truth link. Of course, our ancestors were happy to take on these useful assumptions without launching a philosophical enquiry to check that some things really are true. And so the assumption that some things are true entered human culture, along with the reality-to-truth link. These assumptions were passed down to us through the generations, just as they are still being passed on to children right now.
  • Philosophers generally draw on the assumption that there are truths when discussing lots of topics: knowledge, reasoning, assertion. If we discover that nothing is true, all those discussions will have to be rethought. It looks as if the discovery that nothing is true would have massive implications for how we live our lives. For example, it seems to imply that lying is fine. No-one can be faulted for failing to tell the truth – because there are no truths to tell. I would be very uneasy if the theory I’m sympathetically exploring had that implication. Actually, I think it doesn’t. What it does imply is that our understanding of lying has to change. It’s easiest to make the point with an example. Suppose I’ve robbed a bank. My friend lies to give me an alibi: he says I was at home at the time of the robbery. Why was the claim a lie? We’d ordinarily say: because the claim wasn’t true. But the nihilist can’t say that. Instead, they have to say that the claim was a lie because I wasn’t at home. If lying is understood in this way, then the nihilist can say that lying is wrong – although there is nothing wrong with not telling the truth.
  • Any brief discussion of alethic nihilism is bound to raise more questions than it answers. But I hope that this one illustrates one of the purposes of philosophy itself – that is, to be genuinely critical. To put that another way: one of the purposes of philosophy is to take assumptions that we hardly ever stop to consider and put them under the microscope to see whether they genuinely deserve to be believed.
Author Narrative
  • David Liggins is senior lecturer in philosophy at the University of Manchester in the UK.
Notes
  • The proposal of the paper - that there are no true statements - is absurd. It's more paradoxical than the paradoxes it seeks to 'solve'.
  • The Aeon comments - as is to be expected on a topic so controversial - are very extensive and interesting - with many responses by the author - and I've stored them for future analysis.
  • I'll add my comments in due course.
  • I also need to add two new PID Notes - on Truth and on Paradox - subordinate to that on the Logic of Identity.

Paper Comment



"Linden (Ingemar Patrick) - As a society, we’re not death phobic, we’re death complacent"

Source: Aeon, 17 December 2024


Author's Introduction
  • It is often said that contemporary Western societies are in the grip of an excessive and irrational fear of death, a thanatophobia. For instance, this is the diagnosis given by the grief counsellor and author Stephen Jenkinson who observes that:
      [P]eople in any culture inherit their understandings of dying much more than they create them. In our case, that inheritance takes the form of an extraordinary degree of aversion to and dread of dying. My phrase for it was that the culture is incontrovertibly and, for the most part, unconsciously death-phobic.
  • Jenkinson’s diagnosis aligns with that of the French historian Philippe Ariès, the author of a seminal study of the history of Western attitudes to death. Ariès argues that whereas our premodern ancestors maintained an equanimous relation to death, we moderns suffer a ‘horror of death’, which he describes as a ‘violent attachment to the things of life … a passion for being, an anxiety at not sufficiently being.’
  • Evidence for this view is not hard to come by. We are surrounded by advertisements for various supplements and ointments that promise to restore our youth or hide every sign that it is fleeting (and that we are going to die). Billionaires such as Jeff Bezos and corporations such as Google are spending billions on anti-ageing research. Popular science books such as Transcend: Nine Steps to Living Well Forever (2009) by Raymond Kurzweil and Terry Grossman, Lifespan: Why We Age – and Why We Don’t Have To (2019) by David Sinclair, and Ageless: The New Science of Getting Older Without Getting Old (2020) by Andrew Steele have become bestsellers. The spirituality genre similarly assures us that we can keep on living, albeit in a different realm. The much-publicised multimillionaire Bryan Johnson, who spends $2 million per year on a personal anti-ageing regimen (the Blueprint), is building a community under the banner ‘Don’t Die’. Our collective fear of death is also evidenced by how we die: outside society’s view, in a hospital bed, drugged and connected to machines via intravenous tubes, fighting death until our last breath. Those around us ensure that everything is done to keep us alive.
  • The standard view that contemporary culture is distinctly thanatophobic certainly isn’t groundless. Nevertheless, I think it can be shown to be a myth.

Author's Conclusion
  • The case against the taken-for-granted notion that our culture is thanatophobic is, I think, surprisingly strong. I propose a counter narrative: mainstream attitudes to death in modern Western societies and most other modern societies around the world are death complacent. We think that quality of life is more important than quantity of life – healthspan over lifespan – and we see fear of death as something to overcome.
  • Even an enemy of death such as Bryan Johnson insists that he does not fear death: ‘I know what fear feels like, and I don’t experience that emotion when contemplating death.’ For him, the desire to prolong life is a value judgment, not an expression of fear; it is an expression of his ‘passion for being’ rather than his ‘horror of death’, in Ariès’s words.
  • When comparing Johnson’s lack of fear with the supposed fearlessness of strongly religious premodern societies, there is an important difference: unlike Johnson, they did not think that death was the permanent end of a person. They denied the reality of death.
  • I venture that contemporary societies have a greater proportion than ever before of people who both accept that death means personal annihilation, yet remain largely unafraid of it.
Author Narrative
  • Ingemar Patrick Linden is the author of The Case Against Death (2022). He has taught philosophy at New York University, US and is currently a senior lecturer at Geneva College of Longevity Science, Switzerland. He is the co-host of Levity, a podcast about the philosophy of longevity.
Notes
  • This is an interesting - if rather brief - paper. The author thinks we should do more to try to prolong our lives, the only one we have.
  • I can go along with this, but share the view of the supposedly complacent majority that it's quality rather than quantity that matters and that when the party's over we should try to arrange the least unpleasant mode of exit.
  • The author's book may be worth reading, but is a little expensive.
  • There are no Aeon Comments.
  • This relates to my Note on Death.

Paper Comment



"Linford (Daniel) - Exploding the Big Bang"

Source: Aeon, 09 December 2024


Author's Introduction
  • In the 1930s, a Belgian priest and physicist named Georges Lemaître transformed our understanding of the Universe when he envisioned its birth as a cosmic explosion. According to Lemaître, the beginning of time began with ‘bright but very rapid fireworks’. His theory suggested that we lived in the fading afterglow – a slowly unfolding world of smoke and ashes. Lemaître’s ‘fireworks theory of evolution’ painted a vivid picture, but it also presented scientists with a near-impossible puzzle: could we find evidence of the beginning of time if that slow unfolding was somehow tracked backward? Would we discover a record of the Universe’s birth somewhere in the present?
  • Before Lemaître, the question of the Universe’s birth was confined to metaphysicians and theologians. Jewish, Christian and Muslim scholars believed in divine creation, while atheist thinkers typically argued for an eternal past. The consequences of finding evidence for the beginning of time would have been enormous. If science was able to reveal when time began, the Abrahamic religions could take comfort in the confirmation of an important doctrine: the divine creation of the Universe. Alternatively, if science found that time never began, some conceptions of God could be ruled out. Empirical evidence, however, played no role in these philosophical and theological debates about the world’s origins. In fact, no one, not even scientists, believed that the dawn of time could have left a trace in the present.
  • The 20th century changed everything. Lemaître’s hypothesis, initially met with scepticism, suggested that the Universe had a fiery origin – one that might be discoverable. Today, many of us still believe this story. The Universe, according to popular books, television documentaries and the theme song to at least one sitcom, started with a Big Bang, marking the origins of physical matter and time itself.
  • The question of our Universe’s birth seems settled. And yet, despite how the Big Bang is portrayed in popular culture, many physicists and philosophers of physics have long doubted whether science can truly tell us that time began. In recent decades, powerful results developed by scientifically minded philosophers appear to show that science may never show us that time began. The beginning of time, once imagined as igniting in a sudden burst of fireworks, is no longer an indisputable scientific fact.

Author's Conclusion
  • Once confined to metaphysics and theology, the question of whether the Universe began once seemed within the reach of science. Einstein’s work transformed our understanding of space and time, binding both to matter and suggesting that spacetime itself could hold clues about its own origins. This breakthrough challenged beliefs that a ‘beginning’ was empirically inaccessible and led physicists to seek traces of the Universe’s birth. This triumph has proven to be bittersweet.
  • The Malament-Manchak theorem presents us with a sobering limit: our observations, no matter how extensive, may never be sufficient to determine spacetime’s global structure. Mathematically, the possible shapes and properties of the Universe remain too numerous – many versions fit equally well with the data available from our past light cones. Though the Big Bang has been popularly hailed as the origin of our Universe, many physicists and philosophers remain unconvinced.
  • In the end, whether time had a beginning is a cosmological riddle. Despite dramatic scientific developments, no theorem or observation seems powerful enough to tell us whether the Universe emerged from ‘bright but very rapid fireworks’ or has always existed. Science has brought us closer to understanding the cosmos, yet it also reminds us of the limits of our knowledge. The beginning of time may remain, in the end, a mystery that we will never conclusively answer.
Author Narrative
  • Daniel Linford is an instructor in the Department of Philosophy and Religious Studies at Old Dominion University in Virginia, US.
Notes
  • This is a very difficult topic. I doubt the author is really qualified to write on the subject.
  • I doubt theoretical physicists care much what philosophers think on the subject.
  • While 'some' physicists may think the Big Bang never happened, I doubt it's anything other than a fringe view. It explains not just the Cosmic Background Radiation and the expansion of the universe, but the ratio of Helium to Hydrogen (last I heard).
  • The paper has encouraged me to dust off "Lerner (Eric) - The Big Bang Never Happened". Based on Wikipedia: Eric Lerner, the book was not well received by the physics establishment.
  • That said, the Big Bang Theory is no more than an inference to the best explanation of the data, more of which arrives as our technology improves.
  • It's yet another incentive to get to grips with SR & GR, but maybe I'm just too old, incompetent and busy.
  • There are a lot of Aeon Comments, with replies by the author. Most of them are by the 'boggled but impressed', but I've not read them all, and skimmed most that I have read. They deserve closer attention.
  • The paper relates to my Notes on Time, Origins and Evolution (of the Universe).

Paper Comment
  • Sub-Title: "It was thought that science could tell us about the origins of the Universe. Today that great endeavour is in serious doubt"
  • For the full text see Aeon: Linford - Exploding the Big Bang.



"Longrich (Nick) - The other Homo sapiens"

Source: Aeon, 23 May 2025


Author's Introduction
  • On the western slope of Mount Carmel, in Israel, lies the cave of Es-Skhul. About 140,000 years ago, during the Ice Age, nomadic hunter-gatherers made camp here. The sea to the west had receded, exposing a broad plain covered with groves of live oak, almond and olive, meadows filled with asphodel and anemone. Herds of fallow deer, rhinoceros and aurochs roamed the plains. People hunted animals with stone-tipped spears, and foraged wild mustard and olives. And when they died, they buried their dead by the mouth of the cave. The skeletons found here represent some of the earliest known members of our species, Homo sapiens. But these Homo sapiens were very different from us.
  • Their skulls retained anatomical features seen in primitive humans like Neanderthals – huge brow ridges, massive jaws, thick skulls. But, despite their primitive appearance, they weren’t our ancestors; they appear too late in time. They’re a side branch of our evolutionary tree, one that went extinct, leaving no descendants. Why did we survive, while they didn’t?
  • The answer may lie in their skulls. They lacked the peculiar anatomical traits that modern humans share – small brow ridges, bubble-shaped skulls, reduced jaws, thin cranial bones – which are typical of juveniles of other hominins and apes. Compared with other hominins, we’re literally baby-faced. Selection for juvenile traits – low aggression, openness to novelty and new people – likely made us more social, and produced our immature-looking skulls as a side-effect. Ironically, it may have been this sociability and low aggression that made modern humans so incredibly dangerous to these primitive Homo sapiens.

Author's Conclusion
  • One of the most extraordinary features of modern Homo sapiens is how we form large social groups. African hunter-gatherers typically live in bands of several dozen people, which ally to form tribes of many hundreds or even 1,000 people. Meanwhile, studies of Neanderthal DNA show low genetic diversity, suggesting more inbreeding – they lived in smaller, more isolated social groups. Our big social groups must have given us more brains to solve problems and to devise techniques for making tools. But maybe more important is the fact that large tribes are better able to defend land – or take it. If archaic Homo sapiens resembled Neanderthals in having small social groups, this must have put them at a disadvantage against modern humans.
  • It’s obviously hard to reconstruct social structures for people who lived tens of thousands of years ago. Still, could the anatomical differences between us and the archaics tell us something – could a clue to our superiority lie in modern humans’ weird skull shapes, where juvenile features are retained into adulthood?
  • A similar pattern is seen in domestic dogs – dog skulls are shaped like those of wolf puppies, and they have thinner skull bones too. The process of domesticating dogs for lower aggression produced something that looks like a young wolf. This may be a side-effect of selecting dogs for characters found in wolf puppies – less aggression, more playfulness, more friendliness.
  • So it’s possible that a sort of process of domestication gave modern Homo sapiens our weird, immature skulls, including big, domed heads, loss of brow ridges, small jaws, and thin skull bones. If the bones look immature, maybe the brain inside was too. Perhaps youthful creativity, imagination, faculty for languages, playfulness, why’s-the-sky-blue curiosity, willingness to make new friends were all retained late in life in us, compared with other humans – with selection for child-like behaviours creating our child-like faces.
  • Paradoxically, low aggression may have been a massive advantage in intertribal warfare. Low aggression could have helped us to form big social groups – tribes of hundreds and thousands. And modern humans don’t just form huge groups, we’re unique among animals in being able to form peace treaties between different groups, and alliances between groups to defend or attack territory. What made modern Homo sapiens so uniquely dangerous might not have been a tendency towards violence and aggression, but friendliness, and the ability to forge alliances. The ability to create groups and social networks, and hold off fighting – at least, until we’re in a position to win – could have given us a decisive edge.
  • I alone am survived to tell thee. Today, all human diversity derives from a small population that lived a few hundred thousand years ago. The picture was very different when we first evolved. Then, there were 10 or more different human species. All of them have since disappeared.
  • Within Homo sapiens, we see this pattern repeated, with lineages others than our own stripped away, a diversity of peoples whittled down until only one remains. We were just one of many different lineages, but now we’re alone. All that remains of the others are stone tools, a few skeletons, perhaps a few of their genes mixed with ours.
  • We don’t know how this replacement played out. In some cases, the large size of modern human social groups probably allowed our ancestors to move in and seize territory without a fight, forcing archaic humans onto more marginal land. But in many cases, the conflict between modern humans and archaic sapiens was likely violent. The remarkably slow spread of modern humans – it took perhaps 300,000 years for us to completely displace archaic sapiens in Africa – implies that archaic humans resisted fiercely and effectively. It took hundreds of thousands of years of intertribal warfare for modern humans to spread from our homeland in southern Africa to the far edges of the continent. Even the final, rapid push from Egypt to the northwest tip of Africa took more than 10,000 years – just half a mile per year. The expansion of modern humans was a long, gruelling war of attrition, not a blitzkrieg. This also tells us something about just how human they were – the edge we had was decisive, but not overwhelming – they must have been very like us to fight us off for so long.
  • The evolution of the human species, much like art, would have been both an additive and a subtractive process. Shakespeare put words on the page, then deleted lines and whole scenes. Evolution is the same. It adds new genes to a population through mutation. It subtracts genes by eliminating individuals, populations, entire species. When Michelangelo sculpted the statue of David, he chiselled away every piece of stone that didn’t look like the ideal human form. In the same way, evolution worked on the raw material of Homo sapiens, carving away everything that wasn’t a modern human. To be more precise, we ourselves did the carving, slowly stripping away other species and other lineages, until modern humans remained.
  • This process didn’t begin with us. Over millions of years, increasingly advanced hominin species appeared, with bigger brains and more advanced tools and language. They tended to displace the more primitive species. The acquisition of spear-throwing and hunting by Homo erectus probably saw the primitive Australopithecus species killed off – even hunted and eaten – by their advanced rivals. And in the Levant, we see a series of species move through, as new and more advanced hominins evolved in Africa and then migrated out – the ancestors of Homo erectus, Homo antecessor, Neanderthals, archaic humans and, finally, modern humans, each wave replacing the one that came before.
  • This process did, however, end with us. After thousands and millions of years, one lineage emerged to replace all the others. This probably explains something about our history, and our tendency towards war and conflict. We may live in civilisation today, but the genes within us are those that made us the sole survivors of hundreds of thousands of years of intertribal conflicts and bloody, genocidal wars. We replaced all the other humans because we were more dangerous than all the others.
Author Narrative
  • Nick Longrich is a senior lecturer in evolutionary biology at the University of Bath, UK.
Notes
  • This is a fascinating, informative and challenging paper.
  • I felt that the author could have done with defining what he meant by 'species' and 'sub-species'. He seems happy with both these sets of individuals inter-breeding.
  • See:-
    Wikipedia: Species
    Wikipedia: Subspecies
  • He also seems to propose that some groups of modern (or at least contemporary) humans form separate sub-species. Controversial, if not dangerous. Maybe I misunderstand. The text is
      It’s possible that archaics represent a subspecies of Homo sapiens. More likely, given their wide geographic spread and anatomical variation, they represented multiple subspecies. Within modern humans, we see distinct groups in different regions – the San in southern Africa have been evolving more or less in isolation for 300,000 years, and the Pygmies in Central Africa split off from the remaining humans around 200,000 years ago, and so on.
  • There are no Aeon Comments.
  • This relates to my Notes on Evolution, Homo Sapiens, Human Beings, Race and Society.

Paper Comment
  • Sub-Title: "We are just one branch of a diverse human family tree. Aside from Neanderthals, who were they – and why did we replace them?"
  • For the full text see Aeon: Longrich - The other Homo sapiens.



"Love (Shayla) - You can want things you don’t like and like things you don’t want"

Source: Aeon, 07 May 2024


Author's Conclusion
  • There’s an important distinction between the kind of wanting Berridge is talking about and ‘wanting’ in a more aspirational sense. ‘I want to work out more, I want to stop spending so much time on social media, I want to eat healthier – those are cognitive plans,’ Robinson says. This kind of wanting can go along with a more primal want, but it doesn’t have to. It’s the more intense wanting that is affected in addiction.
  • There are other striking cases where too much or too little wanting can be produced through changes to dopamine. People who take medications for Parkinson’s disease sometimes develop extreme behavioural addictions. The older medications for Parkinson’s, like L-Dopa, helped the brain to make natural dopamine. Newer medications provide artificial dopamine that can attach directly to dopamine receptors. This can lead to compulsive gambling, compulsive pornography use and sex pursuit, compulsive shopping or eating, or even the compulsive pursuit of more medication – even though taking the medication isn’t that pleasant an experience.
  • People with Tourette syndrome who have an excess of dopamine sometimes resist taking medication that reduces it, because the medication can also reduce their desires and lead to motor issues. ‘They stop caring about life, stop wanting to eat, have sex, they stop socialising,’ says Robinson.
  • There are also a number of conditions – including schizophrenia and major depression – that can involve what’s known as anhedonia, or a loss of pleasure. ‘The value of rewards seems to drain out of life, and people say that rewards aren’t worth anything anymore,’ Berridge says. But in the past decade, researchers have started to find that if you give a person a positive reward and ask them to rate the pleasure, the ratings are likely to be normal – it’s just the motivation to pursue them that has gone away. Berridge thinks some of these cases might be better described as avolition (loss of volition and motivation), or anticipatory anhedonia, as opposed to consummatory anhedonia.
  • Remembering that wanting and liking are separate can be useful, even if you are not struggling with depression or a desire for an addictive substance, but are instead grappling with other kinds of common desires, or a lack of motivation. If I want to spend the rest of my day scrolling through TikTok instead of doing work, the strength of that feeling of wanting might suggest that there will be a big payoff in terms of enjoyment. But I can remember that this suggestion could be misleading, and that I’ll likely feel underwhelmed in the end. Not giving in to something you want does not necessarily mean robbing yourself of something you will like. And in the opposite case, just because you don’t want to do something – like going for a run, or hanging out with a new friend even if you’re feeling anxious – that doesn’t mean you won’t end up liking your experience.
Author Narrative
  • Shayla Love is a staff writer at Psyche. Her science journalism has appeared in Vice, The New York Times and Wired, among others. She lives in Brooklyn, New York.
Notes
  • A useful piece. ‘Wanting’ and ‘liking’ are distinct concepts which are usually closely correlated. In a sense, wanting and liking are differ because the former is only anticipatory. Mill may be right that we always expect to like what we want, even if in practice that third slice of cake makes us sick.
  • But it is possible - that's what temptation and weakness of will are all about - to not want to do what we expect to like.
  • The author is right, therefore, to distinguish first-order desires from what are effectively second-order desires; wanting to go to the gym while not wanting to go is really wanting to want to go. See my Note on Wantons.
  • Similarly, I may want to win the event even if I know I'll hate giving the victory speech, and variants on that theme.
  • There are also irrational cravings (eg. in pregnant women, maybe apocryphally) for things we know we won't like (like coal).
  • However, the paper is mostly about first-order desires.
  • This paper deserves closer study. There are useful details to do with the differing neurotransmitters and areas of the brain in the two emotions.

Paper Comment



"Lupyan (Gary) - What colour do you see?"

Source: Aeon, 12 December 2023


Author's Conclusion
  • Even though we don’t know what it’s like to be someone else, we can compare how our phenomenology differs from one time to another. There are numerous reports of people with brain injuries that cause them to lose visual imagery, and some cases of losing inner speech. It is much harder to brush aside self-reports of someone who says they used to be able to imagine things, and now they can’t (especially when these are confirmed by clear differences in objective behaviour).
  • ... The idea that the same image can look different to different people is alarming because it threatens our conviction that the world is as we ourselves experience it. When an aphantasic learns that other people can form mental images, they are learning that something they did not know was even a possibility is, in fact, many people’s everyday reality. This is understandably destabilising.
  • And yet, there is a scientific and moral imperative for learning about the diverse forms of our phenomenology. Scientifically, it prevents us from making claims that the majority experience (or the scientist’s experience) is everyone’s experience. Morally, it encourages us to go beyond the ancient advice to ‘know thyself’ which can lead to excessive introspection, and to strive to know others. And to do that requires that we open ourselves up to the possibility that their experiences may be quite different from our own.
Author Narrative
  • Gary Lupyan is professor of psychology at the University of Wisconsin-Madison, where he researches the effects of language on cognition.
Notes
Paper Comment
  • Sub-Title: "New research is uncovering the hidden differences in how people experience the world. The consequences are unsettling"
  • For the full text see Aeon: Lupyan - What colour do you see?.



"Lyons (Siobhan) - Whither Philosophy?"

Source: Aeon, 02 November 2023


Author's Introduction
  • 'As long as there has been such a subject as philosophy, there have been people who hated and despised it,’ reads the opening line of "Williams (Bernard), LRB - On Hating and Despising Philosophy" (1996).
  • Almost 30 years later, philosophy is not hated so much as it is viewed with a mixture of uncertainty and indifference. As Kieran Setiya recently put it in the London Review of Books, academic philosophy in particular is ‘in a state of some confusion’.
  • There are many reasons for philosophy’s stagnation, though the dual influences of specialisation and commercialisation, in particular, have turned philosophy into something that scarcely resembles the discipline as it was practised by the likes of Aristotle, Spinoza or Nietzsche.
Author Narrative
Paper Comment
  • Sub-Title: "The discipline today finds itself precariously balanced between incomprehensible specialisation and cheap self-help"
  • For the full text see Aeon: Lyons - Whither Philosophy?.



"Machek (David) - What’s a life worth living? For the ancients, it depends"

Source: Aeon, 06 March 2023


Author's Introduction
  • Are human lives worth living? And, if so, is life worth living unconditionally – or are there conditions attached? The philosophical debate about these profound and uncomfortable questions has a long history, going back to ancient Greek and Roman philosophy. Much of what the ancients had to say about these issues is bound to alienate a modern audience. And yet, by reckoning with their views of the life worth living, we can not only better understand how the ancients saw the world, but also find a probing angle to engage with our modern intuitions about these controversial questions.
  • To start with, it is useful to distinguish between two levels of philosophical discussions about the value of life. On one level, we can ask whether human life is valuable in the sense of having inviolable moral status: life is worth living always and unconditionally. The claim of there being such a status is often invoked by the opponents of abortion or capital punishment. On another level, we can ask whether, or when, a life is worth living for the person who lives it (or who is to live it): when does one feel like their own life isn’t worth continuing? These two evaluative perspectives on life are independent. They can agree or conflict with one another. One may believe that terminally ill patients ought to stay alive and yet maintain – without inconsistency – that their life is not worth living for them.
  • As for the latter evaluative perspective – whether a life is worth living – many people nowadays would say that the conditions of having a life that’s at least barely worth living are not particularly hard to meet. A life does not have to be great; perhaps it suffices when it is not too bad. According to an influential view in medical ethics, continued existence is worthwhile for a patient if only they desire to carry on living. The view that the criterion for a life worth living is located in the subjective preferences of the person living that life was memorably captured by William James: ‘Be not afraid of life. Believe that life is worth living and your belief will help create the fact.’
  • But for ancient philosophers, including Socrates, Plato, Aristotle and the Stoics, this is merely wishful thinking. What makes a life worth living, on the prevailing ancient view, is a degree of life’s objective perfection – a view that is likely anathema to modern sensibilities. This is the outlook that underlies Socrates’s dictum from the Apology that an ‘unexamined life is not worth living’. A human life doesn’t have to be excellent, and it doesn’t have to be the life of a philosopher, but it must exhibit some level of reflection on how one should live and why.
Author's Conclusion
  • For the ancients, it is the quality of life – in the sense of the ability to fulfil one’s biological and social functions – that determines whether a life is worth living. Some aspects of the ancient outlook can seem to us extreme. But compared with the range of positions in the contemporary philosophical discourse, the ancient philosophers were in fact moderate. On the one hand, no ancient philosopher was committed to the view that every human life has unconditional value. The ancients would regard unconditional life-affirmation as an expression of the unhealthy human tendency to cling to their lives at all costs. On the other hand, and despite the prominent life-denying strands in Greek literature, ancient philosophers did not develop an anti-natalist agenda. Human lives have a decent chance to be meaningful, and thus worth living. To be worth living, a human life need not be either agreeable or perfect: it suffices that it does not blatantly betray one’s station in society and the Universe at large.
Author Narrative
  • David Machek is a senior postdoctoral research fellow in the Department of Philosophy at the University of Bern in Switzerland. He is a specialist in ancient Greaco-Roman and early Chinese thought, and the author of The Life Worth Living in Ancient Greek and Roman Philosophy (April 2023).
Notes
  • Very interesting, if overly brief, though it's a plug for the author's absurdly expensive new book.
  • It raises lots of important questions, but addressing them is difficult without risking accusations of holding very unfashionable opinions.
  • One - maybe unimportant - objection is that the views attributed to the 'ancients' tend to be those of the aristocratic elite who have the leisure to navel-gaze.
  • Another - more important - objection is the supposition that we have 'functions' that we need to fufil in order to have a life worth living. The existentialists might agree, but would say that we create these rather than 'find' them.
  • I've saved the Comments for reflection, though haven't read them all. They don't seem as rabid as I'd expected.
  • This topic connects to my Thesis in a number of ways: What are We?, Life, Death, Person ... as well as Narrative Identity

Paper Comment



"Mahant (Nikhil) - Extraterrestrial tongues"

Source: Aeon, 09 May 2025


Author's Introduction
  • In the movie Arrival (2016), a seven-limbed alien species lands on Earth with a language that no human can understand. The aliens – dubbed Heptapods – are obliging enough to provide room in their spaceship for linguistic exchanges, but the team charged with translation is baffled. The creatures write in sentences that look like circular smoky inkblots, unlike anything on our planet.
  • The movie’s drama – based on a story by Ted Chiang – rests on the sheer strangeness of the Heptapod language, but it’s actually not as alien as it could be. Apart from the sci-fi twist that learning it imparts special abilities, the Heptapod language is not very different from ordinary human languages. The symbols are strange and circular, sure, but they still stand for words belonging to familiar grammatical categories like nouns and verbs, and can be translated into English. In fact, a major plot element in the movie is the mistranslation of a Heptapod noun meaning ‘tool’ as ‘weapon’.
  • The situation is similar with several other nonhuman languages in fiction. Consider Klingon from Star Trek, now spoken by several Earthlings. Klingon’s claim to alienness is that it contains a peculiar set of sounds and an unusual sentence structure. But, like human languages, it still contains nouns and verbs, and the same structural elements, like subject and object. The same is true of other fictional languages like Dothraki (Game of Thrones), Na’vi (Avatar) and Quenya (The Lord of the Rings).
  • Even outside fiction, imaginations are rather impoverished. The development of constructed languages (referred to as ‘conlangs’) for fictional and other purposes draws primarily from linguistics. But, as a science, linguistics generally focuses on discovering the general rules governing actual, observable human languages – their sounds, symbols or gestures, their grammar, the elements and structure of their sentences, the meanings of their expressions, etc. And while conlangs may have unique vocabularies or flout one or more rules of human languages, the formula for creating one essentially involves adapting familiar elements from how Earthlings communicate.
  • As a philosopher of language, I find this unsatisfying. The space of possible languages is vast, and full of exotic languages that are much weirder and stranger than any we have yet imagined. We should explore what those might be – and for more than intellectual curiosity alone. If we one day encounter aliens through first contact or a signal sent across the galaxy, their language might be nothing like ours. After all, humans have evolved with certain cognitive abilities and limitations. Expecting intelligent beings with alternative origins to use languages like ours betrays an anthropocentric view of the cosmos. If we want to move beyond exchanging prime number sequences to figuring out what the extraterrestrials are actually saying, we need to be prepared.

Author's Conclusion
  • An extraterrestrial language that lacks the third, semantic level would be particularly alien: one whose elements are not ‘about’ anything. Its words do not refer to objects nor are its sentences true or false descriptions of the world. Creatures that use such a language would be causal mechanisms that hook up with the world by way of environmental inputs – eg, smell, temperature or radiation – to produce resultant outputs. ‘Communication’ between such creatures may be a series of causal transactions: a stimulus from one causing a response in another, much like how hormones work in our bodies.
  • This should remind us of the interaction between machines, which is also causal in nature: the computer on which you are reading this article interacted causally with the Aeon server to bring this article to your screen. But does this interaction amount to linguistic communication? Is a language that lacks semantics even a language? It is difficult to give a simple answer to this. But these are the sort of questions that we should expect to encounter when meditating on alien modes of communication. We expect an encounter with aliens to challenge our conception of what it is to have a body, what it is to be conscious, what it is to be a living creature, and what it means to behave intelligently. So, in thinking about alien modes of communication, shouldn’t we be exploring possible languages that are very different from our own – so much so that they challenge our very conception of what a language is?
  • Alien modes of communication may also have additional levels, ones that we cannot yet foresee. Perhaps there is an affective level that can encode how exactly one feels – say, the nature and intensity of one’s pain. Or a phenomenal level that can encode qualitative experiences, such as an apple’s redness. Growing out of our anthropocentric bubble to explore how aliens might communicate will equip us better for a potential first contact scenario. But it will also make us more reflective about, and potentially improve, one of the greatest assets that our species possesses: language.

Author Narrative
  • Nikhil Mahant is a philosopher specialising in language. He is a Marie Skłodowska-Curie fellow at the Department of Philosophy at Uppsala University in Sweden.

Notes
  • This is a fascinating Paper. Firstly, it's a good potted introduction to linguistics. Before we consider how an alien language might differ from a uman one, we need to consider the nature and variety of human languages. The author spends a good page explaining the four levels of language:-
    1. Signs: things that we produce, observe or exchange. Sounds, alphabets, emojis, ...
    2. Structure: word structure, grammar and syntax.
    3. Semantics: meaning.
    4. Pragmatics: how language users can say things that go beyond the literal meanings of the words they produce. Idioms and implicatures.
  • But the paper's main purpose is to remind us just how 'alien' an alien might be. Not only do we normally imagine that aliens look like us, with a few tweaks, but they also think and view the world like us - yet they may not.
  • So, we are to try tweaking a human language by changing one or more of these levels, omitting one or more level or trying to imagine an extra level or so.
  • Even so, I think there will be limits to the differences: aliens will be responding to the same world we do, though maybe with different sensory modalities, or different concerns.
  • This Paper might have been a plug for Alien Structure: Language and Reality by Matti Eklund (4 July 2024), but the book - far too expensive to buy unless to be studied immediately - is just mentioned in passing.
  • There’s a brief discussion of Wittgenstein’s views on language, not his later ‘no private language argument’ from the Philosophical Investigations but his earlier and ‘quixotic’ ‘logically perfect’ language from the Tractatus.
  • The author's first language is Hindi.
  • There are 28 Aeon Comments, including replies to some of them by the author. Most - but not all - look worthy of careful consideration.
  • This is related to my Notes on Language, Semantics and Narrative Identity.

Paper Comment



"Majeed (Raamy) - Does national humiliation explain why wars break out?"

Source: Aeon, 27 March 2025


Author's Introduction
  • Why do countries go to war? International relations scholars are increasingly pointing to national humiliation as a key factor. Evelin Lindner observes in Making Enemies (2006) that ‘Humiliated hearts and minds may represent the only “real” weapons of mass destruction.’ Likewise, Joslyn Barnhart argues in The Consequences of Humiliation (2020) that ‘Humiliating international events on average increase individual support for assertive foreign policy actions.’ When citizens of a nation feel humiliated, they become more likely to support aggressive foreign policy initiatives.
  • This argument has featured prominently in discussions about Russia’s full-scale invasion of Ukraine on 24 February 2022. The Kremlin’s official justifications have included unfounded claims about Ukrainian Nazis committing genocide, as well as the threat of NATO expansion. But a recurring theme in explanations for the war is also the humiliation supposedly felt by everyday Russians.
  • We see this in the rhetoric of the Kremlin. On the morning of the invasion, the Russian president Vladimir Putin framed the war as a response to humiliation, declaring that its purpose was to protect those who had ‘been facing humiliation and genocide perpetrated by the Kiev regime’. Similar explanations appear in Western commentary. As Thomas Friedman noted in The New York Times in 2022: ‘When Putin felt humiliated by the West after the collapse of the Soviet Union and the expansion of NATO, he responded: “I’ll show you. I’ll beat up Ukraine.”’
  • This idea extends beyond Russia. Emmanuel Macron, the French president, has warned against humiliating Russia in efforts to aid Ukraine, implying that further humiliation could prolong the conflict. Looking further back, the humiliation of Germany by the Treaty of Versailles (1919) is often cited as a key driver of Hitler’s rise to power. And China’s so-called ‘century of humiliation’ – spanning from the Opium Wars to the end of the Second World War – is frequently invoked to explain China’s aggressive stance towards the West today.
  • Scholars of international relations have given a name to this phenomenon: national humiliation. As Barnhart writes:
      A state of ‘national humiliation’ arises when individuals who identify as members of the state experience humiliation as the overwhelming emotional response to an international event, which they believe has undeservedly threatened the state’s image on the world stage.
  • But what exactly is national humiliation? And is it really such a clear cause of war?

Author's Conclusion
  • Humiliation narratives do not emerge spontaneously. They are cultivated by political leaders, state institutions and media outlets that frame historical events in ways that sustain a sense of victimhood. These narratives serve strategic purposes, often justifying aggressive foreign policy.
  • Consider Russia. Putin frequently portrays the collapse of the Soviet Union as a national humiliation inflicted by the West. By consistently framing history this way, he keeps the humiliation narrative alive, and uses it to justify actions such as the annexation of Crimea and the full-blown invasion of Ukraine.
  • Does this mean that ordinary Russians walk around feeling humiliated every day? It is difficult to say – especially given the challenges of collecting reliable data in Russia. But, in a way, that question is secondary. Even if most Russians do not personally feel humiliated, the humiliation narrative is sufficient to provide political cover for war.
  • One major lesson of shifting our focus from humiliating emotions to humiliation narratives is that it challenges the inevitability of war. The conventional explanation suggests that, once a country’s citizens feel humiliated, they will almost always support aggressive foreign policies to restore their national pride. But if we instead see humiliation as a narrative – a constructed story rather than a raw emotional reaction – we recognise that these narratives can be rewritten, reframed, or even rejected.
  • Germany’s post-Second World War reconciliation with its neighbours illustrates this. When the German chancellor Willy Brandt, on a state visit to Warsaw in 1970, spontaneously knelt before a memorial to Polish victims of the Nazis, many Germans initially saw the gesture as humiliating. Yet, over time, it became a cornerstone of Germany’s postwar identity – an act of humility rather than humiliation. This shift in narrative helped Germany move beyond its history of conflict.
  • What does this mean for Russia? Much of the justification for its war on Ukraine hinges on a humiliation narrative. But alternative voices – both within and outside Russia – challenge this interpretation. They argue that Russia’s struggles stem not from Western aggression but from internal problems such as economic mismanagement, corruption and demographic decline. Elevating these alternative narratives could help reduce tensions, and open pathways to peace.
  • Ultimately, understanding how humiliation narratives function gives us a sharper analytical tool for assessing international conflict. It reminds us that the stories nations tell about themselves matter just as much as – if not more than – the emotions their citizens feel. And, crucially, it shows that those stories can be rewritten – erasing the justifications for war along with them.
Author Narrative
  • Raamy Majeed is a lecturer in philosophy at the University of Manchester, who works on the philosophy of mind and cognitive science. He is also an associate editor for the Australasian Journal of Philosophy.
Notes
  • This is another annoying Paper; an exercise in Russia-bashing. It is important, though.
  • The Paper is very brief and I've cited about 2 / 3 of it. I thought of citing it all and adding footnotes.
  • Firstly, it's a nation's elite and its 'nation builders' that - in general - are those that 'naturally' feel humiliated. The average citizen just gets on with things, unless the 'humiliation' directly affects them (as it would have in post WW2 Germany - had the alternative to the Marshall Plan - de-industrialising Germany - been implemented). The general populace may be persuaded to share the view of the elite by setting a narrative and promoting it. Propaganda when performed by our enemies, though not so by us. All nations need a - probably fictitious and doubtless inlated - narrative to keep them going and together during difficult times. In the UK, because things are stable and going well, we - or the elite (those that in other circumstances would feel the humiliation) feel happy to sneer at and unravel our own national story, as though everything we ever did was a sin.
  • In my view, war is entered into to avoid humiliation as much as in response to it. Surely the Falklands War was in response to an affront as much as the protection of a tiny community and a few sheep. The second war against Iraq was a direct response to 9/11. The US's nose had been tweaked and someone had to pay, even if they weren't guilty. The numerous current border-dispute wars (the Russia-Ukraine war is hardly the only one) are driven by both sides refusing to back down and be humiliated.
  • I consider the reference to Willy Brandt's 'act of humility / humiliation' to be absurd. Germany had been - almost entirely - responsible for one of the most disgraceful episodes in history. A little kneeling goes nowhere near reparation.
  • That said, Germany had been humiliated at Versailles and the German populace could easily be persuaded to share the outrage of the Nazis and other nationalists.
  • Likewise, Russia had been a Great Power - both under the Czars and the Communists - and the West did little to help them recover from the collapse of the Soviet Union but - unlike with Germany after WW2 - continued to treat them as enemies to be suppressed.
  • There are no Aeon Comments.
  • This relates to my Note on Narrative Identity.

Paper Comment



"Marino (Lori) - Happy the person"

Source: Aeon, 16 September 2022


Author's Conclusion
  • There is also the ‘slippery slope’ argument often voiced by judges in cases brought by the NhRP. This argument claims that if certain nonhuman animals, like great apes, elephants and dolphins, are recognised as persons, then there is little to stop the trend from moving down the scala naturae to farmed animals and others, whose use we take for granted.
  • A related misconception is that rights given to nonhuman animals would be equivalent to human rights. But the rights sought by the NhRP on behalf of their nonhuman clients are species-appropriate – the right not to be confined, and (where relevant) the right to bodily integrity: for instance, not to be experimented on. They do not seek the right to an education or the right to vote that only humans should possess.
  • But resistance could be due to something even more intractable: our need to place ourselves apart from the other animals. A theory advanced by the anthropologist Ernest Becker may provide insights into this deeper resistance. In The Denial of Death (1973), he suggested that human awareness of personal mortality creates a deep subconscious anxiety that is mitigated by defences against the knowledge that we are biological entities who share the same fate with the other animals at the end of life. These defences are expressed culturally, religiously and even in our legal system where our animality is denied and our need to be qualitatively different from and above the rest of the animal kingdom is challenged by nonhuman personhood and rights. Our species attempts to deny our own mortality by promoting the false narrative ‘I am not an animal!’ In fact, numerous studies have shown that people are more sensitive to being compared with other animals when they are subconsciously reminded of mortality. Thus, efforts to promote the rights of other animals are pushing against a deep human need to cope with the fear of death.
  • It’s an uphill battle to recognise personhood in other animals, even when they are clearly persons by any current legal definition or criteria. The fact that judges have come close to but have not granted a single right to another animal tells us more about human psychology than about the psychology of elephants, great apes and cetaceans, or any other nonhuman beings. Organisations like the NhRP should continue to hold the US common law legal system accountable for being inconsistent – and, frankly, speciesist – in applying the law to cetaceans, elephants and apes. Even as they do, scientific evidence for autonomy in still other groups of animals, from monkeys to birds and dogs, only mounts. Today, there is little doubt that these animals, too, should be accommodated with basic species-specific rights. The real question may be: ‘Who deserves our respect?’ In an ideal world, all sentient beings (whether deemed persons or not) would have rights that protect them from human abuse – a necessity that is an unflattering indictment of our human nature.

Author Narrative
  • Lori Marino is a neuroscientist and an expert in animal behaviour and intelligence. Formerly on the faculty at Emory University, she is the founder and executive director of the Kimmela Center for Animal Advocacy in Utah, and president of the Whale Sanctuary Project.

Notes
Paper Comment
  • Sub-Title: "She has deep emotions, complex social needs and a large, elephant brain. Her legal personhood should be recognised too"
  • For the full text see Aeon: Marino - Happy the person.



"May (Joshua) - Why we should think of neurodiversity like we do personality"

Source: Aeon, 10 April 2025


Author's Introduction
  • In 2013, Seattle Children’s Hospital ran an ad on buses featuring a smiling child alongside the tagline: ‘Let’s wipe out cancer, diabetes and autism in his lifetime.’ Equating autism – a fundamental part of many people’s identities – to life-threatening diseases felt like a slap in the face to many in the community. The ads were swiftly pulled, highlighting a growing shift in thinking about autism as well as other neurological differences.
  • The ongoing paradigm shift has empowered many people to reclaim their identities. The concept of neurodiversity, developed in the 1990s, was inspired by the disability rights movement and its focus on accommodating differences rather than curing them. The neurodiversity movement has likewise pushed for greater acceptance and support, challenging the ‘pathology paradigm’ that views conditions like autism only as deficits to be treated.
  • Instead, the neurodiversity paradigm urges society to accept people as they are and to foster environments where neurodivergent individuals can thrive. Over the past two decades, advocacy groups have gained traction, leading to the adoption of individualised education programmes, flexible work arrangements, sensory-friendly environments and more. Figures such as Temple Grandin and Greta Thunberg can now proudly embrace their autism as a valued identity, even as critical to their success. Nick Walker, an autistic self-advocate and leading theorist in neurodiversity, writes: ‘Professionals who truly understand the neurodiversity paradigm would no sooner attempt to “treat” a client’s autism than attempt to “treat” a client’s homosexuality, or attempt to “treat” a client’s membership in an ethnic minority group.’ Other differences, such as ADHD, can fit under the neurodivergent umbrella as well, with some people viewing their distractibility and hyperactivity as endearing quirks or assets.
  • Despite these positive developments, the paradigm shift has also raised a tough question: is it enough to frame neurodivergence solely as a difference (akin to race or sexual orientation), never a deficit? Such an uncompromising approach has led some to roundly oppose the neurodiversity paradigm. Critics argue that it paints too rosy a picture, glossing over the struggles of those with severe neurological conditions. Some autistic people can’t communicate with others or live independently, even with support and accommodations. ADHD, too, can be debilitating. For the writer Yasmin Tayag, treatment for ADHD was a revelation, freeing her from ‘impulsivity and recklessness, angry outbursts, and frantic thoughts’, she wrote in The Atlantic.
  • A false choice lurks in the debate over neurodiversity. Instead of framing neurodivergence as strictly a difference to preserve or a deficit to be cured, we can employ a more nuanced view of human cognition. I use the term ‘cognitive continuity’ to emphasise that all psychological traits – whether they are associated with the label ‘neurodivergent’ or not – exist on a continuum. Human brains exhibit a wide range of functioning across many dimensions, and the lines between categories are blurry.

Author's Conclusion
  • A nuanced take on neurodiversity allows for both acceptance and intervention. Schools, workplaces and healthcare systems should not assume that one way of thinking or being is categorically ‘normal’ and that deviations are deficits. At the same time, individuals and institutions shouldn’t be discouraged from pursuing treatments when neurodivergence is causing distress or limiting valuable opportunities.
  • The key is to build bridges of compassion across the neurological continuum. Instead of expecting or forcing neurodivergent people to always fit neurotypical moulds, the people in their lives can make accommodations – while recognising that those aren’t always enough. For some, medication or therapy is essential to managing real challenges that come with their neurotype. However, as the neurodivergent academic Robert Chapman argues, we should resist pathologising by default.
  • Just as being highly introverted can be seen as a mere difference in temperament, traits associated with autism and other neurotypes can be understood as reflecting natural variations in brain function. At the same time, neurological differences can pose challenges even in the most accommodating societies. Both perspectives can hold true. Rather than presenting a false choice – between ‘differences to celebrate’ or ‘deficits to fix’ – we can recognise that a neurotype’s impact on wellbeing depends on the individual and context.
  • The neurodiversity movement has made great strides in dismantling stigma and expanding accommodations. Offering treatment to those who want it, however, isn’t necessarily a form of prejudice or oppression. As we continue to examine what it means to be neurodivergent, it’s important to resist binary thinking and fully embrace the complexity of being human.
Author Narrative
  • Joshua May is professor of philosophy and psychology at the University of Alabama at Birmingham and author of Regard for Reason in the Moral Mind (2018) and Neuroethics (2023). He writes about moral controversies and the cognitive science of social change.
Notes
  • This paper is excellent. I entirely agree that 'Neurodiversity' is a set of spectrums that all people share and that require medical intervention only at the extreme ends. Otherwise the characteristics are part of personality - advantageous in some circumstances, disadvantageous in others, and it's up to the individual to manage or take advantage of these traits as they see fit or are able.
  • It's up to 'society' to be accommodating and to provide - where reasonably convenient - an environment to enable flourishing; but also up to the individual to be accommodating as well - to fit in where possible and not expect a free lunch or act antisocially 'because they are on the spectrum'. Neurodiversity may explain but not - except in rare cases - justify behaviour.
  • The few Aeon Comments are interesting in that they are from mildly autistic individuals with siblings with extreme symptoms, thereby diverting attention and sympathy from themselves who went 'undiagnosed'.
  • This relates to my Notes on Narrative identity, Personality and Psychopathology.

Paper Comment



"McDonald (Lucy) - The magic of the mundane"

Source: Aeon, 15 March 2024


Author's Introduction
  • Think back to the last time you fell over in a public place. What did you do next? Perhaps you immediately righted yourself and carried on exactly as before. I bet you didn’t, though. I bet you first stole a furtive glance at your surroundings to see if there were witnesses. If there were, then you may well have bent over and inspected the ground as if to figure out why you tripped, even if you already knew why. Or maybe you smiled or laughed to yourself or uttered a word like ‘Oops!’ or ‘Damn’. At the very least, I bet your heart rate increased.
  • These behaviours seem irrational. If you were uninjured, why do anything at all after the stumble? For some reason, such public mishaps – stumbling, knocking something over, spilling something, pushing a ‘pull’ door, realising you’ve gone the wrong way and turning around – provoke an anxiety that compels us to engage in curious behaviours.
  • This is because, the sociologist Erving Goffman shows us, there is nothing simple about passing through a public space. Instead, we are always expected to reassure strangers around us that we are rational, trustworthy and pose no threat to the social order. We do this by conforming to all manner of invisible rules, governing, for example, the distance we maintain from one another, where we direct our eyes and how we carry ourselves. These complex rules help us understand ourselves and one another. Break such a rule, and you threaten a ‘jointly maintained base of ready mutual intelligibility’.

Author's Conclusion
  • Throughout his career, Goffman refused to describe his work as offering a theory of the social world. In his presidential address to the American Sociological Association, published posthumously due to his early death at 60, Goffman described himself as offering merely ‘glimmerings’ about the structure of social interaction. This may explain why, despite his fame, many sociologists are ambivalent about his work, and why in the neighbouring discipline of philosophy, he is often ignored.
  • It is hard to know exactly what to do with Goffman. He offered no foundational principles, no overarching analyses of the world in its totality. Nor was his methodology always clear; he used too much data to qualify as a theoretician, but his work was often too abstract, too impressionistic and too literary to qualify as ethnography proper. He did not help matters by refusing to engage with other people’s analyses of his work.
  • Yet Goffman’s rejection of theorising is itself theoretically significant. He showed that one need not articulate a grand theory of the world in order to improve our understanding of it. Indeed, such grand theorising might be premature when we haven’t yet appreciated the full complexity of even the most minute phenomena – like a person falling over in the street. Goffman thought that there could be great value in the provision of even ‘a single conceptual distinction’, ‘if it orders, and illuminates, and reflects delight in the contours of our data’.
  • In identifying the ‘interaction order’, Goffman also illuminated a dimension of life previously hidden to most of us. Railing against what he referred to as the ‘touching tendency to keep a part of the world safe from sociology’, Goffman showed us that life is social all the way down – nothing we do is untouched by the norms and expectations of our community.
  • One might find this revelation depressing; is there no respite from the demands and opinions of other people? But it’s also possible to find hope in this. What we might write off as personal awkwardness is in fact evidence of acute attunement to social norms. Features of our bodies, our behaviours and our minds that others have told us are inherent flaws are in fact of no moral significance – their alleged defectiveness stems from arbitrary social standards of ‘normality’. And ultimately, it is only once we grasp the contingency and artificiality of such social norms, especially those that oppress, that we can begin to transform them.
Author Narrative
  • Lucy McDonald is a lecturer in ethics at King’s College London. Her work has appeared in the Journal of Moral Philosophy and the Australasian Journal of Philosophy, among others.
Notes
  • There are many interesting lessons in this paper, which I need to re-read and make notes on!
  • See Wikipedia: Erving Goffman
  • There are also some interesting Aeon Commnents. Some raise issues of cultural relativity. Correct, but nothing that the author or Erving Goffman would disagree with.

Paper Comment



"McElvenny (James) - Our language, our world"

Source: Aeon, 15 January 2024


Author's Introduction
  • Anyone who has learned a second language will have made an exhilarating (and yet somehow unsettling) discovery: there is never a one-to-one correspondence in meaning between the words and phrases of one language and another. Even the most banal expressions have a slightly different sense, issuing from a network of attitudes and ideas unique to each language. Switching between languages, we may feel as if we are stepping from one world into another. Each language seemingly compels us to talk in a certain way and to see things from a particular perspective. But is this just an illusion? Does each language really embody a different worldview, or even dictate specific patterns of thought to its speakers?
  • In the modern academic context, such questions are usually treated under the rubrics of ‘linguistic relativity’ or the ‘Sapir-Whorf hypothesis’. Contemporary research is focused on pinning down these questions, on trying to formulate them in rigorous terms that can be tested empirically. But current notions concerning connections between language, mind and worldview have a long history, spanning several intellectual epochs, each with their own preoccupations. Running through this history is a recurring scepticism surrounding linguistic relativity, engendered not only by the difficulties of pinning it down, but by a deep-seated ambivalence about the assumptions and implications of relativistic doctrines.
Author's Conclusion
  • Gurindji speakers’ habit of using cardinal directions would seem to have opened up their powers of perception. At least some Gurindji speakers may be able to consciously feel Earth’s magnetic field. But do English speakers and Gurindji speakers live in ‘distinct worlds’, as Sapir would have it? Having greater sensitivity to some features of the environment still seems like something less than the all-encompassing, incommensurable worldviews of the Humboldtian tradition.
  • This is perhaps the chief source of the continuing scepticism regarding linguistic relativity in many academic quarters. We start with a feeling, an ineffable je ne sais quoi, that our language shapes our world. But to assess the truth of this claim, the scientist wants a hypothesis – a rigorous, experimentally testable statement of precisely how language shapes our world. Quasi-mystical meditations on my life in language are not the stuff of modern scientific journals. But any properly formulated hypothesis will necessarily be reductive and deflationary – devising empirical tests of the supposed differences in our worldviews inevitably means transforming our innermost feelings into detached, foreign objects that we can observe and analyse from the outside. Such tests can arguably never capture the totality and primordiality of the original feeling.
  • Does this mean that the scholarship of previous centuries has no place in today’s world or, alternatively, that modern science simply cannot fathom the philosophical depths explored by earlier work? Past and present scholarship are complementary. The writings of earlier scholars – however speculative they may seem to us now, and whatever problematic assumptions they may be built upon – undeniably capture something of our human experience and can inform the investigations of present-day researchers. In turn, the hypotheses and experiments of latter-day linguists and psychologists provide another perspective – shaped by the scientistic worldview of our own era – on these enduring questions of the connections between mind and language. In all these cases, we cannot even make sense of the questions without understanding something of the specific intellectual contexts in which they have arisen.
Author Narrative
  • James McElvenny is a linguist and intellectual historian at the University of Siegen, Germany. His latest books are Language and Meaning in the Age of Modernism (2018) and A History of Modern Linguistics (2024). He presents the History and Philosophy of the Language Sciences podcast.
Notes
  • A very useful survey of the history of linguistics. Doubtless a plug for the author's latest book. Unfortunately, the book ends with WWII.
  • It's a very interesting discussion of the Sapir–Whorf hypothesis (Wikipedia: Linguistic relativity) 'a principle suggesting that the structure of a language influences its speakers' worldview or cognition, and thus individuals' languages determine or shape their perceptions of the world.'
  • I need to re-read and take notes to add to my Note on Language.
  • There's a reference to human perception of the Earth's magnetic field (referencing "Jaekl (Philip) - Human magnetism"), which I'd come across before in the context of the senses of birds.

Paper Comment



"McGivney (Annette) - ‘Bees are sentient’: inside the stunning brains of nature’s hardest workers"

Source: The Guardian, 02 April 2023


Introduction
  1. When Stephen Buchmann finds a wayward bee on a window inside his Tucson, Arizona, home, he goes to great lengths to capture and release it unharmed. Using a container, he carefully traps the bee against the glass before walking to his garden and placing it on a flower to recuperate.
  2. Buchmann’s kindness – he is a pollination ecologist who has studied bees for over 40 years – is about more than just returning the insect to its desert ecosystem. It’s also because Buchmann believes that bees have complex feelings, and he’s gathered the science to prove it.
  3. This March, Buchmann released a book that unpacks just how varied and powerful a bee’s mind really is. The book1, What a Bee Knows: Exploring the Thoughts, Memories and Personalities of Bees, draws from his own research and dozens of other studies to paint a remarkable picture of bee behavior and psychology. It argues that bees can demonstrate sophisticated emotions resembling optimism, frustration, playfulness and fear, traits more commonly associated with mammals. Experiments have shown bees can experience PTSD-like symptoms, recognize different human faces, process long-term memories while sleeping, and maybe even dream.
  4. Buchmann is part of a small but growing group of scientists doing what he calls “fringe” research seeking to understand the full emotional capacity of bees. His research has radically changed the way he relates to the insects – not only does he now avoid killing them in his house, he has also significantly reduced lethal and insensitive treatment of specimens for his research.
  5. “Two decades ago, I might have treated a bee differently,” Buchmann says.

Amazon Book Description of What a Bee Knows: Exploring the Thoughts, Memories and Personalities of Bees, by Stephen Buchmann
  1. For many of us, the buzzing of a bee elicits panic. But the next time you hear that low droning sound, look closer: the bee has navigated to this particular spot for a reason using a fascinating set of tools. She may be using her sensitive olfactory organs, which provide a 3D scent map of her surroundings. She may be following visual landmarks or instructions relayed by a hive-mate. She may even be tracking an electrostatic path left by other bees.
  2. What a Bee Knows: Exploring the Thoughts, Memories, and Personalities of Bees invites us to follow bees’ mysterious paths and experience their alien world. Although their brains are incredibly small - just one million neurons compared to humans’ 100 billion - bees have remarkable abilities to navigate, learn, communicate, and remember.
  3. In What a Bee Knows, entomologist Stephen Buchmann explores a bee’s way of seeing the world and introduces the scientists who make the journey possible. We travel into the field and to the laboratories of noted bee biologists who have spent their careers digging into the questions most of us never thought to ask (for example: Do bees dream? And if so, why?). With each discovery, Buchmann’s insatiable curiosity and sense of wonder is infectious.
  4. What a Bee Knows will challenge your idea of a bee’s place in the world - and perhaps our own. This lively journey into a bee’s mind reminds us that the world is more complex than our senses can tell us.

Paper Comment

See The Guardian: McGivney - ‘Bees are sentient’: inside the stunning brains of nature’s hardest workers.




In-Page Footnotes ("McGivney (Annette) - ‘Bees are sentient’: inside the stunning brains of nature’s hardest workers")

Footnote 1:
  • As I’m unlikely ever to buy or read this book, I’ve provided the Abstract from Amazon.



"Mercedes (Leoni) - Late autism diagnosis: it’s a relief, but who’s behind the mask?"

Source: Aeon, 09 December 2024


Author's Introduction
  • Realising that you are autistic as an adult can bring about a lot of feelings: relief to finally have answers; anger that it has taken so long; regret about what might have been if you’d known earlier. What also awaits, for many, is confusion.
  • People diagnosed as adults commonly have been masking for most of their lives. That is, they’ve been hiding parts of themselves to fit in with a neurotypical majority, essentially performing another identity, both consciously and unconsciously. Masking involves projecting ‘an inauthentic version of you’, explains Amy Pearson, an autism researcher at Durham University in the United Kingdom. ‘This can include showing personality traits that aren’t part of your own personality, or using social communication strategies that you wouldn’t usually use, for example, making eye contact even if it makes you uncomfortable.’
  • Masking well requires constant vigilance, which can be exhausting. It means that, in any given social situation, you’re constantly monitoring how you move (am I too animated?), how you speak (is my delivery too flat?), and what you’re saying (does this sound too blunt?) to ensure it’s considered acceptable to others. Masking can also be remarkably effective, to the extent that the masker fools themselves. I abandoned pursuing an autism diagnosis in my late 20s after a doctor dismissed the possibility out of hand. I thought I must have been mistaken, and felt ashamed that I’d made such a fuss. It was only during the social isolation of the COVID-19 pandemic – when I no longer felt the energy crashes that I’d come to expect after a week of masking at the office – that I started slowly moving again toward the realisation that I am indeed autistic.
  • Those who realise they’re autistic or receive a formal diagnosis in adulthood might be abruptly facing their own mask for the first time. This presents a new question for them: after wearing this mask for so long, how can I know where it ends and the true ‘me’ begins? Questions such as these take some time to answer. A diagnosis is just the start.

Author's Conclusion
  • In my case, realising my autistic identity has felt like being granted permission – permission to opt out of situations that can rapidly drain my battery, such as casual social events that last more than a few hours, which lets me better manage my energy. It also feels like permission to pursue more of the things that I love and that matter to me, to really lean into things I want to do that might have been deemed ‘weird’ – such as travelling to museums across the country to tick off an A-to-Z list of dinosaurs, or painting the surface of Pluto – but that make me who I am. That said, I still feel a little conflicted at times. I sometimes feel like a fraud, like I don’t really have the right to claim the label, which can only be because of years of masking. I’m also very aware that my experience is just a sliver of autistic experience as a whole.
  • Others will soon be diagnosed and begin their own new phase of self-exploration, though many will not: according to a recent estimate, in England, where I live, there are about 750,000 undiagnosed autistic people aged 20 and over. That’s a lot of people who are potentially living without support, should they need it, and who might be denied this journey of self-discovery. Diagnosis isn’t necessary for everyone. But a wider understanding of neurodivergence will go a long way toward ensuring that more of us have the opportunity to see ourselves in a powerful, clarifying new light.
Author Narrative
  • Leonie Mercedes is a freelance writer from London, UK. She shares tips for urban astronomy and nature watching, and science-themed lateral thinking questions in her weekly newsletter, Kicks from Science.
Notes
  • I'm all for 'neurodiversity' as a way of recognising that we're all different. But I don't like 'diagnoses' where these are of farly mild differences from the standards expected in a particular society.
  • The only people correctly diagnosed as 'autistic' are those who cannot function in society at all, or who sit in a corner banging their heads against the wall.
  • Those who are just socially awkward or whose interests don't fit in with their peer group or work environment really either need to learn some social skills that others find come to them naturally, or gravitate to other situations in which they don't seem so odd.
  • You don't need a 'diagnosis' for this, you just need to face up to who you are and make the best of things, focusing on your strengths.
  • I don't think trying to fit in as best you can is 'masking'. We live in a social world and we can't just opt out.
  • There's a link - possibly supplied by the Editor - to "Happe (Francesca) - Autistic people shouldn’t have to use ‘camouflage’ to fit in", which I've commented on in a similar vein (I now see).
  • There are no Aeon Comments.
  • This relates to my Note on Psychopathology. It probably requires further though as I myself am considered decidedly odd.

Paper Comment



"Merritt (Stephanie) - Bruno the brave"

Source: Aeon, 25 August 2016


Author's Introduction
  • Go to Campo dei Fiori in Rome on 17 February and you will find yourself surrounded by a motley crowd of atheists, pantheists, anarchists, Masons, mystics, Christian reformers and members of the Italian Association of Free Thinkers, all rubbing shoulders with an official delegation from City Hall.
  • Every year, this unlikely congregation gathers to lay wreaths at the foot of the 19th-century statue that glowers over the piazza from beneath its friar’s cowl; flowers, candles, poems and tributes pile up against the plinth whose inscription reads: ‘To Bruno, from the generation he foresaw, here, where the pyre burned.’ In the four centuries since he was executed for heresy by the Roman Inquisition, this diminutive iconoclast has been appropriated as a symbol by all manner of causes, reflecting the complexities and contradictions inherent in his ideas, his writings and his character.
  • Giordano Bruno was born in Nola, at the foot of Mount Vesuvius, in 1548, a soldier’s son who joined the prestigious Dominican convent of San Domenico Maggiore in nearby Naples at the age of 17. He excelled in philosophy and theology, but his precocious intellect proved as much a liability as an asset; he was constantly in trouble with his superiors for questioning orthodoxies and seeking out forbidden reading material.
  • In 1576, he was forced to flee the convent under the cover of darkness to avoid an interrogation by the local Inquisitor after being discovered reading Erasmus in the privy (he had thrown the book down the hole to hide the evidence, but someone was determined enough to fish it out).
  • So began an itinerant life that saw Bruno progress north through Italy to France, transforming himself along the way from defrocked friar (he was excommunicated in absentia for his unauthorised flight), to university teacher, to personal tutor to King Henry III of France, all in little more than five years.
  • This remarkable rise illustrates Bruno’s audacity and charisma but, although he began to move in influential circles, gaining a reputation for his prodigious memory and the artificial memory system he devised, his position always remained precarious; he made enemies with as much alacrity as he attracted admirers, and was thrown into prison in more than one city for giving offence in his public lectures. He published books that further consolidated his notoriety as a dangerous thinker.
  • Bruno moved from Paris to London, back to Paris, then on to Wittenburg, Prague, Zurich, Frankfurt, Padua and Venice, charming his way into the service of ambassadors, archdukes, kings and emperors, always seeking the sympathetic patron who would allow him to develop his philosophy without fear of reprisals from the authorities, but eventually his luck ran out. In Venice, he was betrayed and arrested by the Inquisition. After eight years in prison and two lengthy trials, he was led out to Campo dei Fiori on the morning of 17 February 1600, where he was burned alive, reportedly turning his face from the proffered crucifix in his final moments.
  • The German Catholic convert Gaspar Schoppe, who witnessed Bruno’s final trial and his execution, wrote an account afterwards to a friend in which he reported that, after the death sentence was pronounced: ‘He made no other reply than, in a menacing tone, [to say], “You may be more afraid to bring that sentence against me than I am to accept it.”’
  • The length of Bruno’s imprisonment and trials suggests an unease on the part of the Holy Office, a recognition that the case against him was far from clear-cut. Bruno’s defiance and integrity in refusing to recant have lent an air of martyrdom to his death – something he might even have sensed at the time, to judge by his response. But the nature of that martyrdom – if such it was – is less easy to pin down.

Author's Conclusion
  • Bruno might have become the figurehead of choice for the students of a Rome newly independent of Papal authority after the Risorgimento (‘the generation he foresaw’); there might be piazzas, streets and colleges named after him all over Italy now, but his relationship to authority remains uneasy. His successor Galileo, who did recant when condemned by the Inquisition, was officially pardoned by Pope John Paul II in 1992, but as recently as 2000, on the 400th anniversary of Bruno’s death, the same Papal authority declared that the Nolan had deviated too far from Christian doctrine to be granted Christian pardon. Even now, he remains an outsider, and perhaps that is what gives his story its enduring power.
  • Last Christmas, a friend sent me a photograph from Cosenza in Calabria: a poster stuck to a public sculpture in the shape of a pile of books growing wings and taking flight. The poster showed a picture of Bruno’s statue, superimposed with the words ‘Ricordando Giordano Bruno, Contro Vecchi e Nuovi Inquisitori’ (Remembering Giordano Bruno, Against Old and New Inquisitors).
  • A few weeks earlier, I had heard him mentioned by the exiled Russian activist Nadezhda Tolokonnikova of Pussy Riot, at a fundraising gig for the Belarus Free Theatre, as part of a rousing speech about the past heroes who had given her courage to continue her protests. Perhaps the Inquisition is still with us, in different shapes, and each new generation he foresaw must not take its freedom of thought for granted. For as long as there are those who dare to voice their ideas aloud and are imprisoned or exiled for it, Bruno persists as a symbol of intellectual courage. Long may his flame burn.
Author Narrative
  • Stephanie Merritt is an English writer and novelist. She was deputy literary editor of The Observer from 1998-2005 and is the author of a bestselling series featuring the 16th-century heretic philosopher and spy, Giordano Bruno. She lives in Sussex.
Notes
  • This is an old Paper (by Aeon standards!) and arrived on my list because it was cited in "Lau (Graham) - The ‘panzoic effect’: the benefits of thinking about alien life".
  • It would actually have been more relevant had it been cited by "Mills (Sam) - Requeering Wilde", as there is a parallel in pressure groups adopting their version of a historical figure without being too scrupulous as to historical accuracy.
  • It was written by a novelist who has written a series with a fictionalised Gordiano Bruno as a protagonist, so is doubtless a plug for her books (though they are not cited explicitly). Indeed, it seems from Wikipedia: Stephanie Merritt that the 8 novels in the series were written under the name of S. J. Parris.
  • The author claims not to be a Bruno scholar, so maybe Wikipedia: Giordano Bruno may be best for the facts.
  • It's doubtful whether Gordiano Bruno was right on many things so he's mainly revered - where he is - as a hero for free speech.
  • There are 19 Aeon Comments, mostly sensible, which deserve study.
  • I suppose this relates - vaguely - to my Notes on Narrative identity and Naturalism.

Paper Comment
  • Sub-Title: "For anyone who dares to voice dangerous ideas and risk imprisonment or exile, Giordano Bruno remains a hero"
  • For the full text see Aeon: Merritt - Bruno the brave.



"Metzinger (Thomas) - Are you sleepwalking now?"

Source: Aeon, 22 January 2018


Author's Conclusion
  • What is clear by now is that our societies lack systematic and institutionalised ways of enhancing citizens’ mental autonomy. This is a neglected duty of care on the part of governments. There can be no politically mature citizens without a sufficient degree of mental autonomy, but society as a whole does not act to protect or increase it. Yet, it might be the most precious resource of all. In the end, and in the face of serious existential risks posed by environmental degradation and advanced capitalism, we must understand that citizens’ collective level of mental autonomy will be the decisive factor.
  • Here’s a positive proposal: to get started, we should aim at a productive cross-fertilisation of the two strongest aspects of the human mind. The first is our recently evolved capacity for self-critical rational thinking. At least sometimes, human beings are sensitive to rational argument. And the second is the enormous depth of our phenomenological state-space. Because of its many dimensions, the number of possible conscious states for a human being is incredibly large. We are only rarely aware of this fact, and we haven’t really started to systematically test how we might deliberately alter our state-space so as to enhance our autonomy and increase experiential forms of self-knowledge, ideally backed by the rigour of modern-day neuroscience. An exception, of course, are certain ancient spiritual techniques, which also work to stabilise the neurocognitive conditions required for rationality and rigorously logical, critical thinking. Statistically, mindfulness and mind-wandering are opposing constructs, and rational thought critically depends on the capacity for attentional self-control.
  • It was William James, the father of American psychology, who said in 1892: ‘And the faculty of voluntarily bringing back a wandering attention over and over again is the very root of judgment, character, and will. […] And education which should improve this faculty would be the education par excellence.’ We can finally see more clearly what meditation is really about: over the centuries, the main goal has always been a sustained enhancement of one’s mental autonomy.
  • I think ideologically charged debates about freedom of the will and Stone Age reductionism are now a thing of the past. But the path forward is not about finding the right philosophical theory. It is about starting a process of actively implementing what we already know into societal and cultural practice. As a working concept, mental autonomy is an excellent new candidate for a basic value that could guide us in education, policymaking and ethics. The two-component proposal gives a new and deeper meaning to the old Kantian ideal of ‘man’s emergence from his self-imposed immaturity’. We might call it raising the standard of civilisation on the mental level, or developing a ‘culture of consciousness’.
  • Finally, mental autonomy brings together the core ideas of both Eastern and Western philosophy. It helps us see the value of both secularised spiritual practice and of rigorous, rational thought. There seem to be two complementary ways to understand the dolphins in our own mind: one, from the point of view of a truly hard-nosed, scientifically minded tourist on the prow of the boat; and two, from the perspective of the wide-open sky, silently looking down from above at the tourist and the dolphins porpoising in the ocean.

Author Narrative
  • Thomas Metzinger is full professor and director of the theoretical philosophy group and the research group on neuroethics/neurophilosophy at the department of philosophy at the Johannes Gutenberg University of Mainz in Germany.
  • From 2014-19, he is a fellow at the Gutenberg Research College. He is the founder and director of the MIND group, and adjunct fellow at the Frankfurt Institute of Advanced Studies in Germany.
  • His latest book is "Metzinger (Thomas) - The Ego Tunnel: The Science of the Mind and the Myth of the Self" (2009).

Notes
Paper Comment



"Mills (Sam) - Requeering Wilde"

Source: Aeon, 25 March 2025


Author's Introduction
  • The Oscar Wilde Temple first opened in 2017, in the basement of the Church of the Village in Greenwich, New York. Wilde is glorified on a plinth: a creamy statue dressed as a dandy, his prison number from his time served in Reading Gaol, C.3.3, on a sign below him. Directly behind Wilde is a large neo-Gothic stained glass window of Jesus, drawing an association of martyrdom between the two men. On the walls there are also pictures of LGBTQ figures who were similarly persecuted: Alan Turing, Harvey Milk, Marsha P Johnson. The artwork was created by David McDermott and Peter McGough.
  • This is a depiction of Wilde that we are all familiar with: as a flamboyant aesthete and gifted writer, a witty provocateur who is supposed to have told customs officials in New York, ‘I have nothing to declare but my genius’, who wrote sparkling plays and splendid children’s books. He might have married the writer Constance Lloyd, but this was clearly a smokescreen to conceal his true sexuality, for he had numerous affairs with men, from Robbie Ross to Lord Alfred Douglas. Wilde’s blossoming talent was destroyed by a cruel Victorian public who vilified him for his homosexuality and flung him into prison, causing him to die in poverty and misery in a Parisian bedsit a few years after his release. Subsequently, over the past 50 years, Wilde has been adopted by the LGBTQ movement as a secular saint, the ultimate symbol of a persecuted gay man.
  • But is Wilde’s story as simple as this? Is this a fair representation of his life and sexuality? Wilde was, after all, a man of contradictions. Shortly before he died, he confided in Jean Dupoirier, the proprietor of the Hôtel d’Alsace, that: ‘Some said my life was a lie but I always knew it to be the truth; for like the truth it was rarely pure and never simple.’ In ‘Biography and the Art of Lying’ (1997), Merlin Holland, Wilde’s grandson, points out that Wilde’s life is not one that ‘can tolerate an either/or approach with logical conclusions, but demands the flexibility of a both/and treatment, often raising questions for which there are no answers.’ Those who have attempted to mould Wilde to their own agenda, simplifying his complexities, have soon found that ‘he turns to quicksilver in their fingers’.

Author's Conclusion
  • Ellmann saw Wilde’s shift from female to male lovers as a ‘reorientation’. I would argue that a more accurate term to describe Wilde’s sexuality was that he was bisexual. Interviewed in Marjorie Garber’s Vice Versa (1995), the academic Jonathan Dollimore reflected similarly: ‘My feeling about Oscar Wilde is that he was certainly bisexual, and there is a sense in which I do deplore that representation of Wilde as living entirely in bad faith in relation to his wife.’ However, gay theorists have resisted this more complex and nuanced examination of Wilde’s sexuality. Take these words from the queer theorist Eve Kosofsky Sedgwick, interviewed in Outweek magazine in 1991: ‘I’m not sure that because there are people who identify as bisexual there is a bisexual identity …’ The interviewer goes on to summarise that: ‘In questioning whether bisexuality is a potent identity, Sedgwick points to historical figures the gay and lesbian community claims as lesbian and gay (Cole Porter, Eleanor Roosevelt, Virginia Woolf, Walt Whitman, Oscar Wilde) – who would actually be classified as bisexual,’ to which Sedgwick concludes: ‘But the gay and lesbian movement isn’t interested in drawing that line.’
  • The trouble with Sedgwick’s argument is that it reinforces a simplistic homo/hetero binary. The result is that – as Steven Angelides points out in A History of Bisexuality (2001) – ‘bisexuality… is unthinkable outside of binary logic’ and hence becomes erased. However, this diminishment of bisexuality was characteristic of the era. From the 1970s onwards, bisexuals have historically found their sexuality ignored or dismissed, and had to fight for decades for the ‘B’ to be included in LGBTQ. Bisexuality has frequently been seen as a phase or situated ‘on the fence’, where the sitter will inevitably come down on one side or the other, or deemed as a dilution of a cause; lesbian feminists in the 1980s were suspicious of bi women who were seen as ‘sleeping with the enemy’ and, in 1985, the London Lesbian and Gay Community Centre banned bis because their lesbian members felt threatened by bisexual men. During the AIDs crisis, bis were frequently scapegoated, depicted as the bridgeway between two sexual populations, passing the disease from gay to heterosexual and back again.
  • Because bisexuality has been mischaracterised as a sort of ‘no man’s land’, where it has always been harder to fight a cause, Wilde’s sexuality was inevitably simplified. The binary story of Wilde as an innocent persecuted gay man versus a cruel heterosexual public is a more potent one, particularly given the alarming rise in LGBTQ hate crimes this century. In 2016, the arts organisation Artangel organised a public reading in Reading Prison of De Profundis – the letter Wilde wrote while incarcerated – with luminaries such as Ralph Fiennes, Lemn Sissay and Maxine Peake reading sections aloud. When interviewed, one of the directors, Michael Morris, noted that, in some countries, being gay is still illegal: ‘We want people to be mindful of that oppression and persecution.’
  • This is certainly a good reason to remember the horrors of Wilde’s fate. However, to accept that Wilde was bisexual does not mean that his gay inclinations are halved or half-hearted, or that his tragedy is diluted. It does not diminish the cruelty of Victorian society in condemning him to spend two years in a dark, freezing cell, his mind suffering a slow shattering, his genius going to waste simply because he did not conform to their narrow idea of what sexuality should be. But it does acknowledge that he was a much more complex figure than the misleading caricature that theorists’ agendas and social causes have slowly shaped him into.
Author Narrative
  • Sam Mills is a novelist/nonfiction author. Her books include Chauvo-Feminism (2021), The Watermark (2024) and Uneven: Nine Lives that Redefined Bisexuality (2025). She is also managing director of the publishing house Dodo Ink. She lives in London.
Notes
  • This is an interesting and balanced paper, though the balance is rather tendentious in that the author is promoting bisexuality. But it's good to see Wilde being treated fairly objectively rather than as a gay icon.
  • There are sundry references to "Ellmann (Richard) - Oscar Wilde" which I ought to read one day.
  • There are a few - mostly sensible - Aeon Comments, which I've read.
  • I doubt I'll take this any further, but I probably ought to have a Note on Sexual Identity as a sub-Note to Narrative Identity, to which this relates. See also Oscar Wilde.

Paper Comment
  • Sub-Title: "Oscar Wilde is an icon of gay liberation from secrecy. But his life and his sexuality were not so simple – nor so binary"
  • For the full text see Aeon: Mills - Requeering Wilde.



"Mireault (Gina) - Born that way"

Source: Aeon, 17 January 2023


Excerpt
  • Recent studies have found that three broad aspects of temperament in particular are especially useful in predicting long-range developmental outcomes.
    1. The first is reactivity or negative emotionality, referring to general negative mood, intense negative reactions, and distress either when limits are imposed (eg, anger) or in new situations (eg, fear).
    2. The second is self-regulation, which researchers refer to as ‘effortful control’ of feelings (eg, self-soothing) and of attention (eg, able to hold focus).
    3. The third goes by several names including approach-withdrawal, inhibition, or sociability, and refers to the tendency to approach new people and situations, or to be wary and withdraw.
  • There are additional levels of these dimensions, but these three have best withstood scientific tests of reliability and validity across infants, children and teenagers.
  • Hundreds of studies have subsequently and unequivocally demonstrated that temperament is a driving factor in child development that is at least as important as everything that comes after a baby enters the world, including parenting.
Author Narrative
  • Gina Mireault is professor and chair of psychology and human services at Northern Vermont University. Her research is focused on emotional development in childhood.
Notes
  • This is an interesting esssay. Child development is a very broad issue, and this focuses on the possibility that temperament is substantially innate and affects our subsequent life paths. It falls in to the 'Nature versus Nurture' debate, though the Paper points out how these are intertwined.
  • I'm willing to go along with this - certainly from personal experience, though - of course - one cannot know personally where one's own predispositions came from.
  • There are some interesting Comments - with replies by the author - which I've saved for future interrogation.
  • Further comments in due course, maybe.

Paper Comment
  • Sub-Title: "Confident or shy, our temperament is mostly baked-in from birth. But how that influences our lives is up for grabs"
  • For the full text see Aeon: Mireault - Born that way.



"Moravec (Matyas) - Ghosts among the philosophers"

Source: Aeon, 14 March 2025


Author's Introduction
  • ‘Iassume that the reader is familiar with the idea of extra-sensory perception … telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas … Unfortunately the statistical evidence, at least for telepathy, is overwhelming … Once one has accepted them it does not seem a very big step to believe in ghosts and bogies.’
  • These words weren’t published in the pages of an obscure occult journal or declared at a secret parapsychology conference. They weren’t written by a Victorian spiritualist or a séance attendee. In fact, their author is Alan Turing, the father of computer science, and they appear in his seminal paper "Turing (Alan) - Computing Machinery and Intelligence" (1950), which describes the ‘imitation game’ (more commonly known as the ‘Turing test’) designed to establish whether a machine’s intelligence could be distinguished from that of a human.
  • The paper starts by setting up the now-famous thought experiment: a human, a machine, and an observer who asks questions. If the observer cannot work out which one is which based on their responses, the machine has passed the test: its intelligence is indistinguishable from that of a human mind. The vast majority of the paper addresses various objections against the experiment from mathematics, philosophy of mind, or from those sceptical about the power of computers.
  • But, about two-thirds of the way through the paper, Turing addresses an unexpected worry that might disrupt the imitation game: telepathy. If the human and the observer could communicate telepathically (which the machine supposedly could not do), then the test would fail. ‘This argument is to my mind quite a strong one,’ says Turing. In the end, he suggests that, for the test to work properly, the experiment must take place in a ‘telepathy-proof room’.
  • Why did Turing feel the need to talk about telepathy? Why did he consider extrasensory perception a serious objection to his thought experiment? And what about his peculiar mention of ghosts?

Author's Conclusion
  • Many of these philosophical engagements with parapsychology and psychical research have now been forgotten – or perhaps brushed under the carpet out of embarrassment. The Stanford Encyclopedia of Philosophy, for example, only contains a few words about Broad and Price’s interest in parapsychology, and my own forthcoming paper is currently the only one on Lewy’s work on psychical research.
  • Some of Broad and Price’s contemporaries also thought that psychical research was not worth any attention by historians of philosophy. Rudolf Metz’s tome A Hundred Years of British Philosophy (1938) contains a mere two pages on psychical ‘researchers’ (the quotation marks are his) and their ‘voluminous writings’. His tone is dismissive, and he says that ‘it hardly redounds to the credit of modern British philosophy that so many thinkers and investigators who have otherwise to be taken seriously have on this ground proved so over-venturesome.’
  • Despite Metz’s dismissal, many philosophers continued working on questions in psychical research long into the 1950s and ’60s, after psychical research had left other departments in the modern university. These philosophers included Antony Flew, Martha Kneale and other overlooked thinkers such as C T K Chari, H A C Dobbs and Clement Mundle. It also appears that Kurt Gödel believed in the afterlife.
  • Cambridge was one of the birthplaces of analytic philosophy, which prides itself on dispensing with speculative metaphysics, and putting a heavy emphasis on scientific precision and empiricism. But ‘spooky’ topics like telepathy, after-death survival and ghosts permeated the philosophical ecosystem in Cambridge and beyond long into the 1960s. It pushed many of the thinkers interested in it towards new and creative explorations of the nature of time, matter and language. So, when Alan Turing, one of Cambridge’s most famous alumni, turned his attention to artificial intelligence and made his era-defining contribution, it was natural that he would be concerned about a problem many of his peers considered deeply puzzling: telepathy.
Author Narrative
  • Matyáš Moravec is a lecturer in philosophy at Queen’s University Belfast. He is the author of Henri Bergson and the Philosophy of Religion (2024).
Notes
Paper Comment



"Morton (Jennifer M.) - The spectre of insecurity"

Source: Aeon, 18 October 2024


Author's Introduction
  • Alongside equality, freedom and opportunity, fear has long played a powerful role in political discourse. In ordinary life, fear is often a fitting response to danger. If you encounter a snake while out on a hike, fear will lead you to back away and exercise caution. If the snake is poisonous, fear will have saved your life. By contrast, the fears that dominate political discourse are less concrete. We are told to fear elites, terrorists, religious zealots, godless atheists, sexists, feminists, Marxists and the enemies of democracy. Yet even as these purported poisons are less obviously lethal, political rhetoricians have long understood that making them salient is a powerful way to shape citizens’ motivations. As Donald Trump told Bob Woodward: real power is fear.
  • It is tempting to think that political fear is largely manufactured – a cynical ploy to manipulate the masses. Trump’s dark vision of the United States would seem to be a prime example of this. Yet, fear can be fitting in politics. Citizens face real dangers from failed political leadership, as lethal to our livelihood as snake bites.
  • Thomas Hobbes, the 17th-century political philosopher, understood fear. Hobbes was born in 1588 in the English town of Malmesbury, during the Anglo-Spanish war. As rumours of an impending Spanish attack circulated, he described his mother as ‘filled with such fear that she bore twins, me and together with me fear.’ Fear would follow Hobbes throughout his life. England in the 17th century was torn apart by religious and political factions, recurring plagues, misinformation, inflation and a changing labour market. Like in our current moment, pessimism and uncertainty ran rampant. As Jonathan Healey notes in his fantastic book on this period, The Blazing World (2023), the parallels between these historical periods are not hard to find: ‘We, too, are living through our own historical moment in which a media revolution, social fracturing and culture wars are redefining society and politics.’
  • Many dismiss Hobbes as a curmudgeon whose argument for authoritarianism was guided by his view that people are naturally selfish and violent. In "Hobbes (Thomas), Macpherson (Crawford) - Leviathan" (1651), his most influential book, he argues that, without a powerful executive in absolute control, we would lead lives that are ‘nasty, brutish, and short’. He also seems to suggest that, once that sovereign is established, we have no right to rebel against it since the alternative is invariably worse (though commentators disagree on whether this is a fair interpretation).
  • We have good reason to reject the view that even the most horrific authoritarian regimes are always better than the chaos brought about by rebellion. Stability is not the only political value. But we have lost sight of how important it is. And though we certainly should reject Hobbes’s most extreme authoritarian conclusions, there is much we can learn from understanding the motivations that led Hobbes to accept them, particularly in this current moment in which the appeal of authoritarians like Trump is ascendant.

Author's Conclusion
  • The COVID-19 pandemic and its aftermath offered a glimpse of a solution. When things felt precarious and uncertain, the US government stepped in with eviction moratoriums, universal basic income, free vaccines and a child tax credit. (Let’s not forget that Trump made sure that those stimulus checks bore his name.) But a few short years later, we are back to business as usual, leaving millions on the edge of precarity.
  • This is not to deny that xenophobia, sexism and racism also churn through the current discontent with liberalism. Nostalgia for times past is a strong undercurrent of the appeal of Right-wing movements. Some want the return of the well-paid factory job with strong benefits, while others want a return to white male supremacy. But nostalgia’s power is most potent when no compelling, believable vision of a brighter future exists.
  • Those who fear Trump’s re-election and the rise of Right-wing political movements keep reminding us that democracy is on the line. But sowing fear and doubt only adds to the growing sense of insecurity and uncertainty that is already unravelling people’s trust in the liberal project. It plays right into the hands of the strategy that Trump is so adept at playing. For people to see the value in the current system, we need to do more than fear-monger about the alternative.
  • Hobbes is often interpreted as being narrowly focused on justifying a powerful state that could control our worst appetites so as to prevent us from killing each other. I have argued that if we take his concern for stability and security seriously, the solution requires a far more radical rethinking of liberal states as they currently exist. Material insecurity and political instability cannot be divorced. A liberal state that leaves so many feeling as if their lives are on the verge of being ‘nasty, brutish, and short’ falls short of solving the problem that political society is meant to solve. Freedom is meaningless if you cannot count on a stable connection between the work you put in today and a good life tomorrow. But this is precisely the connection that has become severed for so many. If political society is to enable flourishing lives, then we need a political and economic system that can provide that kind of stability. This requires more than mere rhetoric. If we fail, Leviathan is waiting in the wings.
Author Narrative
  • Jennifer M Morton is Presidential Penn Compact Professor of Philosophy, with a secondary appointment at the Graduate School of Education, at the University of Pennsylvania. She is also a senior fellow at the Center for Ethics and Education at the University of Wisconsin-Madison. She is the author of Moving Up without Losing Your Way: The Ethical Costs of Upward Mobility (2019).
Notes
  • This is an interesting paper, though one very much focused on the American situation prior to Trump's second term.
  • I agree that stability is essential and that some freedoms have to be given up in order to avoid anarchy. I also think that while 'diversity' is to be lauded in times of peace and stability it might lead to the collapse of a society under stress. Of course, demagogues overplay the 'stress' card - saying things have never been worse - so what is there to lose - when it's closer to the truth to say that things have never been better.
  • There are a lot of mostly thoughtful Aeon Comments, though the replies by the author are disappointing - mainly thanking supporters.
  • This - just about - relates to my Note on Narrative Identity.

Paper Comment



"Narayanan (Darshana) - Baby talk"

Source: Aeon, 25 July 2024


Author's Introduction
  • Some restless infants don’t wait for birth to let out their first cry. They cry in the womb, a rare but well-documented phenomenon called vagitus uterinus (from the Latin word vagire, meaning to wail). Legend has it that Bartholomew the Apostle cried out in utero. The Iranian prophet Zarathustra is said to have been ‘noisy’ before his birth. Accounts of vagitus uterinus appear in writings from Babylon, Assyria, ancient Greece, Rome and India. The obstetrician Malcolm McLane described an incident that occurred in a hospital in the United States in 1897. He was prepping a patient for a c-section, when her unborn baby began to wail, and kept going for several minutes – prompting an attending nurse to drop to her knees, hands clasped in prayer. Yet another child is said to have cried a full 14 days before birth. In 1973, doctors in Belgium recorded the vitals of three wailing fetuses and concluded that vagitus uterinus is not a sign of distress. An Icelandic saga indicates that the phenomenon has been observed in other animals – ‘the whelps barked within the wombs of the bitches’ – vagitus uterinus in dogs, a foretelling of great events to come in Icelandic lore.
  • Air is necessary for crying. The coordinated movements of muscles in the stomach and rib cage force air out of the lungs and up through the vocal cords – two elastic curtains pulled over the top of the windpipe – causing them to vibrate and produce a buzz-like sound. These sound waves then pass through the mouth, where precise motions of the jaws, lips and tongue shape them into the vocal signals that we recognise. In this case, the rhythmic sounds of a cry.
  • Vagitus uterinus occurs – always in the last trimester – when there’s a tear in the uterine membrane. The tear lets air into the uterine cavity, thus enabling the fetus to vocalise. vagitus uterinus provided scientists with some of the earliest insights into the fetus’s vocal apparatus, showing that the body parts and neural systems involved in the act of crying are fully functional before birth.
  • Loud, shrill and penetrating – a baby’s cry is its first act of communication. A simple adaptation that makes it less likely that the baby’s needs will be overlooked. And babies aren’t just crying for attention. While crying, they are practising the melodies of speech. In fact, newborns cry in the accent of their mother tongue. They make vowel-like sounds, growl and squeal – these are protophones, sounds that eventually turn into speech.
  • Babies communicate as soon as they are born. Rigorous analyses of the developmental origins of these behaviours reveal that, contrary to popular belief – even among scientists – they are not hardwired into our brain structures or preordained by our genes. Instead, the latest research – including my own – shows that these behaviours self-organise in utero through the continuous dance between brain, body and environment.

Author's Conclusion
  • The newborns had not just memorised the prosody of their native languages; they were actively moving air through their vocal cords and controlling the movements of their mouth to mimic this prosody in their own vocalisations. Babies are communicating as soon as they are born, and these abilities are developing in the nine months before birth.
  • There is no genetic blueprint, programme or watchmaker who knows how it must turn out in the end. The reality of how these behaviours come to be is far more sophisticated and elegant. They develop through continuous interactions across multiple levels of causation – from genes to culture. The processes that shape them unfold over many timescales – from the milliseconds of cellular processes to the millennia of evolution.
  • Now here’s my challenge to you: which of these factors are the primary drivers of vocal development – our genes or brain? – and which ones are merely supporting – the body? How much of their communication do babies owe to nature versus nurture? Is it more nature or nurture? I guarantee you, there are no scientifically defensible answers to these questions.
  • Development does not rely on a distinction between primary, essential causes and secondary, supporting ones. They are all essential – intimately connected – causal contributors. Let us disregard our artificial hierarchies. Mother Nature herself pays little attention to them.
Author Narrative
  • Darshana Narayanan is a neuroscientist, journalist, and founding member of The Computational Democracy Project. She is a visiting scholar at New York University’s Institute for Public Knowledge.
Notes
  • This is an important paper. Not only does it raise lots of issues in the Nature-Nurture debate (and the author is right that development is an interaction between the two). It also raises issues for some other topics of interest ...
  • The metaphysics of Pregnancy: the author seems to side with the 'parthood' rather than the 'container' model.
  • The 'edge of sentience' (see "Birch (Jonathan) - The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI"): the things researchers used to do to aborted but still-living human fetuses!
  • The abortion debate.
  • I need to comment on this further! Sadly, it wasn't opened up to Aeon comments.

Paper Comment
  • Sub-Title: "When babies are born, they cry in the accent of their mother tongue: how does language begin in the womb?"
  • For the full text see Aeon: Narayanan - Baby talk.



"Nicholson (Arwen E.) & Haywood (Raphaelle D.) - There’s no planet B"

Source: Aeon, 16 January 2023


Author's Conclusion
  • Earth is the home we know and love not because it is Earth-sized and temperate. No, we call this planet our home thanks to its billion-year-old relationship with life. Just as people are shaped not only by their genetics, but by their culture and relationships with others, planets are shaped by the living organisms that emerge and thrive on them. Over time, Earth has been dramatically transformed by life into a world where we, humans, can prosper. The relationship works both ways: while life shapes its planet, the planet shapes its life. Present-day Earth is our life-support system, and we cannot live without it.
  • While Earth is currently our only example of a living planet, it is now within our technological reach to potentially find signs of life on other worlds. In the coming decades, we will likely answer the age-old question: are we alone in the Universe? Finding evidence for alien life promises to shake the foundations of our understanding of our own place in the cosmos. But finding alien life does not mean finding another planet that we can move to. Just as life on Earth has evolved with our planet over billions of years, forming a deep, unique relationship that makes the world we see today, any alien life on a distant planet will have a similarly deep and unique bond with its own planet. We can’t expect to be able to crash the party and find a warm welcome.
  • Living on a warming Earth presents many challenges. But these pale in comparison with the challenges of converting Mars, or any other planet, into a viable alternative. Scientists study Mars and other planets to better understand how Earth and life formed and evolved, and how they shape each other. We look to worlds beyond our horizons to better understand ourselves. In searching the Universe, we are not looking for an escape to our problems: Earth is our unique and only home in the cosmos. There is no planet B.
Author Narrative
  • Arwen E Nicholson is a research fellow in physics and astronomy at the University of Exeter in the UK. She has developed Gaian models of regulation to understand how life might impact the long-term habitability prospects of its planet.
  • Raphaëlle D Haywood is a senior lecturer in physics and astronomy at the University of Exeter in the UK. Her research focuses on detection of small, potentially terrestrial planets around stars other than our Sun.
Notes
  • An important and 'timely' paper.
  • There are several iportant lessons:-
    1. Our planet is habitable not just because of its size and position relative to our Sun, but because of its biosphere.
    2. Finding and traveling to a planet capable of 'terraforming' is not a trivial task, given the distances involved. It would take thousands of years just to get there even if we knew where we were going. And what would we take? All this is way beyound our technological capacities.
    3. Worrying about future problems with our Sun is absurd - that's a billion years away.
    4. Fixing whatever 'existential' problems we have with our own planet are trivial compared to setting up on another planet.
  • It might be worth protecting - to a limited degree - against an 'extinction event' - such as all-out nuclear war or an asteroid impact - but this would be a tempory home for a 'remenant' while Earth recovered. The 'devastated Earth' would still support life better than the alternatives, if any.
  • There are numerous Comments (79 as of March 2023 when it looks like they stopped).

Paper Comment



"Noe (Alva) - Rage against the machine"

Source: Aeon, 25 October 2024


Author's Introduction
  • Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
  • What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
  • But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
  • Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
  • But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
  • What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: ‘Irritability is the germ, and as it were the atom, of having a world…’ With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness.

Author's Conclusion
  • The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don’t have concerns of their own, and they make no new games. They invent no new language.
  • The British philosopher R G Collingwood noticed that the painter doesn’t invent painting, and the musician doesn’t invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.
  • But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do – ie, merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
  • We can’t help doing this; no computer can do this.
Author Narrative
  • Alva Noë is professor of philosophy at the University of California, Berkeley, where he is also department chair. His books include Strange Tools: Art and Human Nature (2015) and The Entanglement: How Art and Philosophy Make Us What We Are (2023).
Notes
  • I read this paper a couple of months ago and now can't remember the detailed argument. So. I need to re-read the paper before commenting in detail.
  • However, I remember disliking the overall thesis. Well, maybe not the thesis but the argument. It's popular to bash AI despite all its rapid advances. Probably best to wait a few years to get a more balanced view. The 'promisorry notes' are dated only a few years in the future, rather than decades as has been the case until the ChatGPT revolution. And there's Quantum Computing in the wings.
  • The suggestion that 'resistance' is central to thought is just one philosopher's intuition. It seems absurd to me, and suggesting that it's impossible for machines to 'resist' runs in the face of much SciFi which - while hardly authoritative - shows that other views are conceivable.
  • But I'd agree that AI-enabled machines are just tools for us to use to remove another layer of drudgery and repetitive work and free us up to do more interesting things. Just like the first few waves of computers did.
  • After re-reading this Paper, I ought to read "Noe (Alva) - Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness".
  • There are 29 Aeon Comments, mostly supportive, mostly confused. Some are cynical rants - claiming that all the AI developers are after is your money or to acquire cheaper slaves. The Comments do, however, require closer reading.
  • This relates to my Notes on Computers, Mind, Thought ... and probably others.

Paper Comment
  • Sub-Title: "For all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does"
  • For the full text see Aeon: Noe - Rage against the machine.



"Oderberg (David) - Life makes mistakes"

Source: Aeon, 22 October 2024


Author's Introduction
  • We forget where we parked. We misplace our keys. We misread instructions. We lose track of the time. We call people by the wrong name. ‘To err is human,’ as the English poet Alexander Pope wrote in his Essay on Criticism (1711). But it is not exclusively human. All animals do things that prevent them from surviving, reproducing, being safe, or being happy. All animals get things wrong. Think of a fish that takes the bait and accidentally bites into a metal hook. Think of dogs that forget where they have buried their bones, or frogs that aim their tongues in the wrong direction. Birds build flimsy nests. Whales beach themselves. Domestic hens try to hatch golf balls.
  • But not everything in the Universe can make mistakes. While living things navigate a world filled with biological errors, the fundamental building blocks of the cosmos adhere to the laws of physics with unwavering consistency. No one ever caught an electron erring, let alone an atom, a sodium ion, a lump of gold, a water droplet or a supernova. The objects that physicists study, the pure objects of physics, do not make mistakes. Instead, they follow ineluctable laws.
  • And this is where a problem emerges. Mistake-making organisms, like everything else in the Universe, are made from law-abiding atoms and molecules. So where does mistake-making begin and end in living things? How deep does it go? Can the parts and subsystems of organisms, like immune systems or the platelets in blood, make mistakes too? And, if they do, is there something that connects human mistakes to those made by biological subsystems?
  • The answers to these questions have profound implications for how we think about life. If things go wrong only when physics becomes biology, biology might truly be irreducible to physics and chemistry, despite centuries of reductionism saying otherwise. It might also mean that organisms really have ‘correct’ goals and purposes that they can mistakenly deviate from – they really are teleological, despite a long history of mechanistic arguments claiming otherwise. And if life’s errors really are as ubiquitous as they appear, it might mean we need a ‘grand’ framework to explain what happens when things go wrong: a theory of biological mistakes.

Author's Conclusion
  • The theory of biological mistakes appears to be a universal feature of biology, which demarcates the living from the realms of physics and chemistry, thereby rendering it irreducible to either. Despite this, mistakes are not yet subject to systematic investigation by biologists. Mistake theory is a framework within which to generate novel, testable hypotheses. And there are so many questions in need of systematic investigation: how can things go wrong in relation to timing, location, measurement, evaluation of quality and identification? How do organisms attempt to avoid mistakes? Which mistakes are unavoidable? How are they corrected? How does an organism monitor, in real time, whether it is deviating onto a pathway that will threaten its flourishing?
  • And then there are questions about the contradictory cases in which mistakes paradoxically help an organism in the long term despite threatening flourishing in the short term. This relates to the role of exploration or play in life. Organisms generally need to explore their environments, whether in search of food, or a mate, or shelter, and so on. However, too much exploration is wasteful and dangerous. It would be a mistake to allow too many mistakes, but some are required for us to flourish in our environments. Indeed, mistakes in DNA copying, for example, produce the variation that drives life’s diversity. But if these mistakes vary too much, systems fall apart. Interrogating these errors experimentally may give us a window into the phenomenon of biological normativity, helping us understand how organisms act correctly, or badly, in their environments.
  • Mistake-making is neither limited to organisms nor bound by scale. Mistakes can be made by the tiniest bacteria as well as the largest animals – even whole populations. They can also be made by non-organisms, such as platelets, antibodies and cells belonging to organisms. It is the ubiquity of mistake-making, as well as its potential, that demands an equally broad theory to organise investigation into the phenomenon.
  • Life is often defined by what we get right. It is explained by growth, replication and adaptation to the environment. But mistakes are everywhere. A theory of mistakes will help us understand, in a systematic and experimentally driven way, behaviour that threatens the flourishing of living beings. It will also help us appreciate the normativity that runs through life. While some still view ‘teleology’ with scepticism, mistake theory may well be the antidote that challenges conventional wisdom about the goals of living things. In the intricate biological dance of right and wrong, we might just find the key to understanding the deeper purposes that drive life on Earth.

Author Narrative
  • David S Oderberg is professor of philosophy at the University of Reading in the UK. His latest book is The Metaphysics of Good and Evil (2020) and he is currently writing a book on biological mistakes.

Notes
  • I read this paper a couple of months ago and now can't remember the detailed argument. So. I need to re-read the paper before commenting in detail.
  • However, I remember disliking the overall thesis. I can't see that 'mistakes' are the province of living - or biological - things only. AIs seem to be making lots of them.
  • The author's book on 'Good and Evil' is excruciatingly expensive, and is a defense of Scholasticism allegedly from the perspective of Analytic Philosophy.
  • I'm generally suspicious of any work supported by the Templeton Foundation, though Oderberg is a Christian, though not a Christian Materialist1 - so is genuine and not just taking the cash.
  • We're referred to University of Reading: Mistakes in Living Systems for further information on the author's project. It may be worth following up.
  • There are no Aeon Comments.
  • This relates to my Notes on Life and Evolution ... and probably others (Religion, for instance).

Paper Comment
  • Sub-Title: "Hens try to hatch golf balls, whales get beached. Getting things wrong seems to play a fundamental role in life on Earth"
  • For the full text see Aeon: Oderberg - Life makes mistakes.



"Olson (Jay) - Capturing the cosmos"

Source: Aeon, 08 January 2024


Author's Introduction
  • Some time late this century, someone will push a button, unleashing a life force on the cosmos. Within 1,000 years, every star you can see at night will host intelligent life. In less than a million years, that life will saturate the entire Milky Way; in 20 million years – the local group of galaxies. In the fullness of cosmic time, thousands of superclusters of galaxies will be saturated in a forever-expanding sphere of influence, centred on Earth.
  • This won’t require exotic physics. The basic ingredients have been understood since the 1960s. What’s needed is an automated spacecraft that can locate worlds on which to land, build infrastructure, and eventually make copies of itself. The copies are then sent forth to do likewise – in other words, they are von Neumann probes (VNPs). We’ll stipulate a very fast one, travelling at a respectable fraction of the speed of light, with an extremely long range (able to coast between galaxies) and carrying an enormous trove of information. Ambitious, yes, but there’s nothing deal-breaking there.
  • Granted, I’m glossing over major problems and breakthroughs that will have to occur. But the engineering problems should be solvable. Super-sophisticated flying machines that locate resources to reproduce are not an abstract notion. I know the basic concept is practical, because fragments of such machines – each one a miracle of nanotechnology – have to be scraped from the windshield of my car, periodically. Meanwhile, the tech to boost tiny spacecraft to a good fraction of the speed of light is in active development right now, with Breakthrough Starshot and NASA’s Project Starlight.
  • The hazards of high-speed intergalactic flight (gas, dust and cosmic rays) are actually far less intense than the hazards of interstellar flight (also gas, dust and cosmic rays), but an intergalactic spacecraft is exposed to them for a lot more time – millions of years in a dormant ‘coasting’ stage of flight. It may be that more shielding will be required, and perhaps some periodic data scrubbing of the information payload. But there’s nothing too exotic about that.
  • The biggest breakthroughs will come with the development of self-replicating machines, and artificial life. But those aren’t exactly new ideas either, and we’re surrounded by an endless supply of proof of concept. These VNPs needn’t be massive, expensive things, or perfectly reliable machines. Small, cheap and fallible is OK. Perhaps a small fraction of them will be lucky enough to survive an intergalactic journey and happen upon the right kind of world to land and reproduce. That’s enough to enable exponential reproduction, which will, in time, take control of worlds, numerous as the sand. Once the process really gets going, the geometry becomes simple – the net effect is an expanding sphere that overtakes and saturates millions of galaxies, over the course of cosmic time.
  • Since the geometry is simplest at the largest scale (owing to a Universe that is basically the same in every direction), the easiest part of the story is the extremely long-term behaviour. If you launch today, the rate at which galaxies are consumed by life steadily increases (as the sphere of influence continues to grow) until about 19 billion years from now, when the Universe is a little over twice its current age. After that, galaxies are overtaken more and more slowly. And at some point in the very distant future, the process ends. No matter how fast or how long it continues to expand, our sphere will never overtake another galaxy. If the probes can move truly fast – close to the speed of light – that last galaxy is about 16 billion light-years away, as of today (it will be much further away, by the time we reach it). Our telescopes can see galaxies further still, but they’re not for us. A ‘causal horizon’ sets the limit of our ambition. In the end, the Universe itself will push galaxies apart faster than any VNP can move, and the ravenous spread of life will stop.
  • Communication becomes increasingly difficult. Assuming you invent a practical way to send and receive intergalactic signals, you’ll be able to communicate with the nearby galaxies pretty much forever (though, with an enormous time lag). But the really distant galaxies are another matter. If we assume fast probes, then seven out of eight galaxies we eventually reach will be unable to send a single message back to the Milky Way, due to another horizon. The late Universe becomes increasingly isolated, with communication only within small groups of galaxies that are close enough to remain gravitationally bound to each other.
  • Our VNP project might encounter another kind of limitation, too. What if another intelligent civilisation had the very same idea, initiating their own expansion from their own home in a distant galaxy? Our expanding spheres would collide, putting a stop to further expansion for each of us. We don’t know if that will happen, because no one has observed a telltale cluster of engineered galaxies in the distance, but we should be open to the possibility. If we can do it, another civilisation can too – it’s just a question of how often that occurs, in the Universe. Taken as a whole, this entire process bears an uncanny resemblance to a cosmological phase transition, with ‘nucleation events’ and ‘bubble growth’ that come to fill most of the Universe. There is even ‘latent heat’ given off in the process, depending on how quickly these massive civilisations consume energy.
Author Narrative
  • Jay Olson is currently an academic visitor in the Department of Physics at Boise State University in Idaho.
Notes
  • This is a fascinating and worrying paper.
  • The author - who seems to have had a number of papers on this and related topics published in respectable science journals (links are provided from yhis Aeon paper) - is a little too confident that the technological issues can be resolved, but it does sound feasible in due course, given a clear run.
  • The self-replicating VNPs (Von Neumann Probes) are described as 'small' and analogous to flies, or did I misunderstand something?
  • I agree that whoever devises and implements such a scheme would be analogous to a cult leader. You'd be irrevocably contaminating the entire universe (assuming the scheme technology actually works); you'd have to be pretty sure you'd be doing the 'right thing'.
  • What would worry me is that the technology to develop and release a genetically-modified virus that turns the entire biosphere into goo would be much easier to achieve, and more likely to happen before the first VNP sets off. So, the universe may be safe.
  • The paper is worth re-reading more carefully.
  • The numerous Comments look pretty wild and off piste from a quick scan.

Paper Comment
  • Sub-Title: "When self-replicating craft bring life to the far Universe, a religious cult, not science, is likely to be the driving force"
  • For the full text see Aeon: Olson - Capturing the cosmos.



"P (Deepak) - Mere imitation"

Source: Aeon, 08 August 2024


Author's Introduction
  • Here’s the quandary when it comes to AI: have we found our way to salvation, a portal to an era of convenience and luxury heretofore unknown? Or have we met our undoing, a dystopia that will decimate society as we know it? These contradictions are at least partly due to another – somewhat latent – contradiction. We are fascinated by AI’s outputs (the what) at a superficial level but are often disenchanted if we dig a bit deeper, or otherwise try to understand AI’s process (the how). This quandary has never been so apparent as in these times of generative AI. We are enamoured by the excellent form of outputs produced by large language models (LLMs) such as ChatGPT while being worried about the biased and unrealistic narratives they churn out. Similarly, we find AI art very appealing, while being concerned by the lack of deeper meaning, not to mention concerns about plagiarising the geniuses of yesteryear.
  • That worries are most pronounced in the sphere of generative AI, which urges us to engage directly with the tech, is hardly a coincidence. Human-to-human conversations are layered with multiple levels and types of meanings. Even a simple question such as ‘Shall we have a coffee?’ has several implicit meanings relating to shared information about the time of the day, a latent intent to have a relaxed conversation, guesses about drink preferences, availability of nearby shops, and so on and so forth. If we see an artwork titled ‘1970s Vietnam’, we probably expect that the artist is intending to convey something about life in that country during end-war and postwar times – a lot goes unsaid while interacting with humans and human outputs. In contrast, LLMs confront us with human-like responses that lack any deeper meaning. The dissonance between human-like presentation and machine-like ethos is at the heart of the AI quandary, too.
  • Yet it would be wrong to think that AI’s obsession with superficial imitation is recent. The imitation paradigm has been entrenched in the core of AI right from the start of the discipline. To unpack and understand how contemporary culture came to applaud an imitation-focused technology, we must go back to the very early days of AI’s history and trace its evolution over the decades.

Author's Conclusion
  • If imitations are so problematic, what are they good for? Towards understanding this, we may take a leaf out of Karl Marx’s scholarship on the critique of the political economy of capital, capital understood as the underpinning ethos of the exploitative economic system that we understand as capitalism. Marx says that capital is concerned with the utilities of objects only insofar as they have the general form of a commodity and can be traded in markets to further monetary motives. In simple terms, towards advancing profits, efforts to improve the presentation – through myriad ways such as packaging, advertising and others – would be much more important than efforts to improve the functionality (or use-value) of the commodity.
  • The subordination of content to presentation is thus, unfortunately, the trend in a capitalist world. Extending Marx’s argument to AI, the imitation paradigm embedded within AI is adequate for capital. Grounded on this understanding, the interpretation of the imitation game – err, the Turing test – as a holy grail of AI is hand-in-glove with the economic system of capitalism. From this vantage point, it is not difficult to see why AI has synergised well with the markets, and why AI has evolved as a discipline dominated by big market players such as Silicon Valley’s tech giants. This market affinity of AI was illustrated in a paper that showed how AI research has been increasingly corporatised, especially when the imitation paradigm took off with the emergence of deep learning.
  • The wave of generative AI has set off immense public discourse on the emergence of real artificial general intelligence. However, understanding AI as an imitation helps us see through this euphoria. To use an overly simplistic but instructive analogy, kids may see agency in imitation apps like My Talking Tom – yet, it is obvious that a Talking Tom will not become a real talking cat, no matter how hard the kid tries. The market may give us sophisticated and intelligent-looking imitations, but these improvements are structurally incapable of taking the qualitative leap from imitation to real intelligence. As Hubert Dreyfus wrote in What Computers Can’t Do (1972), ‘the first man to climb a tree could claim tangible progress toward reaching the moon’ – yet, actually reaching the Moon requires qualitatively different methods than tree-climbing. If we are to solve real problems and make durable technological progress, we may need much more than an obsession with imitations.

Author Narrative
  • Deepak P (see Deepak P - Home Page) is an associate professor in the School of Electronics, Electrical Engineering and Computer Science at Queen’s University Belfast, UK, and an adjunct faculty member at the Department of Computer Science and Engineering at the Indian Institute of Technology Madras, India. He has authored several research publications, including book chapters and books, on various topics in artificial intelligence. His research interests are in analysing the political economy of artificial intelligence.

Notes
  • I wasn't impressed by this paper. I felt the author had an axe to grind and underplayed the amazing progress in AI over the last 70 years. The reason for this appeared at the end, where his Marxist economic stance came out in the open. He seems to think of AI as an enemy of the people and its commercial developers as only interested in profit. True, ‘big tech’ does want to make a profit, as does any other company that sells us stuff, but they provide stuff that we want – and the advertising angle means that most of us don’t pay anything for a technology that is life-changing, in its way.
  • He doesn't mention at all the importance of hardware in the methods of AI. Classical AI, with explicit algorithmic encoding was the only way to go with the feeble machines available in the past. Alan Turing wrote his paper in the days of vacuum tubes. Today, an MS Copilot-ready PC has 40 TOPS (40 trillion operations / second) and Meta AI apparently has 600,000 H100 cards at its disposal, each of which has 3,000+ TOPS and costs £20k. Not to mention the entire internet of data. This demonstrates why 'big tech' is the only operator with the resources to develop and provide these services.
  • I'm indebted to PC Pro (issue 362, November 2024) for the above. It also has an article that puts a more positive spin than our author on the impact of AI on the jobs market. We should use AI to help out with the dull bits of our jobs so we can focus on the interesting bits, and the future. An AI won't definitively recognise a face (or a cancer) but can do the leg-work and make suggestions. Provided we don't treat it as an oracle, this can only be a help.
  • Nature and nurture enter the picture when it comes to training. When it comes to facial recognition he implied that AIs can only recognise faces because lots of poorly-paid Kenyans have flagged them as faces. Well, unlike humans, AI isn't hard-wired with an interest in faces or anything else. So, it needs to be told to take an interest in faces, and given a sample-set of faces to analyse. It's not told anything about faces (nose between two eyes, etc). It works out the features for itself, and then applies these to 'recognising' a face it has 'seen' before.
  • Of course, as with human learning, it will be biased by its training data. After all, schools - and parents - in different countries teach their children different views of the world (and there may be no 'correct' view anyway). A white child that's never seen a black person before will be surprised - and not sure what to make of - the first black person they see (that was my experience, 60 years ago, anyway). Similarly, an AI trained on predominately 'white' data may be confused by black faces. That's a fixable problem (if it's recognised as such).
  • I agree that contemporary and historic computing machines merely imitate intelligence in many ways, though in others they exemplify it with super-human capacity. Not yet in general intelligence.
  • Books by well-known AI-skeptics ("O'Neill (Cathy) - Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" & "Harari (Yuval Noah) - Homo Deus: A Brief History of Tomorrow") are alluded to but hardly engaged with. The first shares the author's worries about bias, but the latter is more concerned about existential threat.
  • I need to re-read the paper to see if any of the arguments that machines are incapable of having beliefs - for instance - are cogent. In his discussion of thermostats he makes no mention of Daniel Dennett’s 'Intentional Stance' whereby the best way to interact with almost any purposive system is to treat it as though it has beliefs and desires and acts rationally. We don't need to think of a thermostat's beliefs as though they are rounded in the way human beliefs are.
  • There are sundry Aeon Comments - mostly sycophantic and trivial - with similarly 'appreciative' but trivial replies by the author. There seems to be an appetite for sticking the boot in to the pretensions of AI-mongers. I suppose they deserve a closer read.

Paper Comment
  • Sub-Title: "Generative AI has lately set off public euphoria: the machines have learned to think! But just how intelligent is AI?"
  • For the full text see Aeon: P - Mere imitation.



"Pas (Heinrich) - All is One"

Source: Aeon, 28 April 2023


Authors' Introduction
  • ‘From all things One and from One all things,’ wrote the Greek philosopher Heraclitus some 2,500 years ago. He was describing monism, the ancient idea that all is one – that, fundamentally, everything we see or experience is an aspect of one unified whole. Heraclitus wasn’t the first, nor the last, to advocate the idea. The ancient Egyptians believed in an all-encompassing but elusive unity symbolised by the goddess Isis, often portrayed with a veil and worshipped as ‘all that has been and is and shall be’ and the ‘mother and father of all things’.
  • This worldview also follows in straightforward fashion from the findings of quantum mechanics (QM), the uncanny physics of subatomic particles that departs from the classical physics of Isaac Newton and experience in the everyday world. QM, which holds that all matter and energy exist as interchangeable waves and particles, has delivered computers, smartphones, nuclear energy, laser scanners and arguably the best-confirmed theory in the entirety of science. We need the mathematics underlying QM to make sense of matter, space and time. Two processes of quantum physics lead directly to the notion of an interconnected universe and a monistic foundation to nature overall: ‘entanglement’, nature’s way of integrating parts into a whole, and the topic of the 2022 Nobel Prize in Physics; and ‘decoherence’, caused by the loss of quantum information, and the reason why we experience so little quantum weirdness in our daily lives.
  • Yet, despite the throughline in philosophy and physics, the majority of Western thinkers and scientists have long rejected the idea that reality is literally unified, or nature and the Universe a system of one. From judges in the Inquisition (1184-1834) to quantum physicists today, the thought that a single system underlies everything has been too odd to believe. In fact, though philosophers have been proposing monism for thousands of years, and QM is, after all, an experimental science, Western culture has regularly lashed out against the concept and punished those promoting the idea.
Authors' Conclusion
  • Even after decoherence had explained how our everyday experience can follow from a monistic quantum reality, the idea remained the outsider view of a small group of renegade physicists. And, in fact, for most of us, the notion of an all-encompassing ‘One’ doesn’t feel like proper science. It comes with a scent of New Age bullshit.
  • But why does this idea sound so bizarre to us? To understand this bias, we have to leave quantum mechanics for a moment and look back to how monism evolved in Europe over the past 800 years. It turns out, the controversy about how to interpret QM is part of the larger story – the conflict about who was entitled to define the foundation of reality: religion, or science?
  • According to Everett and Zeh, the fundamental description of the Universe is a single entangled state, described by a universal wave function. Everything we experience in our daily lives emerges from this fundamental quantum reality.
  • If this is correct, it implies that the traditional approach of physics to understand things in terms of constituents doesn’t work anymore. If physicists explain how everyday objects such as chairs, tables and books are made of atoms, atoms are composed of atomic nuclei and electrons, atomic nuclei contain protons and neutrons, and protons and neutrons consist of quarks, they ignore that these particles aren’t fundamental but just abstractions from the fundamental whole.
  • Instead, the most fundamental description of the Universe has to start with the Universe itself, understood as an entangled quantum object. Indeed, the 2022 Nobel Prize in Physics was awarded for experiments that probe correlations between particles separated by large distances yet connected to each other based on entanglement.
  • This view also requires us to rethink our notion of space and time. If there exists but a single thing in the Universe, then space, often understood as the relative order of things, doesn’t make sense any more. Nor is it easy to imagine this single object evolving in time. Accordingly, the Wheeler-DeWitt equation, describing the quantum mechanical wave function of the Universe and the starting point for much of Stephen Hawking’s work on cosmology, describes a timeless universe.
  • Entanglement also plays a crucial role in the most advanced approaches to quantum computing and the search for a theory of quantum gravity, in which entanglement creates connections between distant regions of space-time. Just a few weeks before the new Nobel laureates were honoured in Stockholm in 2022, a different team of distinguished scientists had a paper published in Nature that described a process on Google’s quantum computer that could be interpreted as some kind of wormhole, a tunnel connecting far-away regions in space. Although the wormhole realised in this recent experiment exists only in a two-dimensional toy universe, it hints at an intimate relationship between quantum entanglement and proximity in space, and thus could constitute a breakthrough for future research at the forefront of physics.
  • The 3,000-year-old concept of monism may actually help modern physicists in their struggle to find a theory of quantum gravity and make sense out of black holes, the Higgs boson, and the early Universe. Chances are high that we witness the beginning of a new era where science is informed by monism and the Universe is perceived as a unified whole.
  • This Essay was made possible through the support of a grant to Aeon+Psyche from the John Templeton Foundation.
Author Narrative
  • Aeon: Heinrich Päs is professor of theoretical physics at TU Dortmund University in Germany. He is the author of The Perfect Wave: With Neutrinos at the Boundary of Space and Time (2014) and The One: How an Ancient Idea Holds the Future of Physics (2023).
  • Amazon: Heinrich Päs is a German theoretical physicist and professor at TU Dortmund University. He received a PhD from the University of Heidelberg for research at the Max-Planck-Institut in 1999, held postdoc appointments at Vanderbilt University and the University of Hawaii, and an Assistant Professorship at the University of Alabama. His research on particle physics, cosmology and the structure of space and time was on the cover of the Scientific American and the New Scientist magazine. It also got included in the collector's edition "Ultimate Physics: From Quarks to the Cosmos", next to a piece by Stephen Hawking. His first book “The Perfect Wave” dealt with neutrinos - the most puzzling particles we know about. It was praised in The Wall Street Journal, Nature, Economist, Publisher's Weekly, ZEIT, Welt and Deutschlandfunk. His new book "The One" makes the scientific case for an ancient idea about the nature of the universe: that all is One. Blending physics, philosophy, and the history of ideas, "The One" is an epic, mind-expanding journey through millennia of human thought and into the nature of reality itself.
Notes
  • I thought this Paper was a missed opportunity! Most of it is raking over old controversies between science and religion or between the scientific establishment and its rebels.
  • Also, I don't think that what the ancients thoughts of as 'the One' has anything to do with what a modern interpreter of QM might mean.
  • I wonder how Päs compares with Philip Goff? It's another item supported by the Templeton Foundation and another that tries to look beyond 'the equations' into 'ultimate reality'.
  • The reason the scietific establishment conplains about the latter it that it makes no testable predictions; there's nothing to 'calculate'.
  • If something had been said about the consequences - for physics, not society - of all being one that would be fine, but i couldn't see any.
  • There are lots of Comments that I've not read. I may revise my opinion after reading them.
  • His book is cheap, but - according to the low-star reviewers - suffers from the defects aluded to above.

Paper Comment
  • Sub-Title: "The ancient philosophy of monism and the physics of quantum entanglement agree: all that exists is one unified whole"
  • For the full text see Aeon: Päs - All is One.



"Pessoa (Luiz) - The entangled brain"

Source: Aeon, 19 May 2025


Author's Introduction
  • When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society.
  • Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers.
  • These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works.
  • The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking.

Author's Conclusion
  • Why is the brain so entangled, and thus so unlike human-engineered systems? Brains have evolved to provide adaptive responses to challenges faced by living beings, promoting survival and reproduction – not to solve isolated cognitive or emotional problems. In this context, even the mental vocabulary of neuroscience and psychology (attention, cognitive control, fear, etc), with origins disconnected from the study of animal behaviour, provides problematic theoretical pillars. Instead, approaches inspired by evolutionary considerations provide better scaffolds to sort out the relationships between brain structure and function.
  • The implications of the entangled brain are substantial for the understanding of healthy and unhealthy brain processes. It is common for scientists to seek a single, unique source of psychological distress. For example, anxiety or PTSD is the result of an overactive amygdala; depression is caused by deficient serotonin provision; drug addiction is produced by dopamine oversupply. But, according to the ideas described here, we should not expect unique determinants for psychological states.
  • Anxiety, PTSD, depression and so on should be viewed as system-level entities. Alterations across several brain circuits, spanning multiple brain regions, are almost certainly involved. As a direct consequence, healthy or unhealthy states should not be viewed as emotional, motivational or cognitive. Such classification is superficial and neglects the intermingling that results from anatomical and functional brain organisation.
  • We should also not expect to find a single culprit, not even at the level of distributed neuronal ensembles. The conditions in question are too heterogeneous and varied across individuals; they won’t map to a single alteration, including at the distributed level. In fact, we should not expect a temporally constant type of disturbance, as brain processes are highly context-dependent and dynamic. Variability in the very dynamics will contribute to how mental health experiences are manifested.
  • In the end, we need to stop seeking simple explanations for complex mind-brain processes, whether they are viewed as healthy or unhealthy. That’s perhaps the most general implication of the entangled brain view: that the functions of the brain, like the murmurations of starlings, are more complicated and more mysterious than its component parts.
Author Narrative
  • Luiz Pessoa is director of Maryland Neuroimaging Center, principal investigator at the Laboratory of Cognition and Emotion, and professor of psychology at the University of Maryland. He is the author of The Cognitive-Emotional Brain (2013) and The Entangled Brain (2022).
Notes
Paper Comment
  • Sub-Title: "The brain is much less like a machine than it is like the murmurations of a flock of starlings or an orchestral symphony"
  • For the full text see Aeon: Pessoa - The entangled brain.



"Pigliucci (Massimo) - These lessons in scepticism could make the world a better place"

Source: Aeon, 08 May 2025


Author's Introduction
  • We live in a paradoxical time: despite the proliferation of critical thinking courses in schools and universities, our public discourse has never been more dominated by inflexible certainties, tribal allegiances to dubious ‘facts’, and a profound aversion to questioning our own beliefs. In an age where certainty is currency, doubt has become a radical act.
  • Our social media ecosystems reward conviction, not contemplation. Politicians trumpet certainties rather than explore complexities. Even our educational institutions often teach critical thinking as a weapon to dismantle others’ arguments rather than a tool for examining our own. The skill we most desperately need is the very one we’ve neglected to cultivate: the ability to hold our own certainties in suspension.
  • What if doubt isn’t weakness but wisdom? What if the most intellectually courageous stance isn’t to plant your flag in the ground of conviction, but to embrace the productive discomfort of uncertainty? The ancient Greco-Romans, facing their own societal upheavals, developed sophisticated approaches to scepticism that might serve us better than our modern pretences to critical discourse.
  • The word ‘scepticism’ comes from the Greek skeptikos, meaning ‘enquirer’. In the ancient Greco-Roman world, there were at least four distinct approaches to scepticism, which I and my co-authors Gregory Lopez and Meredith Kunz explore in some detail in Beyond Stoicism: A Guide to the Good Life with Stoics, Skeptics, Epicureans, and Other Ancient Philosophers (2025). Let’s take a look at four representative philosophers whose way of thinking would very much be useful to us all-too-certain denizens of the 21st century.

The Four Sceptics
  • Socrates’ approach: knowing what you don’t know
  • Protagoras’ relativism
  • Cicero’s approximations to truth
  • Pyrrho’s peace of mind

Author's Conclusion
  • So now you have four good reasons to be a sceptic. Socrates emphasises the benefits of understanding how little you know; Protagoras reminds us that different perspectives have different uses and that there’s no ‘view from nowhere’; Cicero encourages us to put more effort into knowing the important things and being comfortable with doubt for less important ones; and Pyrrho argues that suspending judgment actually brings peace of mind. I’m betting (with confidence, but not certainty!) that if we all tried these approaches ours would be a significantly better world to live in.
Author Narrative
  • Massimo Pigliucci is the K D Irani Professor of Philosophy at the City College of New York. His books include How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life (2017) and Beyond Stoicism: A Guide to the Good Life with Stoics, Skeptics, Epicureans, and Other Ancient Philosophers (with Gregory Lopez and Meredith Kunz, 2025).
Notes
  • This is basically a plug for the author's latest book, but is a useful summary.
  • The suggestion - under Pyrrhonism - that you should write two essays for any proposition - one in favour and one against. That's what Boris Johnson did - much criticised - for and against Brexit. Shows his classical education, even if he came to the - most likely - wrong conclusion.
  • There are a few Aeon Comments with responses by the Author.
  • This losely relates to my Note on Society. I probably ought to have a note on Skepticism and Epistemology generally; that on Epistemology is on the list.

Paper Comment



"Plakias (Alexandra) - Adjust your disgust"

Source: Aeon, 20 March 2025


Author's Introduction
  • Something in contemporary Western diets must shift, for both moral and ecological reasons. Fortunately, there are alternatives to our current food system – ways of eating that are equally, if not more, nutritious but without the suffering and climate impacts of factory-farmed meat. Unfortunately, many people find these alternatives disgusting.
  • I know, because I’ve been one of those people. My interest in disgust began almost two decades ago in graduate school, when I encountered an article describing scientists’ attempts to grow meat without animals. ‘In vitro meat’, now known as ‘cultivated meat’ or ‘lab-grown meat’, involves cultivating muscle tissue in bioreactors, using growth media and scaffolding to create threads of muscle that could then be massed together into hamburger patties or even a steak.
  • The idea of growing muscle tissue in labs for human consumption, independently of an animal body, seemed unnatural and repellent. It seemed, in a word, disgusting. But faced with more humane, sustainable options, is disgust a good enough reason to reject them? Is it any kind of reason at all?

Author's Conclusion
  • For many eaters today, cheese sits squarely on the side of civilisation. But the example shows that the meanings associated with foods are mutable. Instead of associating entomophagy – or cheese – with something dirty or dystopian, one might opt to frame the practice in terms of novelty, sustainability or understanding Indigenous cuisines. Or reframe the food by relating it to an aspirational identity, such as being an adventurous eater.
  • Perhaps most radical of all is the possibility of finding enjoyment in our negative reactions to food. Much as people can take pleasure in horror films or rollercoasters, there’s a kind of pleasure – paradoxical as it might seem – in negative aesthetic experiences like fear, discomfort, maybe even disgust. Food is, after all, a form of entertainment. Today’s diners seek out far-flung restaurants and obscure, hard-to-attain experiences; in the past, elaborate feasts and banquets were designed to surprise, thrill and even frighten guests. Discomfort can push us beyond our comfort zones; embracing the identity of an adventurous eater can help reposition foods: not so much ‘disgusting’ as ‘challenging’ or ‘adventurous’. Anyone who has experienced the thrill of swallowing that first raw oyster will be familiar with the sensation of conquering food fear.
  • More than a decade after encountering that first article on ‘in vitro meat’, I found myself in a studio apartment in Manhattan, having my own thrilling experience with that least thrilling of foods: chicken. I stood over the stove, looking at an apparently unremarkable package of shredded chicken. Unremarkable, unless you count the fact that said chicken was produced entirely from cells grown in a bioreactor: chicken without the chicken. As the chicken cooked, I listened to it sizzle and watched it brown. The future of food might be this: a new technology that recreates our most familiar foods. It might be a return to the foods we ate millennia ago. But whatever it is, it won’t be disgusting.
Author Narrative
  • Alexandra Plakias is associate professor of philosophy at Hamilton College in New York. She is the author of Thinking Through Food: A Philosophical Introduction (2019) and Awkwardness (2024).
Notes
Paper Comment
  • Sub-Title: "The future of food is nutritious and sustainable – if we can overcome our instinctual revulsion to insects and lab-grown meat"
  • For the full text see Aeon: Plakias - Adjust your disgust.



"Polansky (David) - The battles over beginnings"

Source: Aeon, 08 March 2024


Author's Introduction
  • Friedrich Nietzsche once wrote: ‘Mankind likes to put questions of origins and beginnings out of its mind.’ With apologies to Nietzsche, the ‘questions of origins and beginnings’ are in fact more controversial and hotly debated. The ongoing Israel-Gaza war has reopened old debates over the circumstances of Israel’s founding and the origins of the Palestinian refugee crisis. Meanwhile, in a speech he gave on the eve of Russia’s invasion of Ukraine in February 2022, Vladimir Putin insisted that ‘since time immemorial’ Russia had always included Ukraine, a situation that was disrupted by the establishment of the Soviet Union. And in the US, The New York Times’ 1619 Project generated no small amount of controversy by insisting that the United States’ real origins lay not with its formal constitution but with the introduction of slavery into North America.
  • In other words, many conspicuous political disputes today have a way of returning us to the beginnings of things, of producing and being waged in part through strong claims about origins. Yet doing so rarely helps resolve them. Because these debates have become ubiquitous, we may not realise how unusual our preoccupation with political origins really is. Beginnings are, after all, far removed from the issues at hand as to be a source of leverage in ongoing controversies or a source of controversy themselves. Why should the distant past matter more than the recent past or the present? To better understand why we remain bedevilled by the problem of origins, and perhaps to think more clearly about them in the first place, it may help to turn to a familiar but unexpected source: Machiavelli.

Author's Conclusion
  • Accident and force still lie beneath the surface of our day-to-day politics, threatening to re-emerge. This is not an easy thing to accept. Even in quieter times, our consciences still trouble us, like Shakespeare’s Bolingbroke after he deposes Richard II. Moreover, we want to see our own foundations as not only just but secure. To see them otherwise is to acknowledge that our circumstances remain essentially in a state of flux. If all things are in motion, then what shall become of us?
  • Something like this anxiety seems to lie behind how we talk about political origins today. And, thinking with Machiavelli, we can see how the liberal tradition of political thought going back hundreds of years now has not prepared us well to think ethically about our historical origins. The result, when confronted with the subject, tends to be either a flight into defensive nationalism or moralistic condemnation.
  • While Machiavelli’s work can easily read like cynicism, a decent measure of cynicism is just realism. And an attitude of realism about political life can inoculate us from both sanctimony and despair, allowing us to honestly acknowledge the crimes that contributed to the formations of our own political societies without requiring us to become despisers of our countries.
  • Similarly, it would be easy enough to read Machiavelli as debunking the edifying tales that surround the foundation of new societies, from the myths of ancient Greece to modern Independence Day celebrations. ‘This is what really happened,’ he seems to say. But it is important to recognise that his account of political origins is not intended to be incriminating but instructive.
  • For his work also bears a warning: the lawless and uncertain conditions surrounding our origins reflect enduring possibilities in political life. These are crucial moments in which our existing laws are revealed to be inadequate, because they were formulated under different circumstances than those we may presently face, thus requiring daring acts of restoration undertaken in the same spirit in which the laws were originally established.
  • We may not be obliged to follow directly in the footsteps of such tyrannical figures as Cleomenes of Sparta or the Medici of medieval Florence, all of whom employed terrible violence in the acts of restoration. But we may learn from such examples of the dramatic stakes involved in maintaining our political order – as the philosopher Claude Lefort put it in his magisterial 2012 work on Machiavelli: ‘This is the truth of the return to the origin; not a return to the past, but, in the present, a response analogous to the one given in the past.’
  • This is part of the value we gain from reading Machiavelli: facing the troubling implications of our own origins may help us better prepare ourselves for the continued vicissitudes of political life. After all, it may be that our own established order is the only thing standing in the way of someone else’s new origins.
Author Narrative
  • David Polansky is a research fellow with the Institute for Peace and Diplomacy. His writing has appeared in Quillette, New Criterion and The Review of Politics, among others. He lives in Toronto, Canada.
Notes
  • This is an important and complex issue. I agree that the origins of all societies are murky and often arose from violence. The borders of all inland countries are somewhat arbitrary and hardly static. Occasionally they are badly drawn and cry out for redefinition.
  • The author deals with the topic well, but the connection to Machiavelli is a little strained on occasion.
  • I've got several copies of Niccolo Machiavelli - The Prince, but have downloaded a pdf from Google Books to read when out walking Bertie the dog.
  • There are 22 Comments on Aeon which I need to read. Most of them look rather annoying. There are a few replies by the author.
  • I'll add to these comments when I've read them.

Paper Comment



"Price (Huw) & Wharton (Ken) - Untangling entanglement"

Source: Aeon, 29 June 2023


Excerpts
  • The term collider comes from causal modelling, the science of inferring causes from statistical data. Causal modellers use diagrams called directed acyclic graphs (DAGs), made up of nodes linked by arrows. The nodes represent events or states of affairs, and the arrows represent causal connections between those events. When an event has two independent contributing causes, it is shown in a DAG as a node where two arrows ‘collide’.
  • Starting with retrocausality, our recipe goes like this, in four easy steps:
    1. Retrocausality automatically introduces colliders into Bell experiments, at the point where the two particles are produced. Alice and Bob’s choices of measurement both feed back into the past, to influence the particles at this point.
    2. That’s interesting because colliders produce collider bias and causal artefacts – correlations that look like they involve causation, but really don’t.
    3. But constraining a collider can turn a causal artefact into a real connection across the collider, as shown in Figure 4. Because of the constraint, a different choice on Alice’s side sometimes requires a different outcome on Bob’s side, and vice versa.
    4. In the case of colliders in the past, as in Figure 3, constraint is easy. It just follows from normal initial control of experiments.
  • Hypothesis: quantum entanglement is connection across constrained colliders (CCC), where the colliders result from retrocausal influence on the source of pairs of entangled particles, and the constraint results from normal initial control of the experiments that produce such particles.
  • Conclusion: We’ve seen that quantum entanglement looked like magic, by the standards of some of the pioneers who discovered it. It still looks very strange, even to the physicists who have just won Nobel Prizes for proving that it is real. Any coherent explanation of it seems likely to combine some unexpected elements, and to require a careful analysis of how causes interact with each other, down at the level where we can’t see all the effects. The biggest surprise, in our view, is how few ingredients the explanation seems to need – and how simple the recipe is for putting them together.
Author Narrative
  • Huw Price is a Distinguished Professor Emeritus at the University of Bonn and an Emeritus Fellow of Trinity College, Cambridge. His most recent book is The Practical Turn: Pragmatism in Britain in the Long 20th Century (2017).
  • Ken Wharton is professor of physics and astronomy at San José State University. His research is in the field of quantum foundations.
Notes
  • I read this paper on my Kindle while walking Bertie the dog, and consequently got little out of it.
  • I found the use of the term 'collider' in a sense that has nothing to do with particle accellerators very confusing.
  • See:-
    Wikipedia: Collider (statistics)
    Collider Bias
    Wikipedia: Berkson's paradox
  • I was disappointed to find that collider bias is not mentioned in "Spiegelhalter (David) - The Art of Statistics: Learning from Data"; not even colliders or Berkson.
  • I need to read it again while sitting at my desk!
  • That said, it's all 'frontiers' stuff, so the authors are unlikely to be correct. As they suggest, using one of my favourite expressions, some readers may think their cure may be worse than the disease, involving - as it does - retrocausality.

Paper Comment



"Prum (Richard O.) - Artists of our own lives"

Source: Aeon, 19 January 2024


Author's Conclusion
  • All of the most complex and distinctive innovations as human beings – our creative intelligences, our diverse personalities, our rich psychological capacities, and our facility for language – are examples of performative evolutionary novelties. In the same way that gender performativity identifies the role of agents in our social environment in our gender becomings, human psychology, intelligence and language capacity all arose through coevolutionary feedback between our developing brains and our increasingly complex social environments, which cannot be encoded within our genomes. Our sensory and subjective experiences are not projections of data onto neural algorithms, but dynamic actions taken by every individual in interaction with the world. More than any other aspects of the human phenotype, human psychology, personality and behaviour are explicitly Austinian performatives – doings in our social world.
  • The performative understanding of our physical bodies and minds provides a single, unified conceptual framework for simultaneous science-culture enquiry and dialogue about evolution, biology, genetics, psychology and gender/sex. This shift in scientific thinking is not an accommodation to culture but, rather, an improvement in scientific understanding in its own right. Biology needs queer theory to get the body right. Although some may find this surprising, it means simply that scientists can learn about science from people who are not scientists.
  • Performative biology establishes an intellectually queer space in the heart of molecular genetics, developmental biology and evolutionary biology. Scientifically, performative biology calls for a renewed focus of biological research on all levels in the complex hierarchical process of organismal development and evolution, not just on genes. But this metaphoric revolution will require some fundamental rethinking about how biology is taught, researched and funded. Performative biology also extends an invitation to develop new directions and priorities in scientific research, by new generations of researchers, conceived in the fact that organisms are complex, individual becomings, not material products of a blueprint. Lastly, performative biology establishes an intellectual path for queer and trans scientists and their allies to pursue scientific research in molecular biology, developmental biology, evolutionary biology and psychology in ways that connect personally and intellectually to their lived experiences and identities.
Author Narrative
  • Richard O Prum is the W R Coe Professor of Ornithology at Yale University. He is the author of The Evolution of Beauty (2017), which was a finalist for the 2018 Pulitzer Prize in General Non-Fiction, and Performance All the Way Down: Genes, Development, and Sexual Difference (2023).
Notes
  • This is clearly a plug for the author's latest book which - for once - isn't prohibitively expensive.
  • It makes many important points that I think are well-known (see, for example, "Ball (Philip) - How to Grow a Human: Reprogramming Cells and Redesigning Life"); namely that the 'genetic blueprint' paradigm is simplistic and that how genes are expressed and do their work is very much context-sensitive; the context being within the organism itself.
  • This paper doesn't really engage with 'nature versus nurture', but suggests that nature itself is more context-sensitive, and less 'binary', than might be expected.
  • It's also keen to stress 'performativity': cells 'do things' - it's not just software. Well, who'd have guessed?
  • But this leads on to (what I took to be irrelevant) sidelines into the philosophy of Performativity (see Wikipedia: Performativity) and the invocation of J.L. Austin's speech act theory, as taken up by 'feminist queer theorists' like Judith Butler (see Wikipedia: Judith Butler).
  • I didn't find the invocation of 'feminist queer theory' in the least helpful. I think the complexity of the interaction between genes and the body just demonstrates how things can go wrong (as is the traditional understanding) and has nothing to do with 'diversity'. The author argues that the sciences can learn methodologically from the arts, but I think he's just seeing parallels.
  • I'd had a quick look into 'queer theory' before in relation to "Calarco (Matthew) - Thinking Through Animals: Identity, Difference, Indistinction". See:-
    Wikipedia: Queer,
    Wikipedia: Queer Theory, and
    Wikipedia: Queer Studies
  • For the author, see:-
    Wikipedia: Richard Prum, and
    Yale: Richard Prum
  • From the paper, I'd expected him to be 'trans' or at least non-binary, but he seems perfectly 'cis' and 'normal'.
  • Anyway, I think this could do with a second - maybe more sympathetic - reading. I'll add it to the ever-lengthening queue.
  • It applies to my Notes on Genetics and Narrative Identity.
  • There are some Comments I need to engage with - saved for future reading!

Paper Comment
  • Sub-Title: "The genome is the starting point for a performance we enact over a lifetime, not a blueprint we’ve got to follow"
  • For the full text see Aeon: Prum - Artists of our own lives.



"Qureshi-Hurst (Emily) - Many worlds, many selves"

Source: Aeon, 28 November 2024


Author's Introduction
  • Recently, I was caught on the horns of a dilemma. I had a decision to make and, either way, I knew my life would follow a different track. On one path, I accept a job offer: it’s an incredible opportunity, but means relocating hundreds of miles away, with no social network. On the other, I stay in Oxford where I’d lived for a decade: less adventure, but close to my friends and family. Both options had upsides and downsides, so I wished that I could take the job and turn it down, somehow living each life in parallel.
  • Well… there was potentially a way to make this happen. I could have my cake and eat it too.
  • This will seem odd at first, but bear with me. There are smartphone apps that can help you decide between two options by harnessing the unpredictable quirks of quantum mechanics. But this is no ordinary coin toss, where randomness decides your fate. Instead, it guarantees that both choices become realities.
  • You open the app and request a measurement of a photon, which forces it to occupy a binary state, such as ‘spin up’ or ‘spin down’. In my case, ‘spin up’ meant accept the job and ‘spin down’ meant decline. You will see only one result but, in theory, another you will see the opposite, in a different universe. From that moment, two versions of you co-exist, living in parallel.
  • It’s inspired by the ‘Many-Worlds’ interpretation of quantum mechanics ("Carroll (Sean M.) - Splitting the Universe"), first proposed by the physicist Hugh Everett III in his doctoral dissertation in the 1950s (Everett - On the Foundations of Quantum Mechanics, short thesis as defended, 1957). He argued that our Universe branches into multiple worlds every time a quantum event takes place – and thousands happen every second. While this idea seems fantastical, a growing number of scientists and philosophers think this is how our world really works. In fact, if the Many-Worlds interpretation of quantum mechanics is true, then the splitting of worlds is not only possible, it is ubiquitous.
  • As a philosopher of religion, I am interested in how this mind-boggling scientific theory might force us to reexamine even our most deeply held beliefs. In fact, I believe that the Many-Worlds interpretation of quantum mechanics encourages us to radically reconceptualise our understanding of ourselves. Perhaps I am not a single, unique, enduring subject. Perhaps I am actually like a branching tree, or a splitting amoeba, with many almost identical copies living slightly different lives across a vast and ever-growing multiverse. I also believe that this picture encourages us to rethink our ideas about moral responsibility, and what religion tells us about God – maybe, even, abandon the traditional idea of God altogether.

Author's Conclusion

    So, if the Everettian multiverse is reality, we exist as continually branching selves like splitting amoeba, our ideas of morality are turned upside down, and there are compelling arguments for why there is no God overseeing it all.
  • We are also left with something of an identity crisis: it’s not even clear if we ‘survive’ from moment to moment, if branching is a kind of a death. How do I make sense of that personally? One solution I am currently exploring is the idea that who we are is determined by a narrative thread of our own weaving. On this view, who I am is no more nor less than my own internal self-conception, shaped by memory, desire, emotion, experience and embodiment. On this view, it does not matter whether there are copies of me or not, and I should not care which philosophical principles these copies might violate. All that matters is that who I am is decided by me – it is subjective, not objective.
  • What this means is that instead of human beings having a core essence – something like an eternal and indivisible soul – we are collections of stories told and retold by us and those who love us. We are rivers, ever fluid, whose banks are shaped by sediments of stories stored in our depths.
  • The late philosopher Daniel Dennett held a similar view. According to him, the self is like a centre of gravity, a useful fiction that theoretical physicists and ordinary people alike construct to make sense of the world. For Dennett, all we have are evolving identities tied together by an autobiographical thread that is ever in flux. The self is an abstract construct; it does not really exist.
  • It would also mean that, if you are ever caught in a seemingly impossible dilemma, there is a way for you to have your cake and eat it too. I decided not to accept my job offer in this universe, but I like to think that another Emily did. In the many worlds of the Everettian multiverse, we have an infinite number of futures laid out before us. No more Sophie’s choice-type decisions need keep you up at night, instead you can flip a quantum coin and live both lives (even if you won’t ever know how the other future works out). Although we know that some of our futures contain suffering, perhaps we can take comfort in the idea that somewhere out there at least one version of us is living the best life possible.
  • Finally, it might encourage us to abandon religious ideas written in different historical contexts and shaped by different metaphysical understandings of the Universe. Perhaps there is no God. Instead of this leaving us with a cold and meaningless cosmos, I think it gives us radical autonomy to create our own systems of meaning. What matters, what is good and valuable, is up to us to decide.
  • Whatever solutions we eventually land on, what’s clear is that this radical and mind-bendingly weird interpretation of quantum mechanics raises big questions about ourselves, the Universe, and the existence of God. I, for one, can’t wait to see where the quantum physics takes us. Wherever we end up, it’ll be a wild ride.
Author Narrative
  • Emily Qureshi-Hurst is a philosopher specialising in theology and natural science in the Faculty of Divinity at the University of Cambridge, UK. She is interested in the interaction of science and religion, particularly physics and Christianity, and is the author of God, Salvation, and the Problem of Spacetime (2022) and Salvation in the Block Universe: Time, Tillich, and Transformation (2024).
Notes
Paper Comment



"Reed (Philip) - Why so many plagiarists are in denial about what they did wrong"

Source: Aeon, 01 February 2024


Author's Introduction
  • Most college professors have had to deal with plagiarised papers from their students, and my experience is no exception. The problem has long been widespread. In one anonymous survey of tens of thousands of college students in the United States and Canada, more than a third admitted to plagiarism in some form – and this was before they had access to a near-perfect method (in ChatGPT) for submitting work that is not their own.
  • I recently spent several years as a dean at my university, where I oversaw the response to violations of academic integrity across the arts and science disciplines. In almost all cases of plagiarism, students gave the same defence: I didn’t mean to. They insisted it was the result of a mistake, sloppy note-taking, or misunderstanding the rules.
  • The accusation of plagiarism can feel formidable. Denying intent allows one to try to sidestep the accusation. ‘Perhaps I’m guilty in some technical sense, but I’m not dishonest!’ But does a lack of intent matter? Is there really a complete lack of intent? And what does this tell us about a writer’s moral character?
Author's Conclusion
  • The minority of scholars who maintain that ‘accidental’ plagiarism isn’t really plagiarism or isn’t a matter of academic integrity conflate the commission of an act with the act’s appropriate consequences or with the plagiarist’s blameworthiness. Moreover, these scholars assume a false dichotomy, where cheating is always intentional and deserving of punishment, and textual errors like sloppy paraphrasing or missing citations are always unintentional – merely matters of honest confusion.
  • To care about critical thinking is to care about plagiarism. Learning to think means learning to write. The tasks of thinking and writing, appropriately pursued, require distinguishing words and ideas that you put on paper yourself from those that you get from somewhere else. When we write out our own ideas, we make an attempt to understand and impose order on a perplexing world. This is different from the also-valuable task of seeing how others do the same.
  • We have to care about half-intentional plagiarism because it helps us see why plagiarism matters at all. If we are not careful about the distinction between our own words and someone else’s, then we risk losing the significance of the distinction altogether.
Author Narrative
  • Philip Reed is a professor of philosophy at Canisius University in Buffalo, New York State. His scholarly interests are in ethics and moral psychology.
Notes
  • I thought this was a very tricky paper, but is highly relevant to my Thesis (or at least the writing-up thereof).
  • The problem is that the term 'plagiarism' (like 'murder') is a morally 'thick' concept which carries within it the presupposed condemnation of the act (and agent).
  • As a commentator has pointed out, we have a separate term for 'involuntary manslaughter' where a killing is accidental. It doesn't carry the same degree of moral condemnation.
  • So, I think intent does matter. Maybe not to the fact but to the ethical evaluation.
  • It also depends on the context - whether financial or reputational gain is involved.
  • Also, some phrases are in the public domain, and we can't cumber about all we write with attributions.
  • Yes - if what we're writing is a training exercise - from coursework to a PhD Thesis, we have to follow the rules, and failure to do so shows at least incompetence in that regard. But it may not demonstrate wickedness.
  • The author may be right that plagiarism is more like speeding than theft. I suppose you might steal something accidentally (say a valuable accidentally fell into your pocket), but would it then be 'stealng'? But 'speeding' comes in degrees, and while the fine may be the same - and relative to the egregiousness of the offence - the moral condemnation should respect intent. Say you didn't know the speed limit, or were speeding to a hospital (and weren't being reckless). Moral luck tends to enter into the equation as well.
  • The passage on 'precises' was interesting. I do a lot of this, and when quoting a long passage include this in my bespoke colour scheme to indicate that these are not my words (usually adding paragraphing and quotation marks. But in occasional sentences I often can't be bothered as it gets distracting - both for me as a writer and for the reader. After all, the context is that I'm giving an account of someone else's thought, not my own. What I'm more usually concerned about is accidentally introducing my own thought into the account that others may mistake for the original author's.
  • But ultimately, 'plagiarism' is an element of a game academics - mostly in the liberal arts - play, and those not in this game can ignore it to a degree. The same can't be said of copyright infringement, unfortunately. But that's as much a legal as moral issue, especially in musical quotations (attitudes to this have changed over time).
  • A commentator accusing Descartes of plagiarising St. Augustine is anachronistic. So - I would argue - is equating 'the spoils of war' with 'looting'.
  • The Comments are interesting, and I've reserved them for further consideration.
  • I've tagged this paper as relevant to my Note on Thesis - Method & Form, but it might also apply to Metaphilosophy (which at least has a reading list).

Paper Comment



"Requarth (Tim) - Our chemical Eden"

Source: Aeon, 11 January 2016


Author's Conclusion
  • In any origin-of-life theory the truth is hidden by billions of years of history, but Russell’s focus on energy flow turns that obstacle into something that is almost poetically obscure. ‘I don’t like the term “origin”,’ he said. ‘We really ought to call it “emergence”.’ If you think of life in terms of energy, then life’s emergence connects back to the very source of energy flow, the Big Bang itself. At that moment, Russell wrote in a 2013 paper, the Universe was at ‘nearly infinite thermodynamic stress’. The evolution of the cosmos has been propelled by the dissipation of that stress. Paradoxically, the most efficient way to get rid of the stress – to advance from order to disorder – is to create transient but ordered systems, akin to the whirlpool in the bathtub or a tornado in a storm. ‘All order in the Universe,’ Russell and his co-authors concluded, ‘is presumably born of this paradox.’
  • Life is one such oasis of order. After the Big Bang, the Universe could have, in principle, expanded into an even distribution of matter and energy. If that were the case, nothing could have ever happened, and nothing – including life – could have ever formed. But instead, something happened. Quantum fluctuations in the structure of space, perhaps, disrupted the balanced distribution of matter and energy, and set into motion a cosmic accumulation of structure and organisation. Particles clumped together; minute differences in gravitational fields attracted other particles; soon there were regions of matter bound by gravity, and enormous expanses of relatively empty space.
  • Some of that matter collapsed to form stars. Disks of gas and dust around those stars accreted into planets with molten cores, ceaseless tectonic shifting, and roiling volcanic activity. Hydrothermal convection arose from this disequilibrium, driving serpentinisation, the mineralogical process that forms hydrothermal vent structures – Russell’s chemical gardens. On one planet, at least, those mineral towers channelled the disequilibrium of the planet’s geology into chemistry, and complex proto-metabolic systems eventually led to life. In this view, life’s origin isn’t an origin at all, but just another step in a sequence of events set in motion by the Big Bang.
  • Thinking of life in terms of energy challenges the very definition of life. ‘It’s not what life is,’ Russell said. ‘It’s what life does.’ After all, you replace each atom in your body, on average, every few years. In that sense, life isn’t a thing so much as a manner of being, a restless fit of destruction and creation. If it can be defined at all, it is this: life is a self-sustaining, highly organised flux, a natural way that matter and energy express themselves under certain conditions.
  • Russell’s conception of our species, along with every other living thing, as mere energy patterns, ultimately born of rogue fluctuations in the Universe’s infancy, might make us feel a little less special. Then again, it might also make us feel a little less alone. We are descendants of an unbroken energetic lineage from the dawn of time. Darwin intuited this deep link between biology and physics, speculating that it is ‘probable that the principle of life will hereafter be shown to be part, or consequence, of some general law’. And, he might have added, there’s grandeur in this view of life, too.
Author Narrative
  • Tim Requarth is a freelance journalist whose writing has appeared in The New York Times Magazine, Foreign Policy, and Scientific American, among others. He lives in New York City.
Notes
  • This is a very interesting article, though maybe a bit over-confident in places.
  • It hales from 2016, so research may have moved on a bit since then.
  • I'm convinced that the development of 'life' in hydrothermals is a better bet either than
    → 'the primordial soup'
    → 'panspermia'
    → 'replicating clays'
  • I'll supply references shortly
  • I agree with the tag:-
    → Why life is not a thing but a restless manner of being
  • This paper relates more to my Notes on Life than on Evolution as the latter only applies after the process has got started.
  • The comments might be worth reading - I've had a quick scan and copied them lest they disappear.
  • One book recommended and reviewed by the author in the comments was
    → The Vital Question: Energy, Evolution, and the Origins of Complex Life. By Nick Lane. W.W. Norton & Co. 368 pages. $28.
    Requarth - Review - Lane - Taking on ‘The Vital Question’ About Life
    I've saved the review.
  • I'll add further comments in due course.

Paper Comment
  • Sub-Title: "To figure out the origin of life might take a conceptual shift towards seeing it as a pattern of molecular energy"
  • For the full text see Aeon: Requarth - Our chemical Eden.



"Robson (David) - ‘Like a film in my mind’: hyperphantasia and the quest to understand vivid imaginations"

Source: The Observer On-line, 20 April 2024


Full Text
  • Research that aims to explain why some people experience intense visual imagery could lead to a better understanding of creativity and some mental disorders
  • William Blake’s imagination is thought to have burned with such intensity that, when creating his great artworks, he needed little reference to the physical world. While drawing historical or mythical figures, for instance, he would wait until the “spirit” appeared in his mind’s eye. The visions were apparently so detailed that Blake could sketch as if a real person were sitting before him.
  • Like human models, these imaginary figures could sometimes act temperamentally. According to Blake biographer John Higgs, the artist could become frustrated when the object of his inner gaze casually changed posture or left the scene entirely. “I can’t go on, it is gone! I must wait till it returns,” Blake would declaim.
  • Such intense and detailed imaginations are thought to reflect a condition known as hyperphantasia, and it may not be nearly as rare as we once thought, with as many as one in 30 people reporting incredibly vivid mind’s eyes.
  • Just consider the experiences of Mats Holm, a Norwegian hyperphantasic living in Stockholm. “I can essentially zoom out and see the entire city around me, and I can fly around inside that map of it,” Holm tells me. “I have a second space in my mind where I can create any location.”
  • This once neglected form of neurodiversity is now a topic of scientific study, which could lead to insights into everything from creative inspiration to mental illnesses such as post-traumatic stress disorder and psychosis.
  • Francis Galton1 – better known as a racist and the “father of eugenics” – was the first scientist to recognise the enormous variation in people’s visual imagery. In 1880, he asked participants to rate the “illumination, definition and colouring of your breakfast table as you sat down to it this morning”. Some people reported being completely unable to produce an image in the mind’s eye, while others – including his cousin Charles Darwin – could picture it extraordinarily clearly.
  • “Some objects quite defined. A slice of cold beef, some grapes and a pear, the state of my plate when I had finished and a few other objects are as distinct as if I had photos before me,” Darwin wrote to Galton.
  • Unfortunately, Galton’s findings failed to fire the imagination of scientists at the time. “The psychology of visual imagery was a very big topic, but the existence of people at the extremes somehow disappeared from view,” says Prof Adam Zeman at Exeter University. It would take more than a century for psychologists such as Zeman to take up where Galton left off.
  • Even then, much of the initial research focused on the poorer end of the spectrum – people with aphantasia, who claim to lack a mind’s eye. Within the past five years, however, interest in hyperphantasia has started to grow, and it is now a thriving area of research.
  • To identify where people lie on the spectrum, researchers often use the Vividness of Visual Imagery Questionnaire (VVIQ), which asks participants to visualise a series of 16 scenarios, such as “the sun rising above the horizon into a hazy sky” and then report on the level of detail that they “see” in a five-point scale. You can try it for yourself. When you picture that sunrise, which of the following statements best describes your experience?
    1. No image at all, you only “know” that you are thinking of the object
    2. Vague and dim
    3. Moderately clear and lively
    4. Clear and reasonably vivid
    5. Perfectly clear and as vivid as real seeing
  • The final score is the sum of all 16 responses, with a maximum of 80 points. In large surveys, most people score around 55 to 60. Around 1% score just 16; they are considered to have extreme aphantasia; 3%, meanwhile, achieve a perfect score of 80, which is extreme hyperphantasia.
  • The VVIQ is a relatively blunt tool, but Reshanne Reeder, a lecturer at Liverpool University, has now conducted a series of in-depth interviews with hyperphantasic people – research that helps to delineate the peculiarities of their inner lives. “As you talk to them, you start to realise that this is a very different experience from most people’s experience,” she says. “It’s extremely immersive, and their imagery affects them very emotionally.”
  • Some people with hyperphantasia are able to merge their mental imagery with their view of the world around them. Reeder asked participants to hold out a hand and then imagine an apple sitting in their palm. Most people feel that the scene in front of their eyes is distinct from that inside their heads. “But a lot of people with hyperphantasia – about 75% – can actually see an apple in the hand in front of them. And they can even feel its weight.”
  • As you might expect, these visual abilities can influence career choices. “Aphantasia does seem to bias people to work in sciences, maths or IT – those Stem professions – whereas hyperphantasia nudges people to work in what are traditionally called creative professions,” says Zeman. “Though there are many exceptions.”
  • Reeder recalls one participant who uses her hyperphantasia to fuel her writing. “She said she doesn’t even have to think about the stories that she’s writing, because she can see the characters right in front of her, acting out their parts,” Reeder recalls.
  • Hyperphantasia can also affect people’s consumption of art. Novels, for example, become a cinematic experience. “For me, the story is like a film in my mind,” says Geraldine van Heemstra, an artist based in London. Holm offers the same description. “When I listen to an audiobook, I’m running a movie in my head.”
  • This is not always an advantage. Laura Lewis Alvarado, a union worker who is also based in London, describes her disappointment at watching The Golden Compass, the film adaptation of the first part of Philip Pullman’s His Dark Materials. “I already had such a clear idea of how every character looked and acted,” she says. The director’s choices simply couldn’t match up.
  • Zeman’s research suggests that people with hyperphantasia enjoy especially rich autobiographical memories. This certainly rings true for Van Heemstra. When thinking of trips in the countryside, she can recall every step of her walks, including seemingly inconsequential details. “I can picture even little things, like if I dropped something and picked it up,” she says.
  • Exactly where these abilities come from is unknown. Aphantasia is known to run in families, so it’s reasonable to expect that hyperphantasia may be the same. Like many other psychological traits, our imaginative abilities probably come from a combination of nature and nurture, which will together shape the brain’s development from infancy to old age.
  • Zeman has taken the first steps to investigate the neurological differences that underpin the striking variation in the mind’s eye. Using fMRI to scan the brains of people at rest, he has found that hyperphantasic people have greater connectivity between the prefrontal cortex, which is involved in “higher-order” thinking such as planning and decision-making, and the areas responsible for visual processing, which lie towards the back of the skull.
  • “My guess is that if you say ‘apple’ to somebody with hyperphantasia, the linguistic representation of ‘apple’ in the brain immediately transmits the information to the visual system,” says Zeman. “For someone with aphantasia, the word and concept of ‘apple’ operate independently of the visual system, because those connections are weaker.”
  • Further research will no doubt reveal the nuances in this process. Detailed questionnaires by Prof Liana Palermo at the Magna Graecia University in Catanzaro, Italy, for instance, suggest that there may be two subtypes of vivid imagery. The first is object hyperphantasia, which, as the name suggests, involves the capacity to imagine items in extreme detail.
  • The second is spatial hyperphantasia, which involves an enhanced ability to picture the orientation of different items relative to one another and perform mental rotations. “They also report a heightened sense of direction,” Palermo says. This would seem to match Holm’s descriptions of the detailed 3D cityscape that allows him to find a route between any two locations.
  • Many mysteries remain. A large survey by Prof Ilona Kovács, at Eötvös Loránd University in Hungary, suggests that hyperphantasia is far more common among children, and fades across adolescence and into adulthood. She suspects that this may reflect differences in how the brain encodes information. In infancy, our brains store more sensory details, which are slowly replaced by more abstract ideas. “The child’s memories offer a more concrete appreciation of the world,” she says – and it seems that only a small percentage of people can maintain this into later life.
  • Reeder, meanwhile, is interested in studying the consequences of hyperphantasia for mental health. It is easy to imagine how vivid memories of upsetting events could worsen the symptoms of anxiety or post-traumatic stress disorder, for example.
  • Reeder is also investigating the ways that people’s mental imagery may influence the symptoms of illnesses such as schizophrenia. She suspects that, if someone is already at risk of psychosis, then hyperphantasia may lead them to experience vivid hallucinations, while aphantasia may increase the risk of non-sensory delusions, such as fears of persecution.
  • For the moment, this remains an intriguing hypothesis, but Reeder has shown that people with more vivid imagery in daily life are also more susceptible to seeing harmless “pseudo-hallucinations” in the laboratory. She asked participants to sit in a darkened room while watching a flickering light on a screen – a set-up that gently stimulates the brain’s visual system. After a few minutes, many people will start to see simple illusions, such as geometric shapes. People with higher VVIQ scores, however, tended to see far more complex scenes – such as a stormy beach, a medieval castle or a volcano. “It was quite psychedelic,” says Lewis Alvarado, who took part in the experiment.
  • Reeder emphasises that the participants in her study were perfectly able to recognise that these pseudo-hallucinations were figments of their imagination. “If someone never has reality discrimination issues, then I don’t think they’re going to be more prone to psychosis.” For those with mental illness, however, a better understanding of the mind’s eye could offer insights into the patient’s experiences.
  • For now, Reeder hopes that greater awareness of hyperphantasia will help people to make the most of their abilities. “It’s a skill that could be tapped,” she suggests.
  • Many of the people I have interviewed are certainly grateful to know a little more about the mind’s eye and the way theirs differs from the average person’s.
  • Lewis Alvarado, for instance, only came across the term when she was listening to a podcast about William Blake, which eventually led her to contact Reeder. “For the first month or so I couldn’t get it out of my head,” she says. “It’s not something I talk about loads, but I think it has helped me to realise why I experience things more intensely, which is comforting.”
    → David Robson, Sat 20 Apr 2024 15.00 BST
  • David Robson is the author of The Laws of Connection: 13 Social Strategies That Will Transform Your Life, published by Canongate on 6 June 2024 (£18.99).
  • © 2024 Guardian News & Media Limited or its affiliated companies. All rights reserved.

Paper Comment




In-Page Footnotes ("Robson (David) - ‘Like a film in my mind’: hyperphantasia and the quest to understand vivid imaginations")

Footnote 1:
  • See Wikipedia: Francis Galton.
  • Galton made lasting contributions to a number of fields. I dislike the focus on certain aspects of his work that are distasteful to current sensibilities, though not to his contemporaries.



"Ross (Josephine) & Doherty (Martin) - How do we start learning to ‘read’ other people’s minds?"

Source: Aeon, 18 March 2025


Authors' Introduction
  • ... It’s not until the age of four or five that children develop this flexible ‘theory of mind’: the ability to reason about others’ thoughts and feelings. This includes understanding that people’s beliefs can be decoupled from reality (sometimes to comic effect). This ability to ‘read’ other people’s minds is crucial for social interaction. Which direction is that car going to turn? The indicator lets you infer the driver’s intention. Does that attractive acquaintance like you? Their eye contact and flick of the hair might give you a sign. Are we communicating about this topic well? To answer that question, we have to imagine what the reader will need to know in order to follow our train of thought.
  • Theory of mind has been a topic of intense study since the 1980s, when psychologists first developed ‘false belief’ tasks. These tasks involve asking children to reason about other people’s behaviour in scenarios where those people don’t have full information. For example, Mum leaves her car keys on the kitchen table; when she is gone, Dad puts the car keys in his pocket. Mum comes back to the kitchen – where will she look for her keys? Children under the age of four predict that Mum will look in Dad’s pocket; after all, that’s where the keys are. However, older children understand that Mum will expect the keys to be on the table, and that Dad might be in trouble.
  • An enduring question, one for which we and other scientists are still seeking answers, is what happens around that age to make this critical ability possible. How, exactly, do we become ‘mind readers’?

Authors' Conclusion
  • Insights into how theory of mind emerges are important from a practical perspective. The development of this ability to reason about other minds can be disrupted by environmental factors, such as a lack of opportunity for social interaction and language development, and by neurodivergence, which is sometimes associated with difficulty in social reasoning. To support children in developing theory of mind, experts might usefully target its developmental building blocks. That could include focusing, in part, on supporting a child’s self-reflection and self-control. For example, while some researchers have sought to help children develop theory of mind with the aid of thought bubbles that illustrate story characters’ thoughts and beliefs, a more accessible starting point for many children might be encouraging them to engage in introspection – perhaps by externalising their own thought processes in this concrete way (eg, within bubbles).
  • However, it is crucial to consider whether the developmental foundations of this skill set are universal. Researchers are currently pulling together to address a pervasive cultural bias in developmental psychology: the majority of information we have on child development is based on studies with Western populations. In Western societies, people tend to consider a principal goal of child development to be gaining independence and self-reliance – becoming the ‘main character’ in one’s own life. But, in much of the world, a principal goal is to support a child to be interdependent, to fit in with their social surroundings to sustain the greater good. For example, in Japan, developing omoiyari – a concept similar to empathy and sympathy – is a key goal of child-rearing.
  • Perhaps surprisingly, given this social focus, research suggests that children growing up in ‘interdependent’ societies tend to develop these abilities differently, demonstrating self-control earlier but theory of mind later than their Western counterparts. This apparent difference could be because standard theory of mind tasks have been developed using a Western lens, focusing on an individual’s intent as the key driver of behaviour. However, it could also be due to substantial cross-cultural variation in the way people mentally represent self and other.
  • Findings from developmental research, including other work we’ve done, indicate that there could be significant cultural differences in the stepping stones to theory of mind. In this research, we asked children in Scotland and Japan to complete the tasks described previously. In both groups, we again found that introspective ability was related to self-control. However, neither ability was strongly related to Japanese children’s theory of mind, suggesting that the developmental precursors may differ between independent and interdependent societies.
  • If this cross-cultural finding holds in future research, how might we explain the difference? It is possible that in interdependent societies, it does not take (as much) self-reflection or self-control to be able to consider another person’s thoughts and feelings, since other people’s perspectives are always closer to the forefront of thought. In cultures with an interdependent focus, children may follow a different set of developmental stepping stones to theory of mind, perhaps through the scaffolding of shared perspectives of reality. In other contexts, however – including in the independence-focused West – a child’s ability to guess at other people’s thoughts and feelings is likely to begin with introspection on their own.
Author Narrative
  • Josephine Ross is a professor of developmental psychology at the University of Dundee, Scotland. Her research focuses on the development of self-awareness, and the intersection of this development with social cognition and functioning throughout the lifespan.
  • Martin Doherty is an associate professor of developmental psychology at the University of East Anglia, UK. His research focuses on the development of theory of mind.
Notes
Paper Comment



"Sandford (Stella) - Seeing plants anew"

Source: Aeon, 02 August 2024


Author's Introduction
  • It was once common, in Western societies at least, to think of plants as the passive, inert background to animal life, or as mere animal fodder. Plants could be fascinating in their own right, of course, but they lacked much of what made animals and humans interesting, such as agency, intelligence, cognition, intention, consciousness, decision-making, self-identification, sociality and altruism. However, groundbreaking developments in the plant sciences since the end of the previous century have blown that view out of the water. We are just beginning to glimpse the extraordinary complexity and subtlety of plants’ relations with their environment, with each other and with other living beings. We owe these radical developments in our understanding of plants to one area of study in particular: the study of plant behaviour.
  • The idea of ‘plant behaviour’ may seem odd, given the association of the word ‘behaviour’ with animals, including humans. When we think of classic animal behaviours – dancing honeybees, dogs wagging their tails, primates grooming each other – we may wonder what there could possibly be in plant life corresponding to this.
  • One early advocate of the importance of the study of animal behaviour was E S Russell, a biologist and philosopher of biology. In 1934, Russell argued that biology should begin with the study of the whole organism, and conceived the organism as a dynamic unity passing through cycles of maintenance, development and reproduction. These activities are, he said, ‘directed towards an end’ and it is this ‘directive’ activity that distinguishes living things from inanimate objects. Behaviour, according to Russell, was the form of this ‘general directive activity of the organism’ concerned with the relations of the organism to its external environment. This meant that plants quite as much as animals exhibit behaviours. But because plants are sessile (fixed in one place), behaviour is exhibited mainly in growth and differentiation (development of embryonic cells into particular plant parts), rather than in movement, as with animals.

Author's Conclusion
  • The advocates of the new paradigm for the understanding of plants are not just proposing a new research programme but attempting to build a new picture of plant life with a set of concepts drawn from philosophy and other disciplines. I call this the ‘plant advocacy literature’ because, in addition to its scientific underpinnings, it advocates for and on behalf of plants. Its aim is not just to advance plant science but to make us think differently about plants, to value plant life and accord it more respect.
  • Much of this literature does not avoid the language of intention and speaks freely of plant ‘choice’. Trewavas for example, claims that research showing that clonal plants on sand dunes grow into resource-rich patches and avoid resource-poor patches makes it ‘difficult to avoid the conclusion of intention and intelligent choice and the ability to select conducive habitats … Intentional choice of habitat is clear.’ In his book Plant Behaviour and Intelligence (2014), Trewavas identifies the meaning of ‘purpose’ or ‘goal’ with ‘intention’ and concludes that plants do indeed intend to resist herbivores and do intend to respond to gravity, but that this merely means that ‘plants are aware of their circumstances, and act to deal with those that diminish their ability to survive and/or reproduce, and thus diminish fitness.’
  • But it is the anthropocentric language of intention and choice that finds its way into the more popular works on plants, with the Italian botanists Stefano Mancuso and Alessandra Viola even claiming that plants chose a sessile lifestyle and chose to be composed of divisible parts. Plant communication via VOCs is presented to popular audiences as plants talking to each other, and nutrient transfer from older to younger trees via mycorrhizae is described in terms of mothers suckling their young. This anthropomorphising of plants achieves the opposite of the aims of plant philosophy, which are (as far as is possible) to understand plants as plants – in vegetal, not animal or human terms.
  • The challenge for plant philosophy – and it is a huge challenge – is not just to try to achieve some clarity concerning the legitimate use of concepts such as agency and intelligence in the plant sciences. It is also to find ways of conceptualising plant behaviours that avoid both the presuppositions of a gene-determinist approach and the anthropo- or zoocentric approach of some of the plant advocacy literature and in popular media forms. Of course, this reconceptualisation would include insights from existing philosophy of biology, but it would also have to go beyond it.
  • Philosophy of biology is concerned with questions that, however specific, are general biological questions in the sense that they refer in principle to all living organisms and to processes – like evolution – that are common to them all. Plant philosophy, on the other hand, is concerned specifically with plants, with the specificity of plant life in distinction from most animals and with the implications of this for some general philosophical questions. Do we have to reconceive the concept of the individual to be able to speak of plants composed of reiterated units as individuals? If we began an attempt to construct a general philosophical account of biological agency with plant behaviour, rather than trying to include it afterwards, would this give rise to a novel conception of biological agency? And what would it mean for us in our everyday lives to appreciate these beings of another kind living among us – indeed, keeping us alive?
  • Contemporary plant philosophy has only just begun to ask these questions. The answers are still wide open.

Author Narrative
  • Stella Sandford is professor of modern European philosophy in the Centre for Research in Modern European Philosophy at Kingston University London. Her books include Vegetal Sex: Philosophy of Plants (2022), Plato and Sex (2010) and How to Read Beauvoir (2006).

Notes
  • I read this paper a couple of months ago and now can't remember the detailed argument. So. I need to re-read the paper before commenting in detail.
  • This new philosophy is hardly news, and I've commented on it my Note on Plants. My overall view is that - yes - plants are more complex and active than they have been historically given credit for (the same was true of nonhuman - or even human - animals until comparatively recently). But including them in the moral community is both impossible if we are to survive and stretches terms so as to trivialise them in the circumstances where they are important. The same goes for various psychological attributes.
  • The author does - however - avoid anthopomorphising and wants to treat plants on their own terms.
  • Plant Behaviour and Intelligence (2014) by Anthony Trewavas looks interesting, but is too expensive.
  • There are 26 Aeon Comments, mostly of doubtful value - though the author has replied to most. I need t read them carefully.
  • This also relates to my Notes on Life, Evolution, Individual, Intelligence, Consciousness, Self and Society.
  • It might also relates to my Notes on Agency, Cognition, Intention, and Decision-making if I had them. Should I? Mind will have to do for now.

Paper Comment
  • Sub-Title: "The stunningly complex behaviour of plants has led to a new way of thinking about our world: plant philosophy"
  • For the full text see Aeon: Sandford - Seeing plants anew.



"Sartwell (Crispin) - What my mother’s sticky notes show about the nature of the self"

Source: Aeon, 14 March 2024


Author's Introduction
  • People who perform prodigious feats of memory (repeating a long series of numbers, for example) often make use of a technique called the ‘mind palace’, also known as the ‘method of loci’, the ‘Roman room’ and the ‘journey method’. The technique is ancient – versions are described by the Roman rhetoricians Cicero and Quintilian – and involves imagining a building, such as a palace or house, and associating each item to be remembered with a location in that building. One moves through the imagined structure room by room (thus the ‘journey’), finding the items in question.
  • I’m caring for my 98-year-old mother Joyce, who’s had increasingly debilitating memory problems for many years. For a decade or two, she managed to compensate for these problems with amazing effectiveness. To a significant extent, her life was arranged to make it possible to function as she remembered less and less. The process – not atypical of people with memory problems, I’m told – indicates that the mind palace is more than imaginary. As she lost the internal structure of memory, the galleries of her mind became literal and external. First systematically, then chaotically, Joyce Abell has used her home here in rural Virginia as her memory.
  • In some ways we all do this. We scatter photos or significant objects about our space so that, as we move through it, we remember. Most people, walking around their homes, are using the journey method, or at least are potentially on a nostalgic journey among artefacts, into their own past. A memory palace is not just images in our heads, but consists of things in our built environment.
  • One way Joyce held on to her memories was with the use of Post-it notes. For a decade or more, she wrote down everything she wanted to remember – people’s names and phone numbers, appointments, movies she wanted to see, books she wanted to read, quotations to be read aloud at her memorial service (really), and many other matters. She pasted them all around her desk and up on walls. More and more, every fact required a note, and they proliferated into stacks and piles, most of them no longer visible under the later accretions. Notes with regard to how to operate the television piled up around the television; food reminders from years past accumulated in the kitchen. The notes eventually drifted down over everything. Once she had lost track of a note, a process that has accelerated as her memory has declined, another note had to be made if the same fact was to be retrieved. And so on.

Author's Conclusion
  • My mother’s memory went from being an imagined place or palace inside her to being largely identical to her physical environment. The process of remembering got steadily externalised over a period of 20 years. Perhaps this is how human personality fragments or disintegrates: as it frays, it extends into the world, even desperately, and alters it, rearranges it into a memory hoard. It wasn’t only the notes, after all: if self is memory, her house and its every artefact – every print and every spatula, every earring and every room – is in her, as she is in it. But then, the real place too is in constant change. It blows away bit by bit, yellows, erodes. Your son comes and rearranges or empties it, and you are lost.
  • As things that remember, perhaps humans disintegrate from inside outward, until we are no longer distinct from our environments. Slowly, our memories and the world become the same, and we at once cease to exist and expand into everything.

Author Narrative
  • Crispin Sartwell is a retired philosophy professor whose most recent book is Beauty: A Quick Immersion (2022).

Notes
Paper Comment



"Schneider (Suzanne) - Who bears the risk?"

Source: Aeon, 11 March 2024


Author's Introduction
  • I am sitting in my daughter’s hospital room – she is prepping for a common procedure – when the surgeon pulls up a chair. I expect he will review the literal order of operations and offer the comforting words parents and children require even in the face of routine procedures. Instead, he asks us which of two surgical techniques we think would be best. I look at him incredulously and then manage to say: ‘I don’t know. I’m not that kind of doctor.’ After a brief discussion, my husband and I tell him what, to us, seems obvious: the doctor should choose the procedure that, in his professional opinion, carries the greatest chance of success and the least risk. He should act as if our daughter is his.
  • In truth, this encounter should not have surprised me. I have for several years been working on a book about risk as a form of social and political logic: a lens for apprehending the world and a set of tools for taming uncertainty within it. It is impossible to tell the contemporary story of risk without considering what scholars call responsibilisation or individuation – essentially, the practice of thrusting increasing amounts of responsibility onto individuals who become, as the scholar Tina Besley wrote, ‘morally responsible for navigating the social realm using rational choice and cost-benefit calculations’. In the United States, business groups and politicians often call offloading more responsibilities onto citizens ‘empowering’ them. It’s maybe telling that this jargon prevails in the private healthcare sector, just as moves to privatise social security in the US are cast as empowering employees to invest their retirement savings however they see fit.
  • In Individualism and Economic Order (1948), F A Hayek wrote: ‘if the individual is to be free to choose, it is inevitable that he should bear the risk attaching to that choice,’ further noting that ‘the preservation of individual freedom is incompatible with a full satisfaction of our views of distributive justice.’ Over the past several decades, Hayek’s position has gone mainstream but also become somewhat emaciated, such that individual freedom substantively means consumer choice – the presumed sanctity of which must be preserved across all sectors. But devolved responsibility favours those with more capacity to evaluate and make decisions about complex phenomena – those of us, for instance, with high levels of education and social access to doctors and investment managers to call for advice. And indeed, it’s striking that the embrace of responsibilisation as a form of individual ‘empowerment’ has accompanied the deepening of inequality in Western democracies.

Author's Conclusion
  • Years ago, I attended a National Rifle Association personal safety course called ‘Refuse To Be A Victim’. While the appeal to gun ownership was never far below the surface, this is primarily a crash-course in personal risk assessment, powered by the data point that a violent crime occurs every 25 seconds in the US. As I learned during my three-hour training, refusing to be a victim requires cultivating a posture of constant vigilance – one that carefully surveys situations, exercises caution, and never, ever trusts strangers. The course conjured a world in which security was an individual responsibility and, perversely, victimhood a personal failure. Uncertainty, which is both a gift and a challenge, was weaponised to make racialised paranoia seem like the only responsible choice. Protecting myself and my children required thinking like the retired counterterrorism officer who taught my course – becoming, in short, my own personal risk assessor.
  • However extreme this case may seem, I’ve realised that Refuse To Be A Victim exists on a broad continuum of practices that envision, and even idealise, the individual qua actuary. Whether you find (like Thaler and Sunstein) that humans are poor decision-makers who need to be nudged toward virtue, or affirm (like Gigerenzer) that better public numeracy can improve our risk calculations, an antisocial logic clings to the actuarial self. Security becomes an individual privilege procured through the marketplace rather than a public right achieved at the social level. When it comes to personal safety, people of means are encouraged to manage risk by engaging in various kinds of social insulation (what I have called security hoarding), while those without are largely transformed into the ‘risks’ themselves. The uptick in vigilante acts and paranoid shootings in the US for ringing the wrong doorbell is a symptom of this antisocial logic taken to its natural, bloody end. The privatisation of security and violence are also forms of responsibilisation – ones where we can clearly see the costs associated with this form of ‘freedom’.
  • The good news is that there are wonderful alternatives to the hyper-individualised world of the actuarial self: apprehensions of security that think at the level of communities, neighbourhoods and towns or cities or nations. In a world facing rising temperatures, pandemics and a globalised financial system, many have come to recognise the highly individualised approach to risk as a relic of an irresponsible and iniquitous era. Building more capacious forms of security and networks of care will require looking beyond the actuarial self and the fundamentally conservative political agenda it serves.
Author Narrative
  • Suzanne Schneider is a deputy director and core faculty at Brooklyn Institute for Social Research and a Visiting Fellow at Kellogg College, Oxford. She is the author of Mandatory Separation: Religion, Education, and Mass Politics in Palestine (2018) and The Apocalypse and the End of History: Modern Jihad and the Crisis of Liberalism (2021).
Notes
  • This paper is rather a mixed bag. There are two conflicting demands - people want choice, but it's bad for them if they make the wrong ones.
  • Also, as the author and many others point out, most people have no idea about risk: in particular, they prefer 'control' even if objectively this involves greater risk.
  • Sometimes - as in medicine - paternalism seems at first sight to be the best choice - medics usually know more about the risks than their patients; but they aren't always right, and patients - or their parents - often think they know best (and sometimes they do).
  • At other times, the needs of society as a whole override the wishes of individuals to do things their way.
  • However, the author seems to have a cynical view that all this 'nudging' is just window-dressing on the part of the 'free market'. I can't see anything wrong with it.
  • Also, she seems to want people to be protected against acts of stupidity. The case in point is scams on social media: 'Big Tech' should compensate every individual who succumbs to a scam delivered via their sites. But, just what is a scam? Are the sundry healthcare products of no proven efficacy 'scams'? Is homeopathy a scam? I get emails from 'Roman Abramovich' every other week. Is my ISP responsible if I believe it's really him?
  • The paper is somewhat off track from a UK perspective as it's written from a US vantage point. Thankfully the gun toting encouraged in the US isn't legal here.
  • She quotes (amongst others):-
    → F A Hayek - Individualism and Economic Order (1948) - available on Kindle for £2.
    Odds Of Dying (Who's responsible for this site? How reliable is it?)
    Besley - Theorizing Teacher Responsibility in an Age of Neoliberal Accountability (Beijing? Copy-editing is poor)
    → Gerd Gigerenzer - Risk Savvy (2014) - about £10
    → Dan Gardner - Risk: The Science and Politics of Fear (2008) - cheap 2nd hand
    → Peter Bernstein - Against the Gods: The Remarkable Story of Risk (1996)
    Andreessen - The Techno-Optimist Manifesto
    The Behavioural Insights Team Annual Report 2017-18
    → Dan Ariely - Predictably Irrational (2008) - cheap 2nd hand
    → "Kahneman (Daniel) - Thinking, Fast and Slow" (2011)
    → Richard Thaler and Cass Sunstein - Nudge (2008)
    → "Smith (Adam) - An Enquiry Into the Nature and Causes of the Wealth of Nations" (1776)
    The Decision Lab - Cognitive Biases
    Thaler, Sunstein & Balz - Choice Architecture
    → Sunstein - Risk and Reason (2002)
    Schneider - How Gun Culture and the Government Fell Back in Love
  • There are some Aeon Comments.

Paper Comment
  • Sub-Title: "Under the guise of empowerment and freedom, politicians and business are offloading lifethreatening risk to individuals"
  • For the full text see Aeon: Schneider - Who bears the risk?.



"Schwenkler (John) - What does it take for someone to become a ‘different person’?"

Source: Aeon, 17 May 2022


Author's Introduction
  • What changes can a human undergo without becoming someone different? For example, if a degenerative disease paralyses your body, it’s clearly still you who exists in this transformed state. But what if the disease transforms your mind, impairing your memory or causing radical personality change? Would this new ‘you’ be a different person entirely than the one who existed before the transformation?
  • Part of what makes it hard to answer this question is its ambiguity. Imagine you saw my father, John, one day and then his twin brother, Joe, the next, and you pointed out Joe to me and said that you had only just seen him. I might correct you by saying that the person you see now is a different person than the person you saw yesterday. But is that the meaning we have in mind when we ask whether the person existing in the wake of a disease is a different person than the one before?
  • Or is it more like this: a man goes away to war, where he experiences great horrors, and on his return his family finds him to be a different person entirely than he was when he left. This is different from your confusion about John and Joe (and different from the situation if the person who showed up at the doorstep after the war happened to be an imposter). The family take for granted the continued existence of the man who left and then returned home. What troubles them is that he has changed, so much so as to be unrecognisable except in outward appearance. And the same point seems to hold in the case of the degenerative disease: if it were not you who existed in the wake of it, then your loved ones would not have cause to mourn your state in the way they do. But what, then, can we mean when we say, as we so often do, that the disease (or war, or drug taking, or divorce, or near-death experience…) has made you into a different person?
  • Here’s another example – imagine contemplating a life experience that will radically transform your values. The philosopher L.A. Paul calls this a transformative experience, such as when parents claim to have been altered fundamentally by the first sight of their newborn child. Given the prospect of such a radical change, should you allow it to happen? ...
Author's Conclusion
  • Sometimes, the claim that someone has become a different person can be used to justify cutting them out of our lives, or treating them as if they were a mere acquaintance. This makes sense in some cases: for example, if you and a classmate once bonded over a shared love of punk rock, or a common religious conviction, then, if they were to lose this love, or this conviction, it might be appropriate for you to dissolve your friendship. In such a case, the person who was your friend is no longer ‘about’ the thing that was the ground of your relationship. Hearing from your friend how she has changed, you will find yourself asking: who are you? For the purposes of your friendship, the person that you knew is gone.
  • But not all human relationships are this way. A person you marry, or a parent or child or elderly relative, is not someone who is tied to you just by virtue of a shared passion or purpose. Your relation to them goes deeper than that, and its ground is your shared life. It is part of being human that people sometimes change in ways that can make a radical difference to what they are about. But our passions and purposes are not the only things that determine who we are. Our lives are also defined by the irrevocable commitments we make to our loved ones, and the ties that bind together parent, child, sibling and kin. To see ourselves, and one another, for who and what we really are, we need to recognise these aspects of our identity that are not so subject to change.
Author Narrative
  • John Schwenkler is professor of philosophy at Florida State University, and Humboldt Visiting Researcher at Leipzig University in Germany. He is the author of Anscombe’s Intention: A Guide (2019), the co-editor of Becoming Someone New: Essays on Transformative Experience, Choice, and Change (2020) and the co-author of Reading Philosophy: Selected Texts With a Method for Beginners (2nd ed, 2020).
Notes
  • This is effectively a plug for the Author's book Becoming Someone New: Essays on Transformative Experience, Choice, and Change (2020)
  • I wasn't sure whether this is about Personality or Narrative Identity, but decided on the latter.
  • I will write up detailed thoughts in due course.

Paper Comment



"Sebo (Jeff) - Against human exceptionalism"

Source: Aeon, 05 May 2022


Author Narrative
  • Jeff Sebo is clinical associate professor of environmental studies, affiliated professor of bioethics, medical ethics, philosophy and law, and director of the animal studies MA programme at New York University. He is also on the executive committee at the NYU Center for Environmental and Animal Protection and the advisory board for the Animals in Context series at NYU Press. He is co-author of Chimpanzee Rights (2018) and Food, Animals, and the Environment (2018), and the author of Saving Animals, Saving Ourselves (2022).
Notes
Paper Comment
  • Sub-Title: "In a tight spot, you’d probably intuit that a human life outweighs an animal’s. There are good arguments why that’s wrong"
  • For the full text see Aeon: Sebo - Against human exceptionalism.



"Sebo (Jeff) & Schukraft (Jason) - Don’t farm bugs"

Source: Aeon, 27 July 2021


Authors' Introduction
  • The future of animal farming is taking shape in a small city in central Illinois. A startup called InnovaFeed is building a production site that will house more farmed animals than any other location in the history of the world. But the animals in question are not cows, pigs or chickens – they are black soldier fly larvae.
  • When the facility is fully operational, InnovaFeed hopes to produce 60,000 metric tonnes of insect protein from the fly larvae each year. By one conservative estimate, that amounts to around 780 billion larvae killed annually. If you lined up that many larvae end-to-end, the line would stretch from Earth to the Moon and back 25 times.
  • Interest in insect farming is booming. Insects have been heralded as a sustainable alternative to traditional animal agriculture, with a litany of articles touting the environmental benefits of insect protein. Socially minded investors have piled into the space, with recent funding rounds totalling more than $950 million. InnovaFeed plans to construct 20 production facilities by 2030. The company competes against the likes of AgriProtein in South Africa and Ÿnsect in France, both of which harbour comparably ambitious goals. The industry is small now, but poised to grow 50 times larger in the next decade.
  • Lost in all the hype is an uncomfortable question: do we want to encourage a food system that farms animals by the trillion?
Authors' Conclusion
  • What does taking insect welfare seriously mean for ordinary people? We can all strive to harm insects less in our own lives. For instance, there are simple actions we can take to make our homes and businesses less inviting to insects. Examples include fixing water leaks, reducing soil-to-wood contact around the building, keeping plants a few feet away from the foundation, and turning off outdoor lights at night. These actions would all reduce the risk that insects will enter buildings, which, in turn, would reduce the need for a lethal insecticide. We can also create a safer future world for insects through scientific research, compassionate attitudes, and humane education and advocacy.
  • We might find these ideas hard to accept, since we have strong biases against insects, and since the idea of reducing the harm we cause to insects at scale is daunting. But if we support policies that are better for insects and humans alike, then we can reduce harm to many insects in the short term while building the tools that we need to answer harder questions in the long run.
Author Narrative
  • Jeff Sebo is clinical associate professor of environmental studies, affiliated professor of bioethics, medical ethics, philosophy and law, and director of the animal studies MA programme at New York University. He is also on the executive committee at the NYU Center for Environmental and Animal Protection and the advisory board for the Animals in Context series at NYU Press. He is co-author of Chimpanzee Rights (2018) and Food, Animals, and the Environment (2018), and the author of Saving Animals, Saving Ourselves (2022).
  • Jason Schukraft is a senior research manager at the think-tank Rethink Priorities in California. Before joining the RP team, he earned his doctorate in philosophy from the University of Texas at Austin. He specialises in questions at the intersection of epistemology and applied ethics.
Notes
  • This is an important paper which makes a lot of good points. The Comments and replies are also worth reading, and I've saved them to PDF lest they disappear. I found the responses by Jason Schukraft refreshingly honest and even-handed. Jeff Sebo, however, never has anything to learn from his opponents.
  • One important point is the environmental one that a supposed justification might actually be the reverse of what is claimed. Because of the 'yuck factor' most of these farmed insects will be consumed by animals as cheaper animal feed. I did wonder, though, whether this rebuttal is as strong as is suggested. Do cattle fart as much if fed on insects than they do when eating grass?
  • But all this isn't the main point - which is the distress caused to the insects on the presumption that they are sentient. This is too complex an issue for me to deal with at the moment.
  • Enough to say that - given the very high percentage of insects that get eaten before maturity - and the same goes for fish spawn - hence the huge size of their 'litters' - what would be the evolutionary point of giving them the - presumably expensive - capacity for sentience when there's nothing larvae or fry can do to escape predation. It's purely a lottery.
  • Also, are we really to be filled with grief when we walk across - or mow - the lawn - as one commentator remarked.
  • Also, we're in competition against 'pests' - reference 'Frank's cauliflower' - no insecticide so it was covered in black fly, which got boiled, but no-one ate the cauliflower.
  • That said, gratuitous harming of any creature that's just going about its business is something that's bad for them – whether they know it or not – and bad for us.

Paper Comment



"Sedivy (Julie) - Why every utterance you make begins with a leap of faith"

Source: Aeon, 02 December 2024


Author's Introduction
  • People perceive language, like everything else, through a tiny pinhole known as the present. What lies in the future remains out of view, then very briefly passes through that pinhole before it vanishes into the past. The language used in everyday conversations – whether it’s spoken or sign language – is by nature fleeting. The information it contains is different at every slice of time. So, for any of us to use language, we must rely on our memories of a no-longer-existent past and our imagination of a not-yet-existent future. Think of someone who is midway through a sentence: ‘The co-worker who my boss promoted the other day ac— …’ A stream of linguistic elements has been tossed out, and it’s already quickly receding into the past. Meanwhile, the sentence could fork out into any one of a number of different trajectories, and the likelihood of each depends on what has already been uttered.
  • Our ability to speak and understand each other, then, hinges on the spaciousness of our memory and our accuracy at prediction. The properties of human language are determined largely by the limits of these capacities.
  • When you are speaking, the dilemma is that your short-term memory isn’t capacious enough to hold the details of a full sentence. Its form would dissipate in your mind in the time between uttering the first syllable and the last. You are, in this sense, working against time when you speak. And so, you begin talking with only a vague sense of how the sentence will unfold, taking a leap of faith that you can work out the details of what comes next by the time the earlier part of the sentence has scrolled into the past.
  • Within that pinhole of the present, you simultaneously map out the sentence’s structure, rummage for upcoming words in your long-term memory, and draw up a plan for the movements of the lips, tongue and mouth that form the word you’re on the verge of uttering, all while actually speaking. The future comes bearing down mercilessly on the present, and occasionally you find that you are stalled at an uhm, not ready to utter the word that comes next, or that you’ve rushed things and lined up the wrong word or sound. What we call a slip of the tongue is really a slip of the mind under time pressure.
  • Time also exerts pressure on the receiving end of language. A listener’s memory for the bare form of language, untethered from meaning, is fleeting. (Notice how much harder it is to remember a nonsensical list of syllables than a meaningful phrase of the same length: ‘lep blintasp lorset ap lep howd’, versus ‘the smartest person in the room’.) To avoid the accumulation of free-floating phonetic bits, the listener races to extract meaning from the first sounds that drop from a speaker’s mouth. If you hear ‘the cap—’, you are automatically rifling through your vocabulary to decide whether this snippet of speech refers to a hat or whether it will continue as ‘the captain’, ‘the capital’, or ‘the cappuccino’.
  • As words unfurl into phrases, you scramble to give them structure, well before you have clear evidence for how they fit together. ‘I told the teacher that the school …’ might continue with ‘… needed better programmes for disabled kids’, or it might take an unexpected syntactic turn, and continue as ‘… had just hired about my child’s disability.’ Sometimes, you’re able to leap into the future and predict the shape of the sentence to come, drawing on your stored memories of how language tends to pattern and your assessment of the current context. But if a sentence is complex – or you’ve predicted wrongly – you risk drowning in the flow of incoming speech as you struggle to recover meaning from the sounds that have vanished into the past.
  • Language exists in this tenuous space between the cognitive demands of the disappearing past and the nebulous future. These demands become apparent to language scientists when cracks in performance appear – as disfluencies, slips of the tongue, wrinkles in comprehension. Language is a compromise between the limitations of speakers and perceivers who are perpetually under time pressure. It’s an imperfect solution, riddled with ambiguity and indeterminacy of meaning. In fact, given its imprisonment in linear time, language could not exist at all, if not for the fact that we are surprisingly good at coping with linguistic uncertainty.

Author's Conclusion
  • All of this, of course, means that human language has uncertainty – even fragility – baked into it. Failures of communication can and do occur. Perhaps this is why, when much is at stake, people still seek out face-to-face dialogue, even though time pressures are most acute in spontaneous conversation. Along with our linguistic tools, we humans have developed the social skills to negotiate understanding – the nod to confirm that we’re following, the furrowed brow when we’re not, the patience with a speaker’s disfluencies and backtracking, the ability to instantaneously repair misunderstanding – so that our language is not grounded by its imperfections.
Author Narrative
  • Julie Sedivy is a language scientist and writer. She has taught linguistics and psychology at Brown University in Rhode Island and at the University of Calgary, and is the author of Memory Speaks: On Losing and Reclaiming Language and Self (2021) and, most recently, Linguaphile: A Life of Language Love (2024).
Notes
  • An interesting easy read. I'd not really considered the time factors involved in the encoding and decoding of language. I had noticed - though - how readily we repair errors in real-time utterances without even really noticing they are there (though we may remember them in retrospect, and cringe).
  • I've often wondered whether it's worth the bother amending emails to make them 'correct': some correspondents just don't care, and their outpourings are still intelligible, if irritating (mostly because it’s distracting from the message). Maybe less irritating than incoherent tripe that's got correct grammar and spelling. But that's another issue.
  • There are some useful examples given of linguistic trade-offs (eg. in Mandarin and Turkish) between precision and speed.
  • Since working memory is critical in linguistic communication I was slightly surprised - since German is mentioned - that there was no mention of the difficulties for real-time translators of long convoluted sentences in German with the verb at the end.
  • It occurred to me that we speak and listen like LLMs - anticipating what comes next in a stream of discourse, sometimes amusingly so, as in Alas Smith and Jones (Wikipedia: Alas Smith and Jones) where the interlocutor absurdly completes a sentence when the speaker pauses.
  • The paper is worth a re-read at leisure.
  • The author's books look interesting, but are basically memoirs of a linguist rather than books on linguistics.
  • There are no Aeon Comments
  • The paper relates to my Note on Language (and - just maybe - that on Time).

Paper Comment



"Sepielli (Andrew) - Ethics has no foundation"

Source: Aeon, 24 November 2023


Author's Introduction
  • Many academic fields can be said to ‘study morality’. Of these, the philosophical sub-discipline of normative ethics studies morality in what is arguably the least alienated way. Rather than focusing on how people and societies think and talk about morality, normative ethicists try to figure out which things are, simply, morally good or bad, and why. The philosophical sub-field of meta-ethics adopts, naturally, a ‘meta-’ perspective on the kinds of enquiry that normative ethicists engage in. It asks whether there are objectively correct answers to these questions about good or bad, or whether ethics is, rather, a realm of illusion or mere opinion.
  • Most of my work in the past decade has been in meta-ethics. I believe that there are truths about what’s morally right and wrong. I believe that some of these truths are objective or, as they say in the literature, ‘stance-independent’. That is to say, it’s not my or our disapproval that makes torture morally wrong; torture is wrong because, to put it simply, it hurts people a lot. I believe that these objective moral truths are knowable, and that some people are better than others are at coming to know them. You can even call them ‘moral experts’ if you wish.
  • Of course, not everyone agrees with all of that. Some are simply confused; they conflate ‘objective’ with ‘culturally universal’ or ‘innate’ or ‘subsumable under a few exceptionless principles’ or some such. But many people’s misgivings about moral objectivity are more clear headed and deeper. In particular, I find that some demur because they think that, for there to be moral truths, let alone objective, knowable ones, morality would have to have a kind of ‘foundation’ that, in their view, is nowhere to be found. Others, anxious to help, try to show that there’s a firm foundation or ultimate ground for morality after all.
  • It’s my view that both sides of this conflict are off on the wrong foot. Morality is objective, but it neither requires nor admits of a foundation. It just kind of floats there, along with the evaluative realm more generally, unsupported by anything else. Parts of it can be explained by other parts, but the entirety of the web or network of good and evil is brute. Maybe you think that’s weird and even worthy of outright dismissal. I once thought the same thing. The purpose of this essay, which is based on my book Pragmatist Quietism: A Meta-Ethical System (2022), is to encourage you to start seeing this aspect of the world as I now see it.
Author's Conclusion
  • Note that my brief for ethical truth bottoms out in claims about ‘specifically ethical’ value, and that my argument for the irrelevance of metaphysics, semantics, etc to ethics bottoms out in claims about what I called ‘representational’ value. This might strike you as begging the question against the sceptic about evaluative truth and knowledge – in other words, as assuming at the outset just what I intend to demonstrate to such a sceptic. My rejoinder: yes, I do beg the question, but this, in itself, does not put me in bad company. Everyone who ventures a positive claim about some subject matter – the external world, induction, mathematical knowledge, what-have-you – rather than withholding judgment entirely, must at some point confront the so-called ‘Agrippan trilemma’: either posit certain facts as unexplained, or beg the question, or accept an infinite regress. If these are problems, they’re not problems for me specifically; they’re problems for anyone who thinks things.
  • So I say that the true sin lies not in question-begging, but in failing to subsume aspects of the world within a more general vindicatory framework. For example, a theory of a priori knowledge that explains how knowledge of that very theory is possible might beg the question, but so long as it accounts for a priori knowledge in general – eg, of mathematics, logic and morality – and not just a priori knowledge of itself, it needn’t be problematic. A theory of accurate mental representation of the world that explains how our beliefs in that very theory accurately represent the world also begs the question, but this should not worry us insofar as it explains accurate mental representation across the board. These theories earn their keep by making sense of what would otherwise remain mysterious, and so it should not trouble us if they end up vindicating themselves in the process.
  • I propose to attain a similar sort of explanatory unity by vindicating all claims and domains that are worthy of it – not just ethics, but everything from biochemistry to sports prognostication – fundamentally in terms of values, be these representational, specifically ethical, or other sorts of values. It is this values-first re-imagining of enquiry for which I reserve the label ‘pragmatism’. Pragmatism offers a way of making sense of ethical truth, objectivity and knowledge by ensconcing these within a more comprehensive world picture, but not in such a way that would count as providing a foundation for ethics in some allegedly more fundamental area of enquiry. What emerges is a free-floating evaluative sphere, coupled with an account of why this is not so odd or mysterious after all.
OUP Book Description for Pragmatist Quietism: A Meta-Ethical System
  • The claim that there are objective ethical truths has attracted its share of doubters. Many have thought that such truths would require an extra-ethical foundation or vindication — in metaphysics, or the philosophy of language, or epistemology — and have worried that no such thing is available.
  • Pragmatist Quietism argues that, on the contrary, there are objective ethical truths, and that these neither require nor admit of a foundation or vindication from outside of ethics.
  • Recognizing that the idea of an ethical realm untethered from inquiry into reality, meaning, and knowledge may strike us as mysterious, this book offers a comprehensive meta-ethical worldview within which this jarring proposal may be ensconced.
  • The key moves are, first, the assimilation of normative-ethical inquiry to the sorts of debates that many have labelled 'merely verbal' or 'non-substantive', and second, the adoption of pragmatism — the approach to inquiry and explanation on which we endeavour to guide our thinking by considerations of value, rather than aiming to correctly represent the world.
Author Narrative
  • Andrew Sepielli is associate professor of philosophy at the University of Toronto, and a Laurance S Rockefeller visiting faculty fellow at the Princeton University Center for Human Values. He is the author of Pragmatist Quietism: A Meta-Ethical System (2022).
Notes
  • This is a plug for the author's excruciatingly expensive book. I won't be buying it, so I've added the OUP Book Description to the Abstract for this paper.
  • The essay meanders through numerous meta-ethical topics. Its discussion of utilitarianism as merely providing an ethical theory rather than a foundation for ethics was interesting. The paper deserves a second reading and critique.
  • While sundry well-known ethicists are mentioned, I was surprised to see no mention of Derek Parfit, the last few decades of whose life were spent trying to set ethics on a firm footing. While few seem to have thought he was successful, he ought to be at least mentioned.
  • But, in any case, I'm not convinced. As the author said in his introduction 'it’s not my or our disapproval that makes torture morally wrong; torture is wrong because, to put it simply, it hurts people a lot'. Well, two things on that:
    1. Historically, people have not thought that torture was wrong if it was the means to some supposedly good end. Even today some think this. Even those who say they torture - cats, say - because they enjoy it need a reason why this is wrong other than 'it hurts the cat'. People who piously say that torture to obtain information can never be justified need to think again in extreme cases. If someone has hidden an H-bomb about to explode and anihilate London (say; or a bioterrorist about to anihilate all life on earth) don't we have to do 'whatever it takes' to wring the information out of them?
    2. Secondly, lots of things hurt people a lot - giving birth, for instance. Why is it not wrong to allow people to go through this pain, which is more painful than lots of what counts as 'torture' these days, if 'hurting a lot' is the reason not to allow torture? Obviously, there are reasons these cases differ, but this needs to be spelt out.

Paper Comment
  • Sub-Title: "Ethical values can be both objective and knowable – torture really is wrong – yet not need any foundation outside themselves"
  • For the full text see Aeon: Sepielli - Ethics has no foundation.



"Shapiro (James A.) - Evolution without accidents"

Source: Aeon, 06 July 2023


Author's Conclusion
  • By turning evolutionary variation from random accidents to biological responses, 21st-century molecular genetics and genomics have revealed that living organisms possess tremendous potential for adaptive genome reconfiguration.
  • For evolution scientists, this revelation poses an important set of obligations. Those obligations include reorienting our studies of adaptive variation towards learning how deeply genome change is integrated with biocognitive sensory responses. This new evolutionary paradigm will require a more organic mode of research that combines genomics, physiology and cognitive science.
  • For some philosophers of science, 21st-century evolutionary biology will require rethinking all the purely mechanical physics-based assumptions they have held about life. Biologists will have to incorporate as foundational a recognition that rapid genome reorganisation is not only a feature of all organisms but, evidently, has proved essential for the survival of life on an ecologically diverse and dynamic planet.

Author Narrative
  • James A Shapiro is professor of microbiology in the Department of Biochemistry and Molecular Biology at the University of Chicago. His books include Bacteria as Multicellular Organisms (1997), co-edited with Martin Dworkin, and Evolution: A View from the 21st Century, Fortified (2nd ed, 2022).

Notes
  • Well, it contains a lot of interesting stuff. It's probably a plug for the author's book.
  • But ... I, like a host of Aeon Commentators, was arrested by a certain passage that seems completely unjustified by the evidence adduced, and seemed to suggest intelligent design. But it's just thrown in and not followed up: 'The notion of controlled biological processes at the core of organic evolution is plainly incompatible with a purely physicalist explanation, such as random mutations plus natural selection.'
  • There are 47 comments, most of them worth reading, and most of them negative, with some responses by the author. I've taken a copy.

Paper Comment



"Sharma (Kanika) - What’s in the rule of law?"

Source: Aeon, 16 December 2024


Author's Introduction
  • Law was central to the British colonial project to subjugate the colonised populations and maximise their exploitation. Convinced of its superiority, British forces sought to exchange their law for the maximum extraction of resources from the colonised territories. In The Dual Mandate in British Tropical Africa (1922), F D Lugard – the first governor general of Nigeria (previously governor of Hong Kong) – summed up the advantages of European colonialism as:

      Europe benefitted by the wonderful increase in the amenities of life for the mass of her people which followed the opening up of Africa at the end of the 19th century. Africa benefited by the influx of manufactured goods, and the substitution of law and order for the methods of barbarism.
  • Lugard, here, expresses the European orthodoxy that colonised territories did not contain any Indigenous laws before the advent of colonialism. In its most extreme form, this erasure manifested as a claim of terra nullius – or nobody’s land – where the coloniser claimed that the Indigenous population lacked any form of political organisation or system of land rights at all. So, not only did the land not belong to any individual, but the absence of political organisation also freed the coloniser from the obligation of negotiating with any political leader. Europeans declared vast territories – and, in the case of Australia, a whole continent – terra nullius to facilitate colonisation. European claims of African ‘backwardness’ were used to justify the exclusion of Africans from political decision-making. In the 1884-85 Berlin Conference, for example, 13 European states (including Russia and the Ottoman Empire) and the United States met to divide among themselves territories in Africa, transforming the continent into a conceptual terra nullius. This allowed for any precolonial forms of law to be disregarded and to be replaced by colonial law that sought to protect British economic interests in the colonies.
  • In other colonies, such as India, where some form of precolonial law was recognised, by using a self-referential and Eurocentric definition of what constituted law, the British were able to systematically replace Indigenous laws. This was achieved by declaring them to be repugnant or by marginalising such laws to the personal sphere, ie, laws relating to marriage, succession and inheritance, and hence applicable only to the colonised community. Indigenous laws that Europeans allowed to continue were altered beyond recognition through colonial interventions.

Author's Conclusion
  • Even a scholar such as E P Thompson, a Marxist historian who was critical of law as a device that mediates and reinforces existing class relations, valorised the idea of rule of law in Whigs and Hunters (1975) and described the British contribution to it as ‘a cultural achievement of universal significance’. In fact, Thompson, like others, justified the ‘goodness’ inherent in rule of law by arguing that Indian freedom fighters, including M K Gandhi and Jawaharlal Nehru, had used the idea of rule of law in their quest for Indian independence. However, as critics of the doctrine highlight, it is important to remember that when colonised people couched their own demands for greater rights in the conceptual language of rule of law, they did so as a strategic move to gain legitimacy and visibility for their causes, and not necessarily as a commitment to the doctrine itself.
  • At the same time, the anticolonialists’ choice to use the rhetoric of rule of law in their own movements, even if it was a choice made for strategic reasons, points to the endurance of some of the ideals associated with the concept. Despite its status quo-ist nature, and complicity with liberal capitalist regimes, the doctrine has come to stand as shorthand for justice, equality and democracy, which were precisely the objectives that the anticolonial struggles of the 20th century sought to achieve. The enduring legacy of the doctrine to both colonial and anticolonial agendas continues in the 21st century, where the promotion of rule of law has devolved into a multi-billion-pound industry. While furthering neo-imperialist global structures, international developmental aid is routinely tied to rule-of-law commitments and is forced upon postcolonies in the Global South; at the same time, resistance movements in these countries seek to use the concept of rule of law to denounce global capitalist exploitation.
  • This essay draws from Kanika Sharma’s chapter ‘The Rule of Law and Racial Difference in the British Empire’ in the book Diverse Voices in Public Law (2023), edited by Se-shauna Wheatle and Elizabeth O’Loughlin.
Author Narrative
  • Kanika Sharma is a senior lecturer in law at the School of Law, Gender and Media at SOAS University of London, UK.
Notes
  • This makes an important distinction between 'thin' and 'thick' versions of Law - doubtless expounded in greater depth elsewhere. This basically prises apart the distinction between law and morality.
  • The author's contention is that 'thin' laws are simply rules of conduct with defined punishments for their infringement with no serious connection to equality and morality (the Nuremberg Laws would be an example). 'Thick' laws are intended to be moral. So far so good.
  • Her further contention is that the laws imposed by the British colonialists on their subjects were of the 'thin' kind and that - therefore - the British can take no credit for 'giving the colonies the rule of law' in exchange for exploiting their peoples and natural resources.
  • I think 'the British' suffer an unfairly bad press in certain quarters on this issue. Empires have always had to be imposed on their client nations and the primary aim of the rule of law has always been to maintain order for the empire's administrators so that 'tribute' can be extracted. The upside is said to have been - in relatively benign empires - that they imposed peace on their subject peoples - who would otherwise be fighting amongst themselves - and provided a larger canvass on which prosperity could flourish.
  • The empire builders thought - sometimes with some justification - that their way of life was worthy of being imposed on - or at least made available to - those countries and territories they conquered. But I doubt this was ever the real motivation. It's just what powerful nations did, before other powerful nations did it to them. That’s how the world worked.
  • The British - and other European nations - just happened to be the last on the scene. They were schooled in the Classics, and took the Roman Empire as a model, tempered somewhat by a paternalist Christian morality (in principle, at least).
  • The question for revisionists is really whether British rule was the least bad option for the client nations - whether it would have been better for India (say) to be ruled by the British, the French, the Russians or (latterly) the Japanese. Similarly for Africa.
  • Unlike earlier empires, the British suffered from internal moral conflict as well as internal rebellion and external pressures. The idea of converting laws from thin to thick would never have occurred to earlier empires, and those suggesting such reforms would have been eliminated with extreme prejudice.
  • And yes, the British were racist, as were all peoples to one degree or another, at least when they had the upper hand.
  • And yes, the British imposed corporal punishment on their subject peoples - but punishments had been physical since time immemorial and only recently reformed (with Britain leading the way) - and the 'lower classes' had always been treated worse than those who made the laws. Times have changed.
  • There are no Aeon Comments.
  • All these arguments deserve closer consideration and relate to my Notes on Race and Forensic Properties.

Paper Comment



"Simecek (Karen) - Your life is not a story: why narrative thinking holds you back"

Source: Aeon, 17 October 2024


Author's Introduction
  • Narratives are everywhere, and the need to construct and share them is almost inescapable. ‘A man is always a teller of tales,’ wrote Jean-Paul Sartre in his novel Nausea (1938), ‘he lives surrounded by his stories and the stories of others, he sees everything that happens to him through them; and he tries to live his own life as if he were telling a story.’
  • We rely on narratives because they help us understand the world. They make life more meaningful. According to Sartre, to turn the most banal series of events into an adventure, you simply ‘begin to recount it’. However, telling a story is not just a powerful creative act. Some philosophers think that narratives are fundamental to our experiences. Alasdair MacIntyre believes we can understand our actions and those of others only as part of a narrative life. And Peter Goldie argues that our very lives ‘have narrative structure’ – it is only by grappling with this structure that we can understand our emotions and those of others. This suggests that narratives play central, possibly fundamental, roles in our lives. But as Sartre warns in Nausea: ‘everything changes when you tell about life.’
  • In some cases, narratives can hold us back by limiting our thinking. In other cases, they may diminish our ability to live freely. They also give us the illusion that the world is ordered, logical, and difficult to change, reducing the real complexity of life. They can even become dangerous when they persuade us of a false and harmful world view. Perhaps we shouldn’t be too eager to live our lives as if we were ‘telling a story’. The question is: what other options do we have?

Author's Conclusion
  • And so, instead of just changing our narratives, we should learn to understand the perspectives that shape them. When we focus on our own stories, we live life as we already know it, but by loosening the grip that stories hold over our lives – by focusing on the perspectives of ourselves and others – we can begin opening ourselves up to other possibilities. We can adopt new orientations, find significance in new places, and even move toward the exciting unpredictability of shared perspectives.
  • As Sartre warned, everything changes when you tell a story. Narratives limit our potential. Though we are complex beings, living in a chaotic universe, our stories create the illusion that our lives are ordered, logical and complete.
  • We might never fully escape the narratives that surround us, but we can learn to change the perspectives behind them. And so, we are never bound by stories, only by our ability to understand how our beliefs and values shape the way we perceive and engage with the world. We don’t need better narratives; we need to expand and refine our perspectives.
Author Narrative
  • Karen Simecek is associate professor of philosophy at the University of Warwick, UK. She is the author of Philosophy of Lyric Voice: The Cognitive Value of Page and Performance Poetry (Bloomsbury, 2023).
Notes
  • Well, the author may have a point. We should not be bound by the stories we tell ourselves (or others tell for us), but we need some structure to our lives, something that makes them more than just one thing after another and provides focus.
  • There are no Aeon Comments.
  • This relates to my Note on Narrative Identity.

Paper Comment



"Simon (Ed) - If animals are persons, should they bear criminal responsibility?"

Source: Aeon, 21 December 2022


Author's Introduction
  • If you were an animal in need of legal representation in early 16th-century Burgundy – a horse that had trampled its owner, a sow that had attacked the farmer’s son, a goat caught in flagrante delicto with the neighbour boy – then the best lawyer in the realm was Barthélemy de Chasseneuz. Though he’d later author the first major text on French customary law, become an eloquent defender of the rights of religious minorities, and be elected parliamentary representative for Dijon, Chasseneuz is most remembered for winning an acquittal on behalf of a group of rats put on trial for eating through the province’s barley crop in 1522.
  • Edward Payson Evans, an American linguist whose book The Criminal Prosecution and Capital Punishment of Animals (1906) remains the standard work on the subject, explains that Chasseneuz was ‘forced to employ all sorts of legal shifts and chicane, dilatory pleas and other technical objections,’ including the argument that the rats couldn’t be forced to take the stand because their lives were at risk from the feral cats in the town of Autun, ‘hoping thereby to find some loophole in the meshes of the law through which the accused might escape’ like, well, rats. They were found not guilty.
Author's Conclusion
  • In a previously denied motion to grant a habeas corpus hearing for the chimpanzee Hercules, even Justice Barbara Jaffe wrote in her decision that: ‘Efforts to extend legal rights to chimpanzees are … understandable; some day they may even succeed. Courts, however, are slow to embrace change …’ – which is as radical an argument as Wise could have perhaps hoped for from the byzantine and glacial chancery.
  • Whether or not courts come to grant greater rights of personhood to animals, as Judge Jaffe believes they eventually will, the reappearance (if not literally) of animals in court signals the beginnings of a cultural shift in how we understand consciousness throughout the wide biological kingdom.
  • What is ultimately being put on trial is Descartes’s automaton, and what is at stake is the exoneration of nature itself. What could result, at least intellectually and spiritually, if not legally, is a more just and equitable, empathetic and fair world, not just for humans, but for chimpanzees and elephants, dogs and rats.
Author Narrative
  • Ed Simon is the executive director of Belt Media Collaborative and the editor-in-chief for Belt Magazine; a contributing editor for the History News Network; and a staff writer at the literary site The Millions. His books include the anthology The God Beat: What Journalism Says about Faith and Why It Matters (2021), co-edited with Costica Bradatan; Binding the Ghost: Theology, Mystery, and the Transcendence of Literature (2022); and Pandemonium: A Visual History of Demonology (2022). He lives in Pittsburgh, PA.
Notes
  • An interesting paper, but it uses the term 'Person' too liberally. Very few species of animal are Persons as commonly defined.
  • It needs re-reading and further comment.

Paper Comment



"Smith (J.B.) - Living without mental imagery may shield against trauma’s impact"

Source: Aeon, 21 November 2024


Author's Introduction
  • Anecdote omitted ...
  • The fact that some people do not have the capacity to form mental imagery was first described in the scientific literature back in the 1880s, but the phenomenon was largely ignored by researchers during the century that followed. It seems to be a common assumption that the contents of our private mental worlds are the same as those of the people we see around us, rendered from the same pallet of sounds and colours and sensations. For a long time, the fields of psychology and neuroscience made similar assumptions, presuming that, barring some significant malfunction, our inner experience is much the same from one person to the next. But in the early 2000s, researchers returned to the subject of mental imagery and began to challenge the idea that everyone’s imagination worked in the same way. It became clear that a significant percentage of the population have little or no mental imagery lighting up their internal worlds, and the phenomenon was described in a seminal 2015 study by the neurologist Adam Zeman who coined the term ‘aphantasia’ by which we know it today.

Author's Conclusion
  • The picture that emerges from the study is that of aphantasia providing a sort of insulation from one of the most debilitating symptoms of PTSD. If your brain produces little or no mental imagery, then you are shielded from the involuntary, intrusive visual memories that drag patients back into the past, forcing them to relive the horrific incident in vivid, re-traumatising detail. And, while aphantasia is often described in terms of deficits, this fluke of neurobiology turns out to be a superpower, one that Pearson suggests might benefit those working in certain high-stress professions: ‘For first responders, the police, the military, having aphantasia could be really advantageous.’
  • But Pearson is keen to emphasise that intrusive visual imagery is not the only symptom of PTSD. It is still possible for people with aphantasia to suffer from trauma, a fact that is echoed in McDougall’s patients. ‘There are many impacts of trauma that are less obvious and that the patient themself might not even realise,’ she tells me. ‘They might be feeling completely overwhelmed, feeling anxious all the time, and they might not be aware that these things are being triggered by their trauma because it’s not something as obvious as a flashback.’ This might lead some individuals to be misdiagnosed, and for support to focus on disparate symptoms without addressing the root cause.
  • PTSD is a complex and multifaceted phenomenon, and the fact that I have aphantasia might not be the only reason I didn’t go on to suffer from this debilitating condition. But the new findings do indicate that the lack of mental imagery might have played a significant role in how my brain processed an assault, insulating me from the intrusive visual flashbacks that are a hallmark of PTSD. This exploration has helped me to develop a deeper understanding of how my brain works, and to get a better idea of what it must be like for someone who does suffer from visual flashbacks. It’s easy to understand that we may never know what it’s truly like to be a bat, or a cat, but we are increasingly coming to realise that it is impossible to know what it is like to be the person sitting across from us, as well.
Author Narrative
  • J B Smith is a writer based in south London, UK. His work has appeared in Ransom Note and Interalia Magazine.
Notes
Paper Comment



"Stephenson (Abi) - The cochlear question"

Source: Aeon, 15 November 2024


Excerpt
  • Why is the case of cochlear implantation so different from other parallel medical situations that a parent has to navigate? Why is it controversial in the way that an artificial limb or cornea transplant is not? Unlike the parent of a child with vision loss who pursues laser surgery in an uncomplicated way, the parent of a deaf child is implicated in a much larger politico-cultural struggle. To my outsider’s eyes, a lot of this was not the tangled snarl of identity politics, but seemed largely to stem from a fundamental disagreement over the metaphysics of deafness. Whereas the hearing world, hand in hand with the medical one, has conceptualised deafness as a sensory deficit that can be relatively effectively ‘restored’ – albeit partially, temporarily and imperfectly – parts of the Deaf-World argue that this approach demonstrates an outdated pathologisation of difference.
  • Happily, we live in an era where neuro- and other divergences are no longer seen as aberrations, but rather as part of a welcome heterogeneity of biology and perspective. Deaf critics and disability theorists thus pose the question: why does society want to frame deafness as a medical abnormality, rather than a sensory difference? In their view, the medical model is the outward face of a punishing normative tyranny. Any deviations from the standard hearing model are ushered – either gently, kindly or violently, oppressively – back to the midline. Like the demented ‘benevolent’ logic of gay conversion therapies, even the so-called good intentions of parents and bystanders (as anti-racist campaigners have long argued) could perpetuate discrimination just as easily as the malign ones.
  • The psychologist Harlan Lane went even further, and argued that deafness is actually more akin to an ethnicity than to a disability. If the same rights and protections apply as apply to other cultural, religious and racial minorities, then the entire therapeutic landscape looks incredibly sinister. At its mildest, the mainstream model of improving a deaf child’s hearing becomes the enforced alteration of a member of a cultural and linguistic minority. And at worst, like with the cochlear implant, it is not only an invasive surgery that endangers and irrevocably changes a child, but also threatens the extinction of an imperilled language and the erasure of a cultural group.
  • Lane likens the hearing parents of a deaf child to parents who adopt a child from a different racial background, arguing they have a similar responsibility to uphold the cultural mores and traditions of their child’s ethnic group. Tom Humphries, the Deaf culturalist who coined the term ‘audism’, has a deeply cynical view of hearing parents, positioning them simply as legal ‘owners’ of their deaf children, many of whom eventually ‘migrate’ back to what he strongly implies is their true cultural home. He explicitly likens this pattern of ownership and return to that of African American slaves or Latin American populations under colonial rule. As a parent, this line of argumentation is jarring, to say the least. While at the extreme end of the debate, many Deaf critics have joined Humphries in arguing vociferously that hearing parents cannot be trusted to give informed consent on behalf of their child – surgical or otherwise.
  • With these sorts of arguments informing a good deal of the public discourse around deafness, what is the hearing parent of a deaf child to think? And more importantly – how are they to act? The underlying assumption of CI critics seems to be that the neutral stance is to do nothing, and that any intervention at all requires moral licence. But doing nothing isn’t always neutral – most obviously in medical scenarios – and can be a malign act of withholding. There is a genuine moral dilemma here, because a parent must give informed consent one way or the other. Not acting while the child is young is potentially equally culpable.
  • If the anti-CI arguments are not convincing, then it’s possible that their proponents have indirectly harmed the potential development of some children and their ability to flourish in the widest set of circumstances. Alongside the passionate critiques of Lane, Humphries and others, there is also considerable weight lent to the academics arguing quite the opposite – that denying a deaf child a cochlear implant is neglect. In the Western world, where early paediatric implantation in severely to profoundly deaf children is considered to be the ‘standard of care’, making the choice not to implant could be seen as a relatively radical decision to withhold a mainstream technology that most of a deaf child’s peers will be using.
  • And what are the ethics of withholding when that technology has safety implications, and could enable the deaf child-then-adult to apprehend dangers to themselves or others? Footsteps in the dark, a window breaking, a car approaching on a quiet street, a fire alarm, a scream in the shopping centre, a baby crying in the next room – none would be audible to my daughter without an implant. And from a feminist perspective she might need, as women always have done, a loud voice to shout, or to argue with her healthcare providers, or to advocate for herself in an emergency. The implant would provide her with a clearer pathway to power and impact in the world, and to positions of influence where she will be underrepresented both as a woman, and as a deaf person.
  • To refuse her a CI on the basis of Lane et al’s arguments would be to use the future of an individual as a blunt weapon to achieve benefit for the broader Deaf community.
Author Narrative
  • Abi Stephenson is a producer and curator of talks, broadcasts and animations. She was the senior producer of the RSA’s public events programme for more than a decade, curating a year-round festival that showcased the world’s greatest minds and ideas. She also edited and produced the award-winning RSA Animate and RSA Shorts series, which aimed to make robust, world-changing ideas accessible to everyone.
Notes
  • Well, a useful paper, and shows the psychological and other harms caused by political correctness. The arguments by elements of the 'deaf community' are entirely specious and parents of deaf children should not be intimidated in the least but should do what is right for their children.
  • There are two entirely distinct issues. The first relates to whether deafness (or blindness) is just a difference or a deprivation. It's clear that we have ears to hear with and eyes to see with and that the lack of either sense is a deprivation. Refusing amelioration is perverse and selfish.
  • This doesn't mean that deaf or blind people are - in themselves - less valuable than the hearing and sighted - but that their lives (and society as a whole) would go better (other things being equal) if they could see and hear. Things would go better (same caveat) if there were no deaf or blind people. This doesn't mean that things would be better if currently deaf and blind people were exterminated - they wouldn't - but that things would go better if these people could see and hear.
  • Now, given that there are people who are deaf or blind and - in general - their conditions cannot currently be cured - it is good that they can form communities for moral support, shared experience and self-help (and so on). Maybe it would be bad, transitionally, for these communities, if the supply of new recruits dried up, but that's just tough - and other forms of support might need to be dreampt up. After all, no-one would suggest that some children should be blinded so that the 'blind community' could be maintained.
  • Is there really any kind of parallel to racism? No doubt in some societies - though less so now than previously - it is a deprivation to be a member of a race that's not given the same opportunities as the dominant race in a society. But - we have discovered - this isn't because there's something intrinsically wrong with members of this race. They function perfectly well, and can fulfil roles in society according to their abilities just as those of other races. The same is not true of the blind or deaf. There are some things they can't do (even though they can do much more than was previously supposed, and help can be provided). Picking up on the last point - the best help that can be provided is to cure their condition. The same is not true of race; it's not a 'condition' that needs curing.
  • Yes, society should be more accepting of 'diversity' (and it is, compared with the recent past). But medical conditions that can be fixed without excessive risk or cost should be fixed. As the author points out, failure to perform such interventions is a dereliction of duty.
  • I suppose such interventions should not be imposed on those considered competent to refuse them. The 'duty of care' applies more to children or others not considered competent. But those that are competent might have a duty to comply if - to put it bluntly - doing so would make them less of a burden on the rest of us.
  • There are no Aeon Comments.
  • This relates to my Notes on Narrative Identity, Race (because of the supposed parallel) and Psychopathology.

Paper Comment
  • Sub-Title: "As the hearing parent of a deaf baby, I’m confronted with an agonising decision: should I give her an implant to help her hear?"
  • For the full text see Aeon: Stephenson - The cochlear question.



"Stiefel (Klaus M.) - The tentacles of language are always on the move"

Source: Aeon, 15 October 2024


Author's Introduction
  • As an evolutionary biologist, I see the history of species in my own body. In my spinal cord, I see half a billion years of evolution, starting with the flexible cord of a tiny proto-fish in the ancient Cambrian period; I see the steps backwards from my brain to the mere subtle thickening of the nerve cord at the front end of that same proto-fish.
  • In contrast with evolutionary biology, I have little formal training in linguistics beyond two classes in artificial intelligence approaches to language at the University of Vienna. The classes predated ChatGPT by decades and were held in a building that once housed a late-medieval monastery, which seems fitting in retrospect; the computer science of two decades ago feels akin to the Middle Ages.
  • Evolution is more like human language than computer science. It produces major changes in the lineages of animals and plants over hundreds of millions of years, and then occasionally goes into sprint mode, such as in the great African Rift lakes, where a rainbow of new species has evolved in only hundreds of thousands of years. The grand developments of human languages, likewise, can seem impossibly slow – and then suddenly race ahead. The early Indo-European and major language groups of modern Europe and South Asia saw slow, grand developments, yet they too can shift in a lifetime or less. I’m exhibit number one. My exceedingly international biography, with time spent living in Austria, the UK, Germany, the US, Japan, Australia and the Philippines, helped me see such shifts, and gives me more food for linguistic thought than a more stationary human language-user would ever see.

Author's Conclusion
  • The scholarly literature on bilingualism is home to an agitated discussion of the benefits or harms of bringing up a child with two or more languages. Ideological divisions run deep in this literature. In my experience, children are extremely keen to express themselves, by whatever means possible, and they will handle more than one language quite naturally. To come back to our son, he knows, at age three, which set of grandparents to speak to in which language. The development of languages, with their mixing and reshuffling over the course of centuries, almost certainly builds on the dynamics of multilingual families.
Author Narrative
  • Klaus M Stiefel is a biologist, science writer and underwater videographer, based in the Philippines. His latest book is 25 Future Dives (2024).
Notes
Paper Comment



"Sunar (Neesa) - I have no mind's eye: let me try to describe it for you"

Source: Aeon, 10 February 2021


Author's Introduction
  • I have aphantasia, a neurological condition that leaves me with a ‘blind mind’s eye’: the inability to mentally visualise my thoughts. While most people are able to ‘see’ images associated with stories and thoughts when their eyes are closed, I have never had this gift. When I close my eyes, I experience only darkness. I have no sensory experience.

Author Narrative
  • Neesa Sunar is a freelance writer on mental health. She works as a mental health advocate and runs a mental health discussion group on Facebook called What is Wellness? She is also the author of Memories of Psychosis: Poems on the Mental Distress Experience (2019) and a singer/songwriter with guitar. She lives in New York.

Notes
  • When I first read this paper, I remarked that it was ‘not greatly enlightening’, presumably because I read it after watching "Aeon - Video - Out of mind".
  • However, reading it again a couple of years later (July 2023), I found it much more useful and think it raises many interesting issues.
  • There’s a reference to the introduction of "Galton (Francis) - Visualised Numerals" to do with visualising a breakfast table. Good to have a positive reference to Francis Galton. I intend to read this paper and consider the questions he asked his subjects from my own perspective.
  • According to Adam Zeman, voluntary imagery is generated in fronto-parietal and in posterior brain regions.
  • Aphantasia is usually congenital, but can be a response to brain injury. It affects 2% of people.
  • Failure to be able to generate arousing images can cause sexual problems (asexuality).
  • Those with aphantasia often don’t know they have the condition, using language as a substitute – concepts rather than images.
  • Most people with aphantasia can still dream with images1, though the author cannot – dreaming in ‘words, ideas, feelings and verbal knowledge of circumstances’.
  • Friends were shocked on hearing of her ‘condition’, saying visualizing was ‘a big part of their understanding of life’. The author – while having no understanding of having a ‘Mind’s Eye2’ felt she was missing a sense akin to sight or hearing’.
  • Her ability to recall memories3 is diminished and she’s worried she’ll forget what her loved ones look like as she gets older. She takes photos to preserve her memories.
  • She remembers ‘auras’ associated with particular people and places and is thereby able to describe them in the absence of visual imagery.
  • For people, while she can’t recall ‘minute details4’ of what they look like she can recall their personalities well.
  • She said that meditation was of no benefit to her as her lack of mental imagery left her with a clear mind. I was doubtful – mental images don’t thrust themselves upon you, though present worries and non-visual recollection of past events may do.
  • She finds reading fiction difficult and just skips the descriptive passages lest she lose track of the narrative. I must say I do likewise, though it has little to do with the ability to conjure up the scenes the author is describing.
  • She trained to be a concert violinist, but couldn’t hear the music in her mind, which meant her playing and application of teachers’ advice was less nuanced than needed at the professional level. I was surprised at this – does she really not hear music in her head? This is much more of a deficit than having no mental imagery, but at least it would protect you from earworms. Are auditory memories – and creative musical ideas – generated using the same neural circuits as visual ones?
  • An advantage is a reduction in PTSD (caused by alleged childhood ‘emotional abuse’ by her father). Auras are less traumatic than mental images.
  • She was also ‘diagnosed with schizoaffective disorder5, due to (her) past with psychosis and delusions’. The ‘voices’ – trains of thought which ‘identified themselves as outside entities’ were not audible so were ‘easier to discredit’. I’m not sure of any of this.
  • Her condition makes her more ‘pensive6’ – contemplating ‘the meaning of life’ and free for ideas and ‘creative inspirations’ – without the interruption of images of the tangible world. However, she has to keep a journal as a substitute for memories.
  • She references a couple of books recent, but they don’t seem to be up to much based on reviews on Amazon (which is how she came across them). There’s other material on-line – some of it therapeutic – including a TedX talk (Tamara Alireza – Aphantasia: can you see in your mind’s eye?).
  • She concludes with thoughts about ‘neuroatypicality7’ and ‘cures’. She doesn’t view her condition as a ‘disability or disorder’. I think it probably is, but not as debilitating as many, especially as most people don’t realise they have the ‘condition’. Not having perfect pitch isn’t a ‘condition’ nor is the lack of any other special skill. But the complete lack of a facility that almost everyone has might be deemed to be a ‘condition’, unless it’s a matter of degree and the ‘sufferer’ just happens to be at the extreme end.
  • As far as a ‘cure’ is concerned – if there was one – the author is unsure she’d take it even if – just maybe – it would make her smarter, help her understand others more easily, sharing in their perceptions or making her more creative. She says she’d find the prospect ‘scary’, though she might adjust.
  • At least her rejection of a non-existent cure has nothing to do with wanting to be part of a ‘community’. This is a hot topic in the (so-called) ‘deaf community’ about refusing to have cochlear implants8 and insisting on communication by sign-language. To my mind, this shows a lack of gratitude in rejecting what is surely a godsend.
  • Further reflection is probably required, but must await further reading on the topic.

Paper Comment




In-Page Footnotes ("Sunar (Neesa) - I have no mind's eye: let me try to describe it for you")

Footnote 1: Footnote 2: Footnote 4:
  • Who can? Reference the standard observation that many men can recall minute details about their cars but can’t remember the colour of their wife’s eyes.
Footnote 5: Footnote 6: Footnote 7:
  • I did a Google, and Wikipedia: Neurodiversity came up, which distinguishes between Neurodiversity and Neurotypicality.
  • I’m suspicious of all this ‘stuff’. Some ‘conditions’ are not just benign differences to be celebrated but real deficits. But, I don’t think aphantasia falls into this category – and (though the paper doesn’t discuss the possibility) may be a matter of degree (it surely is) so may fall into the class of other pseudo-disorders like dysmathia, dyspraxia and maybe dyslexia.
Footnote 8:



"Sutton (Isabel) - Dementia is not a death. For some, it marks a new beginning"

Source: Aeon, 23 July 2024


Excerpts
  • Dementia is the name given to a range of diseases that affect the brain, including Alzheimer’s, Huntington’s and Parkinson’s, as well as specific brain conditions, such as vascular dementia and frontotemporal dementia. This form of cognitive decline was first identified by the German neuropathologist Alois Alzheimer in 1906 when he found, as he described it, an ‘unusual disease of the cerebral cortex’ in the brain of a deceased 55-year-old woman. The strangeness of the syndrome ensured it would remain poorly understood and largely ignored throughout the 20th century. In fact, until the 1970s, cognitive decline in older people was thought to be caused by the hardening of the blood vessels rather than changes in the brain. During this period, doctors didn’t think that much could be done about it and dementia was widely viewed as an inevitable and natural part of ageing; however, it is not. Yes, after 65 its prevalence doubles with every five years of increasing age, but there appear to be plenty of other things influencing its onset that go beyond the brain. The Lancet Commission on dementia prevention suggests that 12 factors account for around 40 per cent of cases, including hearing loss, traumatic brain injury, low level of early life education, air pollution, smoking, alcohol, high blood pressure, diabetes, physical inactivity and obesity, depression and, critically, social isolation. Evidence from long-term studies consistently shows that having more social participation in the middle and later stages of life is associated with 30 to 50 per cent lower risk of developing dementia. This represents one of the most important shifts in how the condition is understood.
  • ...
  • In recent decades, this changing view of dementia has been amplified by researchers and authors who stress a more social and hopeful way of thinking about cognitive decline. Zeisel emphasises many of the strengths and capacities of people with dementia in his book I’m Still Here, reminding us how much of the brain remains intact, even as it changes: 70 to 90 billion active brain cells, he writes, ‘hold memories, the ability to learn, the ability to be creative, and to enjoy life’. The UK-based writer Wendy Mitchell, who died earlier this year with dementia, wrote the books Somebody I Used to Know (2018) and What I Wish People Knew About Dementia (2022), demonstrating that it was possible to communicate in writing even as speech became harder. ‘The hesitant verbal me can feel frustrating,’ she explained, ‘but the typing me feels calm, fluent and closer to my thoughts and feelings.’ In Japan, the author Yusuke Kakei’s bestselling ‘guidebook’ Ninchishō sekai no arukikata (2021), which translates as ‘walking in the world of dementia’, also sets out to correct biases in how people think of the condition. Kakei allows readers to encounter dementia as a different world with distinctive, sometimes strange customs.
  • We still have no cure for the neurodegenerative causes of dementia, which makes the exploration of the social and environmental aspects of the syndrome even more important and urgent. In an era of rapidly ageing populations, as more people than ever face the likelihood of cognitive decline, we need better ways of understanding the world of dementia. Look at James McKillop. Now in his 80s, he has lived with dementia for more than 20 years and has been at the forefront of an advocacy movement for those with the syndrome. He is still working, living and remembering – just differently.
  • No, dementia is not a living death. For those who are allowed to change, and supported and stimulated, it can mark a radical new beginning.
Author Narrative
  • Isabel Sutton is a radio and television producer and journalist. She lives in London.
Notes
  • This a brief and sensible paper. It includes sundry positive cases of a degree of flourishing with dementia. It is right to point out how its effects can be mitigated both before and after its onset. But there are cases where the fight has failed. She doesn't mention the issue with late onset dementia arising in highly intelligent people whose coping strategies have finally been breached and whose decline is precipitous.
  • Either way, it's a warning to seize the day.
  • I agree that it's incorrect to say that X 'died when they became demented'. It's a consequence of the Psychological View of personal identity.
  • There are few sensible Aeon Comments, mostly pointing out the expense of providing a stimulating environment for those with dementia.
  • This relates to my Notes on Psychopathology and Memory.

Paper Comment



"Tahar-Malaussena (Mathilde) - Why the cat wags her tail"

Source: Aeon, 28 March 2025


Author's Introduction
  • In Cheshire, a fox is poised to pounce on its mate when a badger bursts from a bush. The badger starts chasing the fox, which keeps leaping away, finally distancing itself. Then the fox suddenly turns back, approaches cautiously, and jumps sideways, facing the badger head-on. Back arched, head low, it stops, remains still. After a pause, the badger swiftly resumes the chase, causing the fox to hop around before lunging at its companion and darting off together.
  • In Orlando, three dolphins are swimming in unison when one forms a perfect bubble ring. Another immediately approaches and blows another ring, which merges with the first to create a larger hoop. The third dolphin appears to attempt to pass through it, completing their improvised choreography.
  • Animals often engage in play, from the spectacular to the subtle. Hyenas stage mock brawls, cats spin in circles chasing their tails, octopuses play push-and-pull with bottles, dogs bury sticks only to dig them up moments later… Even polar bears have been spotted playing with dogs, grabbing them in what looks like a hug, rolling in the snow, and letting the dogs gently nibble their lips. Such scenes make us grin with delight. But is that all there is to it?
  • Animal play can seem trivial, even laughable. Often defined as an intrinsically rewarding activity, yet offering no immediate survival benefits, its very existence is puzzling. While it has long been hypothesised that play serves as a rehearsal for adult behaviours, some studies suggest that it might not be crucial to their development. Similarly, although some scholars propose that play allows animals to expend surplus resources (time, energy, neural activity) – which could explain play’s prominence in pets – this does not account for its widespread occurrence in wild species. Play challenges us with its apparent lack of biological necessity.

Author's Conclusion
  • Recent studies have highlighted the link between animal innovations and evolutionary change. One well-documented case is birdsongs. In many bird species, songs are learnt and form part of their local cultural heritage. In other words, they have been invented, with each population having its own song patterns. And the greater the difference between the songs, the harder it becomes for males from one population to mate with females from another, as they don’t know the local ‘dialect’. Indeed, populations with different dialects also seem to constitute distinct genetic subspecies, hinting at a speciation process sparked, or at least accelerated, by cultural diversity.
  • Another example involves black rats, which developed a technique for opening pinecones. This innovation gave them access to a new resource and enabled them to invade nearby forests. It is reasonable to hypothesise that this environmental shift will, over generations, lead these rats to evolve differently from their urban counterparts, as they now face distinct selective pressures – a change initiated by their pinecone invention. Moreover, acquiring this new technique imposes a cost on the rats, requiring the effort to invent or learn. This means that, if a genetic variant arose in the population that made the behaviour easier to develop and less costly to acquire, it would likely be selected – individuals carrying the variant would leave more offspring inheriting it. Therefore, innovations can alter selective pressures and, if the right genetic variations emerge, drive evolutionary change – a process known as the Baldwin effect.
  • If invention promotes the animals’ adaptation to new conditions and, in some cases, enables evolutionary change, and if play reveals their capacity for invention – and even appears to be the activity par excellence through which animals develop their inventiveness – should we go further and ask: could the most playful species also be the most capable of meeting ecological challenges? Could play itself sometimes drive adaptation and evolution? Some researchers have proposed this hypothesis but, for now, it lacks empirical evidence – or rather, empirical research. It is almost as if the hypothesis were too seductive for the gravitas of scientific enquiry, as though scientists preferred to conceal the fact that their research is driven less by the prospect of useful solutions than by the sheer pleasure of discovery.
  • The exploration and potential corroboration of this hypothesis could, nonetheless, offer a glimpse of another way of living with other animals – one not based on hierarchy or exploitation, but on playful relationships. Like Pippo and Albertine, Safi and Wister, ravens and wolves, we could perhaps co-create with other species the conditions for a shared world. This would not mean simulating shared activities, as in typical play, but drawing from interspecific play the effort to understand other species – an effort that could foster real collaborations, or at least ones that are realisable because they are desirable.
Author Narrative
  • Mathilde Tahar-Malaussena is a philosopher of biology working as a research fellow at University College London, in the framework of the research project Animal Inventiveness: A New Insight on Agency in Evolution, funded by the Leverhulme Trust. She is the author of Du finalisme en biologie (2024).
Notes
  • An interesting Paper: the embedded videos are worth watching too - especially the polar bears playing with dogs.
  • While play can be a training exercise or a trial of alternative ways of doing things, I get the impression that for dogs it's just fun.
  • There are 10 Aeon Comments, each with a reply by the author, though I've not studied them yet.
  • This relates to my Note on Animals.

Paper Comment



"Taiwo (Olufemi) - It never existed"

Source: Aeon, 13 January 2023


Author's Introduction
  • We should expunge, forever, the epithet ‘precolonial’ or any of its cognates from all aspects of the study of Africa and its phenomena. We should banish title phrases, names and characterisations of reality and ideas containing the word.
  • To those who might be put off by the severity of the proposal, or its ideological-police ring, I hear you and ask only that, with just a little patience, you hear me out. It will not take much to jolt us out of the present unthinking in assuming that ‘precolonial’ or ‘traditional’, and ‘indigenous’, has any worthwhile role to play in our attempt to track, describe, explain and make sense of African life and history.
  • When ‘precolonial’ is used for describing African ideas, processes, institutions and practices, through time, it misrepresents them. When deployed to explain African experience and institutions, and characterise the logic of their evolution through history, it is worthless and theoretically vacuous. The concept of ‘precolonial’ anything hides, it never discloses; it obscures, it never illuminates; it does not aid understanding in any manner, shape or form.

Author's Conclusion
  • To date, the works of individual thinkers, their respective places in the annals of thought across the globe and their contributions to the perennial questions of philosophy in their own domains have not been part of Africa’s intellectual history and philosophy. Because we are working within the Gregorian calendar that most of the world now follows, we are able to zero in, as a matter of historical specificity, on particular thinkers in particular periods, working on their own or being parts of discursive communities not limited to their own vicinities. Thus, we end up with more robust and more adequate renderings of the historicity of African ideas and thinkers in Africa and their place in the world. Eighteenth-century philosophy can open up beyond Königsberg to Timbuktu. Nineteenth-century philosophy can take seriously the exertions of James Africanus Beale Horton, Rif’ā’ah al-Ṭahṭāwī and Fukuzawa Yukichi – in Sierra Leone, Egypt and Japan, respectively – in their engagement with modernity and what it meant for their respective societies. Horton wanted Africans to embrace modernity, and both al-Ṭahṭāwī and Yukichi are regarded as the principal proponents of modernity in their respective locations in the 19th century.
  • All this would be invisible to the trinity of precolonial, colonial and postcolonial division of African history for organising states and ideas, practices and institutions, processes and thinkers and intellectual movements through time. Tossing the retrograde ‘precolonial’ epithet in the dustbin can bring only gains in expanding our knowledge, enriching our conceptual repertoires, and telling stories that are closer to the truth than the alternative.
  • It is time to say bye-bye to the idea of a ‘precolonial’ anything in our intellectual discourses respecting Africa.

Author Narrative
  • Olúfẹ́mi Táíwò is professor of Africana Studies at the Africana Studies and Research Center at Cornell University in New York. He is the author of How Colonialism Preempted Modernity in Africa (2010) and Africa Must Be Modern (2014).

Notes
  • This rather long Paper is interesting but – to my mind – rather annoying in places.
  • My comments on the Paper are inhibited somewhat by my general ignorance of African history. I’m not sure who the author’s target audience is, but he asks quite a lot of them in his allusions to African history.
  • Therefore, I need to read up on African history. I have a few rather old books on the subject, which – while they won’t take advantage of ‘recent research’ (which may not be any less biased than ‘old research’) may well give the sort of historical analysis that our Author is complaining about.
  • An Aeon Paper complaining of the dearth of African historiography by Africans is "Green (Toby) - Africa, in its fullness".
  • Overall, I think the author's position is implausible despite him being correct in arguing that the tripartite division of African history as pivoting around European Colonialism misses out a lot.
  • However, no-one can seriously dispute the importance of European Colonialism for modern Africa. The current nation states are mostly as carved out - for good or (usually) ill - by the Colonial powers and the 'modernising' of (and often, the creation of) many African cities and infrastructure is as a direct result of Colonialism.
  • When Europeans talk of 'Colonial Africa' they are really talking about Black sub-Saharan Africa. The author is right that Africa as a continent is not uniform. It is essentially divided by the virtually impenetrable Sahara desert, much as the sub-continent of India is separated from Eurasia by the virtually impenetrable Himalayas. Of course, sea routes enabled contact by Arabic and European powers in both cases.
  • A distinction must be made between the Roman province of Africa and the other parts of North-East Africa - Egypt, Ethiopia and Nubia - that were part of the ancient world as known to the West, and those parts of Africa for which the late 19th-century ‘scramble’ took place. When ‘colonial Africa’ is under discussion, it is sub-Saharan Africa that is intended. Those parts of North Africa that became detached from the Ottoman Empire are another matter entirely. The author is right to pour scorn on the very brief Italian attempts to colonize Libya and Ethiopia prior to WW2, but this isn’t really central to the issue.
  • Recruiting the ancient Egyptians, Carthaginians and Arabs as ‘Africans’ muddies the waters: especially when luminaries such as Septimus Severus and St. Augustine are wheeled out as great Africans. They were not Black Africans and no-one has ever claimed that the histories of these great civilisations pivot around recent European meddling, thought they possibly do pivot around earlier interventions by the Romans and the Arabs.
  • Sub-Saharan Africa enjoyed (or suffered) considerable Arab influence, which provided the literacy for and improved the technology of its various kingdoms (I dare say some high culture was entirely indigenous; like 'Great Zimbabwe', but I think the Benin Bronzes that feature as the cover photo use technology imported by the Arabs).
  • One of the reasons the initiation of Colonialism was so successful was the ability of the colonising powers to take advantage of the rivalries between the indigenous states (whether in Africa, South and Meso-America or in India). Only later did the Maxim gun1 give the European powers overwhelming military might (at least in pitched battles).
  • I wasn’t sure why he had to mention the ‘racist philosopher’ Hegel2, who is lampooned in "Russell (Bertrand) - History of Western Philosophy" in this context (Book 3, Part 2, XXII – Hegel3). Also, ‘racism’ isn’t – other than in current perception – the one example of wickedness on which the world turns. Racism wasn’t – as far as I know – central to Hegel’s philosophy except in the use of silly analogies in his Philosophy of History. Maybe I should look at "Hegel (Georg Wilhelm Friedrich) - The Philosophy of Right, The Philosophy of History". Also, maybe "Sorensen (Roy) - Mach and Inner Cognitive Africa" is relevant4? Racism was more central to Wagner’s philosophy, but few would refer to him as the ‘racist composer’.
  • The author mentions Yukichi Fukuzawa on a couple of occasions, though what he has to do with the debate escapes me.
  • The author makes a lot of play on African philosophy. I note that Philos-List has a fair amount of traffic on African philosophy, which I;ve hitherto ignored as too far from my interests, whatever its intrinsic merits. I just note here that "Herbjornsrud (Dag) - The African Enlightenment" is an Aeon paper claiming an Ethiopian Locke, though there’s dispute about the genuineness of the key work (see Wikipedia: Zera Yacob (philosopher)).
  • More to be added in due course ...
  • For the Author, see:-
    Cornell: Africana Studies and Research Center - Olúfẹ́mi Táíwò
    Wikipedia: Olúfẹ́mi Táíwò
  • Note: As Wikipedia points out, there's another very similarly named Black US philosopher (our author is Nigerian; the doppelganger's parents were also Nigerian):-
    Olúfẹ́mi O. Táíwò: Home Page
    Wikipedia: Olúfẹ́mi O. Táíwò

Paper Comment
  • Sub-Title: "The idea of a ‘precolonial’ Africa is theoretically vacuous, racist and plain wrong about the continent’s actual history"
  • For the full text see Aeon: Taiwo - It never existed.




In-Page Footnotes ("Taiwo (Olufemi) - It never existed")

Footnote 1:
  • See Wikipedia: Maxim gun (including Hilaire Belloc’s couplet “Whatever happens, we have got; The Maxim gun, and they have not”).
Footnote 2:
  • Another side-swipe at Hegel recently is an account of the Examiner’s response to an answer to the Examination question ‘Was Hegel a great philosopher’ being ‘Yes’ as “Very succinct, but a shorter and more accurate answer would be ‘No’”.
Footnote 3:
  • From a quick look, Russell makes no reference to Hegel’s use of Africa, but does – on p. 705 of my edition – quote his use of China (“of which Hegel knew nothing other than that it existed”) as exemplifying Pure Being.
Footnote 4:
  • I need to read this as part of my studies in Thought Experiments, but in this context only to determine whether Sorensen is using Hegelian terminology.



"Terzian (Giulia) & Corbalan (M. Ines) - Do you have a duty to tell people they’re wrong about carrots?"

Source: Aeon, 21 December 2022


Authors' Conclusion
  • In an age of rising misinformation, as navigating expert testimonies becomes ever more difficult, we must be epistemically vigilant. We must be careful when it comes to our own beliefs. We shouldn’t treat gossip and speculation as fact. We should be wary of far-fetched explanations. And we should also look after the epistemic wellbeing of others (when feasible).
  • Sometimes, we have an epistemic duty to speak up. Even when it makes us uncomfortable. Even when it seems rude, weird or exhausting. And even when it’s about something as seemingly unimportant as the price of carrots.
Author Narrative
  • Giulia Terzianis a research fellow at the ArgLab, the Reasoning and Argumentation Lab within the Institute of Philosophy at the NOVA University of Lisbon in Portugal.
  • M Inés Corbalán is a collaborating member of the ArgLab in the Institute of Philosophy at NOVA University of Lisbon in Portugal.
Notes
  • A fairly interesting but important paper which raises many issues germane to the 'post truth' era.
  • I've long believed it an epistemic duty to believe only truths, insofar as these can be determined.
  • Also, that we should be guided by, and submit to, experts - where there are such, and there is a consensus - unless our own expertise exceeds theirs.
  • This applies to religious beliefs as well as to political and other more mundane beliefs.
  • This Paper is assigned to Logic of Identity pending the creation of a new Note on Truth.

Paper Comment



"Thompson (Evan) - Clock time contra lived time"

Source: Aeon, 30 September 2024


Author's Introduction
  • On the evening of 6 April 1922, during a lecture in Paris, the philosopher Henri Bergson and the physicist Albert Einstein clashed over the nature of time in one of the great intellectual debates of the 20th century. Einstein, who was then 43 years old, had been brought from Berlin to speak at the Société française de philosophie about his theory of relativity, which had captivated and shocked the world. For the German physicist, the time measured by clocks was no longer absolute: his work showed that simultaneous events were simultaneous in only one frame of reference. As a result, he had, according to one New York Times editorial, ‘destroyed space and time’ – and become an international celebrity. He was hounded by photographers from the moment he arrived in Paris. The lecture hall was packed that April evening.
  • Black-and-white photo of a young Albert Einstein explaining equations on a chalkboard to an audience of seated men in a classroom: Albert Einstein at the College de France, with Henri Bergson in the audience (second along from white-bearded man). Paris, 1922.
  • Sitting among the gathered crowd was another celebrity. Bergson, then aged 62, was equally renowned internationally, particularly for his bestselling book Creative Evolution (1907), in which he had popularised his philosophy based on a concept of time and consciousness that he called ‘la durée’ (duration). Bergson accepted Einstein’s theory in the realm of physics, but he could not accept that all our judgments about time could be reduced to judgments about events measured by clocks. Time is something we subjectively experience. We intuitively sense it passing. This is ‘duration’.
  • Their debate began almost by accident. The meeting in April had been convened to bring together physicists and philosophers to discuss relativity theory, but Bergson came intending only to listen. When the discussion flagged, however, he was pressed to intervene. Reluctantly, he rose and presented a few ideas from his forthcoming book, Duration and Simultaneity (1922). As Jimena Canales documented in her book The Physicist and the Philosopher (2015), what Bergson said in the following half an hour would set in motion a debate that reverberated through the 20th century and down into the 21st. It would crystallise controversies still alive today, about the nature of time, the authority of physics versus philosophy, and the relationship between science and human experience.
  • Bergson began by declaring his admiration for Einstein’s work – he had no objection to most of the physicist’s ideas. Rather, Bergson took issue with the philosophical significance of Einstein’s temporal concepts, and he pressed the physicist on the importance of the lived experience of time, and the ways that this experience had been overlooked in relativity theory.
  • Though Einstein was forced to speak in French, a language of which he had a poor command, he took only a minute to respond. He summarised his understanding of what Bergson had said and then shrugged away the philosopher’s ideas as irrelevant to physics. Einstein believed that science was the authority on objective time, and philosophy had no prerogative to weigh in. To end his rebuttal, he declared: ‘[T]here is no time of the philosopher; there is only a psychological time different from the time of the physicist.’
  • But despite what many have come to believe about the debate that began that night, Einstein was wrong. There is a third kind of a time: a time of the philosopher.
    When Duration and Simultaneity was published later that year, Bergson’s debate with Einstein became more public and widespread, drawing in many other physicists and philosophers. But as it spread, cracks began to appear in the philosopher’s claims. The argument showed that Bergson had misunderstood an important technical aspect of Einstein’s theory of special relativity, particularly concerning time dilation (the difference in elapsed time, as measured by two clocks, due to their relative velocities). Due to this failing, many came to believe not only that Einstein won the debate, but that the philosophy of duration had no relevance to the world of physics. Bergson began to appear out of touch with the cutting edge of science.
  • But close examination of Bergson’s work does not bear out these lopsided judgments. He was out of touch neither with science nor mathematics. In fact, he was adept at mathematics – he had won a prestigious mathematics prize, and his first published work was in a mathematics journal. And though he was wrong about one technical aspect of relativity theory, he was right about something more fundamental: time is not just what clocks measure. It must be understood in other ways that draw directly on our experience of duration.

Author's Conclusion
  • In the century since 1922, the conceptual distance between the German physicist and the French philosopher seems to have shrunk. It turns out that there is a way to reconcile Bergson’s ideas with special relativity theory – though none of the parties to the debate seems to have noticed it. As the philosopher Steven Savitt has suggested, duration can be understood as the passage of local or ‘proper time’ – the time measured by a clock following along with an object’s worldline within a reference frame (eg, following a twin leaving Earth at the speed of light). In other words, proper time can be understood as measurable clock time based on the duration proper to an observer within a reference frame.
  • But this reconciliation implies that duration is many, not one, which is something Bergson wanted to avoid because he believed duration was singular and universal. According to this reconciliation, the passage of time is always given from some experienced perspective in the Universe and never from outside it. Duration is many because there is no upper bound on the number of possible perspectives and associated worldlines. Every person, every insect, every rock – every thing – has its own worldline. And each of these worldlines reflects a unique passage through time and possible experience of duration. Better still, each worldline represents the distillation of a unique durational flow, since a worldline is a mathematical abstraction, whereas passage (the experience of time passing) is concrete. The Universe is teeming with times and potential durational rhythms. This means that there is no temporal bird’s-eye view of the Universe that flies above and beholds all these times as one.
  • Through these teeming times and durational rhythms, we can see how the so-called block-universe theory, which has been thought to follow from relativity theory, goes astray. According to this theory, the passage of time is an illusion because the past, present and future all constitute a single block in four-dimensional spacetime. But it is impossible to conceive of the contemporaneous reality of all the events in such a block universe without adopting a bird’s-eye (or God’s-eye) perspective external to the Universe and the passage of nature. Reconciling Bergson and Einstein shows us that there cannot be such a temporal bird’s-eye view of the Universe. There is no way of seeing outside and above the disparate paths through spacetime and the different rhythms of duration.
  • And yet, despite these proliferating times, there is a sense in which duration is also singular and universal, as Bergson thought. Measured time always presupposes the same ineliminable concrete fact of duration or temporal passage. Measurable times and durational rhythms may differ, but the experience of time passing is ultimately immeasurable and resists explanation in terms of anything else. As the mathematician and philosopher Alfred North Whitehead argued around the same time as Bergson, we can single out the characteristic of nature’s passage and describe its relation to other characteristics of nature but we cannot explain it by deriving it from something else – such as the temporal units of a clock. When we measure seconds, hours or other temporal intervals, we measure elapsed time, which depends on the experience of duration. But, as we know, duration cannot be fully understood by measuring these intervals.
  • Since clock time presupposes the experience of duration, to claim that duration and the ‘now’ are an illusion, as Einstein did, cuts out the ground on which science must stand. Investigating that ground and gaining cognitive insight into it are the remit of philosophy, which transcends science. There is a time of the physicist and a time of the psychologist. But there is also a time of the philosopher, which lies beneath both, and which Einstein failed to grasp.
  • The debate that began on the evening of 6 April 1922 and expanded through the 20th century represents a missed opportunity for moving our scientific worldview beyond its blind spot – its inability to see that lived experience is the permanent, necessary wellspring of science, including abstract theories in mathematical physics. In retrospect, we can see that the debate was an unfortunate misunderstanding. Bergson’s and Einstein’s ideas are more aligned than either realised during their lifetimes. By combining their insights, we gain an understanding of something fundamental. All things, us included, embody different durations as they move through the Universe. There is no one time. Through his attempts to show Einstein a hidden world of duration passing beneath special relativity, Bergson continues to remind us of something forgotten in our scientific worldview: experience is the ineliminable source of physics.
  • Parts of this essay were adapted from The Blind Spot: Why Science Cannot Ignore Human Experience (2024) by Adam Frank, Marcelo Gleiser and Evan Thompson.
Author Narrative
  • Evan Thompson is professor of philosophy at the University of British Columbia in Vancouver. He is a Fellow of the Royal Society of Canada. His books include Waking, Dreaming, Being (2015) and, co-authored with Adam Frank and Marcelo Gleiser, The Blind Spot: Why Science Cannot Ignore Human Experience (2024).
Notes
  • This is an important paper. I think it deeply mistaken, but it needs to be taken seriously.
  • I'm firmly of the opinion that 'real' time is what Special & General Relativity say it is. Psychological time and 'duration' are important and useful fictions. They cannot be measured, so are no use to the sciences which are quantitative.
  • As this Paper is partly extracted from The Blind Spot: Why Science Cannot Ignore Human Experience, it might be worth buying this book; but when would I get the time to read it?
  • There are numerous Aeon Comments which deserve careful study.
  • This relates to my Notes on Time and Narrative Identity.

Paper Comment



"Tolhurst (Bryony) - You can think like an animal by silencing your chattering brain"

Source: Aeon, 19 December 2024


Author's Introduction
  • How do animals think? When we attempt to understand other species, we often forget to ask ourselves this question. We rarely consider the nuts and bolts of how animals perceive their worlds, or what their experience of life might be like. Instead, our approach is observational and human-centred: we interpret their behaviour through the lens of our own lives. Even the experts – the scientists who concern themselves with nonhuman animal cognition, such as animal behaviourists or behavioural ecologists – often fail to envisage how animals think. I am one of them.
  • For more than 20 years, I have been a researcher and lecturer in the field of behavioural ecology for wildlife management and conservation. My job involves acquiring knowledge about the motivations of other species and making predictions about their behaviour – how they move, hunt, reproduce, eat and sleep. In the backyards of UK homes, I studied how foxes, badgers and hedgehogs compete for the scraps people throw out, and discovered why some human dwellings are more desirable for certain animals. On the leafy floor of a South American cloud-forest, I caught rare lizards, which exist on only two mountains in the world, to find out how they were affected by human-led changes to their habitats. And in Africa, I threw minced meat to hyaenas and lions to track their movements using indigestible pellets that resurface in their faeces, revealing their territory boundaries and interactions with one another.
  • Through my work with these and other species, I’ve tried to understand how and why certain animals do what they do. But my understanding was never based on how animals think. When I interpreted ‘behaviour’ – an animal’s response to a stimulus – it was always from a human perspective. It involved posing questions, recording data, offering answers based on statistical probabilities and then making management or policy recommendations designed to improve the lives of other species or our interactions with them. But recently, I have begun to feel there is something missing in this approach: it seems phenomenally insufficient for making sense of or empathising with nonhumans. It cannot help us fully co-exist with them. How can I ever hope to properly understand the behaviour of other species if I don’t understand how they think? Increasingly, I want to understand what it is like, to paraphrase the writer and polymath Charles Foster, to ‘be a beast’.

Author's Conclusion

    Indeed, our brain circuitry contains a particularly important set of structures from which these internal narratives arise. It is called the default mode network (DMN). This collection of brain regions becomes more active when we stop focusing on specific tasks or the outside world. It activates when we daydream, allow our minds to wander, reflect on the past, or imagine the future. It helps us make sense of ourselves as individuals, and it does this, for most of us, through words. We imagine hypothetical scenarios, plan, and contemplate our experiences by talking to ourselves in our minds. Linked brain structures broadly comparable to the DMN have been found in rats, mice and nonhuman primates, but there are fewer connections between the brain regions of these animals. In other words, even though they have the substrate for consciousness, and perhaps think in pictures or in a rudimentary language constructed from their particular mode of communication, their DMNs are relatively basic. For many species, it’s possible that the DMN is functionally nonexistent.
  • Perhaps then, we can mimic aspects of animal thought by quieting the DMN in our own minds. But how can we silence brain regions that activate almost unconsciously? Luckily, a suite of techniques exist – in the form of meditation and other mindfulness practices – that humans have used for centuries to calm the chattering of our minds.
  • Through meditation, I have caught the tendrils of this thought-consciousness without language. And I believe this experience may be similar – perhaps only for a few seconds – to the thinking of my feline friend Fred or that foraging otter I once watched on the Scottish coastline. Learning to focus on how we inhabit our human bodies in the present moment may be a way of glimpsing what it is like to experience the world as a badger, a swift or even a praying mantis.
  • But this is not easily achieved. Quieting the DMN requires practice. The long tradition of meditation teaches techniques for body awareness, mindfulness and focused affect, which can all temper the noise of the DMN. Mindfulness of breathing and mindfulness of body sensations are just two examples, but there are many others.
  • In ‘What Is It Like to Be a Bat?’, Nagel concluded that it is impossible to fully know the consciousness of other species, so we should leave their ‘thoughts’ well alone. I disagree. I say we keep trying – I say we keep finding new ways of becoming more empathic towards animals and understanding their needs better. Perhaps by learning to quieten the wordy chattering of our DMN, even for a brief moment, we can enter an unfamiliar sensory world and begin to experience what it is like to be another species.
Author Narrative
  • Bryony Tolhurst is a consultant wildlife biologist, with particular expertise in behavioural ecology. She works for conservation charities, universities and ecological consultancies. She is an honorary research fellow at the University of Brighton and a panel tutor at the University of Cambridge.
Notes
Paper Comment



"Torres (Emile P.) - The ethics of human extinction"

Source: Aeon, 20 February 2023


Introductory Extract
  • But so what if we’re wiped out? What does it matter if Homo sapiens no longer exists? The astonishing fact is that, despite acquiring the ability to annihilate ourselves back in the 1950s, when thermonuclear weapons were invented, very few philosophers in the West have paid much attention to the ethics of human extinction. Would our species dying out be bad, or would it in some way be good – or just neutral? Would it be morally wrong, or perhaps morally right, to cause or allow our extinction to occur? What arguments could support a ‘yes’ or ‘no’ answer?
Author's Conclusion
  • So where does this leave us? I’m inclined to agree with the philosopher Todd May, who argued in The New York Times in 2018 that human extinction would be a mixed bag. I reject the further-loss views of Parfit and the longtermists, and accept the equivalence view about the badness of extinction. But I’m also sympathetic with aspects of pro-extinctionism: all things considered, it’s hard to avoid the conclusion that Being Extinct might, on balance, be positive – even though I’d be saddened if the business of revealing the arcana of the cosmos were left forever unfinished. (The sadness here, though, is not really of the moral kind: it’s the same sort of sadness I’d experience if my favourite sports team were to lose the championship.)
  • That said, the horrors of Going Extinct in a global catastrophe are so enormous that we, as psychically numb inverted Utopians, should do everything in our power to reduce the likelihood of this happening. On my view, the only morally permissible route from Being Extant to Being Extinct would be voluntary antinatalism, yet as many antinatalists themselves have noted – such as Benatar – the probability of everyone around the planet choosing not to have children is approximately zero. The result is a rather unfortunate predicament in which those who agree with me are left anticipatorily mourning all the suffering and sorrow, terrors and torments that await humanity on the road ahead, while simultaneously working to ensure our continued survival, since by far the most probable ways of dying out would involve horrific disasters with the highest body count possible. The upshot of this position is that, since there’s nothing uniquely bad about extinction, there’s no justification for spending disproportionately large amounts of money on mitigating extinction-causing catastrophes compared with what have been called ‘lesser’ catastrophes, as the longtermists would have us do, given their further-loss views. However, the bigger the catastrophe, the worse the harm, and for this reason alone extinction-causing catastrophes should be of particular concern.
  • My aim here isn’t to settle these issues, and indeed our discussion has hardly scratched the surface of existential ethics. Rather, my more modest hope is to provide a bit of philosophical clarity to an immensely rich and surprisingly complicated subject. In a very important sense, virtually everyone agrees that human extinction would be very bad. But beyond this default view, there’s a great deal of disagreement. Perhaps there are other insights and perspectives that have not yet been discovered. And maybe, if humanity survives long enough, future philosophers will discover them.
Author Narrative
  • Émile P Torres is a PhD candidate in philosophy at Leibniz Universität Hannover in Germany. Their writing has appeared in Philosophy Now, Nautilus, Motherboard and the Bulletin of the Atomic Scientists, among others. They are the author of The End: What Science and Religion Tell Us About the Apocalypse (2016), Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks (2017) and Human Extinction: A History of the Science and Ethics of Annihilation (forthcoming from Routledge).
Notes
  • Well, this is a very interesting - if misguided - Paper.
  • It's a plug for his - absurdly expensive - book, which it seems - from an Amazon review - reflects the content of his PhD Thesis.
  • There are lots of mostly supportive and equally misguided Comments, which require a closer reading.
  • Interesting to see the connection to David Benatar (Wikipedia: David Benatar). See:-
    → "Benatar (David) - Kids? Just say no", and
    → "Benatar (David) - Better Never to Have Been: The Harm Of Coming Into Existence"
  • The author doesn’t mention Arthur Schopenhauer, who asks us to contrast the pleasure of eating with the pain of being eaten.
  • There are so many things to say ... I will need to take this up for further consideration.
  • The first thing that struck me is that - given the scope of the topic - just how myopic and parochial the author and the commentators are. The concerns of the moment get muddled up with matters of cosmic importance.
  • The ills of the moment - racism, sexism, child abuse, world poverty, wars, abuse of animals, climate emergencies - are nothing new and humanity is much better able to deal with them than in the past, if only we'd keep a level head. We have to think what things will be like - or could be like - in a thousand or a million years from now (if humanity doesn't 'go extinct').
  • We have to distinguish the ‘collapse of civilisation’ (which may be what the ‘Doomsday Clock’ is thinking of) from ‘the extinction of mankind’. See:-
    Doomsday Clock: Current Time
    Wikipedia: Doomsday Clock
  • ‘Extinction’ means the elimination of every last member of the species. This would not be achieved, most likely, by an all-out nuclear war and especially not by ‘climate change’. Elimination by asteroid impact would likely eliminate all higher life-forms if it eliminated all of us, so wouldn’t leave a happy planet. The most likely extinction cause would be some 100% fatal virus, maybe bioengineered. Take-over by the AIs is rather remote.
  • The author is right to point out that indefinitely extended life might lead to nefarious organisations inflicting indefinitely extended torture on their enemies. This is an obvious objection to 'mind uploading', were it ever to be achievable. The question is whether it would be better to eliminate all of humanity 'just in case'.
  • The author thinks the process of eliminating humankind would almost certainly cause untold suffering, and would therefore the worst thing imaginable. However, once humanity had been eliminated, there would be no more (human) suffering – which he takes to exceed whatever goods accrue to human life (Benatar takes this argument to apply to all sentient life).
  • This sort of approach relates to the arguments of the Epicureans (‘death is nothing to us’). It is – almost – universally agreed that premature death (as distinct from the dying) is bad for the individual who dies – thwarting his plans (for which he may have ‘sunk costs’) and depriving him of future goods – as well as being bad for those ‘left behind’. Torres seems to deny this.
  • It also relates to the question whether the dead can be harmed, and the ancient practice of damnatio memoriae (Wikipedia: Damnatio Memoriae). See:-
    → "Pitcher (George) - The Misfortunes of the Dead",
    → "Feinberg (Joel) - Harm to Others",
    → "Levenbook (Barbara Baum) - Harming Someone after His Death",
    → "Taylor (James Stacey) - The Myth of Posthumous Harm",
    → "Callahan (Joan C.) - On Harming the Dead",
    → "Levenbook (Barbara Baum) - Harming the Dead, Once Again",
    → "Ridge (Michael) - Giving the Dead Their Due",
    → "Marquis (Don) - Harming the Dead",
    → "Hetherington (Stephen) - Deathly Harm",
    → "Solomon (David) - Is There Happiness after Death?",
    → "Grover (Dorothy) - Posthumous Harm",
    → "Luper (Steven) - Posthumous Harm",
  • The author does lament the cultural losses should humanity be extinguished but thinks this an aesthetic rather than moral sensibility of little weight. This is all rather feeble. People – at least those of the better sort – have always struggled to make the world a better place – in some small degree – that they were born in to. Only in some cases is this improvement limited to the reduction in suffering.

Paper Comment



"Uzan (Elad) - Moral mathematics"

Source: Aeon, 28 November 2022


Author's Conclusion
  • Moral mathematics forces precision and clarity. It allows us to better understand the moral commitments of our ethical theories, and identify the sources of disagreement between them. And it helps us draw conclusions from our ethical assumptions, unifying and quantifying diverse arguments and principles of morality, thus discovering the principles embedded in our moral conceptions.
  • Notwithstanding the power of mathematics, we must not forget its limitations when applied to moral issues. Ludwig Wittgenstein once argued that confusion arises when we become bewitched by a picture. It is easy to be seduced by compelling numbers that misrepresent reality. Some may think this is a good reason to keep morality and mathematics apart. But I think this tension is ultimately a virtue, rather than a vice. It remains a task of moral philosophy to meld these two fields together. Perhaps, as John Rawls put it in his book A Theory of Justice (1971), ‘if we can find an accurate account of our moral conceptions, then questions of meaning and justification may prove much easier to answer.’
Author Narrative
  • Elad Uzan is a junior research fellow at Corpus Christi College and a Marie Curie Fellow in the Faculty of Philosophy at the University of Oxford.
Notes
  • A thoroughly stimulating and important - not to say enjoyable - paper.
  • The Aeon comments are also interesting, and I've stored them for future analysis.
  • I'll add my comments in due course.

Paper Comment
  • Sub-Title: "Subjecting the problems of ethics to the cool quantifications of logic and probability can help us to be better people"
  • For the full text see Aeon: Uzan - Moral mathematics.



"Velasco (Pablo Fernandez) & Loev (Slawa) - How ‘feelings about thinking’ help us navigate our world"

Source: Aeon, 02 May 2024


Authors' Conclusion
  • Metacognitive feelings can also shape thinking outside the confines of well-crafted experimental settings and niche environments like University Challenge. Here is a dramatic historical example: in 1983, deafening alarms went off in the nuclear early warning facility in Serpukhov. The system indicated that five ballistic missiles were heading from the United States to the Soviet Union. It was down to Lt Colonel Stanislav Petrov to make a split decision. ‘False alarm,’ Petrov told the Soviet air defence forces, averting nuclear war. When a reporter asked how he was able to make the right call under such enormous pressure, Petrov said: ‘I had a funny feeling in my gut.’
  • As such examples illustrate, we shouldn’t think of feelings as ‘getting in the way’ of higher cognitive processes; they play a crucial role in thinking. The feeling of knowing was found to accurately reflect the quality of learning in one previous study. And when people have to choose what to restudy for tests, they rely on their feelings of knowing in an adaptive way. There are similar findings for other metacognitive feelings. For instance, an experiment using a multiple-choice test as a paradigm found that the strategic use of tip-of-the-tongue experiences led to better scores.
  • This does not mean, however, that you should trust your guts blindly. In fact, metacognitive feelings can sometimes lead you astray. In a cleverly designed experiment, participants were given a sentence in which one word was scrambled, such as: ‘Free will is a powerful oinliusl.’ Then, they had to solve the anagram, resulting in a sentence that encapsulated a particular worldview: ‘Free will is a powerful illusion.’ Participants were then asked to rate the sentence on a truth scale, from definitely false to definitely true. They were also asked if they had experienced an ‘aha’ moment. The researchers found that participants rated the statement as truer when they had experienced ‘aha’ moments after solving the anagram. The problem is that, in this case, the metacognitive feeling had been artificially induced. Obviously, whether or not you get a feeling of sudden insight after solving the anagram ‘oinliusl’ has no bearing on whether free will is an illusion.
  • The need for caution applies to other metacognitive feelings as well. Tweaking the cues that elicit such feelings is, in all likelihood, an important element in the workings of misinformation, advertising and populist propaganda. This is particularly significant since feelings of truth have a strong influence on what we take to be true in the first place. Feelings can be manipulated and, depending on the context, they might be misleading. This makes awareness of metacognitive feelings only more important. So, pay attention to your feelings when you are thinking – but do so wisely.
Author Narrative
  • Pablo Fernandez Velasco is a British Academy postdoctoral fellow in the Department of Philosophy at the University of York in the UK. His research interests are in environmental experience, emotion, and metacognition.
  • Slawa Loev is a data scientist, cognitive scientist and philosopher. When he’s not investigating the mechanisms behind emotion, intuition and metacognition, he’s tinkering with data, machine learning and AI.
Notes
  • Interesting and sensible; somethng I need to follow up in more detail.
  • A point that occurred to me - and one that the authors would probably agree with - is that while our emotions are important in how we think (both in what we think about and how we conduct our cognition) when it comes to presenting the results of our cognitive processes, reason is all that matters.
  • This also reminded me of David Hume's maxim that 'Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them' (T 2.3.3 p. 415); see What did David Hume mean when he said that 'reason is a slave to the passions'?.
  • As for how this Paper connects to my Thesis on PID, I suppose the following Notes are impacted:-
    Thought
    Language of Thought
    Psychology
    Metaphilosophy

Paper Comment



"Venkataraman (Vivek V.) - Lessons from the foragers"

Source: Aeon, 02 March 2023


Extracts
  1. Author's Introduction
    • In the seminar I teach about hunter-gatherers, I often ask my students whether they think life was better in the past or today. There are, of course, always a few people who insist they couldn’t live without a flushing toilet. But more and more I’m seeing students who opt for a life of prehistoric hunting and gathering. To them, the advantages of modern life – of safety and smartphones – do not outweigh its tangled web of chronic indignities: loneliness, poor mental health, bureaucracy, lack of connection with nature, and overwork. Learning about the lives of hunter-gatherers confirms a suspicion that our modern lives are fundamentally at odds with human nature, that we have lost some kind of primordial freedom. For a generation who came of age with Instagram and TikTok, this is a striking – albeit theoretical – rejection of modernity.
  2. Author's Conclusion:
    • ‘[I]t is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being,’ wrote John Stuart Mill in Principles of Political Economy (1848). Indeed, our lives today are the Jevons Paradox in microcosm. Frictionless technology at our fingertips leads to the paradoxical situation of our smartphone screens becoming crowded with apps, our days increasingly divided into small things, and our attention shattered. Things that were meant to make our lives easier simply tempt us to put more things on our plates, increasing the amount that we work, and wreaking havoc on our wellbeing.
    • And yet, when we consider work from an evolutionary perspective, it is hard to be optimistic about technological efficiency delivering us to the promised land of Keynes’s 15-hour working week. In this age of unprecedented burnout, it may give some solace to consider that the Jevons Paradox1 has been with us since time immemorial. Our industry is the blessing and curse of our species, a mindset and cultural force shaped by the evolutionary process, and stamped into the very fibre of our being.

Author Narrative
  • Vivek V Venkataraman is assistant professor in the Department of Anthropology and Archaeology at the University of Calgary, Canada. He is also assistant director of the Guassa Gelada Research Project in Ethiopia, and the co-founder and co-principal investigator of the Orang Asli Health and Lifeways Project in Peninsular Malaysia.

Notes
  • This is a fairly complex and nuanced subject - partly reflected in the writing of this paper.
  • It doesn't support the romantic 'noble savage' trope, nor the notion that hunter-gatherers do no work. Rest is required for recovery.
  • Even so, I suspect there's a lot of 'having your cake and eating it' in the nostalgia for more simple societies.
  • Hobbes's 'nasty brutish and short' idea in support of the State is cited without enthusiasm. See Thomas Hobbes - Solitary, Poor, Nasty, Brutish and Short for an analysis I've not yet read. I suppose at the height of the industrial revolution, life for most people wasn't much better than this. However, the main objection to hunter-gathering is that it doesn't scale well, offers no hope of 'improvement' and is unstable unless such societies are left alone or protected.
  • Agriculture allows the massive increase in population-size. There's only so much that can be hunted and gathered without this 'breakthrough'. In the long term, agriculture and industry has allowed for the extension of lifespan via the development of modern medicine and hospitals.
  • All this is obvious, of course. It's a value judgement whether 'civilised' culture (Western or Eastern) is worth the cost. I know which side I'm on.
  • It's not clear to me that the downsides of the modern focus on work are necessary. It's a downside of capitalism. See "Susskind (Daniel) - A World Without Work: Technology, Automation and How We Should Respond" for whether Keynes was right.
  • There are 50 comments which I've printed off and saved for future analysis.

Paper Comment




In-Page Footnotes ("Venkataraman (Vivek V.) - Lessons from the foragers")

Footnote 1:



"Vyazovskiy (Vladyslav) - Could humans hibernate?"

Source: Aeon, 18 November 2024


Author's Introduction
  • The conventional view is that humans and other creatures around us live between periods of waking and sleeping. But it is not true. Many have mastered the art of hibernating, which allows them to spend quite a lot of their life in a mysterious state of suspended animation – sometimes more than half of it. What is hibernation, and is it something that humans might be capable of?
  • At the dawn of scientific enquiry into hibernation (from the Latin hibernus, pertaining to winter) in the mid-19th century, it was defined by Peter A Browne in an 1847 tract as ‘a natural, temporary, intermediate state, between life and death; into which some animals sink, owing to an excess of heat, or of cold, or of drought, or want of oxygen’. That’s a good first approximation. Now we know that – from dormice and bears, to hedgehogs, ground squirrels, bats and even tropical primates – hibernation is a very common phenomenon, found among representatives of at least seven different orders of mammals. It appears in many forms, which makes it difficult to define unequivocally, let alone imagine what it might look like in humans. As another early study points out: ‘we do not find that any two animals, however closely allied, hibernate in precisely the same manner, nor do individuals of the same species always hibernate alike’.
  • Nevertheless, there is a cluster of features typical of hibernation. The most parsimonious description would include reference to a controlled reduction in metabolism, reflected in a slowing down of many physiological and biochemical processes in the body. In sci-fi movies, hibernating humans are often depicted lying in pods, completely immobile and seemingly unconscious, and it is implied that their body temperature is very low – hence it is often called ‘cryosleep’ or something similar. Any mention of how exactly human hibernation is achieved in those conditions, or what triggers ‘awakenings’ from hibernation, is conveniently avoided, as if that were a trivial matter not deserving attention.
  • It could be forgivable to skip explaining how something as exotic as human hibernation happens. But it is sobering to think that sleep – a state so familiar to all of us, one we are perfectly capable of, on a daily basis – also remains a mystery. I am a sleep neuroscientist, and the focus of my laboratory is the origin and fundamental biology of sleep. I am convinced that sleep can be fully understood only when it is considered not in isolation but in juxtaposition with other states of being, such as hibernation. Yet research is still at a basic stage and there are so many things we don’t know about this aspect of life. How, for example, can we seriously talk about hibernation in humans, when it is a condition we also do not fully comprehend in other animals? And how can we understand sleep if we can’t clearly separate it from hibernation?

Author's Conclusion
  • Naturally, we are envious that so many creatures, big and small, around us have mastered and perfected the skill of hibernation, which still escapes our understanding. Is it because we are too obsessed with trying to make sense of what we can see and measure, rather than noticing what is not there as its essential feature? Our efforts to understand hibernation go against its entire idea – to disappear, to disconnect, to stop time, to become one with the world. Is this why understanding hibernation eludes us?
  • Consider this example: when threatened, some animals enter a peculiar state called freezing, which is an extreme version of the fight-or-flight response, when it is pointless to fight and there’s nowhere to fly in the physical, 3D space. Instead, animals find a way to ‘escape’ by entering a different dimension of existence – they don’t merely stop moving, but their physiology slows down, making them less visible. Many living organisms employ a strategy of assuming a fake identity, becoming someone else, called mimicry – for example, to deceive their enemies, they pretend to be bigger or more dangerous than they are in reality.
  • The other form of mimicry is pretending to not be there at all, or playing dead, such as in the case of thanatosis. Could hibernation be understood as an elaborate, and rather extreme, form of mimicry – a state when the organism doesn’t simply pretend to be dead, but undergoes a profound transformation, blurring the boundary between life and death, in order to survive? Better understanding of the biological meaning of hibernation, and how it relates to other states of being and modes of existence, is necessary before we can make tangible progress in inducing artificial hibernation in humans. What if, deep down, humans always knew how to hibernate, and when conditions are right, and when it comes to the point when alternative ways to continue existing are unimaginable, we can bring back to life this forgotten ancestral memory, and enter hibernation in our own, human way?
Author Narrative
  • Vladyslav Vyazovskiy is professor of sleep physiology and tutorial fellow in medicine at the University of Oxford, as well as vice-president of the European Sleep Research Society and a TEDx speaker.
Notes
  • Interesting and deserves a re-read. Is the author really right in his contention that we cannot understand sleep without understanding hibernation?
  • Also, does he really address the question in the sub-title?
  • There are several interesting Aeon Comments, most with replies by the author. They deserve careful reading.
  • One such reply referred to Thukdam, which I'd recently come across in "Thompson (Evan) - Dying: What Happens When We Die?": 'I find extremely interesting the state of Thukdam – a “putative post-mortem meditation state”, where the body does not show signs of decomposition for a while after death, without any signs of brain activity. Perhaps it is a state similar to an extreme form of hibernation?'.
  • This relates to my Notes on Sleep, Death and - I suppose - Transhumanism.

Paper Comment



"Walker (Lydia) - What is decolonisation?"

Source: Aeon, 21 November 2024


Author's Introduction
  • Decolonisation talk is everywhere. Scholars write books about decolonising elite universities. The government of India, a country that has been independent for 77 years, built a new parliament building in order to ‘remove all traces of the colonial era’. There are infographics on how to decolonise introductory psychology courses and guides on how businesses may decolonise their work places. Some Christians from regions that used to be colonies look to decolonise mission work through Biblical readings of Christ’s suffering. Why have expressions of decolonisation become so popular? And is there coherence to these many disparate uses of the term?
  • All these varied and even contradictory forms of decolonisation talk seek to draw upon the moral authority, impact and popular legitimacy of the 20th century’s great anticolonial liberation movements. And it is the gap between these movements’ promise of liberation and the actuality of continued power inequalities even after independence that has given the analytical and political space for such a wide, eclectic and contrasting array of individuals, groups and projects to wield the concept of ‘decolonisation’ to generate support for their endeavours. In the process, decolonisation talk has become more and more attenuated from the historical events of decolonisation.
  • The events of decolonisation involved colonised peoples, predominantly in Asia and Africa, rising up in the mid-20th century and overthrowing colonial systems of rule. National liberation movements that became postcolonial governments transformed the world order through the historical events of decolonisation. In 1945, for example, there were just 64 independent states, while today there are between 193 and 205, depending on who counts them. Before the Second World War, there were only three sovereign states with a Black head of state – Ethiopia, Haiti and Liberia.
  • Colonialism itself was uneven, complex and variegated. In practice, empires ruled by governing different communities differently, intensifying and maintaining often elaborate hierarchies of communities based on region, race, religion or ethnicity. For instance, colonial life looked very different in the French settler colonial cities of Algiers and Oran than it did in the Berber regions of Eastern Algeria. British colonial India was a patchwork of direct rule, princely states that were semi-autonomous regarding domestic policy, and excluded areas with a rather light imperial footprint, among other political configurations.
  • Other examples of colonial difference include the US insular island cases (1901), which determined that particular territories seized during the Spanish-American War would have unequal legal relationships with the continental United States: Puerto Rico, Guam and the Philippines were made into unincorporated territories where the US constitution did not fully apply, while Hawaii (and Alaska) were placed on the path to US statehood. Elsewhere, in East Africa, the German Empire recruited African soldiers who became a class of colonial intermediaries. The Japanese Empire included direct occupations in Manchuria and Korea with collaborations with anticolonial nationalists in Burma and Indonesia. Yet amid all the complexity, popular understandings of colonialism today have a clear iconography of Western conquest – of maps, pith helmets and boots bestriding non-Western continents.

Author's Conclusion
  • As the actual events of historical decolonisation grow more distant, forms of decolonisation talk increase. Decolonisation was once primarily a scholar’s term that effectively depolarised violent national liberation. Now it ascribes radicalism to projects in the realms of economics, culture, education and ideology – spheres whose purpose is not violent regime change.
  • Historical decolonisation was an international liberation project that reached the height of its political optimism in the 1960s and ’70s. It also provided an attractive source of inspiration and even analogy for movements that sought to rectify racism and other forms of injustice in the US and elsewhere. These connections dissolved as many postcolonial states were unable to provide peace and prosperity to their residents and citizens, the Black freedom movement grew less united, and deindustrialisation in so-called ‘developed’ countries (and the perception that it was caused by cheap imports and labour from predominantly postcolonial countries) eroded global solidarities.
  • The declining appeal of historical decolonisation led to its transformation into decolonisation talk. At the same time, it is the original promise (and the perceived viability of postcolonial states to deliver upon that promise) of its national liberatory potential that has made it a recurring source material to legitimise movements, even – or especially – for those whose aims are far removed from historic decolonisation’s regime change. While historic decolonisation is a continued reservoir of legitimacy for decolonisation talk, its inability to deliver liberation to many has created the space for so many discourses to flourish, even as they become increasingly distant from the history of decolonisation.
Author Narrative
  • Lydia Walker is assistant professor and Myers Chair in Global Military History at the Ohio State University. She is the author of States-in-Waiting: A Counternarrative of Global Decolonization (2024).
Notes
  • A thought-provoking and fairly balanced account of the issues. Unfortunately, I can't now remember the details and will need to re-read.
  • A general point I'd make is that there have always been empires - and therefore 'colonialism' - and the colonialism of the nineteenth and early 20th centuries is just the latest phase of empire that receives greatest attention because of its proximity.
  • There are no Aeon Comments.
  • This relates to my Notes on Race and Narrative Identity.

Paper Comment
  • Sub-Title: "There’s more talk of decolonisation than ever, while true independence for former colonies has faded from view. Why?"
  • For the full text see Aeon: Walker - What is decolonisation?.



"Wallace (Rebekah) - Legacy of the angels"

Source: Aeon, 18 March 2025


Author's Introduction
  • ... Before the discovery of gravity, energy or magnetism, it was unclear why the cosmos behaved in the way it did, and angels were one way of accounting for the movement of physical entities. Maimonides argued that the planets, for example, are angelic intelligences because they move in their celestial orbits.
  • While most physicists would now baulk at angelic forces as an explanation of any natural phenomena, without the medieval belief in angels, physics today might look very different. Even when belief in angels later dissipated, modern physicists continued to posit incorporeal intelligences to help explain the inexplicable. Malevolent angelic forces (ie, demons) have appeared in compelling thought experiments across the history of physics. These well-known ‘demons of physics’ served as useful placeholders, helping physicists find scientific explanations for only vaguely imagined solutions. You can still find them in textbooks today.
  • But that’s not the most important legacy of medieval angelology. Angels also catalysed ferociously precise debates about the nature of place, bodies and motion, which would inspire something like a modern conceptual toolbox for physicists, honing concepts such as space and dimension. Angels, in short, underpin our understanding of the cosmos.

Author's Conclusion
  • While it is easy enough to ridicule the suggestion that movement is the result of occult forces such as angels, we cannot, having ascended the ladder of knowledge, so easily kick that ladder out from under ourselves. Studies in embodied cognition (Wilson & Golonka - Embodied cognition is not what you think it is) are showing that our knowledge is built upon our experience of the world. George Lakoff & Mark Johnson’s Metaphors We Live By (1980) shows how our bodily experiences come together to create complex metaphors, grounding abstract concepts. We equate ‘up’ with ‘more’ when we say ‘the stock market rises’ because when we see, for example, rocks piled up, we learn to equate higher with more. We say we ‘grasp’ an idea because we have experienced reaching for a piece of fruit on a tree. In addition, we have a very hard time imagining a nonphysical thing. What we imagine, when we imagine a soul or an angel or a demon, is some kind of insubstantial, but still ghostly, object.
  • Although occult forces such as angels and demons may be ridiculed in modern culture as ‘hand-wavey’ explanations of quite logical, down-to-earth scientific phenomena, I would suggest the inverse. That what is most down-to-earth might in fact be to think about the invisible forces of nature as angels, agents, immaterial intelligences with certain properties familiar to us, but amplified. Properties like agency and intention. It is only in thinking through, and with, these more familiar concepts that we can then discover a less intuitive set of concepts, like spacetime, which require grounding in concepts like dimension, body, place and movement. These necessary grounding concepts were sharpened, historically, by thinking through the relationship between the material and immaterial world, and angelology played a significant role in their honing.
  • The use of supernatural intelligences such as angels and demons to think through physics stuck around long after the actual belief in the existence of these beings had dissipated. It seems that this imaginative framework resounds in the actual structure of how our thought operates. By virtue of this, angelology lay the groundwork for thinking through the nature of place, time and motion in quite complex ways. Did angels and demons carve a conceptual space for the invisible forces that physics would later come to discover? Though it may seem that the scientific and the demonic are at polar ends of the spectrum when it comes to explaining the natural world, angels and demons have actually shaped modern scientific explanation as we know it today.
Author Narrative
  • Rebekah Wallace is a junior research fellow at Blackfriars Hall, University of Oxford where she writes and researches on science and religion. She is also a lecturer in philosophy, religion and ethics at the University of Winchester, UK.
Notes
Paper Comment
  • Sub-Title: "When medieval scholars sought to understand the nature of angels, they unwittingly laid the foundations of modern physics"
  • For the full text see Aeon: Wallace - Legacy of the angels.



"Wallingford (John) - Building embryos"

Source: Aeon, 16 May 2024


Author's Introduction
  • Fifty-four years ago, I did something extraordinary. I built myself. I was a single, round cell with not the slightest hint of my final form. Yet the shape of my body now – the same body – is dazzlingly complex. I am comprised of trillions of cells. And hundreds of different kinds of cells; I have brain cells, muscle cells, kidney cells. I have hair follicles, though tragically few still decorate my head.
  • But there was a time when I was just one cell. And so were you. And so were my cats, Samson and Big Mitch. That salmon I had for dinner last night and the last mosquito that bit you also started as a single cell. So did Tyrannosaurus rex and so do California redwoods. No matter how simple or complex, every organism starts as a single cell. And from that humble origin emerges what Charles Darwin called ‘endless forms most beautiful’.
  • Once you’ve come to terms with that mind-boggling fact, consider this: all organisms, including humans, build themselves. Our construction proceeds with no architects, no contractors, no builders; it is our own cells that build our bodies. Watching an embryo, then, is rather like watching a pile of bricks somehow make themselves into a house, to paraphrase the biologist Jamie Davies in Life Unfolding (2014).
  • This process of body sculpting is called embryonic development, and it is a symphony of cells and tissues conducted by genetics, biochemistry and mechanics. People who study this, like me, are called developmental biologists. And while you may not know it, our field is in a period of tremendous excitement, but also upheaval.

Author's Conclusion
  • We’ve pondered embryos for thousands of years, in part because they spark our inherent wonder; theirs is the ultimate emergent property. Across that long arc, it’s usually been animal embryos under our microscopes, organisms that assemble themselves just like we do but whose development we have fewer qualms about interrupting for the sake of knowledge. Like any basic science, animal embryos provide ‘a glimpse of what is possible in this world’, Lehmann writes. But this science has always been a proxy, however imperfect, for understanding how our own bodies come to be. And, quite suddenly now, we seem to have the tools and the appetite to get far more than just a glimpse at the human embryo.
  • Martinez Arias recently told me that ‘when you put the word “human” there, you are talking to the whole of society.’ It’s worth recalling, then, that this conversation is also thousands of years old. And history tells us that our collective decisions on issues of the human embryo will ultimately be influenced by both science and faith.
  • Science can tell us how the human embryo develops, and it is an undisputed certainty that embryos develop progressively, building complexity and identity only over time. But there is no scientific consensus on when during that progression ‘life’ begins. Likewise, there is no consensus among faiths on when life begins. Certain Christian faiths now hold that life begins at conception, and these have an outsized influence. Yet, even within Christianity, that view is a recent stance, and one that reversed centuries of thought. Other Western religious traditions don’t share Christianity’s ambiguity. Cleaving to the ancient gradualist view of development, Islamic tradition generally holds the embryo to become human 120 days after fertilisation, though some use the 40-day mark; in most Jewish traditions, it happens only at birth.
  • We are 3,000 years deep in the adventure called developmental biology, yet the embryo remains in many ways just as mysterious as ever. As we enter a new era of explicitly human developmental biology, we should approach it with all the grace and humility we possibly can.
Author Narrative
  • John Wallingford is the Mr and Mrs Robert P Doherty, Jr Regents Chair in Molecular Biology at the University of Texas at Austin, US. He is a 2022 Guggenheim Fellow and a past president of the Society for Developmental Biology.
Notes
  • I'm sure this is a very useful paper, but I read it a month ago while walking Bertie the dog (as at end June 2024) and now I can remember next to nothing about it.
  • I'll need to read it again to write anything sensible.

Paper Comment



"Waltner-Toews (David) - Kinship"

Source: Aeon, 23 February 2024


Author's Introduction (Excerpt)
  • As an epidemiologist specialising in diseases other animals share with humans, I have frequently found myself wading waist-deep through a quagmire of quandaries, none more confusing than when, a few years ago, I struggled with the Alice-in-Wonderland issue of human relationships with insects. Insects are animals. Veterinarians, by law and inclination, are compelled to care for the wellbeing of all animals. Ergo… what?
  • In my teaching and research, I usually framed human-animal relationships, including those with arthropods, in terms of diseases transmitted between other animals and humans. I investigated the tick-borne Lyme disease, for instance, as well as the mosquito-borne West Nile virus, the tsetse fly-borne sleeping sickness, and leishmaniasis transmitted by sand flies. My work was to describe the life cycles of these diseases, and then find efficient ways of killing the vectors.
  • But insects are more than couriers bearing messages of parasites, disease and death. As several conferences of the United Nations have asserted, they are also an important, sustainable source of food for billions of people.
Author's Conclusion (Excerpt)
  • Many of us in applied biological sciences, especially ones related to epidemics of zoonotic diseases, and the relationships between the health of humans, other animals and ecosystems, have long recognised the challenges of managing ecosystems from the inside out. It is reassuring, from a scientific viewpoint, that physicists are coming to similar conclusions.
  • If we stand back a little, it is not an imaginative stretch to see that what some cultures have called sacred beings, groves, springs, valleys, spirits or gods are a fusion of (unseen) gravitational forces, and the regenerative and personal, quorum-sensing conversations among microbiomes.
  • The forces of gravity, with their electromagnetic, strong and weak couriers, link everything, everywhere, sustaining and giving shape to the cosmos in which we dwell. In the imperial, colonising imaginations, not just those of the Europeans and their post-Enlightenment descendants, but also those of the myriad other ‘ans’ – Assyrians, Babylonians, Egyptians, Mongolians, Persians, Romans, Russians, Germans, Mauryans – these mysterious forces were called gods. Organic Being, emerging from the microcosmos, was long honoured in sacred places, diets, waters, spirit animals and people.
  • Our task as humans is to explore, in every way we can, what this means.
  • In this understanding of the Universe, there are no laws handed down from beyond the world we know, no tablets, revealed scriptures, oracles or final theories designed by mathematicians or physicists or biologists that can explain everything.
  • If we begin with where we are, what we observe, what (we think) we know – the relationships and nonverbal communications and conversations among the cows, the cats, the chickens, the rats, lambs, ticks, bacteria – we can begin to create a non-object-obsessed science commensurate to the task, an expanded, multispecies version of post-normal science.
  • For those who seek meaning in the world, I can think of no better beginning than observing that, since Being and the Universe are co-creators, and the future is uncertain, everything we do matters.
  • I return to my presence in the midst of this physical evidence – the cows, ticks, fruit flies, cats, chickens, lambs and bacteria. Together, we are searching for ways to preserve our collective memories beyond the extinction of our cobbled-together bodies and species, into a renewed, renewable world we cannot imagine.
Author Narrative
  • David Waltner-Toews is a Canadian veterinary epidemiologist. His books include The Origin of Feces (2013), Eat the Beetles! (2017), On Pandemics (2020), A Conspiracy of Chickens (2022) and The Gravity of Love (2023). He lives in Ontario on land that is the traditional home of the Haudenosaunee, Anishinaabe and Neutral People.
Notes
  • I thought the author was trying to cover so much ground in this paper that I lost track of what his argument was, if there was one.
  • At times it seems to be about scientific methodology. But he keeps hopping about with anecdotes from his working life, the import of which isn't always clear.
  • He thinks we should think holistically rather than be forever categorising things and looking at the minutiae.
  • In so doing, I think he's misunderstanding the scientific method and mathematical modeling.
  • The only way we can get anywhere with understnding the world around us is to focus on small parts and see how they seem to work. We model them, making 'simplifying assumptions' to make then tractable. Then we try to join things up. All in an iterative manner as our models improve.
  • Of course, we mustn't mistake models for the reality, but we also must not lose faith in the method, which the author agrees has taught us a lot.
  • He's right that there's more to science than physics, and he's right to criticise Hawking's 'Mind of God' quote (which - he notes - Hawking repented of later in life). Getting a final physics - if we ever do - won't be enough to explain how everything works - its too complicated for that.
  • There are a few comments - which I've not studied - and the commentators seem to have got something out of the paper.
  • I need to read these and re-read the paper.

Paper Comment
  • Sub-Title: "Science must become attuned to the subtle conversations that pervade all life, from the primordial to the present"
  • For the full text see Aeon: Waltner-Toews - Kinship.



"Ward (Thomas M.) - Who was Duns Scotus?"

Source: Aeon, 03 August 2023


Author's Introduction
  • I am not nearly old enough to remember dunce caps, but I do remember a pedagogical illustration of a sad little boy sitting in the corner of a classroom wearing a pointy hat while his peers gaze joyfully at their teacher. My teacher explained that the pointy hat was called a dunce cap, and was used in olden times to humiliate and so punish the dunces, that is, the students who cannot or will not learn their lessons. Our own lesson was clear: we might not have the pointy hats anymore, but only sorrow and ostracisation await children who do poorly in school.
  • Ironically, John Duns Scotus (c1265-1308), after whom the dunces are named, did very well in school, impressing his Oxford Franciscan colleagues so much that they sent him to the University of Paris. His brilliance at Paris eventually earned him the temporary but prestigious post of Regent Master of Theology. His writings, despite their difficulty, were enormously influential in Western philosophy and theology, so much so that universities all over Europe established Chairs of Scotist thought side by side with Chairs dedicated to Thomism. In the 19th century, the Jesuit poet Gerard Manley Hopkins declared that it is Scotus ‘who of all men most sways my spirits to peace’, and halfway through the 20th century the celebrity monk Thomas Merton could say that Duns Scotus’s proof for God’s existence is the best that has ever been offered.
  • This prestigious legacy notwithstanding, as early as the 16th century educated Englishmen were appropriating ‘Duns’ as a term of abuse. In 1587, the English chronicler Raphael Holinshed wrote that ‘it is grown to be a common prouerbe in derision, to call such a person as is senselesse or without learning a Duns, which is as much as a foole.’ But in the same age a bookish person might also be labelled a dunce: ‘if a person is given to study, they proclayme him a duns,’ John Lyly explains in his Euphues: The Anatomy of Wit (1578). Humanist contempt of scholastic methods and style – of which Scotus’s own tortuous texts sometimes read like a parody – is probably an adequate explanation of the unfortunate union of ‘fool’ and ‘studious’ in ‘dunce’. A person must be a fool to waste time reading John Duns Scotus!
  • Scotus remains a polarising figure, but his humanist detractors would be horrified to learn that here in the 21st century we are witnessing a Scotus revival. Philosophers, theologians and intellectual historians are once again taking Scotus seriously, sometimes in a spirit of admiration and sometimes with passionate derision, but seriously nonetheless. Doubtless this is due in part to the progress of the International Scotistic Commission, which has in recent years completed critical editions of two of Scotus’s monumental works of philosophical theology: Ordinatio and Lectura. As these and other works have become more accessible, Scotus scholarship has boomed. According to the Scotus scholar Tobias Hoffmann, 20 per cent of all the Scotus scholarship produced over the past 70 years was produced in the past seven years. This explosion of interest in Scotus offers as good an occasion as any for introducing this brilliant and enigmatic thinker to a new audience.

Author's Conclusion
  • Scotus’s doctrine of haecceity is yet another of his views in which some have discerned world-historical significance. In A Secular Age (2007), Charles Taylor, inspired by Louis Dupré, said that Ockham the nominalist and Scotus the realist share a focus on individuality that gives ‘a new status to the particular’, and marks ‘a major turning point in the history of Western civilisation, an important step towards that primacy of the individual which defines our culture.’
  • I confess I am often tempted to make sweeping historical conclusions about the medieval figures I work on. If I could believe them, I might think my research is more important than it is, and conduct my work with extra vigour. In a Taylorian spirit, for example, I might say that Ockham and Scotus, along with their predecessor Aquinas, with the focus on individuals these three share, gave rise to the primacy of the individual that defines our culture. Or, in the same spirit but with a greater sense of boldness, I might say that Aquinas, with his materialistic answer to the problem of individuation, along with Scotus and Ockham, who believed in the existence of matter, together ushered in the pervasive materialism of contemporary science and culture.
  • Of course, it would take a reckless frame of mind to believe either of these assertions: the connections drawn between Aquinas, Scotus and Ockham are insufficiently robust to unite them as common causes of the historical events attributed to them. But that’s the point: a theory of nominalism is about individuals in some sense (since it asserts there are only individuals) and so, too, a theory of haecceity is about individuals in some sense (since it asserts an individuating entity in addition to the common nature). But these theories are about individuals in radically different senses, just as Aquinas’s materialistic solution to the problem of individuation is about matter in a sense radically different from the sense in which, say, Thomas Hobbes is a materialist about human minds. Therefore, they should not be lumped together as common causes of the same historical event. Ockham’s denial that there is such a thing as human nature does seem like the sort of denial that would affect the way ordinary people live their lives, if it ever came to influence them. The same can be said of Scotus’s affirmation that there is such a thing as human nature. But it would be rather surprising – and a mere accident – if the denial and affirmation of exactly the same view had exactly the same influence on how people live their lives.
  • As a Scotus scholar, I welcome this century’s revival of interest in Scotus. But a more fruitful way to indulge that interest, especially for those just starting their intellectual journey with Duns Scotus, is simply to try to take him on his own terms, engaging first-order questions of philosophy and theology with Scotus, and resisting the storyteller’s urge to situate this or that feature of Scotus’s thought within a narrative that explains why we are where we are now. It really is just as possible for a person of the 21st century as it was for a person of the 14th to wonder whether God exists, or whether universals are real, or whether objective morality requires a divine lawgiver. When we ask these questions now, we’re asking the very same questions they were asking then. And, thanks to the efforts of the dunces who for centuries have kept alive Scotus’s memory, editing and transmitting his texts, and writing papers and books trying to explain his thought, we can welcome Scotus into our own puzzlings over these and other perennial questions. At the speed of philosophy, 1308 is not so very far away after all.
Author Narrative
  • Thomas M Ward is associate professor of philosophy at Baylor University in Texas. He is the author of Ordered by Love: An Introduction to John Duns Scotus (2022) and the translator of Duns Scotus’s Treatise on the First Principle (forthcoming, 2023).
Notes
Paper Comment
  • Sub-Title: "His name is now the byword for a fool, yet his proof for the existence of God was the most rigorous of the medieval period"
  • For the full text see Aeon: Ward - Who was Duns Scotus?.



"Wengrow (David) - Beyond kingdoms and empires"

Source: Aeon, 05 July 2024


Author's Introduction
  • Contemporary historians tell us that, by the start of the Common Era, approximately three-quarters of the world’s population were living in just four empires (we’ve all heard of the Romans and the Han; fewer of us, perhaps, of the Parthians and Kushans). Just think about this for a minute. If true, then it means that the great majority of people who ever existed were born, lived and died under imperial rule. Such claims are hardly original, but for those who share Arnold Toynbee’s conviction that history should amount to more than ‘just one damned thing after another’, they have taken on a new importance.
  • For some scholars today, the claims prove that empires are obvious and natural structures for human beings to inhabit, or even attractive political projects that, once discovered, we have reproduced again and again over the longue durée of history. The suggestion is that if the subjects of empire in times past could have escaped, they’d have been unwise to do so, and anyway the majority would have preferred life in imperial cages to whatever lay beyond, in the forest or marshes, in the mountains and foothills, or out on the open steppe. Such ideas have deep roots, which may be one reason why they often go unchallenged.
  • In the late 18th century, Edward Gibbon – taking inspiration from ancient writers such as Tacitus – described the Roman Empire (before its ‘decline and fall’) as covering ‘the fairest part of the earth, and the most civilised portion of mankind,’ surrounded by barbarians whose freedom was little more than a side-effect of their primitive ways of life. Gibbon’s barbarian is an inveterate idler: free, yes, but only to live in scattered homesteads, wearing skins for clothes, or following his ‘monstrous herds of cattle’. ‘Their poverty,’ wrote Gibbon of the ancient Germans, ‘secured their freedom.’
  • It is from such sources that we get, not just our notion of empire as handmaiden to civilisation, but also our contemporary image of life before and beyond empire as being small-scale, chaotic and largely unproductive. In short, everything that is still implied by the word ‘tribal’. Tribes are to empires (and their scholarly champions) much as children were to adults of Gibbon’s generation – occasionally charming or amusing creatures, but mostly a disruptive force, whose destiny is to be disciplined, put to useful work, and governed, at least until they are ready to govern themselves in a similar fashion. Either that, or to be confined, punished and, if necessary, eliminated from the pages of history.
  • Ideas of this sort are, in fact, as old as empire itself. In their diplomatic correspondence, which can be followed back to a time more than 3,000 years ago, the rulers of Egypt and Syria grumble incessantly about the subversive activities of groups calledʿApiru. Scholars of the ancient Near East once tookʿApiru to be an early reference to the Hebrews, but it’s now thought to be an umbrella term, used almost indiscriminately for any group of political defectors, dissenters, insurgents or refugees who threatened the interests of Egypt’s vassals in neighbouring Canaan (much as some modern politicians have been known to use the word ‘terrorist’ for rhetorical effect today).
  • In Babylonia, such groups – when not given tribal or ethnic labels – might be variously described as ‘scattered people’, ‘head-bangers’ or simply ‘enemies’. In the early centuries BCE, emissaries of the Han Empire wrote in similar ways about the rebellious marsh-dwellers of the tropical coastlands to their south. Historians now see these ancient inhabitants of Guangdong and Fujian through Han eyes, as the ‘Bai-yue’ (‘Hundred Yue’), who were said to shave their heads, cover their bodies in tattoos, and sacrifice live humans to their savage gods. After centuries of resistance and guerrilla warfare, we learn, the Yue capitulated. On the order of Emperor Wu, most were deported and put to hard labour, their lands given over to colonial settlers from the north, including many retired soldiers.

Author's Conclusion
  • On a global scale, we are witnessing a revolution in our understanding of ancient demography. To ignore it, these days, is to indulge in a cruel sort of intellectual prank, by which the genocide of Indigenous populations – a direct consequence of the planetary revolt against freedom, in the past 500 years – is naturalised as a perennial absence of people. Nor can we just assume that if we want to understand the prospects for our modern world, the only ‘big’ stories worth telling are those of empire.
  • The world we live in today is not just the one created by the likes of Tiberius of Rome, or even Emperor Wu of Han. Until surprisingly recent times, spaces of human freedom existed across large parts of our planet. Millions lived in them. We don’t know their names, as they didn’t carve them in stone, but we know that many lived lives in which one could hope to do more than just scratch out an existence, or rehearse someone else’s script of ‘the origin of the state’ – in which one could move away, disobey, experiment with other notions of how to live, even create new forms of social reality.
  • Sometimes, the unfree did this too, against much harder odds. How many, back then, preferred imperial control to non-imperial freedoms? How many were given a choice? How much choice do we have now? It seems nobody really knows the answers to these questions, at least not yet. In future, it will take more than zombie statistics to stop us from asking them. There are forgotten histories buried in the ground, of human politics and values. The soil mantle of Earth, including the very soil itself, turns out to be not just our species’ life support system, but also a forensic archive, containing precious evidence to challenge timeworn narratives about the origins of inequality, private property, patriarchy, warfare, urban life and the state – narratives born directly from the experience of empire, written by the ‘winners’ of a future that may yet make losers of us all.
  • Investigating the human past in this way is not a matter of searching for utopia, but of freeing us to think about the true possibilities of human existence. Unhampered by outdated theoretical assumptions and dogmatic interpretations of obsolete data, could we look with fresh eyes at the very meaning of terms like ‘civilisation’? Our species has existed for something like 300,000 years. Today, we stand on a precipice, confronting a future defined by environmental collapse, the erosion of democracy, and wars of unprecedented destructiveness: a new age of empire, perhaps the last in a cycle of such ages that, for all we really know, may represent only a modest fraction of the human experience.
  • For those who seek to change course, such uncertainty about the scope of human freedoms may itself be a source of liberation, opening pathways to other futures.
Author Narrative
  • David Wengrow is professor of comparative archaeology at University College London. His books include The Origins of Monsters (2013), What Makes Civilization? (2nd ed, 2018) and, co-authored with David Graeber, The Dawn of Everything (2021).
Notes
  • I found this paper - while somewhat informative - rather annoying and politically motivated.
  • It needs a lot of following up.
  • I agree it's important to take new research into account. I was alerted by Nature to a BBC report (and there are others):-
    BBC: Rannard - PhD student finds lost city in Mexico jungle by accident
    BBC: Rannard - Huge ancient lost city found in the Amazon
  • I've no doubt historic climate change is responsible for the demise of many ancient civilisations. The Sahara was once green, and - it seems - the Amazon was once not a rain forest.
  • No doubt the estimates of population in antiquity are dubious, but the extent of the empires is less so. I dare say I should obtain Atlas of World Population History, defective though it may be.
  • Also, the claims about the percentage of ancient populations being subject to empires isn't referring to what was going on in the New World (or tropical Africa). But since we know little of this, how do we know that these vanished societies weren't themselves empires?
  • I've no doubt empires are oppressive, but they do suppress endless warfare within their borders. Historically, small states haven't lived at peace with their neighbours.
  • There are lots of congratulatory (and worthless) Aeon Comments - presumably from political fellow-travellers. These can be ignored.
  • However, the paper itself deserves closer consideration.

Paper Comment



"Westbury (Chris F.) - Why do beautiful people also seem smart and likeable?"

Source: Aeon, 05 May 2025


Authors' Introduction
  • Have you ever noticed how someone who’s drop-dead gorgeous can also seem charming, honest and kind – even before they’ve said a word? That’s the halo effect, a common psychological bias where one trait (such as good looks) influences your impressions of someone’s other qualities. The halo effect was first systematically studied by the psychologist Edward Thorndike more than a century ago. In 1920, Thorndike reported that when he analysed the judgments of military officers evaluating their subordinates, their ratings of intelligence, leadership and physical qualities tended to blur together. If a subordinate excelled in one area, the evaluator was inclined to think he was exceptional in all of them. The effect shows up in many different fields, including social psychology, clinical psychology, child psychology, health, politics and marketing. The ‘attractiveness halo effect’ is what happens when a physically beautiful person also seems interesting, capable and good-natured. It’s as if we’re wired to judge books by their covers, even if we know we shouldn’t.
  • The halo effect isn’t confined to human traits, either. This cognitive shortcut shapes perceptions of everything from objects to organisations. One study examined how labelling a bottle of wine as ‘organic’ led tasters to rate its aroma and taste and their overall enjoyment more highly than the same wine without the ‘organic’ label. Marketing, politics, labels, branding and first impressions all tap into this bias, nudging us to think: ‘If one thing about this is good, or bad, then other things about it must be.’
  • Despite how common and powerful this bias is, scientists still don’t know exactly why it happens. Exploring this gap in understanding is crucial, because uncovering the foundations of our biases equips us with the knowledge to critically examine them and to develop better strategies to manage their impact. Early theoretical models of the halo effect attempted to frame how our evaluations get intertwined. These suggested, for instance, that having a general impression of someone shapes judgments of that person’s specific traits, or that one particularly salient trait sways judgments of other traits. While these models mapped out how the halo effect might happen, they can’t explain how to predict which traits will get lumped together.

Authors' Conclusion
  • Unpacking the halo effect using language patterns could have many practical applications. By quantifying the context similarity of two words, it may be possible to account for some of the variance in how individuals perceive and evaluate people, products, or ideas. For instance, in mental health, there have been growing concerns about incorrectly diagnosing people with two different conditions. This has been attributed in part to the halo effect: one diagnosis may influence the likelihood that another disorder is perceived. However, if we can spot which diagnostic labels are used in similar contexts, that might help clinicians guard against bias.
  • There could be another practical application related to marketing and advertising. Questions have been raised about the ethics of using language that misleads consumers into believing products are healthier than they truly are. For example, some research found that using the term ‘natural’ in advertising cigarettes led consumers to mistakenly believe that the cigarettes were healthier and had less potential to cause disease. A calculable measure that indicates how likely one word (such as ‘natural’) is to evoke others in the minds of consumers might help policymakers judge the acceptability of certain terms in marketing.
  • Then there are the judgments people make about each other in everyday life. Let this article serve as a reminder that the next time you find yourself thinking that someone has a delightful personality or a bright mind just because they have a dazzling smile, you might be under the spell of the halo effect. However, the more we understand how and why people mentally link certain traits together, the better we can guard against quick, unfounded judgments.
Author Narrative
  • Chris F Westbury is a professor in the Department of Psychology at the University of Alberta, Canada.
  • Daniel King is a psychology graduate who completed his honours research under Chris F Westbury at the University of Alberta, Canada.
Notes
  • Interesting enough. The 'halo effect' was well-known to me, but the idea of using linguistic-frequency (inspired by ChatGPT) to explain the connections between unrelated traits was new - and maybe not convincing.
  • There's no mention of height being mistakenly taken as an indicator of other positive traits. Cf. John Cleese (gags, not as an individual).
  • It is doubtless useful to be reminded of such biases.
  • I wonder whether there is, though, a statistical correlation between various positive qualities because of a common cause (privileged background, genetic inheritance and the like). Also, when it comes to moral considerations, a 'nice' person is likely to share many positive traits while a 'nasty' person many negative ones. So, as a rule of thumb, we are best off assuming these correlations until we have evidence to the contrary.
  • Doubtless this area could do with further thought.
  • There are no Aeon Comments invited.
  • This relates to my Notes on Psychology, Narrative Identity and Language.

Paper Comment



"Williams (Ruth) - Three ways to get in touch with your Shadow self"

Source: Aeon, 03 July 2024


Author's Introduction
  • ‘Shadow’ is the term used by Carl Jung to refer to those aspects of yourself that you do not like or want to be associated with. You might even refuse to acknowledge them as being a part of you. They can range from the fact you’re a bit controlling, all the way to having a drive towards world domination.
  • For a contemporary fictional depiction of the Shadow, take the Netflix series Ripley (2024). This is the latest adaptation of Patricia Highsmith’s brilliant novel The Talented Mr Ripley (1955). The protagonist – Tom Ripley – is shown at various points taking on the character of Dickie whose identity he steals and whom he murders. The look on Ripley’s face as he adopts the character is utterly chilling. He becomes someone else.
  • In the guise of Dickie, Ripley is able to become all sorts of things that his regular persona does not allow. He is acting on his envy of Dickie’s glamour and wealthy, upper-class life. This could be described as an extreme portrayal of the Shadow in film.
  • Envy is often the fuel behind Shadow-driven actions – thankfully, not usually so extreme as murder – such as betrayal, connivance and one-upmanship. Envy is such an unpleasant and repulsive feeling that it can easily become denied at a conscious level, but then it breaks through in unexpected ways. Sigmund Freud called this the ‘return of the repressed’. Does that ring any bells? I expect we have all had such experiences, whether as the one envying or on the receiving end.
  • If these ugly behaviours are a manifestation of your Shadow self, why would you even want to know about it? Well, you might find it helps you live a more authentic life; a life in which you more fully incorporate your potential and feel more whole. This could include becoming more accepting of those aspects of yourself that feel distasteful. It is often easy to point fingers at those who repel us and to think how awful they are. But sometimes you might find that they are acting as a kind of mirror, reflecting back qualities you find abhorrent, but that you actually possess yourself, as uncomfortable and unwelcome as that idea might be.
  • Coming to terms with these unwanted aspects of ourselves is arguably a necessary struggle in life. It enables us to take responsibility for our faults, our failings or shortcomings, and to blame others less. This can create deeper and more meaningful relationships.
Author Narrative
  • Ruth Williams is a Jungian analyst in private practice in London, UK. She is the author of C G Jung: The Basics (2019) and Exploring Spirituality from a Post-Jungian Perspective: Clinical and Personal Reflections (2023).
Notes
  • This is an interesting paper. It's really a self-help guide, and there are lots of suggestions (well, three) as to how to discover your Shadow Self. I'm not sure I want to go down this route. It seems a bit like the Roman Catholic exhortation to 'examine your conscience'.
  • The 'three ways' promised are:-
    → Record and reflect on your dreams
    → Keep a journal
    → Practise active imagination
  • There are very few Aeon Comments but they deserve careful study, and have responses from the author.
  • This relates to my Notes on the Self and Narrative Identity.

Paper Comment



"Williamson (Timothy) - The patterns of reality"

Source: Aeon, 14 November 2023


Author's Conclusion
  • Logic and computing have continued to interact since Turing. Programming languages are closely related in structure to logicians’ formal languages. A flourishing branch of logic is computational complexity theory, which studies not just whether there is an algorithm for a given class, but how fast the algorithm can be, in terms of how many steps it involves as a function of the size of the input. If you look at a logic journal, you will see that the contributors typically come from a mix of academic disciplines – mathematics, computer science, and philosophy.
  • Since logic is the ultimate go-to discipline for determining whether deductions are valid, one might expect basic logical principles to be indubitable or self-evident – so philosophers used to think. But in the past century, every principle of standard logic was rejected by some logician or other. The challenges were made on all sorts of grounds: paradoxes, infinity, vagueness, quantum mechanics, change, the open future, the obliterated past – you name it. Many alternative systems of logic were proposed. Contrary to prediction, alternative logicians are not crazy to the point of unintelligibility, but far more rational than the average conspiracy theorist; one can have rewarding arguments with them about the pros and cons of their alternative systems. There are genuine disagreements in logic, just as there are in every other science. That does not make logic useless, any more than it makes other sciences useless. It just makes the picture more complicated, which is what tends to happen when one looks closely at any bit of science. In practice, logicians agree about enough for massive progress to be made. Most alternative logicians insist that classical logic works well enough in ordinary cases. (In my view, all the objections to classical logic are unsound, but that is for another day.)
  • What is characteristic of logic is not a special standard of certainty, but a special level of generality. Beyond its role in policing deductive arguments, logic discerns patterns in reality of the most abstract, structural kind. A trivial example is this: everything is self-identical. The various logical discoveries mentioned earlier reflect much deeper patterns. Contrary to what some philosophers claim, these patterns are not just linguistic conventions. We cannot make something not self-identical, however hard we try. We could mean something else by the word ‘identity’, but that would be like trying to defeat gravity by using the word ‘gravity’ to mean something else. Laws of logic are no more up to us than laws of physics.

Author Narrative
  • Timothy Williamson is the Wykeham Professor of Logic at the University of Oxford. His main research interests are in philosophical logic, epistemology, metaphysics and philosophy of language. His latest books are Suppose and Tell (2020) and, with Paul Boghossian, Debating the A Priori (2020).

Notes
  • Well, this is an iteresting and informative Paper, though one that - for me - is 'revision material' rather than revelatory. I wonder who it's for.
  • Timothy Williamson is just about the most distinguished logician alive and active. What's he doing writing this piece?
  • It deserves a second reading and a summarisation - with the following-up of some links - so I'll add it to my list.
  • Early on there's a slightly tricky question. I'd expected to see the solution at the end, but it's not given. Of course, the answer is obvious if you're used to this sort of thing, but most people are distracted by the truth or falsity of the premises rather than focusing on the logical structure of the argument. The correct answer might be arrived at for the wrong reasons as the valid argument has slightly less absurd premises than the invalid one.
  • This Paper doesn't seem to be a plug for a book, though the volume with Paul Boghossian, Debating the A Priori looks interesting (though not related to logic, but to Williamson's other major interest - epistemology). However - looking at the excerpts visible on Amazon - I suspect some of the chapters therein are available on-line (I have a couple), but maybe not the 'replies' and 'reply to replies'.

Paper Comment



"Worsnip (Alex) - What is incoherence?"

Source: Aeon, 23 January 2024


Author's Conclusion
  • Why is transparent incoherence bizarre? In my view, it’s because to count as genuinely having a certain mental state (an intention, a belief, a preference, etc), you need to have some tendency to make your other mental states coherent with it, when your mental states are transparent to you. For example, I suggested earlier that to count as genuinely intending to wear new shoes to the wedding, I need to have some tendency to also form intentions to do whatever I believe is necessary for this – for example, to go to the mall to buy some. We can now qualify this in a subtle but crucial way: I need to have some tendency to form the intention to go to the mall – when my intention to wear new shoes and my belief that to do this I must go to the mall are both transparent to me.
  • This is a conceptual point, not an experimentally demonstrable one: if I don’t have that tendency, I just don’t count as intending to wear new shoes to the wedding. Nevertheless, the view that I’m suggesting here fits with a lot of what we know from both science and our own experience. It fits with the way that psychologists can exploit ordering and framing effects in surveys to elicit responses that seem so utterly incoherent that practically no one would ever give all of them at once: plausibly, this is possible because the participants don’t consider all of their responses together at once. It fits with the fact that reporting one’s own incoherent states aloud in speech seems a lot stranger than merely being incoherent: this is because reporting the state aloud in speech requires bringing all the states to one’s conscious attention, making them transparent. And it explains why, when our incoherence is brought to our attention, we scramble to revise or reinterpret our mental states to make them coherent: ‘When I said “all”, I didn’t really mean all’; ‘I’ll do anything to help small businesses within reason’; and so on.
  • We’re often incoherent through inattention to our mental states, through failure to put them together to draw the obvious conclusions. Still, the fact that we do tend to revise our states to make them coherent when they’re brought to our attention suggests that there is a kind of rationality – structural, rather than substantive, rationality – that we at least tend to approximate. We may not be very reasonable creatures a lot of the time. But we are coherent creatures, to some degree, and under certain conditions. For this baseline level of coherence is built into what it is to even have beliefs, intentions, preferences and the whole gamut of human responses to the world.
Author Narrative
  • Alex Worsnip is an associate professor of Philosophy at the University of North Carolina at Chapel Hill. He is the author of Fitting Things Together: Coherence and the Demands of Structural Rationality (2021).

Notes
  • Very interesting, though it's a plug for the author's absurdly expensive new book.
  • I though the distinction between two sorts of rationality helpful, taking:-
    1. ‘Substantive rationality’ to refer to reasonableness, and
    2. ‘Structural rationality’ to refer to coherence
  • Above, 'reasonableness' refers to the propositional content of your beliefs - whether they have been acquired in a reliable way, while 'coherence' applies to how your beliefs hang together.
  • Given the complexity of our web of beliefs, and our tendency to compartmentalise them, I suppose all people are 'structurally irrational' to some degree. But the author is right that it's only when the irrationallity is pointed out, and denied, that the epithet is justified.
  • Some points are debateable. Like one commentator, I wondered whether preferences are transitive. If not, then they don't necessarily imply cyclical incoherence.
  • Unfortunately, the author's choice of example was a bit silly, and led to lots of (rather stupid) commentators quibbling over whether it was realistic - such as whether there were alternative outlets to the mall. This is a problem with [Thought Experiments] - some people just don't understand them.
  • I've saved the Comments for future consderation, but they aren't very incisive.
  • The author's thesis is relevant to a Note I'm to set up of Rationality, but is for now parked with Narrative Identity. I will review the contents in due course.

Paper Comment
  • Sub-Title: "We can all be inconsistent. Philosophy illuminates a bigger puzzle: how do we hold contradictory beliefs at the same time?"
  • For the full text see Aeon: Worsnip - What is incoherence?.



"Wyatt (Jeremy) & Ulatowski (Joseph) - How to think about truth"

Source: Aeon, 30 November 2022


Key points – How to think about truth
  • Before accepting or dismissing the idea of objective truth, ask yourself what ‘objective truth’ is supposed to be. There are different ways to understand the idea that truth is objective, and each of them is plausible when applied to at least some of our beliefs.
  • The ideas of ‘your truth’ and ‘my truth’ may be self-undermining, and they’re hard to spell out. It may seem that each of us has ‘different truths’, but endorsing this idea seems to undermine it, and it’s not so clear what the idea even amounts to.
  • Truth may be one of the most basic concepts that we have. The idea that truth is a primitive concept helps to explain why philosophical definitions of truth fail to deliver. Four decades of research in developmental psychology also shows us that we should take this idea seriously.
  • Someone from another culture may think differently about truth than you do. Our cultures and languages can affect the beliefs we have about truth. This means that we need to carefully consider how truth is represented across different cultures.
  • Truth’s nature may be simpler than you think. The nature of truth can seem impossibly complex. However, the commonsensical idea that true claims tell us what the world is like gives us a straightforward and useful procedure for thinking about truth.
Author Narrative
  • Jeremy Wyatt is senior lecturer in philosophy at the University of Waikato. He works mainly on truth and taste, and is particularly enthusiastic about experimental and cross-linguistic philosophy. He is a co-editor, with Michael P Lynch, Junyeol Kim and Nathan Kellen, of The Nature of Truth: Classic and Contemporary Perspectives (2nd ed, 2021) and, with Julia Zakkou and Dan Zeman, of Perspectives on Taste (2022). With Joe Ulatowski, he is also co-editing the Asian Journal of Philosophy topical collection Truth Without Borders (forthcoming).
  • Joseph Ulatowskiis senior lecturer in philosophy at the University of Waikato. He is the author of Commonsense Pluralism about Truth (2017). He is co-editor of Virtue, Narrative, and Self (2020), the 2018 Synthese special collection Paul Horwich’s Minimalism about Truth, the forthcoming Asian Journal of Philosophy special collection Truth Without Borders, and is finishing two books, one on the nature and value of facts and the other on the nature of truth.
Notes
  • An interesting and important paper.
  • It seems to be somewhat controversial, by the look of the unusually intelligent Aeon comments, which I've stored away for future review.
  • It seems to try to steer a middle course between absolute and relative truth (both have their place, in my view)
  • There are some diversions into other cultures, including some sensible remarks from African philosophy about the limitations of language in discussing philosophical topics. No mention of Newspeak, though.
  • I need to create a PID Note on Truth, so will look at this Paper again in due course.

Paper Comment



"Young (Nick) - Time doesn’t flow like a river. So why do we feel swept along?"

Source: Aeon, 21 September 2022


Author's Conclusion
  • Tied up with our sense that we are the cause of our actions is the feeling that we can stop doing whatever it is we are doing and start doing something different. If you wanted, you could close this browser tab right now and get up from where you are sitting. But, though we can change our behaviour, the possibility of performing no bodily or mental action whatsoever is never an option. As long as you are awake, you will never feel as if you can stop causing change. Jean-Paul Sartre declared that mankind was ‘condemned to be free’; similarly, we find ourselves at every waking moment condemned to act. Of course, we stop acting when we fall asleep but, as any insomniac will tell you, sleep is something you must wait for, not something you do. You can hasten sleep’s arrival, but you cannot switch yourself off like a laptop.
  • I believe that this leads us to mistake the feeling of doing – moving, thinking, focusing – for the feeling of time passing. We experience ourselves as perpetually, helplessly active. This is likely a product of our neurophysiology. Brains don’t stop: information is continually being received, recalled, processed and responded to, so it is not surprising that we always find ourselves doing something. But we are not consciously aware of this fact. In fact, consciousness does not provide any explanation as to why we find ourselves in such a state. We are driven to keep making changes. And it is here that we make a mistake. Rather than blaming our neurophysiology for the feeling that we must constantly act, we blame the world outside: we mistakenly think that some outside force (like a flowing river of time) is responsible for the ever-present feeling that we are being ‘pushed along’.
  • We are condemned to act. It is not, as Heraclitus imagined, that ‘everything flows and nothing abides.’ Instead, the feeling of being swept along is the result of our brains’ constant churning. We mistake our own momentum for that of the world. Time does not flow. We do.
Author Narrative
  • Nick Young is an adjunct coordinator at the Centre for Philosophy of Time at the University of Milan in Italy.
Notes
  • Interesting, but I'm not sold on the idea that the 'flow' of time is a purely psychological illusion. I'm not even sure what 'flow' means in this context.
  • Time is a variable in equations that applies whether or not there are any humans to appreciate its flow, if that's what it does or doesn't. Likewise, it has an 'arrow' - based on entropy - irrespective of our interests.
  • His introduction quotes appreciatively from "Rovelli (Carlo) - The Order of Time".
  • There are some interesting comments - some with replies by the author - which I've saved lest they disappear.
  • But it's all very difficult. I need to re-read this paper!

Paper Comment



"Zangwill (Nick) - Our Moral Duty to Eat Meat"

Source: Journal of the American Philosophical Association 7(3), June 2021


Author’s Abstract
  1. I argue that eating meat is morally good and our duty when it is part of a practice that has benefited animals. The existence of domesticated animals depends on the practice of eating them, and the meat-eating practice benefits animals of that kind if they have good lives. The argument is not consequentialist but historical, and it does not apply to non-domesticated animals. I refine the argument and consider objections.

Introduction
  1. Eating nonhuman animal meat is not merely permissible but also good. It is what we ought to do, and it is our moral duty. So I argue. I shall not distinguish the claim that eating meat is good from the claim that we ought to eat meat. The claim that it is our duty is a stronger claim. The claim that it is good and the claim that it is what we ought to do are closely related to the claim that it is our duty: if something is our duty, then it is good to do it, and we ought to do it. Furthermore, I take the goods, oughts, and duties here to be moral ones. Note also that by the word ‘animals’ in what follows, I mean nonhuman animals, and by ‘meat’ I mean nonhuman animal meat.

Contents
  1. The Benefit to Animals of Eating Meat
  2. Consciousness, Happiness, Suffering, and Death
  3. Three Comments
  4. Beneficial Historical Practices: Wild and Domesticated Animals
  5. Other Writers: Compare and Contrast
  6. Killing and Eating Enslaved Human Beings?
  7. Which Animals?
  8. Good and Good-For
  9. Who is Obligated?
  10. All Things Considered
  11. Coda

Paper Comment

See Zangwill - Our Moral Duty to Eat Meat



"Zangwill (Nick) - Why you should eat meat"

Source: Aeon, 24 January 2022


Author's Introduction
  • If you care about animals, you should eat them. It is not just that you may do so, but you should do so. In fact, you owe it to animals to eat them. It is your duty. Why? Because eating animals benefits them and has benefitted them for a long time. Breeding and eating animals is a very long-standing cultural institution that is a mutually beneficial relationship between human beings and animals. We bring animals into existence, care for them, rear them, and then kill and eat them. From this, we get food and other animal products, and they get life. Both sides benefit. I should say that by ‘animals’ here, I mean nonhuman animals. It is true that we are also animals, but we are also more than that, in a way that makes a difference.
Author Narrative
Notes
  • I found the first half of the paper very convincing: that well-kept farm animals have - on balance – a life worth living, which would not be lived were they not bred for the purpose of being eaten. Also, that most lives – and especially those of wild animals – end pretty dreadfully, so the ultimate slaughter of farm animals isn't a serious objection unless death is an objection to all life (but see "Benatar (David) - Better Never to Have Been: The Harm Of Coming Into Existence").
  • I wasn’t so impressed by the second half; trying to argue for human exclusivism based on rights. My thoughts are more that it has to do with plans: humans don’t just live from day to day, at least not when they are flourishing, and the badness of death – for a human being – has much to do with the curtailment of their plans.
  • There was no mention of "Ishiguro (Kazuo) - Never Let Me Go".
  • Some interesting discussion of Pigs. See "De Waal (Frans) - Primates and Philosophers: How Morality Evolved" and Lori Marino – “Thinking Pigs: A Comparative Review of Cognition, Emotion, and Personality in Sus domesticus” (downloaded but not yet in my database!).
  • There are lots of comments (169, a fair few suppressed for contravening the guidelines!). Unfortunately, the vast majority are worthless rants. The author is very patient with them.
  • See "Zangwill (Nick) - Our Moral Duty to Eat Meat" for the author's published paper on this topic.

Paper Comment



"Zellmer (Jacob) - Baffled by human diversity"

Source: Aeon, 08 July 2024


Author's Introduction
  • Imagine for a moment that tomorrow we find humans on another planet. It’s an unlikely scenario to be sure, but you can imagine the theories that people might venture to explain their existence. They might propose that humans on this other planet descended from ancient earthlings who in prehistoric times had the technology to travel there. Or that humans on both planets were planted there by aliens. Or that conditions on the other planet are so similar to those found on Earth that humans evolved on both planets simultaneously. Then again, perhaps God created humans on both planets.
  • A very similar problem confronted the European mind in the modern period (though the analogy is not perfect). As Europeans embarked on great voyages of discovery from the 15th through to the 19th centuries, they were at a loss to explain how the native peoples they encountered in distant lands actually got there. Europeans generally assumed that all people descended from Adam and Eve – a view called monogenism – and so it was unclear how native humans came to exist in ‘New Worlds’ oceans apart from the original Garden of Eden. What is more, non-Europeans looked and acted differently. Before Charles Darwin, and for some time after him, there was no generally accepted explanation of these physical and cultural differences.
  • The challenge for early moderns, then, was to find a plausible explanation for how physical differences between humans arose if all humans on Earth descended from Adam and Eve. Some thinkers, such as the German naturalist Johann Friedrich Blumenbach (1752-1840), thought that human diversity arose through ‘degeneration’ from the original humans, where degeneration is caused by environmental features such as what your ancestors ate or the climate they inhabited.
  • A more radical way to explain human diversity involved accepting polygenism – the view that God originally created multiple first human mating pairs besides Adam and Eve. In the hypothetical scenario above, polygenism is akin to thinking that God created humans on both planets. As it happens, many influential 19th-century scientists in the United States were polygenists, turning to the theory as a means of explaining human racial differences – differences that were front and centre in American life. Proponents of polygenism argued that racial differences were fixed, some of them recruiting the theory to justify enslaving peoples whom God had made ‘lesser’.
Author Narrative
  • Jacob Zellmer is a PhD candidate in philosophy at the University of California, San Diego, US. His primary area of research is early modern philosophy, especially Spinoza.
Notes
  • This is an interesting paper. It's not an analysis of modern racism, but an account of polygenism - the religious view that there were multiple creations of different human species (or at least of multiple initial mating pairs).
  • There are 23 Aeon Comments, including responses from the author, that deserve careful study.
  • This relates to my Notes on Race, Homo Sapiens and Religion.

Paper Comment



"Zeman (Adam) - When the mind is dark, making art is a thrilling way to see"

Source: Aeon, 06 April 2021


Author's Conclusion
  • Our finding that aphantasia predisposes people to work in the sciences suggests that our initial assumption, that it would militate against a career in art, was not wholly wrong. But we are immensely complex creatures: the possession of visual imagery is one small piece in the huge jigsaw of cognition. There are many ways to represent things in their absence besides a sensory image – language for example and, as our exhibition illustrated, art!
  • Our work with aphantasic creatives drives home another broad conclusion: we shouldn’t confuse visualisation with imagination, the far broader capacity, to represent, reshape and reconceive things in their absence. Imagination can certainly make use of imagery – but it doesn’t have to. As the examples of Bakes, Catmull, Keane, Ross, Sacks and Venter demonstrate amply, aphantasia is no bar to an imaginative life.
  • If, reading this, you come to the conclusion that you too might lack imagery, or have it in spades, don’t hesitate to contact me (a.zeman@exeter.ac.uk) as we still have much to learn.
Author Narrative
  • Adam Zeman is professor of neurology at the University of Exeter Medical School in the UK. His interests lie at the intersection of neurology, psychology and psychiatry. He was chairman of the British Neuropsychiatry Association 2007-2011, and is the author of Consciousness: A User’s Guide (2003) and A Portrait of the Brain (2008).
Notes
  • This is one of several papers on Aphantasia I've been reading recently.
  • This one requires closer consideration than I've been able to give it, and it needs linking to the others.

Paper Comment



Text Colour Conventions (see disclaimer)
  1. Blue: Text by me; © Theo Todman, 2025
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)



© Theo Todman, June 2007 - Dec 2025. Please address any comments on this page to theo@theotodman.com. File output:
Website Maintenance Dashboard
Return to Top of this Page Return to Theo Todman's Philosophy Page Return to Theo Todman's Home Page