Author's Conclusion:
- Data can be acquired without explanation and without understanding. The very definition of a bad education is simply to be drilled with facts: as in learning history by rote-memorising dates and events. But true understanding is the expectation that other human beings, or agents more generally, can explain to us how and why their methods work. We require some means of replicating an idea and of verifying its accuracy. This requirement extends to nonhuman devices that purport to be able to solve problems intelligently. Machines need to be able to give an account of what they’ve done, and why.
- The requirement to explain is what links understanding to teaching and learning. ‘Teaching’ is the name we give to the effective communication of mechanisms that are causal (‘if you follow these rules, you will achieve long division’), while ‘learning’ is the acquisition of an intuition for the relationships between causes and their effects (‘this is why long division rules work’). The nature of understanding is the very ground for the reliable transmission and cultural accumulation of knowledge. And, by extension, it is also the basis of all long-term prediction.
Obscure quotation from Jorge Luis Borges excised ...
It is the challenge of the 21st century to integrate the sciences of complexity with machine learning and artificial intelligence. The most successful forms of future knowledge will be those that harmonise the human dream of understanding with the increasingly obscure echoes of the machines.
Author Narrative:
- David C Krakauer is the president and William H Miller Professor of Complex Systems at the Santa Fe Institute in New Mexico. He works on the evolution of intelligence and stupidity on Earth. Whereas the first is admired but rare, the second is feared but common. He is the founder of the InterPlanetary Project at SFI and the publisher/editor-in-chief of the SFI Press.
Notes
- This is an important paper that deserves further consideration.
- Krakauer points out that associative-learning machines cannot explain how their algorithms work, even if they are successful.
- Hence, they may know, but they don't understand.
- He seems to be sympathetic to Searle's "Chinese Room" argument, as expounded in "Searle (John) - Minds, Brains and Science: The 1984 Reith Lectures".
Comment:
For the full text, follow this link (Local website only): PDF File.
- Sub-Title: "Science today stands at a crossroads: will its progress be driven by human minds or by the machines that we’ve created?"
- For the full text see Aeon: Krakauer - At the limits of thought
- Date: 2020
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2025
- Mauve: Text by correspondent(s) or other author(s); © the author(s)