One Thing I Learned: Moonwalking with Einstein
Josh Foer’s investi-memoir about the world memory circuit, Moonwalking with Einstein, offers a few intriguing insights into how the mind works over the course of its quick tour – a year in the life of a journalist who decides to give mental athletics a try and surprises himself with how practical the mechanics of extreme memory are – and raises a few more possibilities that linger with me.
The core insight of the book is that memory is a function of how sticky an idea is to our human brains – or how sticky we make it. This means that if we want to recall something the brain is naturally poor at – numbers, e.g. – we might associate that information with something the brain is uncannily good at – spatial recall being the classic example. Hence the “memory palace”: a physical space that one mentally revisits, coding new information into what the brain already finds sticky in spatial terms. One mentally fills the rooms of one’s childhood home with the things one doesn’t want to forget, and upon revisiting that space mentally, one finds that the associations made between the childhood bedroom and the data in question are far stickier than the data would be on its own. Likewise, if we manipulate the data, or code it, to resemble something much stickier to the mammalian brain – arrange that string of digits into some kind of lewd sex act, e.g. – we find that, however absurdly constructed the association, it returns to us.
[The second principle on which the memory arts function is chunking – that is, creatively assembling several pieces of data into a single datum, allowing us to store more and more inside what feels like a small number of things to remember. If, e.g., I ask you to remember the digit string 7-1-0-2, you can probably manage it with some small attention. Reverse those digits, though, and I tell you to remember the year 2017 – a much simpler task.]
There is a recurrent theme of synaesthesia in the text that I find fascinating and unfathomed. A number of famous natural mnemonists describe their synaesthetic experiences – intuitively associating one sense with another, drawing relations between numbers and colors, colors and smells, smells and time. Those associations, in principle, starkly resemble what the trained mnemonist tries to manufacture – laboriously constructing associations between places and things, sounds and images, personages and concepts – so that the desired information has a web of mental associations built around it, allowing the multipolar mind to grasp it in some sixteen different ways.
This, to me, seems to resemble how all intelligence functions: through a thick bedding of associations, we integrate every dimension of a thing – some of those dimensions just sticky enough for us to remember, and leading eventually back to the whole network. It is a rich attention that finds thirty different intuitive associations with the Habsburg monarchy, even if several of them are poetic nonsense. It is the densification of associative logic that allows one to reach insights across disciplines. The more interconnected our concepts, the more creatively we can deploy them.
Which brings me to generative AI, seeing as that’s on everyone’s minds these days. Like many, I’ve tinkered with the newest toys from OpenAI, trying to figure out how they’re useful or threatening. And like many writers, I’ve found that generative AI is not currently in the running to be even a poor fiction writer. That might be intuitive when we consider how generative AI is constructed: it runs on algorithms trained to find the most probable construction of language and ideas in any context. That is, it takes gajillions of pieces of training data and, sifting among them, finds the most well-trodden, overdone, tiresome path leading out of any possible prompt. It should be no surprise that generative AI is not yet capable of writing prose that entertains, because entertaining prose is not very probable. In response to any series of prompts, the chatbot still returns the most technically proficient, wooden, tiring recitation possible – by design. Its imagination is blinkered at every turn by the question of what seems likelier, and the prosaic is always likelier than the magical.
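For the technically curious, the probability-maximizing behavior described above can be caricatured in a few lines of code. This is a toy sketch, not how any real model is built – the vocabulary and the probabilities are invented for illustration – but it shows the design choice at issue: always taking the likeliest next word versus occasionally letting the improbable through.

```python
import random

# An invented next-word distribution for some prompt.
# The numbers are made up purely for illustration.
next_word_probs = {
    "dark": 0.55,          # the well-trodden path
    "quiet": 0.30,
    "calm": 0.10,
    "luminescent": 0.04,
    "carnivorous": 0.01,   # the improbable, possibly magical path
}

def greedy_choice(probs):
    """Always pick the single most probable word – wooden by design."""
    return max(probs, key=probs.get)

def sampled_choice(probs, temperature=1.0, rng=random):
    """Sample from the distribution; a higher temperature flattens it,
    giving improbable (more surprising) words a better chance."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(words, weights=weights, k=1)[0]

print(greedy_choice(next_word_probs))   # always "dark"
print(sampled_choice(next_word_probs, temperature=5.0))
```

The greedy function will emit “dark” every single time; the sampler, at a high enough temperature, will sometimes surprise you. Real systems are enormously more sophisticated, but the tension between probability and surprise is the same.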
Which leads me to an interesting question: if we want to train the next generative AI to be truly creative, should we not be systematically strengthening its associations between unlike concepts, disparate disciplines, the weird and the prosaic? And if we were to do so with deliberate intent, over many probably nonsensical intermediate versions of the beast, might it not only learn to mimic a human imagination, but suggest to us things we would have tired long before imagining?
Though this technology rests, for now, in the hands of a technocratic class trying to maximize its profit potential, it would be very, very timely for artists to start trying to understand how they might lend their grasp of imaginative processes – the types of associations the imagination thrives on, and how they are selected for – to its development.