What I fell on last night was an article in New Scientist about Tomas Rokicki and "his team" having solved the problem of how many moves it would take to solve Rubik's cube from every possible starting arrangement. This number of moves is called God's number (or God's algorithm) because if God were to solve the cube he/she/it would do it as efficiently as possible. (My feeling is that God's number would be zero, or maybe a glance, but it's a fun name anyway.) This number has long been sought after by mathematicians with ever better algorithms and ever faster computers. What this group of people did was to finally and conclusively prove that God's number is exactly twenty.
(By the way, I hate it when people say "so-and-so and his team," so here is the full list of participants, from cube20.org:
Our group consists of Morley Davidson, a mathematician from Kent State University, John Dethridge, an engineer at Google in Mountain View, Herbert Kociemba, math teacher from Darmstadt, Germany, and Tomas Rokicki, a programmer from Palo Alto, California.

Note the alphabetical order.)
The interesting connection to narrative is in this part of the New Scientist article:
Previous methods solved around 4000 cubes per second by attempting a set of starting moves, then determining if the resulting position is closer to the solution. If not, the algorithm throws those moves away and starts again.
Rokicki's key insight was to realise that these dead-end moves are actually solutions to a different starting position, which led him to an algorithm that could try out one billion cubes per second.
A series of moves that was a mistake for one configuration was a solution for another, and the mathematicians stored the mistake for later use. They used stories to transfer knowledge, in the simplest way possible. (What an ingenious solution.)
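Just to make the idea concrete, here is a toy sketch in Python of that caching trick as the article describes it. To be clear, this is not Rokicki's program or Kociemba's algorithm; the miniature permutation "puzzle" and every name in it are invented stand-ins for the cube. The only point is the bookkeeping: each move sequence we generate gets filed as the solution to whichever position it does solve, instead of being thrown away when it fails to solve the position we cared about.

```python
from itertools import product

# A toy illustration (invented, nothing to do with Rokicki's real code) of the
# article's point: a move sequence that fails to solve the position you started
# from is still a perfect solution to some other position, so store it instead
# of discarding it.

SOLVED = (0, 1, 2, 3)          # the "solved" state of a tiny stand-in puzzle
MOVES = {
    "a": (1, 0, 2, 3),         # swap the first two pieces
    "b": (0, 2, 3, 1),         # 3-cycle the last three pieces
}

def apply_move(state, move):
    perm = MOVES[move]
    return tuple(state[perm[i]] for i in range(len(state)))

def apply_sequence(state, seq):
    for m in seq:
        state = apply_move(state, m)
    return state

def invert_perm(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return tuple(inv)

def position_solved_by(seq):
    # Run the sequence backwards from the solved state: the result is the one
    # starting position for which this sequence is a solution.
    state = SOLVED
    for m in reversed(seq):
        inv = invert_perm(MOVES[m])
        state = tuple(state[inv[i]] for i in range(len(state)))
    return state

def build_solution_table(max_len=6):
    # Instead of judging each sequence against one target position and throwing
    # away the "mistakes", file every sequence under the position it does solve.
    table = {}
    for length in range(1, max_len + 1):
        for seq in product(MOVES, repeat=length):
            table.setdefault(position_solved_by(seq), seq)
    return table

if __name__ == "__main__":
    table = build_solution_table()
    for position, seq in list(table.items())[:5]:
        assert apply_sequence(position, seq) == SOLVED
        print(position, "is solved by", "".join(seq))
```

Obviously a lookup table over a four-piece puzzle proves nothing about the cube's roughly 4.3 x 10^19 positions; it only shows why "storing the mistake for later use" turns wasted work into transferable knowledge.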
The reason I got very excited when I read this was because I once worked in the world of simulation and animal behavior, and when I started working in organizational story I had in mind (but never found the time to create) a simulation exploring what I called "minimally storytelling agents." I wanted to find out what you would have to create in both the environment and the makeup of artificial life elements in order for them to start telling each other minimal this-happened stories. Of course, they would have to do it without you programming it into them, so the puzzle would be what would get them started, minimally, on that course. My intuitions about what might be required for storytelling in simple agents have always been mainly interaction (my fitness depends in part on your behavior) and iteration (we interact repeatedly).
But I've always thought there must be something else, something about
the nature of the environment, that must be in there as well. Something
worth telling stories about.
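For what it's worth, here is the kind of bare-bones skeleton I have in mind, written in Python. Everything in it is a placeholder assumption of mine: the agents, the foraging patches, the "danger spread" knob standing in for an environment worth telling stories about, and a "story" that is nothing more than a shared record of a patch and a bad outcome. It also hard-codes the telling rather than letting it emerge, so it only frames the question; it doesn't answer it.

```python
import random

# A hypothetical, minimal sketch of the "minimally storytelling agents" idea.
# Interaction: agents benefit from records other agents broadcast.
# Iteration: the same population forages over many rounds.
# Environment: patches of varying danger; danger_spread controls how extreme
# (how "worth telling stories about") the environment is.
# This reproduces no published model; every number here is a placeholder.

def run(num_agents=20, num_patches=10, rounds=200, telling=True, danger_spread=0.4):
    dangers = [max(0.0, min(1.0, 0.2 + random.uniform(-danger_spread, danger_spread)))
               for _ in range(num_patches)]
    shared_memory = set()          # minimal "stories": patches reported as dangerous
    fitness = [0.0] * num_agents

    for _ in range(rounds):
        for a in range(num_agents):
            if telling:
                candidates = [p for p in range(num_patches) if p not in shared_memory]
            else:
                candidates = list(range(num_patches))
            patch = random.choice(candidates or list(range(num_patches)))
            if random.random() < dangers[patch]:
                fitness[a] -= 1.0                 # bad outcome
                if telling:
                    shared_memory.add(patch)      # broadcast the minimal story
            else:
                fitness[a] += 1.0                 # successful foraging
    return sum(fitness) / num_agents

if __name__ == "__main__":
    random.seed(1)
    for spread in (0.05, 0.4):
        with_stories = run(telling=True, danger_spread=spread)
        without = run(telling=False, danger_spread=spread)
        print(f"danger spread {spread}: mean fitness {with_stories:.1f} with stories, "
              f"{without:.1f} without")
```

A real version of the experiment would make the broadcasting behavior itself a heritable trait and watch whether it spreads, rather than switching it on by hand as this sketch does.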
It's one of those projects I'd still like to do when I get time. Has anybody else beaten me to it? Maybe; I'm not sure. (Tell me if you've seen anything like it.) On a quick look I see that Kerstin Dautenhahn and Steven J. Coles published a 2001 paper in the Journal of Artificial Societies and Social Simulation describing a simulation in which simulated robots were given minimally self-storytelling behavior and were found to derive fitness benefits from it in some environments. But as far as I know nobody has yet watched storytelling "evolve," so maybe I'll still tackle it when I get a free block of time.
At any rate, these connections may be useful to people thinking about what stories
are for, especially those looking at narrative from the perspective of
evolutionary psychology. I have been enjoying Brian Boyd's book On the Origin of Stories, and this whole thing fits right into his ideas about why people tell stories at the most elemental level, especially what he writes about play.
The other thing I fell on was an article about hippocampal "replay" in rats. Says the article:
The hippocampus, a part of the brain essential for memory, has long been known to "replay" recently experienced events. Previously, replay was believed to be a simple process of reviewing recent experiences in order to help consolidate them into long-term memory.

Replay meaning storytelling, if internal. So, by watching the firing patterns in the brains of these rats, the researchers were able to find out not only where the rat was located but also what locations the rat was thinking about as it ran about (or didn't). The presence of "cognitive maps" in the brain, in which "place cells" describe the animal's current location, has been researched for a few decades already. But this is the first time I've seen it linked to storytelling (not that I'm up on the neurological cutting edge, so I may just be ill-informed). Here was the bit I found most interesting:
[The researchers] found that it was not the more recent experiences that were played back in the hippocampus, but instead, the animals were most frequently playing back the experiences they had encountered the least. They also discovered that some of the sequences played out by the animal were ones they had never before experienced.
This reminds me of Gary Klein's work (described in Sources of Power) on naturalistic decision making: that people make decisions by recalling cases from the past. However, as I recall it, Klein's firefighters did not recall (replay) their great masses of mundane experience; they compared what they were sensing mainly to the extremes. My strongest memory from Klein's book is of a firefighter entering a room, feeling a hot floor, remembering a time when a floor gave way because there was fire beneath it, and ordering everyone out just before the floor collapsed. Now I don't believe naturalistic decision making replaces the rational, normative model; I think they coexist and commingle. But still, these things connect. People tell stories (to themselves and others) -- in part -- to explore the areas of their worlds that hold more danger, because it is only there that it is worth the time and trouble. Stories about rules are valuable, but stories about exceptions to rules are at least as valuable, and often more so.
This also links to the Dautenhahn and Coles paper I mentioned above. The environment in which self-storytelling produced an adaptive advantage was the more variable one:
We can speculate that minimal mechanisms in the "first story-telling animal" survived because the animal was better adapted to a dynamic environment, story-telling increased its survival chances.

So, when the environment included broader extremes between safe and dangerous territories, storytelling conferred a greater advantage. Which links to the rat-place-cells finding and Klein's work.
In practice, what does this all mean to people working with stories in organizations and communities? Well, first, it gives credence to the view that stories of best practice are not as useful as stories of worst practice. However, it balances that by pointing out that when positive stories are "encountered the least" -- exceptional -- they hold special value. This also connects to Stephen Crites' wonderful distinction (in his excellent chapter "The Narrative Quality of Experience" in Memory, Identity, Community) between sacred and mundane stories. My favorite quote on this (and I think I've quoted this before) is:
Such [sacred] stories, and the symbolic worlds they project, are not like monuments that men behold, but like dwelling places. People live in them....[They] inform people's sense of the story of which their own lives are a part, of the moving course of their own action and experience.
People live in sacred stories, while they live with mundane stories. If stories map our worlds, those at the outer edges are the ones we can least afford to lose. (The outer edges of what is a valid question for another time.)
The other gem this series of linked observations contains, to me at least, is that if anyone doubts the value of storytelling for knowledge transfer, we can go back to the very basics of cognitive function and problem solving to provide examples of utility. That's useful all by itself.