Orders of Intentionality

The other night I came across an idea in my reading that is strikingly similar to the subject of the previous essay, only it goes further. The basic concept is that there is an ascending sequence of orders of intentionality, as this example demonstrates:


Order 0: I will eat the apple.
Order 1: I know that I will eat the apple.
Order 2: I know that you know that I will eat the apple.
Order 3: I know that you know that I know that I will eat the apple.

And so on. According to this book, chimpanzees can handle perhaps two orders of intentionality, and humans at their limit can handle five orders of intentionality (example: "I can't believe that you would think me capable of hating Fred just because he failed to defend Mary when John called her a dummy."). Yes, it's a little hard to follow, but you can just barely make sense of it. At six orders of intentionality, we have to write it down on paper and parse it out mechanically. In most situations, though, we operate at about three orders of intentionality.
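
To make the pattern concrete, here is a little sketch of my own (it is not from the book) that builds an nth-order statement mechanically; the function name and the alternating I/you scheme are just illustrative choices:

```python
# My own illustration, not from the book: mechanically stack "know that"
# clauses around a base statement to produce an nth-order sentence.

def intentional_statement(order, base="I will eat the apple"):
    """Wrap `base` in `order` layers of knowing, alternating between I and you."""
    agents = ("I", "you")
    statement = base
    for level in range(order):                   # build from the innermost clause outward
        agent = agents[(order - 1 - level) % 2]  # the outermost clause is always "I"
        statement = f"{agent} know that {statement}"
    return statement

for n in range(4):
    print(f"Order {n}: {intentional_statement(n)}")
```

Even a mechanical generator like this makes the point: the sentences are trivial to produce but rapidly become impossible to hold in your head.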

This stuff might seem like silly logic games, but in fact it is crucial to social interaction. A culture with strictly bilateral relationships is subject to the rule of the strongest, but the richness of human interaction derives from the multilateral nature of our relationships. If I can recruit Fred to help me against you, the two of us together can defeat you. And if our social group has, say, a hundred members, then no individual can ever stand up to the group's will. Manipulating social relationships becomes the source of power. The person who can think one order further than others will better anticipate consequences, and thus better control events.

This stuff is all very important to drama, and in fact I have built a capability into the storyengine to address it: the virtual reaction. This allows a character to assess another character's emotional reaction to any contemplated action. However, there are two huge restrictions on this capability. First, it is only one order deep. Second, it concerns only emotional reactions, not actions. In other words, the storyengine can produce this:

"If I have an affair, my husband will be really really mad."

but not this:

"If I have an affair, my husband might catch us in bed and shoot us both."

Removing the second restriction is a necessary condition for removing the first: you can't anticipate Mary's reaction to Fred's action if you can't calculate Fred's likely action. Now computationally, this should not be conceptually difficult; it would require that I rewrite all the relevant routines to be re-entrant and then set up the recursion. The real problem is getting such a system to run fast enough on a home PC. Sheesh, all those layers of planning would get horribly time-consuming! As it happens, the emotional reaction is an adequate first-order approximation of the process; it gives us a good guide to the desirability of an action.
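
For what it's worth, here is a rough sketch of what that recursion might look like. The two characters, their actions, and the numbers are all invented for illustration and have nothing to do with the actual storyengine; the point is only the shape of the recursion, and the fact that the work grows roughly as the number of available actions raised to the planning depth:

```python
# Invented example: a character weighing an action simulates what the other
# character would most plausibly *do* next, and recurses to a fixed depth.

ACTIONS = {
    "Mary": ["have_affair", "stay_faithful"],
    "Fred": ["shoot_them_both", "forgive"],
}

def desirability(character, history):
    """How much `character` likes the events in `history` (a list of
    (actor, action) pairs). A pure stand-in for the real emotional model."""
    score = 0.0
    for actor, action in history:
        if character == "Mary":
            score += {"have_affair": 0.3, "stay_faithful": 0.1,
                      "shoot_them_both": -1.0, "forgive": 0.0}[action]
        else:  # Fred
            if action == "have_affair":
                score -= 0.8
            elif action == "stay_faithful":
                score += 0.2
            elif action == "shoot_them_both":
                # Revenge appeals to Fred only if the affair actually happened.
                score += 0.5 if ("Mary", "have_affair") in history else -1.0
            elif action == "forgive":
                score += 0.1
    return score

def choose(actor, other, history, depth):
    """Pick the action that looks best for `actor`, assuming `other` will in
    turn pick what is best for himself, `depth` plies deep."""
    best_action, best_value = None, float("-inf")
    for action in ACTIONS[actor]:
        future = history + [(actor, action)]
        if depth > 1:
            # Recurse: what would the other character most likely do after this?
            reply, _ = choose(other, actor, future, depth - 1)
            future = future + [(other, reply)]
        value = desirability(actor, future)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

# One order deep, the affair looks mildly attractive to Mary; two orders deep,
# Fred's likely response is folded in and she decides against it.
print(choose("Mary", "Fred", [], depth=1))   # ('have_affair', 0.3)
print(choose("Mary", "Fred", [], depth=2))   # ('stay_faithful', 0.1)
```

Even in this trivial world the branching is obvious; with a realistic cast and a realistic repertoire of actions, every added order multiplies the work again.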

I will set this idea aside for the next-generation storyengine, but it is an important one.