Consciousness

August 14th, 1999

I’ve been reading plenty of books about consciousness, and I must say, their authors don’t seem to have a grip on the problem. Many interesting ideas have been offered, each containing some grain of truth, but as yet nobody seems to have nailed the problem down.

My own work in interactive storytelling generated a fascinating and unconventional view of consciousness. I’ll trace how I came to my startling conclusions.

The starting point was my early attempt at generating a personality model. Now, personality modeling is an ancient and much-practiced art; personality models are as common as hot new 3D game designs. So I didn’t expend much energy trying to build the perfect personality model. I slapped together a list of personality traits that I thought might be useful for interactive storytellers, and figured I’d tweak it over time.

The next step was to build relationship models. This was a trickier problem. What are the fundamental components of every relationship between two people? I wanted to be more careful here, because these relationships are two-dimensional and therefore require more RAM. If I have, say, 32 characters in my storyworld, then I’ll only need 32 bytes of RAM for each personality trait that I add -- one byte for each character. But relationships require that I store 1024 bytes for each relationship variable -- 32 relationships for each of 32 characters. And if the character count grows larger, my problem could become serious. So I wanted to be stingy with my relationship model. I wanted to find the "minimum set of orthogonal vectors that span the relationship space" -- the fewest relationships, having nothing in common with each other, that cover everything about human relationships. Well, maybe not everything, but most of the important stuff for interactive storytellers.
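
To make the arithmetic concrete, here’s roughly how the two storage schemes look in C. A minimal sketch -- the names are placeholders of mine, not the actual model:

    #define NUM_CHARS 32   /* characters in the storyworld */

    /* Each intrinsic trait I add costs NUM_CHARS bytes of RAM:
       one byte per character. */
    unsigned char trait[NUM_CHARS];

    /* Each relationship variable I add costs NUM_CHARS * NUM_CHARS
       = 1024 bytes: rel[i][j] is character i's relationship to
       character j. */
    unsigned char rel[NUM_CHARS][NUM_CHARS];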

While floundering around with this idea, I stumbled upon an odd realization: a relationship value could be treated as one person’s perception of another’s intrinsic traits. This first struck me when considering trust, surely one of the fundamental relationships. My trust in you is really just my perception or assessment of your intrinsic integrity. Similarly, my fear of you could be described as my perception of your intrinsic capacity to inflict harm. The more I thought about it, the more convinced I became that all relationships were perceptions of intrinsic traits.
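
In code, the reinterpretation is just a relabeling of the same sketch: the relationship array becomes a table of estimates of an intrinsic trait, and the estimates need not match the truth. (Again, the names here are illustrative only.)

    #define NUM_CHARS 32

    /* Intrinsic trait: one true value per character. */
    unsigned char integrity[NUM_CHARS];

    /* The relationship reinterpreted as a perception: trust[i][j] is
       character i's estimate of character j's integrity. It need not
       equal integrity[j] -- and the gap between the two is where the
       dramatically interesting behavior lives. */
    unsigned char trust[NUM_CHARS][NUM_CHARS];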

This is a cute idea, and I was rather pleased with my cleverness. It did not, however, lead me to any grand realizations. The Big Idea came much later, when I was contemplating what I called "circumferential" relationships. Let’s use a geometric visualization of a relationship model. We’ll put our subject in the center of the universe, which is where we all place ourselves anyway. The intrinsic traits of our subject are numbers that belong right there with the individual. Next, we scatter other people around the subject, and his relationships with them are imagined as arrows running from the center to each of the others. All these relationships radiate out from the center -- that’s why I call them "radial" relationships. They’re also two-dimensional relationships, because two people are involved in each relationship.

The next step is to imagine the relationships between the other people. These aren’t radial; instead they run around the edges of our imaginary diagram. I call these new relationships "circumferential" relationships. Here’s a key point: I’m not talking about what those relationships really are, I’m talking about what they are as perceived by the Big Ego in the center. If we want to know what they really are, we need to look at the radial relationships of each of the circumferential characters. For this diagram, the circumferential relationships are the perceptions of the individual in the center.

Confused? Here’s an example. John is in the center. Jane is his girlfriend; she’s very close to him, to the left and slightly above John. Tom is a friend of John; he’s at the top of the diagram. John is jealous because he thinks that Tom and Jane are in love. In truth, they aren’t, but John thinks so and that’s what’s important. John’s jealousy could be the driving element in a good story -- and it’s a perceived relationship, not an actual one.

By the way, note that circumferential relationships are three-dimensional; they require three people: the relator, the relatee, and the perceiver of the relationship.
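
Sticking with the earlier sketch, the third person means a third array index, and the RAM problem gets worse by another factor of 32. The names and values below are invented purely for illustration:

    #define NUM_CHARS 32

    enum { JOHN, JANE, TOM /* , ... */ };

    /* Radial: love[i][j] is how much i actually loves j.
       Two people, two indices, 32 * 32 = 1024 bytes. */
    unsigned char love[NUM_CHARS][NUM_CHARS];

    /* Circumferential: perceived_love[p][i][j] is how much character p
       believes i loves j. Three people, three indices, and
       32 * 32 * 32 = 32,768 bytes per variable. */
    unsigned char perceived_love[NUM_CHARS][NUM_CHARS][NUM_CHARS];

    void jealousy_example(void) {
        love[TOM][JANE] = 10;                  /* Tom barely loves Jane...     */
        perceived_love[JOHN][TOM][JANE] = 200; /* ...but John believes he does */
    }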

These circumferential relationships might seem esoteric to you, but in fact they play a major role in human behavior. A great many of our social decisions are based on these circumferential relationships -- the relationships between others as we perceive them. Call it social politics, call it the Eternal Triangle, it’s important and we all devote plenty of time and energy to assessing those circumferential relationships. Is Fred cozying up to the boss to squeeze me out of the big promotion? Does my daughter love her new stepfather? Is the teacher favoring Johnny over Jimmy? Is my best friend blabbing my innermost secrets to others?

Since these relationships are so important, we all need powerful mental facilities for evaluating them. We can concoct an imaginary evolutionary scenario to explain it. Here we have, say, John Homo Erectus, grubbing about for roots and berries. He has well-developed mental facilities for handling radial relationships, so he knows that he really really likes Jane Homo Erectus, and he appreciates that Tom Homo Erectus is his best buddy, having saved him from that charging woolly mammoth. But poor old John just doesn’t get circumferential relationships; he doesn’t realize that while he’s out there grubbing for roots and berries, clever Tom is impregnating Jane with his own sperm. Poor John! He’ll spend the rest of his days feeding Tom’s offspring! His genes won’t pass into the gene pool; Tom’s will.

Tom’s advantage arises from his ability to perform social calculations involving circumferential relationships. He has figured out how to impregnate Jane and get away with it. Now, how did Tom suddenly acquire this enormous new capability? Was he born with a brain several hundred cc’s larger than John’s? No; his only trick was to extend his existing social-calculation facilities slightly, using them at a higher level of indirection.

Tom already boasts a powerful set of brain functions for evaluating radial relationships. He knows that, if he gives food to another person, she is appreciative. If his woman is unfaithful, he gets mad. These social computations are pretty much knee-jerk, automatic processes. He has abstracted them enough that he can plug any person into the relationship-calculator and get reliable results. Give food to Jane, and she’ll be appreciative. Give food to Jack, and he’ll be appreciative, too. Tom understands the principle independent of the person.

John shares all these capabilities with Tom, but there is a hitch: the abstraction is one-sided. John’s social computations or rules have no subject. John thinks "Give food to X, and X is appreciative." John has abstracted the rule to the point of using an X for the indirect object, but no further. Tom, however, has added a new level of abstraction: he thinks "Y gives food to X, and X is appreciative of Y". He knows that when John gives food to Jane, Jane appreciates John. That’s a circumferential relationship.
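
The difference is easy to see in code. John’s rules carry an implicit subject, so his mental model needs only one index; Tom’s rules abstract the subject too, so his needs two. A rough sketch with made-up names:

    #define NUM_CHARS 32

    /* John's mental model: the giver is always the unstated "me", so
       he can only track appreciation aimed at himself. */
    unsigned char appreciates_me[NUM_CHARS];

    /* "Give food to X, and X is appreciative." */
    void john_gives_food(int x) {
        appreciates_me[x] += 1;
    }

    /* Tom's mental model: the giver is a variable too, so he can track
       appreciation between any two people, including pairs that don't
       involve him at all. */
    unsigned char appreciation[NUM_CHARS][NUM_CHARS]; /* [x][y]: x appreciates y */

    /* "Y gives food to X, and X is appreciative of Y." */
    void tom_sees_give_food(int y, int x) {
        appreciation[x][y] += 1;
    }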

This kind of thinking enables Tom to calculate that John can’t get mad over what he doesn’t know, and Tom can thereby formulate a scheme for impregnating Jane in John’s absence. The scheme will work because Tom can put himself into John’s shoes and calculate John’s perceptions, refining the scheme until he’s sure that it will work.
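
In the terms of my sketch, Tom’s perspective-taking amounts to evaluating the same rules over John’s beliefs instead of over the truth. A toy illustration (the threshold and names are invented):

    #define NUM_CHARS 32
    #define ANGER_THRESHOLD 128

    enum { JOHN, JANE, TOM };

    unsigned char perceived_love[NUM_CHARS][NUM_CHARS][NUM_CHARS];

    /* Tom predicts John's reaction by evaluating the rule over John's
       perceptions rather than over the actual state of affairs. So
       long as no event ever updates perceived_love[JOHN][TOM][JANE],
       the scheme is safe. */
    int john_gets_angry(void) {
        return perceived_love[JOHN][TOM][JANE] > ANGER_THRESHOLD;
    }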

All of this arose for sound evolutionary reasons, your basic survival-of-the-fittest kind of stuff. But there was an unintended side effect. Dumb old John always thought in terms like "Give food to...". This way of thinking has an implicit but unrecognized subject. John innately understood that it meant "I give food to...", but he never had to use the "I", because it was never necessary. Thus, for John, the concept of "I" never existed. But Tom, having reformulated the rule into "Y gives food to...", created a new problem for himself: what to plug into Y for himself? The answer is obvious to you and me: "I". But the concept of "I" hadn’t been invented back then. Tom had to create it.

Thus was "I" invented. It’s a good thing Tom didn’t patent it, or we’d all be in a lot of trouble, having to pay Tom if we wanted to be conscious individuals. Because "I" really is the core concept of consciousness. Once Tom had created "I", he could plug it into all sorts of other social calculations that previously had been applied only to others. He could wonder how he would feel if he were hunted, killed, and eaten like the deer he had just feasted upon; angst goes hand in hand with "I". So does deception, which got Tom ahead in the gene-pool race. And so does morality. Tom could understand the Golden Rule; John couldn’t.