Back in the Saddle for the Umpteenth Time

I have been busy with many tasks, and have made only slow progress with the Siboot storyworld. The biggest change is the introduction of a new capability for the actors: alliances. One actor can propose to another that they ally against a third actor for the remainder of the game. If they do so ally, then they will share all information they have about the third actor and, if possible, cooperatively attack him. 
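To pin the mechanic down, here is a minimal sketch of how an alliance might be represented and how the information-sharing obligation could work. Everything here (class names, actor names, the print stub) is my own illustration, not actual SWAT code.

```python
from dataclasses import dataclass

# Illustrative sketch of the alliance mechanic; none of these names come from SWAT.
@dataclass(frozen=True)
class Alliance:
    members: tuple        # the two allied actors
    target: str           # the actor the alliance is directed against

def share_information(alliances, holder, subject, fact):
    """When one ally learns a fact about the target, the other learns it too."""
    for a in alliances:
        if holder in a.members and subject == a.target:
            partner = a.members[1] if holder == a.members[0] else a.members[0]
            print(f"{holder} shares with {partner}: {fact}")

# Example: Alpha and Beta ally against Gamma for the remainder of the game.
alliances = [Alliance(members=("Alpha", "Beta"), target="Gamma")]
share_information(alliances, "Alpha", "Gamma", "Gamma attacked Beta last night")
```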

The verbs are all in place and properly linked, and now I have resumed writing scripts for them. This is the most difficult and most tedious task in storyworld building. And I am right back at a problem that I first discussed nearly three months ago: algorithms for lying. As I explained then, this is immensely complicated stuff; the liar can be motivated by a combination of three factors:

1. the target's possession of significantly more aura pons than the liar; the motivation is rational strategy
2. dislike or distrust of the target on the part of the liar; the motivation is simple emotion
3. recent maltreatment of the liar by the target; the motivation is revenge

Negative factors involved include:

A. The likelihood of being caught in the lie.
B. The need for the trust of the DirObject. 
C. ReactingActor’s own Honest value.

So let’s see if I can’t reduce these concepts to mathematical form.

1. Use the difference in total aura counts between 4Actor and ReactingActor.

2. Simply use pGood[ReactingActor, 4Actor].

3. This is too messy to track directly. Rely instead upon the changes already made to ReactingActor's pGood by the maltreatment.
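Before turning to the negative factors below, here is a minimal sketch of how these three positive motivations might combine into a single number. The function name, the weights, and the scaling are my own guesses; pGood and the aura counts are the quantities discussed above.

```python
# Hypothetical combination of motivations 1-3; weights and scaling are guesses.
def lie_motivation(reacting_aura: int, four_actor_aura: int,
                   p_good_of_4actor: float) -> float:
    # Factor 1: rational strategy, driven by the gap in total aura pons
    strategic = (four_actor_aura - reacting_aura) / 10.0
    # Factors 2 and 3: dislike/distrust; past maltreatment has already lowered
    # pGood, so the revenge motive rides along in the same term
    emotional = -p_good_of_4actor
    return strategic + emotional
```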

A. The likelihood of being caught is crucial, and it depends on the nature of the anticipated lie. Let's break it down by the three types of lies:

I. lie about lying. ReactingActor tells DirObject that 4Actor told a lie. This accusation is so vague that it could never be tracked down, and so it is not subject to verification. However, DirObject will assess its likelihood based solely on a comparison of the two pHonest values. If pHonest[DirObject, ReactingActor] < pHonest[DirObject, 4Actor], the lie would serve only to reduce DirObject's pHonest for ReactingActor. In the reverse case, DirObject would decrease his pHonest for 4Actor, but there might not be any other benefits.

II. lie about betrayal. ReactingActor tells DirObject that 4Actor betrayed (cheated on a deal with) a fourth person. This can be verified by direct questioning of the fourth person. If the fourth person is ReactingActor himself, then DirObject must treat this exactly as he would treat an accusation of lying.

III. lie about an alliance. ReactingActor tells DirObject that 4Actor proposed an alliance to ReactingActor against DirObject. This is impossible to verify directly. DirObject could draw some inferences by comparing this report with reports from other actors, as well as from his knowledge of whether 4Actor has attacked him.
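Factor A might then be approximated with a per-type constant for the risk of being caught, and the pHonest comparison in case I reduces to a simple update rule. The constants, the dictionary layout, and the delta are all assumptions on my part:

```python
# Hypothetical risk of being caught, per lie type (factor A); constants are guesses.
VERIFICATION_RISK = {
    "lying": 0.1,     # unverifiable accusation, so low risk
    "betrayal": 0.7,  # the fourth person can be questioned directly
    "alliance": 0.3,  # only indirect inference is possible
}

def react_to_lying_accusation(p_honest, dir_object, reacting_actor, four_actor,
                              delta=0.1):
    """Case I above: p_honest is a dict keyed by (perceiver, perceived)."""
    if p_honest[(dir_object, reacting_actor)] < p_honest[(dir_object, four_actor)]:
        # DirObject disbelieves the accusation and thinks less of the accuser
        p_honest[(dir_object, reacting_actor)] -= delta
    else:
        # DirObject believes it and thinks less of the accused
        p_honest[(dir_object, four_actor)] -= delta
```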

Case III opens up a whole new can of worms. Should actors attempt to explicitly track the actions of others to decide whether they're “out to get them”? This also raises the issue of circumferential relationships. Should these be invoked again? It certainly seems appropriate. But how inaccurate is it to say that pGood[1, 2] ≈ pGood[2, 1]? Can X not assume that Y feels about X the same way that X feels about Y?

SWAT does not support circumferential relationships; perhaps that was a mistake. I could, of course, rig up an oafish substitute by creating 49 separate relationships, but that's just too clumsy. Perhaps I should add circumferential relationships to SWAT myself. That's a daunting task: the original code is quite sophisticated, and I'm not sure I can successfully modify it.

Am I getting too complicated here? After all, shouldn't the actors all know each other from years of experience? More important, shouldn't the player be able to feel confident in his knowledge of the actors?

Here might be the solution: actors start off with their first-person traits, which are immutable, and those first-person traits play the primary role in making decisions. The second-person traits, however, are mutable (obviously), and they also play an important role. Thus, while actors can remain honest, their trust in each other can steadily fall during the course of the game.

Combine this with mirrored perceptions (X thinks that Y feels about X the same way that X feels about Y) and I think I have a solution.
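A minimal sketch of that solution, with all identifiers assumed on my part: first-person traits are frozen at startup, second-person traits drift during play, and third-person questions are answered by mirroring.

```python
from dataclasses import dataclass, field

# Sketch of the proposed trait model; identifiers are illustrative, not SWAT's.
@dataclass
class Actor:
    name: str
    honest: float                                 # first-person trait, immutable by convention
    p_good: dict = field(default_factory=dict)    # second-person: my feeling toward each other actor
    p_honest: dict = field(default_factory=dict)  # second-person: my trust in each other actor

    def guessed_feeling_toward_me(self, other: "Actor") -> float:
        """Mirrored perception: I assume the other actor feels about me
        the way I feel about him, i.e. pGood[Y, X] is approximated by pGood[X, Y]."""
        return self.p_good.get(other.name, 0.0)
```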