Algorithms for Lying

I have gotten much of the basic structure of the conversation set up in SWAT, but now I turn to a particularly difficult task: designing the algorithms that will control the actors’ decisions about lying to each other.

We humans are so adept at lying that we don’t appreciate just how tricky a process it is. To be effective, a lie must address some weakness in the victim of the lie that makes it plausible. It must also be believable to the listener. It should be difficult to check up on. Lastly, it must be useful to the liar; there’s no point in risking your credibility on a lie that doesn’t gain you anything. Moreover, there are four classes of potential lie, each of which will need to be handled separately. 

Four forms of lie
I shall begin with the four forms of lie available:

1. tell badmouth
2. tell betrayal
3. tell lie
4. tell defeat

The first of these serves to reduce the listener’s pGood for the victim, so the lie must be compatible with the listener’s existing value of that pGood. 

The second and third will reduce the listener’s pTruthful for the victim, with which they should be compatible.

The fourth doesn’t run much risk of detection.
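
To keep these four cases straight while I work, here’s a quick sketch of the bookkeeping in code. Only the verb names and the pValue names come from the design above; the enum itself and its fields are just my scratch illustration, not actual SWAT structures:

```java
// A scratch tabulation of the four lie verbs. The verb and pValue names are
// the ones above; the enum and its fields are illustrative only, not SWAT code.
enum LieVerb {
    TELL_BADMOUTH ("pGood",     true),   // lowers the listener's pGood for the victim
    TELL_BETRAYAL ("pTruthful", true),   // lowers the listener's pTruthful for the victim
    TELL_LIE      ("pTruthful", true),   // likewise lowers pTruthful
    TELL_DEFEAT   (null,        false);  // little risk of detection, no plausibility check

    final String targetPValue;            // which perceived value the lie attacks
    final boolean needsPlausibilityCheck; // must fit the listener's existing pValue

    LieVerb(String targetPValue, boolean needsPlausibilityCheck) {
        this.targetPValue = targetPValue;
        this.needsPlausibilityCheck = needsPlausibilityCheck;
    }
}
```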

Let’s start with the determination of what would constitute a beneficial lie from the point of view of the liar. This would be a lie that undermines the listener’s trust or affection for an enemy. So the liar must first choose a suitable enemy to besmirch. There are three possible criteria here:

1. possession of significantly more aura pons than the liar; motivation is rational strategy
2. dislike or distrust on the part of the liar; motivation is simple emotion
3. recent maltreatment of the liar; motivation is revenge

I should include all three factors in the decision, weighted by personality traits of the liar. Those would probably be new personality traits. I can pair #3 with Power and #2 with Good, but I don’t see a pairing for #1, unless I create something for Smart_Dumb or Rational_Emotional.

Thus, the basic desirability formula would be:

Desirability = Rational * sum(p[Aura pons]) + Good * (-pGood - pTruthful) + Power * ???

This last term (???) is unclear to me (obviously). It represents the summation of wrong that the candidate has done to the liar. But is this not the same as the liar’s pGood for the candidate? Well, no, it’s more like the differential in pGood that has accumulated during the course of the game. 
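
If it’s the accumulated differential I’m after, one way to get it would be to keep a running ledger of every hit the candidate has inflicted on my pGood for him since the game began. This is only a sketch of the idea; the class and method names are placeholders, not anything that exists in SWAT yet:

```java
import java.util.HashMap;
import java.util.Map;

// Placeholder bookkeeping: sum up the pGood damage each actor has inflicted
// on "me" over the course of the game. All names here are illustrative.
class WrongsLedger {
    // candidate actor -> accumulated harm to my pGood caused by that actor
    private final Map<String, Double> accumulatedWrong = new HashMap<>();

    // Call whenever an event changes my pGood for 'actor' by 'deltaPGood'.
    void recordPGoodChange(String actor, double deltaPGood) {
        if (deltaPGood < 0) {                      // count only harm, not kindness
            accumulatedWrong.merge(actor, -deltaPGood, Double::sum);
        }
    }

    // The "???" term: how much wrong this candidate has done to me so far.
    double wrongDoneBy(String actor) {
        return accumulatedWrong.getOrDefault(actor, 0.0);
    }
}
```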

Perhaps I should merge #2 and #3; are they really that different? 

Small point: I shall have to convert the weighting factors to UNumbers so that people with Rational < 0 don’t pick unthreatening candidates, etc.
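
Pulling the pieces together, the candidate-selection arithmetic would look roughly like this. The formula and the trait and pValue names are the ones above; the toUNumber() remapping is just a stand-in for whatever the real BNumber-to-UNumber conversion turns out to be, and the Actor fields (along with the WrongsLedger from the previous sketch) are placeholders:

```java
import java.util.Map;

class LieTargetChooser {

    // Minimal stand-in for an actor's state; all field names are illustrative.
    static class Actor {
        String name;
        double rational, good, power;           // personality traits in [-1, +1]
        double auraPons;                        // candidate's total aura pons
        Map<String, Double> pGood, pTruthful;   // my perceived values, keyed by actor name
    }

    // Stand-in for the BNumber -> UNumber conversion: squash a bipolar trait
    // in [-1, +1] into a non-negative weight, so that Rational < 0 merely
    // weakens criterion #1 instead of inverting it.
    static double toUNumber(double bipolarTrait) {
        return (bipolarTrait + 1.0) / 2.0;
    }

    static double desirability(Actor liar, Actor candidate, WrongsLedger wrongs) {
        double rational = toUNumber(liar.rational);
        double good     = toUNumber(liar.good);
        double power    = toUNumber(liar.power);

        // #1 rational strategy: how threatening is the candidate's stock of aura pons?
        double threat = candidate.auraPons;

        // #2 simple emotion: how much do I dislike and distrust the candidate?
        double dislike = -liar.pGood.get(candidate.name) - liar.pTruthful.get(candidate.name);

        // #3 revenge: the "???" term, the accumulated wrong done to me.
        double grievance = wrongs.wrongDoneBy(candidate.name);

        return rational * threat + good * dislike + power * grievance;
    }
}
```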

Plausibility
The next consideration is the degree to which the lie will be believable to the listener. For this the liar must access the pValues of the listener; I’m willing to give them that access without resorting to 3pValues. I think these terms express the degree of implausibility of each of the verbs:

tell badmouth: bDifference(pGood, Quantifier2BNumber(4Quantifier))

tell betrayal, tell lie: bDifference(pTruthful, −0.5)

tell defeat: no basis for implausibility
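
To make sure those three cases hang together, here they are spelled out in code. The bDifference shown is only my guess at the operator (the distance between two BNumbers, rescaled into [0, 1]); the real SWAT operator may well differ, and the method names and parameters are placeholders:

```java
class Implausibility {

    // My guess at bDifference: the distance between two bounded numbers in
    // [-1, +1], rescaled into [0, 1]. The real SWAT operator may differ.
    static double bDifference(double a, double b) {
        return Math.abs(a - b) / 2.0;
    }

    // tell badmouth: compare the listener's pGood for the victim with the
    // badness the quantifier asserts (already converted to a BNumber).
    static double badmouth(double listenerPGoodForVictim, double quantifierAsBNumber) {
        return bDifference(listenerPGoodForVictim, quantifierAsBNumber);
    }

    // tell betrayal / tell lie: compare the listener's pTruthful for the
    // victim against the fixed anchor of -0.5.
    static double betrayalOrLie(double listenerPTruthfulForVictim) {
        return bDifference(listenerPTruthfulForVictim, -0.5);
    }

    // tell defeat: no basis for implausibility.
    static double defeat() {
        return 0.0;
    }
}
```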

I have to close this essay for now. More later.