Da Big Algorithm

I am at last returning to work on Siboot, and the problem I face is a nasty one: how to handle reactions to gossip? Let’s suppose that Actor A tells Actor B that Actor C thinks that Actor D’s value for a personality trait T is X ± Y. That statement has seven parameters: A, B, C, D, T, X, and Y. 

Definition of intermediate terms
This is going to be complicated, so I’m going to define a number of intermediate terms. First, the easy ones:

pValue(D, T): the true value of trait T for actor D.
p2Value(D, T, C): the value of trait T for actor D as perceived by actor C.
p3Value(D, T, C, A): the value of trait T for actor D as perceived by actor C, according to actor A. Same as X.
u3Value(D, T, C, A): the uncertainty of the value of trait T for actor D as perceived by C, according to A. Same as Y.
weightedMean(x1, u1, x2, u2): the mean of x1 and x2, weighted by their uncertainties u1 and u2 (the smaller an uncertainty, the more weight its value carries).
grossWeightedMean(T, B, D): the weighted mean of all p3Values of T towards D, as known by B. A sketch of both functions follows this list.
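
For concreteness, here is one plausible way to compute those two means. This is only a sketch, not the engine’s actual code: the inverse-uncertainty weighting is an assumption, EPSILON and getU3TValue are hypothetical stand-ins, and getP3TValue and actorCount mirror the identifiers in the listing further down (with the trait T implicit).

static final float EPSILON = 0.001f;   // hypothetical floor to keep the weights finite

// one assumed weighting: each value is weighted by the inverse of its uncertainty
float weightedMean(float x1, float u1, float x2, float u2) {
   float w1 = 1.0f / (u1 + EPSILON);
   float w2 = 1.0f / (u2 + EPSILON);
   return (w1 * x1 + w2 * x2) / (w1 + w2);
}

// fold together every actor c's reported perception of d, as known by b,
// weighting each report by the inverse of its stored uncertainty
float grossWeightedMeanT(int b, int d) {
   float weightedSum = 0.0f;
   float totalWeight = 0.0f;
   for (int c = 0; c < actorCount; ++c) {
      if (c == b) continue;                        // b's own view is a p2Value, not a p3Value
      float w = 1.0f / (getU3TValue(d, c, b) + EPSILON);
      weightedSum += w * getP3TValue(d, c, b);
      totalWeight += w;
   }
   return (totalWeight > 0.0f) ? weightedSum / totalWeight : 0.0f;
}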

The algorithm takes as input the seven parameters provided in A’s statement to B and replaces the existing values of p3Value(D, T, C, A) and u3Value(D, T, C, A), although it must first increase the reported u3Value(D, T, C, A) by the degree of distrust that B feels for A (that is, the inverse of p2Value(A, honest_dishonest, B)). Then it calculates grossWeightedMean(T, B, D); this value becomes B’s new best estimate of p2Value(D, T, B). Next, the algorithm evaluates the deviation of every actor’s reported p3Value(D, T, C, actor) from this p2Value(D, T, B). That deviation is used to alter p2Value(actor, honest_dishonest, B). In other words, the more an actor’s report differs from the new best value, the less honesty B will feel toward that actor. 

Here’s the whole thing in Java. To simplify the presentation, I have left out the value of T; this code assumes a single personality trait that is implicit. The slightly different version that can handle any trait obfuscates without benefit.

public void reactToGossip(int a, int b, int c, int d, float x, float y) {
   // enter the reported values into B's arrays; the indices follow the getter below:
   // (subject, perceiver, knower), so B records C's perceived value of D
   setP3Value(d, c, b, x);
   // inflate the reported uncertainty toward +1 in proportion to B's distrust of A
   setU3Value(d, c, b, Blend(y, +1, -getP2Honest(a, b)));
   // B's new best estimate of D is the weighted mean of everything B has heard
   setP2TValue(d, b, grossWeightedMeanT(b, d));

   // penalize every actor whose reported value strays from the new best estimate
   for (int i = 0; i < actorCount; ++i) {
      if (i != b) {
         float discrepancy = BAbs(BDifference(getP3TValue(d, i, b), getP2TValue(d, b)));
         setP2HonestValue(i, b, -UNumber2BNumber(discrepancy));
      }
   }
}
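
To tie the call back to the original statement, a quick usage example with arbitrary actor indices: if actor 2 (A) tells actor 0 (B) that actor 3 (C) thinks actor 1’s (D’s) value is 0.4 with uncertainty 0.25, the call is:

// A = 2 tells B = 0 that C = 3 perceives D = 1 at X = 0.4, with uncertainty Y = 0.25
reactToGossip(2, 0, 3, 1, 0.4f, 0.25f);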

This raises a crucial question: how will P2Honest values drift during the course of the game? Will they drift upwards or downwards? In other words, will people tend to gain trust in each other, or will they lose trust in each other? 
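
One way to answer that empirically rather than by guessing is a throwaway probe like the one below. It is hypothetical: it assumes it sits in the same class as reactToGossip, that getP2Honest(i, b) and actorCount are visible as in the listing above, and that reported values and uncertainties live in the ranges (-1, +1) and (0, 1). It fires a batch of random gossip events and prints the average P2Honest before and after; if the average trends downward over many rounds, trust is eroding, and if upward, trust is accumulating.

// hypothetical drift probe; the random ranges for x and y are assumptions
public void probeHonestyDrift(int rounds) {
   java.util.Random rng = new java.util.Random();
   System.out.println("average P2Honest before: " + averageP2Honest());
   for (int round = 0; round < rounds; ++round) {
      int a = rng.nextInt(actorCount);            // speaker
      int b = rng.nextInt(actorCount);            // listener
      int c = rng.nextInt(actorCount);            // perceiver being quoted
      int d = rng.nextInt(actorCount);            // subject of the gossip
      if (a == b) continue;                       // nobody gossips to himself
      float x = rng.nextFloat() * 2.0f - 1.0f;    // reported value, assumed range (-1, +1)
      float y = rng.nextFloat();                  // reported uncertainty, assumed range (0, 1)
      reactToGossip(a, b, c, d, x, y);
   }
   System.out.println("average P2Honest after: " + averageP2Honest());
}

// mean of every actor's perceived honesty of every other actor
private float averageP2Honest() {
   float sum = 0.0f;
   int count = 0;
   for (int i = 0; i < actorCount; ++i)
      for (int j = 0; j < actorCount; ++j)
         if (i != j) {
            sum += getP2Honest(i, j);
            ++count;
         }
   return (count == 0) ? 0.0f : sum / count;
}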

I’ve been wringing my hands over this algorithm for four days now; it’s time to put an end to this egregious cogitation and implement it. Let’s just do this and see how it works.