Alterations to Face Editor

Now that we have some faces to work with, all manner of possibilities arise. In this essay I shall present some of them.

The current irises are the same as human irises; I propose that they be unique to each species. They could be slightly different shapes or, more interestingly, have different internal striations. I would like to be able to modify them to alter the size of the pupil in response to different emotional situations. Also, I’d really like to make subtle changes to the color of the iris to indicate mood, like so:

Here’s how Skordokott looks with the different iris:


We could go even further and have some very weird irises. One important thing to remember here: we need different irises for the different actors so that the player can recognize his opponents in Dreamspace, because the player sees only the eyes.

In any event, I think it important that the eyes stand out, be striking, attract attention.

Coordinate System
I am considering revising the coordinate system. Currently its origin is at the upper left corner, as is typical with screen displays. What makes this tricky is that I end up mixing THREE different factors: the emotional expression, the actor specifics, and the excursion expression. Currently, all three are expressed in purely additive terms. I wonder whether I can make matters neater by making all actor values multiplicative. In other words, the basic emotion specifies the coordinates of the control points, which are then multiplied by a factor specific to each actor that is close to 1.00. The emotional expression sets the basic positions, but the actor-specific factors can stretch them. If this is to be so, then I must implement a new coordinate system with an origin at the center of the face, like so:
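To make the idea concrete, here is a minimal sketch of the multiplicative scheme under consideration, assuming the new face-centered coordinate system. All names and numbers here are illustrative, not the actual code:

```python
# Hypothetical sketch: the emotional expression sets the base position, the
# actor-specific factor (close to 1.00) stretches it, and the excursion
# remains an additive animation offset.

def compose_point(emotion_xy, actor_factor_xy, excursion_xy):
    ex, ey = emotion_xy          # base position from the emotional expression
    fx, fy = actor_factor_xy     # per-actor stretch factors, e.g. (1.25, 1.0)
    dx, dy = excursion_xy        # additive animation offset
    return (ex * fx + dx, ey * fy + dy)

# With the origin at the center of the face, stretching is symmetrical:
# a point at x = -20 and its mirror at x = +20 both move outward.
left = compose_point((-20.0, 35.0), (1.25, 1.0), (0.0, 2.0))   # (-25.0, 37.0)
right = compose_point((20.0, 35.0), (1.25, 1.0), (0.0, 2.0))   # (25.0, 37.0)
```

Note that this only works cleanly with the origin at the center of the face; with an upper-left origin, a stretch factor would drag both sides of the face in the same direction instead of apart.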

The real trick to making this work will lie in coming up with good multiplicative factors. I have already implemented eye sizing and jaw sizing. Do I need a redesign to implement other actor traits?

The Next Day
Alvaro has asked me to prepare an explanation of how all this works, so I might as well do it here. I’ll be describing how it works NOW, not how I am considering changing it.

First, some definitions:

Control Point: a single (x,y) coordinate specifying the location of one of the squares in the above diagram. The vertical origin for control points is the top of the image. The horizontal origin is the centerline of the face. The coordinates are specified on the scale of the big image, but they are scaled down for the small face images.
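Those conventions can be sketched like this; the image sizes and function names are my own assumptions, not the actual values:

```python
# Hypothetical sketch of the control-point conventions: y runs down from the
# top of the image, x is a signed distance from the face's centerline, and
# big-image coordinates are scaled down for the small face images.
BIG_IMAGE_SIZE = 512    # assumed size of the big reference image
SMALL_FACE_SIZE = 128   # assumed size of a small face image

def to_screen(x, y, centerline_x, scale=SMALL_FACE_SIZE / BIG_IMAGE_SIZE):
    # centerline_x is the screen x of the face's centerline in the small image
    return (centerline_x + x * scale, y * scale)

# A point 40 units left of the centerline and 200 units down, drawn in a
# small face whose centerline sits at screen x = 64:
pt = to_screen(-40.0, 200.0, 64.0)   # (54.0, 50.0)
```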

Feature: a single component of the overall face. There are nine features: Eye, Iris, OrbLine, BrowLine, EyeBrow, Nose, Jowl, UpperLip, and LowerLip. Each feature consists of a set of control points. OrbLine, BrowLine, EyeBrow, and Jowl are simple lines. Eye is a closed loop. Iris and Nose are images that are simply pasted on top of the baseFace. The UpperLip and LowerLip are complicated because they cross the centerline. Every feature except the Nose has a left side and a right side; left x-coordinates are negative, right x-coordinates are positive. Linear features are drawn with different line thicknesses assigned to each control point.
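A rough tabulation of the nine features as just described; the structure is purely illustrative, and the actual representation surely differs:

```python
# Hypothetical summary of the nine features. "mirrored" means the feature has
# a left and a right side; the lips are tagged only as crossing the
# centerline, since their drawing style is not otherwise specified.
FEATURES = {
    "Eye":      {"kind": "closed_loop", "mirrored": True},
    "Iris":     {"kind": "image",       "mirrored": True},
    "OrbLine":  {"kind": "line",        "mirrored": True},
    "BrowLine": {"kind": "line",        "mirrored": True},
    "EyeBrow":  {"kind": "line",        "mirrored": True},
    "Nose":     {"kind": "image",       "mirrored": False},
    "Jowl":     {"kind": "line",        "mirrored": True},
    "UpperLip": {"kind": "crosses_centerline", "mirrored": True},
    "LowerLip": {"kind": "crosses_centerline", "mirrored": True},
}
```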

Expression: the data that specifies the locations of the control points for all the features.

Actor Face: the data that specifies the peculiarities of a particular actor’s face. These include baseY to shift the drawing of the expression up or down relative to the baseFace. JawWidth determines how wide the mouth is and pushes the Jowl lines apart. EyeSize scales down the size of the Iris. EyeSeparation moves the eyes apart horizontally. There is also a VerticalOffset assigned to each feature that makes it possible to adjust features vertically for each actor’s face.
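A sketch of how those actor-specific adjustments might be applied to a single control point. The field names follow the text, but the data layout and logic are my assumptions, not the actual implementation:

```python
# Hypothetical sketch of applying Actor Face adjustments to one control point.
from dataclasses import dataclass, field

@dataclass
class ActorFace:
    baseY: float = 0.0          # shifts the whole expression up or down
    jawWidth: float = 1.0       # pushes the Jowl lines apart
    eyeSize: float = 1.0        # scales down the Iris image (not shown here)
    eyeSeparation: float = 0.0  # moves the eyes apart horizontally
    verticalOffset: dict = field(default_factory=dict)  # per-feature y shift

def adjust_point(actor, feature, x, y):
    y = y + actor.baseY + actor.verticalOffset.get(feature, 0.0)
    if feature == "Jowl":
        x = x * actor.jawWidth          # widen by pushing Jowl points outward
    if feature in ("Eye", "Iris"):
        # push each eye away from the centerline, symmetrically
        x = x + (actor.eyeSeparation if x > 0 else -actor.eyeSeparation)
    return (x, y)
```

This relies on the centerline-origin convention for x: multiplying a Jowl x-coordinate by jawWidth pushes the left (negative) and right (positive) lines apart symmetrically.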

Excursion: similar to an Expression, except that it is used only for animation.
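Since an Excursion carries the same kind of control-point data as an Expression, one plausible way it could drive animation is a linear blend from the current expression toward the excursion over several frames. This is my assumption about its use, not a description of the actual animation code:

```python
# Hypothetical sketch: blend one control point from an Expression toward the
# corresponding point in an Excursion.

def blend(expression_xy, excursion_xy, t):
    # t in [0, 1]: t = 0 gives the expression, t = 1 the full excursion
    (x0, y0), (x1, y1) = expression_xy, excursion_xy
    return (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
```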