Advanced Tinkertoy Text

July 19th, 2010

I well remember the inspiration for tinkertoy text; it was back in 1984. I was in a market in New York City with my friend Eric Goldberg when my eyes happened upon a copy of The National Enquirer at the checkout stand. Contemplating the headlines thereon, I realized that they appeared to have been created with an algorithm combining such elements as Elvis, “flying saucers”, “revolutionary new diet plan”, “love child”, “reincarnated”, and so forth. This inspired me to create the first tinkertoy text system for Balance of Power. For each text component of a sentence, it offered a choice of four text elements, sometimes scaled by a modifier. It was primitive, but it worked.
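
To make the scheme concrete, here is a minimal sketch of the kind of combinatorial trick I inferred from those covers; the word lists are my own illustration built from the elements above, not the actual Balance of Power code:

import random

# Illustrative word lists assembled from the elements I saw on those covers.
subjects = ["Elvis", "Reincarnated Elvis", "Love child"]
events = ["spotted aboard flying saucer",
          "reveals revolutionary new diet plan"]

def headline():
    # One element per slot, glued together: instant tabloid headline.
    return f"{random.choice(subjects)} {random.choice(events)}!"

print(headline())   # e.g. "Reincarnated Elvis reveals revolutionary new diet plan!"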

I made a slight improvement upon the system with Siboot in 1987, and then did little with the idea. Its next incarnation appeared in the Erasmatron, in which I had an improved tinkertoy text system that gave pretty good results, but was clumsy to use.

The most thorough implementation of tinkertoy text came with Storytron in 2008. It was based on text strings; these were the operators available for use (sketched in code after the list):

TextConstant {a string specified directly by the user}
Concat {plug one string onto the end of the previous string}
Concat3 {same as above, but with 3 strings}
Concat4 {same as above, but with 4 strings}
Various pronouns and labels for Actors, Stages, Props, and Quantifiers
PickUpperTextIf {permits a choice between two text strings, based on a condition}
CalculatedText {a sort of SWITCH statement for text strings, permitting 3, 5, 7, 9, or 11 strings to be selected from, based on a numeric value}
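
Expressed as ordinary functions, the flavor of these operators is roughly as follows. This is a loose Python sketch of my own, not the actual Storytron implementation; in particular, I assume here that CalculatedText's numeric value has been normalized to [0, 1), which may not match the real mapping:

def text_constant(s):
    # TextConstant: a string specified directly by the user.
    return s

def concat(*strings):
    # Concat / Concat3 / Concat4: glue strings end to end.
    return "".join(strings)

def pick_upper_text_if(condition, upper, lower):
    # PickUpperTextIf: choose between two strings.
    return upper if condition else lower

def calculated_text(value, strings):
    # CalculatedText: select one of an odd number of strings from a
    # numeric value (assumed normalized to [0, 1) for this sketch).
    index = min(int(value * len(strings)), len(strings) - 1)
    return strings[index]

print(concat("The ", calculated_text(0.9, ["sullen", "steady", "eager"]),
             " soldiers ", pick_upper_text_if(True, "marched", "trudged"), " by."))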

These operators allowed some rich tinkertoy text, but the approach suffered from two weaknesses. First, it took a lot of work to create algorithms whose output didn’t read like artificial text; it didn’t take long for readers to sense the mechanical nature of the algorithm. Second, you were forced into a single sentence structure; it was very difficult to avoid results like this (a sketch of the underlying template follows the examples):

The tired soldiers trudged by.
The nervous warriors stumbled by.
The eager fighters marched by.
The reluctant people moved forward.
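
Every one of those lines is squeezed through the same mold. A hypothetical sketch of such a template (my own reconstruction, not actual storyworld code) shows why the structure never varies:

import random

adjectives = ["tired", "nervous", "eager", "reluctant"]
nouns = ["soldiers", "warriors", "fighters", "people"]
verbs = ["trudged by", "stumbled by", "marched by", "moved forward"]

def sentence():
    # Every output is forced through one frame:
    # "The <adjective> <noun> <verb phrase>."
    return f"The {random.choice(adjectives)} {random.choice(nouns)} {random.choice(verbs)}."

print(sentence())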

It would be nice to break this pattern, permitting sentences with different structures, like so:

The soldiers looked tired as they walked by.
They were marching forward, but they didn’t look enthusiastic.
A sense of fear pervaded the fighters as they trudged forward.
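
One could fake this variety by hand-writing several complete sentence frames and picking among them at random; here is a hypothetical sketch (the frames and slot names are mine, purely for illustration):

import random

frames = [
    "The {noun} looked {adj} as they walked by.",
    "They were marching forward, but they didn't look {adj2}.",
    "A sense of {emotion} pervaded the {noun} as they trudged forward.",
]

def varied_sentence():
    # Pick a whole frame at random, then fill in its slots.
    return random.choice(frames).format(noun="fighters", adj="tired",
                                        adj2="enthusiastic", emotion="fear")

print(varied_sentence())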

But hand-written frames merely multiply the authoring burden; algorithms that produce this kind of structural variety in a general way are as yet unimplemented. The purpose of this essay is to consider means for improving tinkertoy text. My main goal is to get away from the need to build sentences string by string and come up with something more general. This requires a much more abstract approach, but I have two ideas for how it might be done. Both of them, however, are extremely odd.

The first is “schemolics”. This is an idea I worked out in 1971; it has since been developed by the computer science community as “neural networks”. When I did my work on the subject, it was considered a waste of time, because two researchers had published a famous critique claiming to have proven that “perceptrons” could not work. The perceptron was a model of a neuron that could supposedly form the basis for a computational model of the brain. Since perceptrons couldn’t work, there was no point in pursuing the concept that later came to be called “neural networks”. I, however, didn’t much care for their thinking (and they were later proved to be wrong). So I developed my ideas anyway.

My own approach was in some ways quite different from neural networks, and in other ways very similar. Basically, I modeled my system as a collection of pseudoneurons, each of which operated in a simple way. It had a threshold value theta; when the sum of all inputs exceeded that threshold, it fired, sending a signal down each of its output lines. It had inputs coming in from numerous other pseudoneurons, and each input contributed one unit of excitation. Time was quantized: each output required one quantum of time; there were no temporally overlapping inputs or outputs.
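
In code, a single pseudoneuron amounts to very little. A minimal sketch, assuming the firing rule is “fire when the excitation accumulated during the current quantum exceeds theta”:

class Pseudoneuron:
    def __init__(self, theta):
        self.theta = theta        # firing threshold
        self.excitation = 0       # input units received this quantum

    def receive(self):
        self.excitation += 1      # each input contributes one unit of excitation

    def step(self):
        # At the end of a time quantum: fire if the threshold was exceeded,
        # then reset for the next quantum.
        fired = self.excitation > self.theta
        self.excitation = 0
        return fired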

I identified two crucial factors characterizing any such system: the average threshold of the pseudoneurons, and the average number of outputs per pseudoneuron. I initially organized my pseudoneurons into layers, but I later replaced that scheme with random connections: any pseudoneuron could be connected to any other pseudoneuron. I assumed that some set of pseudoneurons constituted the input to the system, another set constituted the output, and the system as a whole constituted a pattern recognizer. I did some experiments with the idea while I was a graduate student and later toyed with it briefly around 1982. I haven’t messed with it since.
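
A toy simulation of such a randomly wired system, parameterized by those two factors, might look like the following; the network size, the threshold distribution, and the input scheme are all my guesses, chosen only to make the sketch run:

import random
from collections import Counter

def simulate(n=1000, avg_theta=3, avg_outputs=4, steps=20, seed=0):
    rng = random.Random(seed)
    # Each pseudoneuron gets a threshold near the average.
    thetas = [max(1, round(rng.gauss(avg_theta, 1))) for _ in range(n)]
    # Random wiring: any pseudoneuron may connect to any other.
    outputs = [[rng.randrange(n) for _ in range(avg_outputs)] for _ in range(n)]
    input_set = list(range(50))            # designated input pseudoneurons
    output_set = set(range(n - 50, n))     # designated output pseudoneurons
    firing = set(input_set)                # stimulate the inputs at t = 0
    for t in range(steps):
        excitation = Counter()
        for i in firing:
            for j in outputs[i]:
                excitation[j] += 1         # one unit of excitation per signal
        # Quantized time: all pseudoneurons fire (or not) in lockstep.
        firing = {j for j, e in excitation.items() if e > thetas[j]}
        print(f"t={t:2d}  firing={len(firing):4d}  "
              f"output pseudoneurons active={len(firing & output_set)}")

simulate()

With these particular numbers the activity usually dies out within a few quanta; raise avg_outputs or lower avg_theta and it explodes instead. That sensitivity is exactly why those two averages characterize the system.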

One question I never resolved was how the system learned. There were three possible methods: raising thresholds, destroying connections, and adding inhibitory outputs. I strongly leaned towards the first of these for theoretical reasons, but was never able to come up with reasoning demonstrating that the other two were the wrong approach (both are used in real nervous systems). However, I was adamant that such a system must learn through pain, which I defined as an overload of inputs on any pseudoneuron. My thinking was that any such overload would result in an increase in the threshold of the pseudoneuron, and this constituted learning. One thing I realized early on was that any functioning schemolic system would have to be gigantic, with thousands or (more likely) millions of pseudoneurons.
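
That pain rule is easy to sketch, although I have to invent a criterion for “overload”; here I arbitrarily call it “inputs exceeding the threshold by some margin”, so treat the constants as placeholders:

def step_with_pain(theta, excitation, overload_margin=3, increment=1):
    # Pain: an overload of inputs. The precise criterion was never pinned
    # down; this margin-based test is an assumption of the sketch.
    if excitation > theta + overload_margin:
        theta += increment        # learning: pain raises the threshold
    fired = excitation > theta
    return theta, fired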

Applying the schemolic system to my problem, I can easily see some parts of the solution. The output pseudoneurons trigger particular linguistic elements: either letters or entire words. And it’s easy to imagine a suitably trained schemolic system producing any desired textual output. But what kind of input would be required to start the schemolic process? How do I express a sentence specification in schemolic form? And the second huge question: how do I train the schemolic system? Both of these problems look like killers to me.

The second big strategy I will call “genetic programming”. This is not at all like what are called “genetic algorithms”, which are more aptly termed “evolutionary algorithms”. My concern is with genetics in the ontogenetic sense, not the phylogenetic sense. That is, I have a hunch that the method by which the genes in an egg direct its development into a mature organism might have value