Braggadocio Versus Bravura

We software developers live on the edge. Everything we do teeters on the brink of impossibility, twisting and stretching technology to achieve goals that we never know to be achievable until the thing actually works. Thus, chutzpah is just as important to a designer as technical expertise.

But what are the upper limits of chutzpah? At what point does a designer go too far, pushing the technology so hard that the result, while nominally functional, constitutes poor design? At what point does bravura slide into braggadocio?

I shall illustrate this distinction with two case histories. They are cast against type, for the good-guy case history comes from Microsoft, while the bad-guy case history comes from Apple. My own opinion is that Apple has consistently outperformed Microsoft in software engineering and user interface design, but in this exceptional case, Microsoft showed the better design judgement.

Microsoft Does it Right
First, the Microsoft case history: Word 6.0 and a feature Microsoft calls "clairvoyance". I haven’t used Word 6 (I’ve had it with Word, and have switched to MacWrite, a much better-designed product), but I gather that, when clairvoyance is turned on, the software scans the letters that the user is typing and fills in the rest of the word for the user based on past typing history.

Now, I’ll immediately grant that the implementation details of this feature make all the difference in the world. Does it just fill in the word in the middle of your typing? Does it wait for a pause in your typing to fill in the rest of the word? Or do you have to invoke it with a special keystroke, authorizing it to fill in the word when you’ve typed the first few letters? Each of these approaches has implications that could ruin the value of clairvoyance if not implemented well. Without knowing these details, I cannot pass judgement on the feature as a whole. Instead, I want to zero in on a specific complaint registered against the program in general and clairvoyance in particular: that it’s slow as molasses.

We can easily see why clairvoyance would slow things down. A regular word processor spends most of its time in a wait loop, waiting for keystrokes. When it gets a keystroke, it prints the character on the screen and advances the cursor. If it is close to a line break, it may have to move some characters around on the screen. A page break might require even more processing. But most of the time, the word processor sits in a simple loop, performing simple processing functions.
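To make the contrast concrete, here is a minimal sketch in C of the kind of loop described above. The fixed string standing in for the keyboard, the LINE_WIDTH constant, and the read_key routine are all my own inventions for illustration, not any real word processor’s code.

    #include <stdio.h>

    /* A toy model of the inner loop described above: echo each keystroke,
     * occasionally do a little extra work for a line break, and otherwise
     * do nothing.  Keystrokes are simulated from a fixed string.
     */

    #define LINE_WIDTH 20

    static const char *typed = "the quick brown fox jumps over the lazy dog";
    static int pos = 0;

    /* Simulated keyboard: returns the next "keystroke", or EOF when done. */
    static int read_key(void)
    {
        return typed[pos] ? typed[pos++] : EOF;
    }

    int main(void)
    {
        int column = 0;
        int c;

        while ((c = read_key()) != EOF) {   /* in real life, a wait loop  */
            putchar(c);                     /* cheap: echo the character  */
            if (column++ >= LINE_WIDTH && c == ' ') {
                putchar('\n');              /* occasional extra work:     */
                column = 0;                 /* a line break               */
            }
        }
        putchar('\n');
        return 0;
    }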

Clairvoyance, however, requires far more processing. After each keystroke, the software must compare the current subword with a table of subwords, attempting to determine if enough characters have already been entered to confidently predict the remainder of the word. This requires extensive lookup and computation. And it must all be done in the time allowed for a single keystroke. A decent typist will enter a new keystroke every 100 milliseconds or so. Thus, the entire analysis must be carried out in under 100 milliseconds. With a 33 MHz clock, that’s only about 3 million clock cycles to do the work. I think you can see why Word 6 runs slowly on many machines.
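To make that per-keystroke budget concrete, here is a sketch in C of the sort of lookup clairvoyance implies: a binary search over a sorted word table for a unique completion of the current prefix. The tiny word list and the one-match-only rule are my own inventions for illustration, not Microsoft’s actual algorithm or data.

    #include <stdio.h>
    #include <string.h>

    /* A stand-in dictionary; Word would build one from the user's own
     * typing history, and it would be vastly larger.  It must stay sorted.
     */
    static const char *words[] = {
        "because", "braggadocio", "bravura", "chutzpah",
        "clairvoyance", "computer", "keyboard", "keystroke",
    };
    static const int nwords = sizeof words / sizeof words[0];

    /* Return the completion if the prefix matches exactly one word, else NULL.
     * The search costs O(log n) string comparisons, but remember the budget:
     * on a 33 MHz machine a brisk typist leaves only about 3 million cycles
     * per keystroke for this and everything else the program must do.
     */
    static const char *complete(const char *prefix)
    {
        int lo = 0, hi = nwords;
        int len = (int)strlen(prefix);

        while (lo < hi) {                       /* find first word >= prefix  */
            int mid = (lo + hi) / 2;
            if (strncmp(words[mid], prefix, len) < 0)
                lo = mid + 1;
            else
                hi = mid;
        }
        if (lo >= nwords || strncmp(words[lo], prefix, len) != 0)
            return NULL;                        /* nothing starts with prefix */
        if (lo + 1 < nwords && strncmp(words[lo + 1], prefix, len) == 0)
            return NULL;                        /* ambiguous: several matches */
        return words[lo];
    }

    int main(void)
    {
        const char *a = complete("cl");         /* unique: "clairvoyance" */
        const char *b = complete("bra");        /* ambiguous: NULL        */
        printf("cl  -> %s\n", a ? a : "(no unique completion)");
        printf("bra -> %s\n", b ? b : "(no unique completion)");
        return 0;
    }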

So here we have a clear example of the designers pushing the technology. We can agree that, with a fast enough machine, clairvoyance would not be a problem. The Microsoft designers had to play a hunch as to whether it would run fast enough for most people on most machines.

It is important to remember that this is a pure gut hunch. It depends on the speed of the processor, the number of special features that the user has turned on, the amount of RAM in the machine, and the speed of the typist. A hunt-and-peck typist on a lousy machine would not notice any speed problems. And what are the odds that a flaming fast typist will be using an obsolete computer? Wouldn’t we expect a computer hotshot to be using a hotshot computer? In the final analysis, the Microsoft designers had to play a hunch that the clairvoyance feature would be fast enough for most people. Perhaps it was a bad hunch, but we can’t excoriate them for rank stupidity if they were wrong: it was a hunch, not a computation.

But I don’t think that their hunch was wrong. I think that their design judgement was on the mark here. If clairvoyance requires a Pentium to run adequately, I don’t think we can accuse the Microsoft designers of poor judgement. Yes, they pushed the limits of the technology. But Pentium-based machines are rapidly penetrating the marketplace, and assuming such technology does not exceed the bounds of "reasonable chutzpah".

For me, the compelling argument comes from the opposite direction. Imagine that we have a computer with a 100 MHz CPU, 16 megabytes of RAM, a 1-gigabyte hard drive, and all sorts of other doodads and geegaws. Wouldn’t it be stupid to design a word processor for such a machine that runs in only 100K of RAM? That’s not parsimonious; that’s just plain stupidity. Good design takes advantage of all (or most of) the capabilities of the hardware. Given 16 MB of RAM, a word processor that doesn’t take a few megs for its own use is a stupid design.

The same thinking applies to CPU cycles. A CPU with a 100 MHz clock will execute 100 million clock cycles every second. Each and every one of those clock cycles can be used to deliver value to the customer. And yet, if I were to carry out a usage analysis of my word processor, counting how many cycles were expended on what kinds of activities, I would find that at least 99% of all my clock cycles were expended in a wait loop. My word processor is so fast that it spends most of its time waiting for me. Now, that’s the way it should be: computers should sit around waiting for people, not vice versa. But the fact is, I bought my word processor to process words, and it spends less than 1% of its clock cycles processing words. What would you say about a word processor that required 10 MB of RAM, but used only 100K for word processing, and wasted the other 9.9 MB?
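A back-of-envelope calculation makes the point. The numbers below are my own assumptions, not measurements of Word or of any real machine.

    #include <stdio.h>

    /* Assumed figures: a brisk typist enters about five characters per
     * second, and each keystroke costs a generous 100,000 cycles to echo
     * and reflow.  On a 100 MHz machine that still leaves the CPU idle
     * more than 99% of the time.
     */
    int main(void)
    {
        const double clock_hz       = 100e6;  /* 100 MHz CPU                 */
        const double keys_per_sec   = 5.0;    /* roughly 60 words per minute */
        const double cycles_per_key = 1e5;    /* per-keystroke processing    */

        double busy = keys_per_sec * cycles_per_key / clock_hz;
        printf("busy: %.1f%%   idle: %.1f%%\n",
               100.0 * busy, 100.0 * (1.0 - busy));
        return 0;
    }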

Seen this way, the clairvoyance feature makes a lot more sense. Microsoft took some of those wasted clock cycles and put them to good use. In the process, the computer spends less time waiting for the human, but unfortunately, the human spends more time waiting for the computer. The crucial judgement here is whether the tradeoff is worth it. I suspect that it is indeed worthwhile on the faster computers. On the other hand, the howls of protest from users demonstrate that, on a significant minority of machines, the tradeoff doesn’t work.

Apple Gets it Wrong
Now let’s turn to the negative case. I’m going to pick on the Apple hardware/software feature that allows multiple monitors to be plugged into the Macintosh and lets the operating system recognize and deal with all of them, even when they differ in size and pixel depth.

This is truly astounding technology. You can take any monitor sold for the Macintosh, plug its card into any slot, turn it on, install the driver software, and poof! It works! For years I had two monitors, one black-and-white at 1152 x 870, the other 640 x 480 with 256 colors. They sat side by side, and the operating system knew their relative positions. The cursor could slide from one monitor to the other. I could reposition windows to straddle the boundary between the two monitors, and the windows would display properly on each of them. It was breathtaking at first, and it’s still years ahead of Windows 95.

But I think that in this case Apple’s chutzpah went too far. There are tradeoffs in everything, and the costs of this wonderful feature are greater than its benefits. There are two penalties paid for this technological snazziness: greater complexity in software development, and slower display processing.

The complexity issue is certainly the most vexing. The presence of multiple monitors makes it impossible to presume a single palette. Instead, Apple came up with this godawful color management system whose complexity baffles me. I have never understood it. You don’t specify a palette; you request one, and the operating system decides which colors most closely approximate the ones you have requested. It is almost impossible to figure out which colors are actually being displayed. The simple matter of determining the current color table requires a rather complicated piece of software whose operation I still don’t fully understand. It apparently works by examining every single pixel in the pixel map. Somewhere in that data structure there’s a color table, but I can’t get direct access to it. What a pain!
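To give a flavor of what "you request a color and the system approximates it" means in practice, here is a small sketch in C of nearest-color matching against a fixed device color table. This is my own illustration of the general idea; it is not Apple’s Palette Manager or Color QuickDraw code, and the four-entry table is invented.

    #include <stdio.h>

    /* One entry in a device color table: a color the monitor can display. */
    struct rgb { int r, g, b; };            /* components 0..255 */

    /* An invented four-entry device table; a real indexed monitor would have
     * 16 or 256 entries, and with multiple monitors each could differ.
     */
    static const struct rgb device_table[] = {
        {   0,   0,   0 },                  /* black */
        { 255, 255, 255 },                  /* white */
        { 255,   0,   0 },                  /* red   */
        {   0,   0, 255 },                  /* blue  */
    };
    static const int ncolors = sizeof device_table / sizeof device_table[0];

    /* "Request" a color: get back the index of whichever table entry comes
     * closest, which may not be the color you asked for at all.
     */
    static int request_color(struct rgb want)
    {
        int best = 0;
        long best_dist = -1;
        for (int i = 0; i < ncolors; i++) {
            long dr = want.r - device_table[i].r;
            long dg = want.g - device_table[i].g;
            long db = want.b - device_table[i].b;
            long dist = dr * dr + dg * dg + db * db;
            if (best_dist < 0 || dist < best_dist) {
                best_dist = dist;
                best = i;
            }
        }
        return best;
    }

    int main(void)
    {
        struct rgb orange = { 255, 128, 0 };
        int got = request_color(orange);
        printf("asked for (255,128,0), got entry %d = (%d,%d,%d)\n",
               got, device_table[got].r, device_table[got].g,
               device_table[got].b);
        return 0;
    }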

Of course, there are plenty of old pros out there who will laugh, "Oh, you’re overstating the difficulties, Chris. Once you understand it, it really is quite simple." That’s the same thing people used to say about those godawful WordPerfect format codes. The best objective assessment comes from Apple’s own technical documentation. The discussion of the color management system is the densest, longest, and most involved item in the documentation, stretching out over a number of chapters with extensive cross-references.

The consequence of this over-complex color management system is that fewer programmers learn the system and use it to its fullest. I have given up ever trying to understand all its complexities; I prefer to write simpler software. Thus, the software I write for the Macintosh isn’t as good as it otherwise might be. How many other programmers are there like me? How many good programs haven’t been written because of this problem? And how many published programs have suffered because the programmer spent more time struggling to understand the color management system than polishing the product itself?

Then there’s the runtime cost of all this complexity. The basic display software has to take into account the additional complexity arising from multiple monitors. This costs machine cycles. Moreover, this costs cycles in one of the most critical inner processes of the computer: the display subsystem. I’m sure that the Apple programmers were terribly clever in coming up with sneaky ways to speed it all up, but the fact remains that they have to do extra processing, and that’s always going to slow things down.
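Here is a sketch, in C, of where some of those extra cycles go under my own simplified model: with one monitor a window could be drawn in a single pass, but with several, every draw must be clipped against each monitor’s bounds and rendered at that monitor’s pixel depth. The structures and routines below are invented for illustration; only the two monitor geometries echo my own setup described earlier, and none of this is Apple’s actual display code.

    #include <stdio.h>

    struct rect { int left, top, right, bottom; };

    struct monitor {
        struct rect bounds;                 /* position in global coordinates */
        int depth;                          /* bits per pixel                 */
    };

    /* Two side-by-side monitors of different sizes and depths, roughly the
     * setup described earlier.
     */
    static const struct monitor monitors[] = {
        { {    0, 0, 1152, 870 }, 1 },      /* 1152 x 870, black and white */
        { { 1152, 0, 1792, 480 }, 8 },      /* 640 x 480, 256 colors       */
    };
    static const int nmonitors = sizeof monitors / sizeof monitors[0];

    /* Clip rectangle a against rectangle b; return nonzero if they overlap. */
    static int intersect(struct rect a, struct rect b, struct rect *out)
    {
        out->left   = a.left   > b.left   ? a.left   : b.left;
        out->top    = a.top    > b.top    ? a.top    : b.top;
        out->right  = a.right  < b.right  ? a.right  : b.right;
        out->bottom = a.bottom < b.bottom ? a.bottom : b.bottom;
        return out->left < out->right && out->top < out->bottom;
    }

    /* Drawing a window: instead of one straight blit, clip against every
     * monitor and redraw each visible piece at that monitor's depth.
     */
    static void draw_window(struct rect win)
    {
        for (int i = 0; i < nmonitors; i++) {
            struct rect piece;
            if (intersect(win, monitors[i].bounds, &piece))
                printf("monitor %d: draw (%d,%d)-(%d,%d) at %d bpp\n",
                       i, piece.left, piece.top, piece.right, piece.bottom,
                       monitors[i].depth);
        }
    }

    int main(void)
    {
        struct rect straddler = { 1000, 100, 1400, 400 };  /* spans the seam */
        draw_window(straddler);
        return 0;
    }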

Those are the costs; what are the benefits? The main benefit is, you can plug a bunch of monitors into a Mac and they’ll all work. But how many people actually do that? I’ve seen some impressive rigs at Apple with eight monitors stacked up to form a mega-monitor. But out there in the real world, I have seen only a handful of multi-monitor systems. The vast majority of users have a single monitor.

What’s particularly funny is that every single multi-monitor system that I’ve seen belongs to a software developer. We developers seem to be the only ones who use the feature, and we’re the ones who pay the price for it. Is this cosmic justice, or are we making the best of a bad situation?

Think how much cleaner and simpler the Macintosh color management system would be if it assumed a single monitor. One palette, a defined pixel depth, clearly specified monitor boundaries -- the mind reels! But such is not our fate. In order to support a snazzy feature that benefits a tiny fraction of the population, we Macintosh developers must jump through extra hoops, struggle with excessively complex software, and accept slower drawing times.

This is chutzpah carried too far. This isn’t technological bravura; it is just technological braggadocio, showing off a capability that really doesn’t mean anything to most people. This is one case, an exception to be sure, where the Apple designers blew it.

Good design is an endless series of tradeoffs. We push the technology to deliver maximum value to the customer, and along the way we make guesses as to the costs and benefits of any given innovation. We need the chutzpah to push into new areas, but we must not succumb to the temptation to show off, to implement a feature only because it’s "cool technology". The distinction between bravura and braggadocio is impossible to delineate with precision, but a good designer knows the difference in his bones.