How Computers Work

Goal of this Appendix
My goal in this appendix is to explain in simple terms exactly how it is that a collection of silicon can do all the wonderful things that computers can do. This is certainly a tall order, but not an impossible one. If you have followed me this far, you should have no problems understanding this appendix. However, while the concepts themselves are simple, they do tend to pile one on top of one another in a rather intimidating heap. If you pay attention and bear with me, we’ll pick this heap apart. I assure you, the result -- the knowledge that even the mighty computer is within your mental reach -- will be worth the mental exertion.

My fundamental strategy in this appendix will be a process of agglomeration. I’ll start with the simplest building blocks and use them to assemble bigger building blocks. These bigger blocks will then form the basis for the next, even bigger group of building blocks. At each stage of the game, once you understand how the building block is put together, you forget the internal details and just treat it as a black box whose properties you have already figured out. It’s kind of like going from atoms to molecules to cells to people to societies. Leaping from atoms to societies is a mind-boggling endeavor, but if you take it in steps, it really can make sense. Let’s begin.

Level One: Transistors
The atoms of a computer are transistors. A typical personal computer will have millions of these little devices inside its chips. What they do is very simple, but how they do it involves a great deal of tricky physics. A transistor controls the flow of electricity by controlling electrons. The big trick is its ability to control the flow of many electrons with just a few electrons. Crudely speaking, it uses a few moving electrons to stampede many more electrons. The result is like a switch that is controlled by electricity instead of a finger. You can use the presence or absence of electricity in one wire to turn on or turn off electricity in another wire.

Three special properties of the transistor make it possible to build computers out of transistors. The first is that we can run it either all the way on or all the way off, and treat those two states as numbers. If the transistor is on, we say that it means "1"; if it is off, we say that it means "0". All the 1s and 0s inside a computer are just represented by transistors that are turned on or off. We can extend the idea to the wires inside a computer. If a wire has electricity on it (meaning that a transistor connected to it has turned on), then the wire represents a "1"; if the wire has no electricity, it represents a "0". It’s not as if there are tiny little 1s and 0s printed on the wires; it’s a code. Electricity means 1, no electricity means 0. This gives us the ability to manipulate numbers (i.e., to compute) by manipulating electricity.

The second special property of the transistor that makes computers possible involves a special type of transistor that has not one but two independent controlling wires. Earlier, I said that a transistor is like a switch that is controlled by electricity instead of a finger. Well, it is possible to make a transistor that uses two finger-wires instead of just one. It’s rather like the double light switches in long hallways in some houses -- either one will turn on the light.

The third special property of transistors is their ability to invert the relationship between input and output. Normally, we think of the transistor as creating a direct relationship between the input wire and the output wire: if there is electricity on the input wire, it turns on the output wire, and if there is no electricity on the input wire, then it turns off the output wire. It is also possible to make transistors reverse this relationship, so that electricity on the input wire will yield no electricity on the output wire, and no electricity on the input wire will yield electricity on the output wire.

Let’s summarize what we’ve got on transistors with some simple diagrams:

Level Two: Gates (Assemblies of Transistors)
We can use transistors as the building blocks for the next level in our hierarchy: the gate. A gate is an electronic circuit that accepts one or more inputs and produces a single output that is a logical function of the inputs. In simple terms, you plug wires into it and it has one wire coming out of it. Whether or not the output wire will have electricity on it depends on the electricity on the input wires and the rule that the gate uses. The first type of gate is called an "AND" gate. This gate is drawn as follows:

The rule for the AND gate is simple: if the first input is a 1 AND the second input is also a 1, then the output will be a 1. Otherwise, it will be a 0. This is just like the Boolean AND function you use in programming; in fact, this is the electronic circuit that the computer uses to make that program work.

The second type of gate is called the "OR" gate, and the diagram we use for it looks like this:

The rule for the OR gate is also simple: if the first input is a 1 OR the second input is a 1, then the output will be a 1. Otherwise, it will be a 0. Again, this is just like the OR Boolean function you met in programming and is the hardware source of the software capability.

The third simple gate is hardly a gate at all: it is a simple inverter, and it is diagrammed like so:

The inverter simply takes the input and inverts it, so that a 1 becomes a 0 and vice versa. This little guy is handy for all sorts of jobs.
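If you like, you can mimic these three gates in a few lines of Python. This is software standing in for silicon, of course -- a sketch of the rules, not the physics:

```python
# Each gate becomes a function that takes bits (0 or 1) and returns a bit.

def AND(a, b):
    # Output is 1 only when both inputs are 1.
    return 1 if a == 1 and b == 1 else 0

def OR(a, b):
    # Output is 1 when either input (or both) is 1.
    return 1 if a == 1 or b == 1 else 0

def NOT(a):
    # The inverter: a 1 becomes a 0, and a 0 becomes a 1.
    return 1 - a

# Print the truth tables for all input combinations:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Everything that follows in this appendix is built by wiring copies of these three little functions together.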

Level Three: Decoders, Latches, Adders (Assemblies of gates)
We can use the gates we have just built to create even more elaborate devices. The first new assembly is called a decoder. Its purpose is to electronically convert a numeric reference into the selection of one specific item. Instead of talking about "the third one", we can use a decoder to talk about "that one". Suppose, for example, that we have a wire. It can carry a 1 or a 0. Thus, one wire can represent one of two choices. We could, for example, use it to select one of two light switches that we might want to turn on. In other words, if the wire has a 0 on it, then we want to turn on light #0, and if it has a 1 on it, then we want to turn on light #1. However, there’s a problem: the wire by itself can’t turn on the right switch. If we hook it up directly to a light, it will turn the light on when we have a 1 on it, and turn it off when we have a 0 on it. We need a decoder. A decoder for this job might look like this:

If you put a 1 into this decoder, Output #1 will have a 1 on it and Output #0 will have a 0. If you put a 0 into this decoder, then Output #0 will get a 1 and Output #1 will get a 0. In short, a decoder translates "Gimme number x" into "Gimme that one".
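In Python, the one-bit decoder is nothing more than the input wire plus an inverter (the function name here is my own invention, not standard terminology):

```python
def decode_1bit(select):
    # Returns (output0, output1). Exactly one of the two lines is 1.
    output0 = 1 - select   # an inverter drives Output #0
    output1 = select       # the wire itself drives Output #1
    return output0, output1

print(decode_1bit(0))   # selects Output #0
print(decode_1bit(1))   # selects Output #1
```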

The next doodad we will build is called a latch. We build this one from NAND gates. "NAND" means "Not AND"; it is just an AND gate whose output is inverted. In other words, we take a plain old AND gate and stick an inverter on its output. We indicate this by putting a little circle on the end of the AND gate; that makes it a NAND gate. The rule for a NAND gate is as follows: if Input #0 is a 1, AND Input #1 is a 1, then the output will be a 0; otherwise, the output will be a 1. So, here is a latch:

(Here’s another fine point: when wires cross, they are not considered to be connected unless there is a dot marking the spot. Thus, in this diagram, there is no connection between the diagonally crossing wires in the center. There are four connections marked by four black dots.)

This one may look a little intimidating, but what it does is simple and important: it remembers. Suppose that you start off with both inputs ("Remember 0" and "Remember 1") set to 0. If you then make the "Remember 0" wire equal to 1, then the output wire will be 0. If you make the "Remember 1" wire 1, then the output wire will be 1. More important, after you stop making one of those wires 1, and revert to the normal state in which both wires are 0, then the output will STILL reflect the state that you put it into earlier. It remembers!
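Here is a sketch of the latch in Python. One caution: a bare cross-coupled NAND latch idles with its inputs at 1 and is triggered by a pulse to 0, so to match the description above (idle at 0, pulse to 1) I have assumed an inverter on each input; the diagram itself is not reproduced here, so treat that detail as my assumption:

```python
def NAND(a, b):
    # "Not AND": output is 0 only when both inputs are 1.
    return 0 if (a == 1 and b == 1) else 1

class Latch:
    # Two cross-coupled NAND gates, with an (assumed) inverter on each
    # input so that the inputs idle at 0 and are pulsed to 1.
    def __init__(self):
        self.q = 0       # the remembered bit (the output wire)
        self.q_bar = 1   # its complement

    def step(self, remember_1, remember_0):
        # Let the cross-coupled gates settle; a few passes suffice.
        for _ in range(4):
            self.q = NAND(1 - remember_1, self.q_bar)
            self.q_bar = NAND(1 - remember_0, self.q)
        return self.q

latch = Latch()
latch.step(remember_1=1, remember_0=0)   # pulse the "Remember 1" wire
latch.step(remember_1=0, remember_0=0)   # release both wires
print(latch.q)                           # -> 1: it remembers!
```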

The last device we will assemble is called an adder. It is not a snake, but a circuit that will add two numbers together. In this case, we are going to keep it real simple: we are going to add two single-bit numbers. In other words, this circuit will be able to calculate just four possible additions:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10

That last addition may throw you; since when did one plus one equal ten? Remember, we are working in binary numbers here, and binary 10 is just decimal 2, so the equation really does work. Here’s what the adder looks like:
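Diagram aside, the behavior of this one-bit adder (a "half adder", to give it its usual name) can be sketched in Python. The sum bit and the extra bit fall out of two gates:

```python
def half_adder(a, b):
    # The sum bit is the exclusive-OR of the inputs; the extra
    # (carry) bit is their AND -- it is 1 only for 1 + 1.
    sum_bit = a ^ b
    carry = a & b
    return sum_bit, carry

# All four possible additions, carry bit written first:
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = {c}{s}")
```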

We now have devices that can select, remember, and add. Time for the next step.

Level Four: Breadth
We are now going to expand the circuits we have to make them more practical with real numbers. The above circuits are all single-bit circuits; each one can handle only a single bit of information. The decoder can decode just one wire, selecting one of only two possible options. The latch can remember only one bit, a single 1 or 0. And the adder can only add two single-bit numbers. These devices are almost useless. Who wants to add 1 to 0 all day long? Who needs to remember just a single 1 or 0?

The big trick we are going to pull is embarrassingly simple. We are going to gang each of these devices up in parallel with a bunch of its brothers. Lo and behold, they will suddenly be useful! Let’s start with the latch, for it’s the easiest one to understand. Here’s a diagram:

This is nothing more than eight separate latches sitting side by side.

This little guy may not look like much, but you have known and used him many times. May I introduce you to one byte of RAM. Eight bits side by side. You can store an eight-bit number in here. That’s enough to denote one character of text, or an integer between 0 and 255. As you can see, one little latch may not do much, but when it gangs up with seven siblings, it suddenly becomes a worthwhile bit of silicon.
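In software terms, a byte is just eight one-bit latches lined up, each holding one binary digit of the stored number. A sketch (the class name is mine, not any standard library's):

```python
class Byte:
    # Eight independent one-bit latches sitting side by side.
    def __init__(self):
        self.bits = [0] * 8

    def store(self, value):
        # Break a number from 0 to 255 into its eight bits,
        # one bit per latch (least significant bit first).
        for i in range(8):
            self.bits[i] = (value >> i) & 1

    def recall(self):
        # Reassemble the eight remembered bits into a number.
        return sum(bit << i for i, bit in enumerate(self.bits))

b = Byte()
b.store(200)
print(b.bits)       # the eight latches, least significant first
print(b.recall())   # -> 200
```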

Now let’s turn to the decoder. The example I gave earlier was a one-bit decoder. Here is a two-bit decoder:

The single-bit decoder looks at one wire to select one of two possible options. The two-bit decoder looks at two wires to select one of four possible options. If you have two wires carrying 1s or 0s, then there are four possible combinations of the 1s and 0s: 00, 01, 10, and 11. If you can read binary, you will recognize these numbers as just decimal 0, 1, 2, and 3. Three wires would give eight combinations; eight wires would give 256 combinations. By making our decoder a little bit bigger, we make it a lot more powerful. What do you use a decoder for? I’ll get to that in the next section.
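The broadened decoder generalizes neatly: read the input wires as a binary number, then raise exactly one of the output lines. Here is a sketch for any width:

```python
def decode(wires):
    # 'wires' is a list of n input bits, most significant wire first.
    # Returns 2**n output lines, exactly one of which carries a 1.
    selected = 0
    for bit in wires:                 # read the wires as a binary number
        selected = selected * 2 + bit
    return [1 if i == selected else 0 for i in range(2 ** len(wires))]

print(decode([1, 0]))       # two-bit input "10" selects output #2
print(decode([1, 1, 1]))    # three-bit input "111" selects output #7
```

Note how the output list doubles in length with each extra input wire -- that is the "little bit bigger, lot more powerful" effect.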

For our next breadth trick, we’ll broaden out an AND-gate:

This device allows us to take two eight-bit numbers, AND them together, and read the result at the output of the gates. You may wonder, of course, why anybody would want to AND two numbers together. I have to admit, it’s not one of the most useful stunts in the world, but it is occasionally useful to a programmer, and it makes an excellent introduction to the next broadening trick.

Now we are going to broaden the one-bit adder. This is a tricky operation, because the individual bits in an addition are not independent the way the latch bits in a byte are independent. Recall the problem of the one-bit addition. What do we do with the extra digit when we add 1+1? All the other additions yield but a single bit of output, but this one needs an extra bit, and every bit needs one wire, so we will need two output wires to represent the output. That second wire will be treated differently. We will call it the "Carry" wire. Do you remember your old addition exercises in grammar school? Do you remember chanting little songs like, "5 plus 8 is 13; put down the 3 and carry up the 1"? The rule with addition is that when a column adds up to ten or more, you write down the last digit of the result and carry up the one. It’s the same way with binary addition; when you add 1+1, you get 10; write down the 0 and carry up the one to the next decimal place (or binary place, in this case).

All this nonsense with carry bits is important because it allows us to expand our adder to a useful size. If we now throw in a circuit that will also add in an input carry bit, a carry bit that would come from a lower stage in the addition, then we can build adders as wide as we want. Consider this example: we want to add two eight-bit binary numbers:

If we break this addition up by digits, adding just one column at a time, then we can use a one-bit adder for each column. We start at the right side of the number, just like you do in regular addition. If the one-bit adder ends up generating a carry, it passes the carry on to the next higher one-bit adder, which adds it into its work. We will need better adders that can use the carry bit that is passed to them, but that is simply a matter of a few more gates.
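This column-by-column scheme is easy to sketch. First we upgrade the one-bit adder so it accepts an incoming carry (a "full adder"), then we chain eight of them together, passing each column's carry to the next column up:

```python
def full_adder(a, b, carry_in):
    # One column of the addition: two bits plus the carry from the right.
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return sum_bit, carry_out

def ripple_add(x_bits, y_bits):
    # Add two equal-width binary numbers, least significant bit first,
    # rippling each column's carry into the next column.
    carry = 0
    result = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 3 + 5 as eight-bit numbers, least significant bit first:
x = [1, 1, 0, 0, 0, 0, 0, 0]   # 3
y = [1, 0, 1, 0, 0, 0, 0, 0]   # 5
bits, carry = ripple_add(x, y)
print(bits, carry)             # -> [0, 0, 0, 1, 0, 0, 0, 0] 0, which is 8
```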

Level Five: Assembling RAM
Now we are ready to construct our first major computer component: the computer’s RAM. In the previous section, I showed you a single byte of RAM. You probably know, though, that a typical microcomputer has 64K bytes of RAM -- that’s 65,536 bytes total. (Let’s keep this simple by ignoring the current generation of 16-bit and 32-bit computers. I’ll talk only about the simpler 8-bit computers that were the rage in the early 1980s. They are obsolete now, but the bigger computers that people use nowadays work on the same principles.) It takes no great leap of imagination to go from one byte to 65,536. There remain, however, a few practical problems associated with actually using all that RAM. If each byte of RAM has eight wires going into it and eight wires coming out of it, the computer will need over one million little tiny wires inside just to hook up all those bytes. That’s ridiculous! There’s gotta be a better way.

There is, but it’s a little harder to understand. It’s called a bus. A bus is a bundle of wires that everyone shares. Every computer has three buses: the address bus, the data bus, and the control bus.

The address bus is the easiest to understand. How does the computer select which of those 65,536 bytes it wants to use? It surely can’t have 65,536 little wires coming out of the main chip, with one wire going to each byte of RAM. The solution is to use an address bus. This is a group of 16 wires coming out of the main chip. Each byte of RAM has its own unique number, called an address, that the computer can use to refer to it. The first byte is called byte 0, the next one is byte 1, then byte 2, 3, and so forth until we reach the last byte, byte number 65,535. Sixteen wires allow us to specify any of these addresses. For example, byte number 0 would have the code 0000000000000000 on the address bus, while byte number 65,535 would have the binary code 1111111111111111 on the address bus. Byte number 37,353 would have the binary code 1001000111101001 on the address bus. Thus, a computer can specify any of its 65,536 bytes with only a sixteen-wire address bus.
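You can check these sixteen-wire codes for yourself with one line of Python per address; the address bus pattern is just the byte number written out in sixteen binary digits:

```python
def address_wires(address):
    # The pattern on the 16 address-bus wires for a given byte number,
    # written as a string of 1s and 0s, most significant wire first.
    return format(address, "016b")

print(address_wires(0))       # -> 0000000000000000
print(address_wires(65535))   # -> 1111111111111111
print(address_wires(37353))   # -> 1001000111101001
```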

You may wonder, though, how the RAM can actually figure out which byte is the proper one. When the main chip tells the RAM, "I need to know what number was stored in byte number 37,353", how does the RAM decode that number to select the right byte? The answer comes from that decoder circuit that we worked up earlier. You build a big decoder into each RAM chip that can decode the address bus and figure out exactly which byte is being accessed. That way, all the 65,536 little wires that go directly to the bytes to say, "You’re the one he wants" -- those little wires are built directly into the silicon chip, saving everybody a lot of soldering.

To summarize address buses: an address bus is a group of 16 wires that run from the main processor to the RAM. It allows the computer to specify exactly which byte it wants out of RAM. The RAM chips have eight latches for each byte they store, and a huge decoder that decodes the address bus and selects the appropriate byte.

The next question is, how do we get data into and out of the bytes? An address bus lets the computer point to a particular byte and say, "I’m talking to YOU, silicon-head!" But how does the computer get data into or out of the byte?

The answer is, with the data bus. A data bus is a group of wires that run from the computer to the RAM. It is eight bits wide, and so allows the computer to transfer data a byte at a time. The data bus goes to every single byte in the computer’s RAM. Now, that raises a problem: who talks on the data bus? Let’s say, for example, that byte number 37 has the value 11111111 stored inside, while byte number 38 has a 00000000 stored inside it. If both bytes are putting their values onto the data bus, then what does it have: 1s or 0s? Who wins when everybody talks at once?

The answer to this question involves two tricks: tri-state logic and a control bus. Tri-state logic is a variation on normal chip design that allows a chip to have not just two states (0 or 1), but three: 0, 1, or disconnected. A regular chip is always sending out his value on his wire. Imagine him shouting into a pipe: "I’m a 0, a 0, a big fat 0; I’m still a 0, 0, 0; whoops, now I’m a 1, a 1, a 1; I’m still a 1; yes, I’m a big, tall 1, yes I am, a 1, a 1. . . " A tri-state RAM chip can be a 0, a 1, or he can shut up. Even handier, you can shut him up with an electrical signal. Thus, each byte of computer RAM stays shut up until the computer authorizes it to talk through the address bus. Thus, a data bus is like a huge telephone party line with 65,536 people on it, but they won’t talk until the master operator tells them to talk.

The second trick to making a data bus work requires the use of a control bus. After all, RAM is supposed to work two ways: either the computer wants to save a number into RAM, or it wants to recall a number out of RAM. We also call this writing and reading. So we run a few more wires from the computer to the RAM called a control bus. The first wire in the control bus tells the RAM whether it’s being accessed for a write or a read operation. You might say that this wire is a command to either talk or listen. The other common wire on a control bus is called the clock signal. This wire goes on and off, on and off, in a regular cycle. It keeps everybody synchronized. Have you ever watched two jugglers tossing pins or knives back and forth? They count off to each other, "One, two, three, four; one, two, three, four. . ." This allows them to synchronize their actions. The clock wire in the control bus serves the same purpose, except with bytes rather than knives.

We have now built an imaginary RAM module that can store 65,536 bytes of information. The computer can read and write data into it, and the whole thing is practical to build and operate.

Level Six: The Central Processing Unit (CPU)
Our next creation will be the heart of the computer: the CPU. This is the unit that actually crunches the numbers, calls the subroutines, and loops the loops. In this highly simplified and imaginary computer, we will have but four parts: the ALU, the instruction decoder, the registers, and the address controller.

We shall take up the ALU first. ALU is an acronym for "Arithmetic and Logic Unit". The ALU is rather like one of those all-purpose handy-dandy kitchen utensils that slices, dices, and shreds, only it does its work on bytes rather than morsels. Instead of a collection of blades in various shapes and sizes, the ALU contains four fundamental byte-munchers: an AND-gate set, an OR-gate set, an EOR-gate set, and an adder. Each of these is eight bits wide, taking two bytes as input and producing a single byte as output. You ship two bytes to the ALU, tell it which handy-dandy blade you want it to use, and you read the byte of output. It activates the proper blade with a decoder-selector circuit and something like tri-state logic in much the same way that our RAM module selects the correct byte and tells everyone else to shut up.
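A sketch of the four blades in Python -- here the op code plays the role of the decoder-selector, and the masking with 0xFF keeps everything eight bits wide (in this simple sketch, a carry off the top of the adder is simply lost):

```python
def alu(op, x, y):
    # 'op' selects which blade's output reaches the ALU's output wires.
    if op == "AND":
        return (x & y) & 0xFF
    elif op == "OR":
        return (x | y) & 0xFF
    elif op == "EOR":                # exclusive-OR
        return (x ^ y) & 0xFF
    elif op == "ADD":
        return (x + y) & 0xFF        # eight bits wide; overflow is dropped
    raise ValueError("unknown operation")

print(alu("ADD", 200, 100))   # -> 44, because 300 overflows eight bits
```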

The registers are another simple part of the CPU. These are simply bytes of RAM inside the CPU that can be used as quick storage of intermediate results. The relationship between register-RAM and regular RAM is rather like the relationship between a desktop and a desk drawer. You keep your papers in the desk drawer most of the time, but when you are actually working with a particular document, you bring it to your desk. The desk can’t hold many documents, but they are much easier to work with on your desk than in your desk drawer. Almost all programs follow a very simple strategy: bring some bytes out of RAM into the registers; crunch them up; spit out the results back into RAM.

These first two parts of the CPU (the ALU and the registers) are devoted to handling data. The next two parts of the CPU are trickier because they handle the more complex task of making the program go. The first of these is the address controller; its task is to get program instructions out of the RAM and into the CPU. You will recall from Chapter Seven that bytes in the RAM can represent many different things. From the point of view of the CPU, those bytes in RAM fall into two broad categories: data to be processed, or instructions that tell how to process the data. The address controller fetches those instructions out of the RAM in the proper sequence and delivers them to the CPU. Thus, most programs follow an alternating sequence of "fetch an instruction, fetch a byte of data". The address controller also handles the manipulations required for branching and looping. These are very simple manipulations. Each program instruction sits at an address in RAM; successive instructions are placed in successive RAM locations so that the address controller doesn’t have to think too hard to figure out where to get the next instruction. However, if the program needs to branch or loop, the address controller merely loads in the address of the new instruction and proceeds from the new location. It’s that simple.

Now we are ready to tackle the inner sanctum of the CPU: the instruction decoder. This is the module that actually translates instructions into action. It is the heart and soul of the entire computer, the most necessary of all the necessary components, the essence of the computer. It is based on nothing more than the simple decoder, with a number of intricacies added. There are two broad types of information conveyed in a typical microcomputer instruction: what to do and who to do it to.

The "what to do" part is easy to understand. This boils down to just two basic types of commands: move information or crunch information. Moving information is just a matter of taking a byte from one place and storing a copy someplace else. Depending on the complexity of the processor, you might have any number of combinations here. You could move a byte from one register to another register, from a register to RAM, or from RAM to a register. The crunching part just tells the CPU to use the ALU to crunch two bytes.

All this is based on a fancy decoder circuit. The bits of the instruction go to the decoder. If they indicate a crunching operation, the decoder activates the ALU, sending it the proper code to activate the desired circuitry inside the ALU. If the instruction indicates a move operation, then the decoder will select the proper registers or RAM locations and send the proper signal to the Read/Write line on the control bus to indicate the direction of the move operation.

The next part of the instruction specifies the object of the command; normally this is a register or RAM-location. For example, if we have a move instruction, we must specify the source and destination register or RAM location. With a typical home computer, this is done with additional bytes that follow the main instruction byte. Thus, the command would consist of three bytes. The first byte says "Load this register with the byte in the RAM location whose address follows". The second and third bytes give the 16-bit address of the RAM location. The instruction decoder, when it receives the first byte, opens up a digital pathway that will send the next two bytes straight into the address controller; then it activates the address controller, which in turn fetches the byte from the address it holds.

In essence, then, the instruction decoder translates instructions into action by using decoders that activate different sections of the CPU. Some decoders might open up pathways in the CPU to ship bytes around to different locations; some might activate the ALU; some might activate multi-step sequences. In a real computer, it might be very complex, but it is certainly not magic.
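To pull the whole CPU together, here is a toy fetch-decode-execute loop in Python. Be warned that every opcode here is pure invention for illustration -- it matches no real chip, only the three-byte load instruction and register-crunching described above:

```python
def run(ram):
    # Imaginary opcodes (my own, for illustration only):
    #   0x01 = load register A from the 16-bit address that follows
    #   0x02 = load register B likewise
    #   0x03 = A = A + B, through the ALU's adder
    #   0x00 = halt and hand back register A
    reg_a = reg_b = 0
    pc = 0                                       # the address controller's pointer
    while True:
        opcode = ram[pc]                         # fetch the instruction byte
        if opcode == 0x00:
            return reg_a
        elif opcode == 0x01:
            addr = ram[pc + 1] + 256 * ram[pc + 2]   # low byte, then high byte
            reg_a = ram[addr]
            pc += 3
        elif opcode == 0x02:
            addr = ram[pc + 1] + 256 * ram[pc + 2]
            reg_b = ram[addr]
            pc += 3
        elif opcode == 0x03:
            reg_a = (reg_a + reg_b) & 0xFF       # eight-bit addition
            pc += 1

ram = [0] * 16
ram[0:8] = [0x01, 10, 0,    # load A from address 10
            0x02, 11, 0,    # load B from address 11
            0x03,           # add them
            0x00]           # halt
ram[10], ram[11] = 7, 5
print(run(ram))             # -> 12
```

Notice the strategy at work: bring bytes out of RAM into the registers, crunch them, and stop -- exactly the fetch-an-instruction, fetch-the-data rhythm described above.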
And that is how computers work.