I don’t understand why pre-transistor computers were so slow. It seems
like they could have run a thousand times faster than they did, at
speeds comparable to personal computers of the 1980s, in merely
desk-sized cases, for prices lower than some vacuum-tube computers
that actually existed.

(Edited from my comments on [a blog post by Mark VandeWettering] [4].)

I’ve been thinking a lot about tubes recently. The 955 acorn tube came
out in 1933 and could already amplify a 500MHz signal; so why were
tube computers so slow? You’d think that would let you run a
bit-serial full-adder at a hundred megabits per second or more, but
actual machines of the 1950s ran at more like a hundred kilobits. I’m
pretty ignorant about vacuum tubes and microwave circuit design, so it
could be something really obvious.

There were entire *computers*, like the LGP-30, that had only about a
hundred vacuum tubes, and even fairly fast computers like the Bendix G-15 that
had under 500. So I’m not proposing they should have built *bigger*
computers. I’m proposing they should have built *smaller* ones in
which the tubes switched more often, and I don’t know why they
didn’t. Was it a lack of theory? (As Alan Yates points out in the
comment thread, asynchronous logic is still relatively underdeveloped
even today.) A lack of fast storage devices? (There’s no way you could
get megabits per second out of the drums of the time.)

Microwave-frequency circuitry built out of vacuum tubes — with coaxial
transmission lines of carefully matched lengths, etc. — wasn’t a new
problem in the late 1950s. There had been radar systems of some
complexity since the 1930s, which I believe is what the 955 was
developed for.

Drum computers (like the IBM 650, the LGP-30, and the G-15)
and delay-line computers (like the ACE and UNIVAC) very commonly
worked bit-serially, which eliminates the carry path length problem in
e.g. 32-bit-wide adders. There’s no obvious reason why you couldn’t
store the bits of a word on adjacent tracks of a drum instead of
bit-serially on a single track, but I think it was very atypical to do
so.
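
To make the bit-serial trick concrete, here’s a toy model in Python of
what a one-full-adder datapath computes: a single adder, one bit of
carry state, operands arriving least-significant-bit first. (This is
just arithmetic bookkeeping, not a circuit simulation; the names are
mine.)

    def bit_serial_add(a_bits, b_bits):
        """One full adder plus one bit of carry state, the way a drum
        or delay-line machine would use it: operands arrive LSB-first,
        one bit per clock, and the carry feeds back for the next clock."""
        carry = 0
        for a, b in zip(a_bits, b_bits):
            yield a ^ b ^ carry                   # sum bit for this clock
            carry = (a & b) | (carry & (a ^ b))   # carry into the next clock

    def to_bits(n, width):
        return [(n >> i) & 1 for i in range(width)]

    def from_bits(bits):
        return sum(b << i for i, b in enumerate(bits))

    assert from_bits(bit_serial_add(to_bits(12345, 17),
                                    to_bits(54321, 17))) == 66666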

There were some tubes which could store multiple bits in one tube; I
think Dekatrons, which stored one of ten states (a little over 3 bits,
since log2 10 ≈ 3.3), were the most common of these. And of course
there were the 1024-bit Williams tubes. But both Dekatrons and
Williams tubes were *really* slow, around 10 000 ns: the Dekatrons
because they’re gas tubes, and the Williams tubes, well, I don’t know
why.

ROM lookup tables are labor-intensive to make by hand, but fairly
inexpensive and very reliable; for N words of M bits, you need about
NM/2 diodes (semiconductor diodes were used in radios before 1910, and
good, cheap ones were available from about 1950) and 2M decoders of √N
outputs each (ideally, M of them sourcing current on their outputs and
M sinking it, but otherwise you can use an extra transistor or triode
per output on M of them). The Apollo Guidance Computer used “rope
memory”, which I think used a single N-way decoder instead of 2M
√N-way decoders, and ferrite cores instead of diodes, but the
principle was the same.
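
Restating that arithmetic as a quick Python sketch (the function name
and the perfect-square assumption on N are mine):

    import math

    def diode_rom_parts(n_words, m_bits):
        """Rough parts count for an N-word, M-bit diode ROM as described
        above: about NM/2 diodes (assuming half the stored bits are 1s)
        plus 2M decoders of sqrt(N) outputs each."""
        return {"diodes": n_words * m_bits // 2,
                "decoders": 2 * m_bits,
                "outputs per decoder": math.isqrt(n_words)}

    print(diode_rom_parts(1024, 32))
    # {'diodes': 16384, 'decoders': 64, 'outputs per decoder': 32}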

(You may be able to dispense with the decoders if you have an
already-decoded input handy, like the output of a Dekatron. I’ve been
trying to figure out how hard it would be to build an arbitrary
finite-state machine of up to ten states out of a Dekatron and some
handmade diode ROM. I think you’d need at least ten more amplifiers
(e.g. power transistors or triodes) to pull the new cathode of the
Dekatron below zero, and you might need an additional Dekatron to
latch the old output during the transition. But Dekatrons are really
slow, anyway.)
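
As a sketch of what I mean, in Python rather than neon: the Dekatron’s
glowing cathode is already a one-of-ten decoded state, so the diode
ROM just maps each (state, input) pair to the drive pattern for the
next state. The transition table here is a made-up example (a gated
counter); any table of up to ten states would fit the same way.

    # Hypothetical 10-state machine: a counter that advances only when
    # the input pulse is 1. Each ROM "row" would be a column of diodes
    # selected by the glowing cathode together with the input line.
    NEXT = {(s, i): (s + 1) % 10 if i else s
            for s in range(10) for i in (0, 1)}

    def fsm_step(state, pulse):
        return NEXT[(state, pulse)]

    state = 0
    for pulse in (1, 1, 0, 1):
        state = fsm_step(state, pulse)
    print(state)  # 3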

Alan Yates suggested prototyping such a device in 7400-series TTL
integrated circuits. The idea of prototyping in ICs is a good one, but
I think 7400 might be the wrong series to use; even a 74S04 typically
has a 3ns propagation delay, which means you’re not going to get much
above 300MHz even with a single-gate path length, and a plain 7404 is
quite a bit worse. Also, TTL is a lot less finicky about low-current
EMI than CMOS and, presumably, than vacuum tubes, since both IGFETs
and “Audions” are basically capacitive-input devices, and vacuum tubes
typically require quite high voltages; so a TTL prototype might not
flush out certain issues. Electromagnetic noise that would screw up a
vacuum-tube or CMOS machine might not bother a TTL machine at all.

Unfortunately, even modern 74HC04s seem to be pretty slow, like 8ns:
apparently 8× slower than the 955 triode from 1933. (But that 500MHz
number probably means it can linearly amplify a 500MHz sine wave; can
you get it to do something noticeably nonlinear a billion times a
second? I have no idea. It might take a little longer to saturate
it. Turing’s ACE notes give a number of 8ns, but I suspect that the
ACE, like most vacuum tube machines, wasn’t built with acorn tubes.)
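
For what it’s worth, here’s that arithmetic as a short Python check;
the 1ns figure for the 955 is only the implication of “8× slower”
above, not a measurement:

    # Rough clock ceiling with a one-gate critical path: about 1/t_pd.
    for device, tpd_ns in [("74S04", 3), ("74HC04", 8), ("955 (inferred)", 1)]:
        print(f"{device}: ~{1000 / tpd_ns:.0f} MHz")
    # 74S04: ~333 MHz
    # 74HC04: ~125 MHz
    # 955 (inferred): ~1000 MHz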

I don’t know if you’ve seen this, but Tom Jennings designed and, I
think, started building a small, very slow tube computer in the last
few years, called the [Universal Machine] [0]. I think he might not
have been doing much on it lately.

It seems like, for machines operating at microwave frequencies,
*electrical* delay lines might be superior to latches and cores for
registers. Apparently [you can buy 500 feet of cable-TV cable for
US$40] [1] now, and I think the prices on alibaba.com can go down to a
quarter of that. At 1Gbps (at which speed you’d have to splice in some
amplifiers if you use ordinary TV cable) that would be about 600 bits,
and at 100Mbps it would be about 60 bits.  A few spools of that would
give you some pretty serious register capacity.
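
The arithmetic behind those bit counts, as a Python check; the 0.85
velocity factor is my guess for foam-dielectric RG-6:

    # How many bits fit "in flight" in a coax delay line?
    C = 3.0e8                        # speed of light, m/s
    length_m = 500 * 0.3048          # 500 feet of cable
    delay_s = length_m / (0.85 * C)  # one-way propagation delay

    for bitrate in (1e9, 1e8):
        print(f"{bitrate:.0e} bps: {delay_s * bitrate:.0f} bits in flight")
    # 1e+09 bps: 598 bits in flight
    # 1e+08 bps: 60 bits in flight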

Turing [actually considered electric delay lines] [2] for the ACE,
although his notes suggest he was considering doing FDM (presumably
CW?) around 30GHz in a copper waveguide, not just dumping unmodulated
pulses one at a time into a bunch of coaxial cables. His survey table
shows them as better than acoustic delay lines in every way, often by
an order of magnitude, except for being twice as expensive. Yet he
devotes 11 pages of the proposal to explaining how to make acoustic
delay lines work, and nine words to electric delay lines.

Some intuition about how this could work might come from “WireWorld”,
a toy cellular automaton for digital logic; being a CA, it
incorporates transmission-line delay naturally. A few years back I
built a bit-serial full-adder in it, but with the propagation delays
of the gates and the transmission delays, it took about 21 generations
for the carry to cycle back around and be ready for the next bit. But
each gate could process a pair of bits every 4 generations
smoothly. (As you can imagine, this took quite a bit of tweaking of
the transmission line lengths.)

It turned out that you could feed five bit-interleaved pairs of
numbers through it, bit-serially, and it would correctly produce their
five sums bit-interleaved on its output.
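
If you want to play with this yourself, WireWorld’s rules fit in a few
lines of Python: a cell is empty, a conductor, an electron head, or an
electron tail; heads become tails, tails become conductors, and a
conductor fires when exactly one or two of its eight neighbors are
heads. My adder is too big to reproduce here, but even a bare wire
shows the one-cell-per-generation transmission delay:

    # Cells: ' ' empty, '#' conductor, 'H' electron head, 't' electron tail.
    def ww_step(grid):
        rows, cols = len(grid), len(grid[0])
        def heads_around(r, c):
            return sum(grid[rr][cc] == 'H'
                       for rr in range(max(r - 1, 0), min(r + 2, rows))
                       for cc in range(max(c - 1, 0), min(c + 2, cols))
                       if (rr, cc) != (r, c))
        new = []
        for r in range(rows):
            row = ''
            for c, cell in enumerate(grid[r]):
                if cell == 'H':   row += 't'
                elif cell == 't': row += '#'
                elif cell == '#': row += 'H' if heads_around(r, c) in (1, 2) else '#'
                else:             row += ' '
            new.append(row)
        return new

    # One electron marching down a straight wire, one cell per generation:
    grid = ['tH######']
    for generation in range(4):
        print(grid[0])
        grid = ww_step(grid)
    # tH######
    # #tH#####
    # ##tH####
    # ###tH###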

I doubt I’ll ever work with a logic family in real life where that
trick works in exactly that way. Signals in real cables aren’t pure,
single-directional, and self-reshaping; they’re fuzzy and get fuzzier
as they travel, they slosh back and forth in the transmission line
whenever they encounter the slightest change in impedance, they ring
in weird places, they jump from one line to another, they glitch from
timing skew, and so on. But it was still inspirational.

Anyway, so there’s probably something I don’t understand that makes
this a lot harder than it sounds.

[0]: http://wps.com/J/UM/
[1]: http://www.amazon.com/ColeMan-Cable-92003-45-08-500RG6U-CoaxCable/dp/B0013AZ4WA/ref=sr_1_16?ie=UTF8&s=hi&qid=1267507841&sr=8-16
[2]: http://www.alanturing.net/turing_archive/archive/p/p01/P01-047.html "memo “Proposed Electronic Calculator”, by Alan Turing, probably from 1945, p. 47, online thanks to the Turing Archive for the History of Computing"
[4]: http://brainwagon.org/2010/02/28/tubes-who-uses-tubes-anymore/comment-page-1/
