Windows 95:  A 32-bit patch for a 16-bit GUI shell running on top of
an 8-bit operating system written for a 4-bit processor by a 2-bit
company who cannot stand 1 bit of competition.

What exactly does 32 and 64 bit mean?

It is the native size of how many zeros and ones the CPU can deal with
in one cycle.  The number always doubles, so there is nothing between
32 and 64.  At a given speed, a 64-bit CPU can move roughly twice as
much data per cycle as a 32-bit CPU, which in turn moves twice as much
as a 16-bit CPU.  The problem is that the overhead grows geometrically,
which is why we went quickly from 4 bits to 8 bits to 16 bits, why
getting to 32 bits took quite a bit longer, and why 64 bits is only
just getting here.  The next jump, to 128 bits, will not come for
quite some time, probably decades.
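To make the doubling concrete, here is a quick sketch (in Python, my choice; the post itself has no code) of the largest unsigned number each common word size can represent directly:

```python
# Largest unsigned value representable at each common word size:
# an n-bit word has 2**n distinct states, so it counts from 0 to 2**n - 1.
for bits in (4, 8, 16, 32, 64):
    print(f"{bits:2d}-bit max: {2**bits - 1:,}")
```

Each step squares the number of representable values, which is why the jump from 32 to 64 bits is so much larger than it sounds.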

The word length determines how many discrete states the computer knows
and consequently how high it can easily count (without resorting to
tricks).  Apple likes to make a big deal of how the G5 Macs can handle
more than 4 gigabytes of RAM, since 4 GB is the limit imposed by a
32-bit CPU or a 32-bit OS.
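That 4 GB figure falls straight out of the word length.  A rough sketch of the arithmetic, again in Python:

```python
# A 32-bit address can name 2**32 distinct bytes of memory.
addressable_bytes = 2 ** 32
gigabytes = addressable_bytes / 2 ** 30  # 1 GiB = 2**30 bytes

print(addressable_bytes)  # total bytes a 32-bit pointer can reach
print(gigabytes)          # the familiar 4 GB ceiling
```

A 64-bit address space, by the same arithmetic, tops out at 2**64 bytes, which is 16 billion gigabytes and change.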

The problem with being 64-bit is that there just isn't that much
software that benefits from that doubling of complexity.  Apple has
been working hard to make use of the extra width when it is available,
yet still run without compromise on 32-bit architectures.  That is a
neat technical trick.  Windows XP, by comparison, comes in separate
32-bit and 64-bit versions, and you cannot use the 64-bit version on a
32-bit CPU or vice versa.

Is that enough bits?
