On Mon, Sep 29, 2025 at 04:14:53PM -0000, Greg wrote:
>On 2025-09-29, Michael Stone <[email protected]> wrote:
>>On Mon, Sep 29, 2025 at 05:26:54AM -0500, Richard Owlett wrote:
>>>Underlying my question was the assumption that when a processor was
>>>referred to as 32 or 64 bit, it was a reference to the width of the
>>>data bus.
>>Not really, which is why this was a weird/misleading/confusing question.
>>A "bus" is the physical connection between components in a computer. The
>>(basically obsolete) phrase "data bus" referred to the physical
>>connection between the processor and external components. In early
>I thought a 32-bit data bus meant → 4 bytes at once, and a 64-bit data bus
>meant → 8 bytes at once (i.e. the number of bits capable of being
>transferred over the bus in parallel, simultaneously).
Assuming an 8 bit byte[1], yes, "32 bits" or "4 bytes" are two different
ways to describe the same thing. The relevance of that thing to
describing a modern computer system is what's in question, because a
modern system doesn't conduct I/O via "a" data bus which physically
connects some number of bits in parallel to the CPU. Even in the old
days the data bus size didn't really matter from a logical perspective;
the 8086 and the 8088 used the same instruction set, and a program
didn't know/care that the 8088 took two clock cycles to transfer a 16
bit register to/from memory (8 bits at a time) rather than one clock
cycle (16 bits at a time). Similarly a 386SX was functionally identical
to a 386DX except that it needed two 16 bit transfers instead of one 32
bit transfer to implement an instruction to move a 32 bit value to/from
memory. Unless you were calculating performance or implementing a hardware
interface, the data bus width was just an esoteric implementation detail.
>I also believed there were actually two types of widths: the data bus
>width, and the CPU architecture width, and that the two didn't
>necessarily have to match.
I have no idea what an "architecture width" is. As described in my
original message, you can describe an architecture in terms of the size
of a general-purpose accumulator register, the size of instruction
operands, the size of an address location, the size of a byte, etc. In a
modern CPU many of these are "complicated" because their capabilities
are so much broader than a CPU 50 years ago that they do not fit neatly
into an older theoretical model.
[1] Internet RFCs use the term "octet" specifically because "byte" had
not yet been standardized and "octet" clearly indicated 8 bits rather
than 6 or 7 bits.