Stan Doore wrote: "IBM invented the hexadecimal to provide for all types of
international characters and many special symbols."
 
Not quite. For that purpose, they invented and introduced EBCDIC (Extended
Binary Coded Decimal Interchange Code), whose unit was (and is) the byte,
defined as a group of 8 bits. Because the three-bit grouping of octal
notation was awkward for an 8-bit unit, they introduced four-bit [half-byte]
hexadecimal notation, which already existed conceptually but had no
practical application in the days of computers with 36-bit word sizes (e.g.,
the IBM 7090). Any EBCDIC value was thus expressible as two hexadecimal
digits (as was, eventually, any ISO 646 [ASCII in the US] value stored in an
8-bit byte).
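 
A small sketch in modern Python (my illustration, not anything from the
period) makes the byte-to-digit arithmetic concrete; the sample values are
arbitrary:

    for value in (0x00, 0x4B, 0xFF):                  # arbitrary byte values
        print(f"{value:>3} decimal = {value:02X} hex")  # 02X pads to exactly two hex digits

Every value from 0 through 255 comes out as exactly two hexadecimal digits,
which is the whole convenience of the notation for an 8-bit byte.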
 
Of course, it was still awkward in that we all had to learn to use A
through F for the six digit values beyond 9.
 
Code points in today's 16-bit Unicode are, of course, expressible as strings
of four hexadecimal digits.
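 
Again as an illustration of my own (Python, with an arbitrary example
character), a 16-bit code point pads out to exactly four hex digits:

    code_point = 0x20AC                            # EURO SIGN, a 16-bit code point
    print(f"U+{code_point:04X}", chr(code_point))  # 04X pads to four hex digits: "U+20AC €"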
 
Bill Potts 
(whose first experience with a computer was on the Burroughs E101 Desk Size
Engineering Computer, with its 256 10-digit decimal words on a drum,
plugboard programming, and a contemporary accounting-machine numerals-only
print mechanism).
 