Greg Ewing wrote:
> If a word is needed for this concept, then invent a new
> one, e.g. "size unit", rather than reusing "byte", which
> everyone already understands as meaning 8 bits.
Maybe everyone understands it as 8 bits, but that understanding has always been wrong. A byte is a unit of storage, which often contains 8 bits, but not always. The definition of a byte as a unit of storage predates the convention that one byte = 8 bits; even if a byte always contained 8 bits, it would still be wrong to say that one byte is 8 bits.

BTW: the byte notion (a unit of storage) and its actual size are two different concepts.

> No, "char" and "unsigned char" can still be different types.
> You just need to say that sizeof(char) == sizeof(unsigned char) == 1,
> and leave bytes out of the discussion altogether.

I was merely answering the question "why not use char in the first place": because char and byte are totally different concepts. If you assume char and byte are the same thing because sizeof(char) == 1 byte, then you should equally assume that unsigned char is the same as a byte (sizeof(unsigned char) == 1 as well), and thus that unsigned char and char are the same type. This was a proof by contradiction :)

cheers,
David
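P.S. A small C11 sketch to make both points concrete (the program is mine, not from this thread, and its output is platform-dependent): CHAR_BIT is only guaranteed to be at least 8, and the _Generic selection below compiles precisely because char, signed char and unsigned char are three distinct types, even though all three have sizeof 1.

#include <limits.h>
#include <stdio.h>

/* _Generic may list char, signed char and unsigned char as separate
   associations only because they are three distinct types; listing
   the same type twice would be a compile-time error. */
#define TYPE_NAME(x) _Generic((x),        \
    char:          "char",                \
    signed char:   "signed char",         \
    unsigned char: "unsigned char")

int main(void)
{
    char c = 0;
    unsigned char uc = 0;

    /* CHAR_BIT is the number of bits in a byte; the standard only
       requires it to be at least 8 (some DSPs use 16 or 32). */
    printf("bits per byte here: %d\n", CHAR_BIT);

    /* All three character types have size 1 by definition... */
    printf("sizeof(char) = %zu, sizeof(unsigned char) = %zu\n",
           sizeof c, sizeof uc);

    /* ...yet they remain distinct types. */
    printf("c is a %s, uc is an %s\n", TYPE_NAME(c), TYPE_NAME(uc));
    return 0;
}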