On 09/06/13 14:03, Jonathan M Davis wrote:
If I had been designing the language, I might have gone for int8, uint8,
int16, uint16, etc. (in which case, _all_ of them would have had sizes in
their names, with no size-less aliases - it seems overkill to me to have
both), but I also don't think that it's a big deal for them not to have
the numbers either, and I don't understand why anyone would think that it's
all that hard to learn and remember what the various sizes are.
It's the ghost of problems past, when the sizes of many of the various
integer/natural types in C were "implementation dependent". Maybe it
only afflicts programmers over a certain age :-)
Platform-dependent macros such as int32, mapping to the appropriate type
for the implementation, were a mechanism for making code portable, and old
habits die hard.
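For anyone who missed that era, the idiom was a conditional typedef: pick
whichever native type happens to be the right width on the current
platform. Transliterated into D's compile-time machinery it would look
something like this (purely illustrative - in D the first branch is always
taken, since the spec fixes int at 32 bits, which is rather the point):

// Sketch: the old C "pick the right type per platform" idiom,
// expressed with static if instead of #ifdef/typedef.
static if (int.sizeof == 4)
    alias int32 = int;   // always the case in D: int is fixed at 32 bits
else static if (long.sizeof == 4)
    alias int32 = long;  // mirrors C's `typedef long int32;` on 16-bit-int targets
else
    static assert(0, "no 32-bit integer type available");

static assert(int32.sizeof == 4);  // int32 is now 32 bits everywhere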
Peter
PS the numbered int/uint versions would allow "short" and "long" to be
removed from the set of keywords (eventually).
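Nothing stops us from layering the numbered names on today as plain
aliases - druntime already exposes C's stdint names through
core.stdc.stdint - though short and long could only be retired if the
language itself adopted the numbered forms. A minimal sketch:

// Numbered integer names as aliases over D's existing fixed-size types
// (all of the sizes below are fixed by the D spec).
alias int8  = byte;   alias uint8  = ubyte;
alias int16 = short;  alias uint16 = ushort;
alias int32 = int;    alias uint32 = uint;
alias int64 = long;   alias uint64 = ulong;

static assert(int16.sizeof == 2 && uint64.sizeof == 8);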
PPS I think the numbering paradigm would be good for floating-point
types as well. The mathematician in me is unsettled by a digital type
called "real", as real numbers can't be represented in digital form -
only approximated. So, if it weren't already too late, I'd go for
float32, float64 and float80.