On Sunday, June 09, 2013 15:40:40 Peter Williams wrote:
> On 09/06/13 14:03, Jonathan M Davis wrote:
> > If I had been designing the language, I might have gone for int8,
> > uint8, int16, uint16, etc. (in which case, _all_ of them would have
> > had sizes, with no aliases without - it seems overkill to me to have
> > both), but I also don't think that it's a big deal for them to not
> > have the numbers either, and I don't understand why anyone would
> > think that it's all that hard to learn and remember what the various
> > sizes are
>
> It's the ghost of problems past, when the sizes of many of the various
> integer/natural types in C were "implementation dependent". Maybe it
> only afflicts programmers over a certain age :-)
>
> Platform-dependent macros such as int32 mapping to the appropriate type
> for the implementation were a mechanism for making code portable, and
> old habits die hard.
I'm well aware of that. I work in C++ for a living and have to deal with
variable-sized integral types all the time. But that doesn't mean that it's
not easy to learn and remember that D made its integral types fixed-size,
and what that size is for each of them.

> PPS I think the numbering paradigm would be good for floating point
> types as well. The mathematician in me is unsettled by a digital type
> called "real", as real numbers can't be represented in digital form -
> only approximated. So, if it wasn't already too late, I'd go for
> float32, float64 and float80.

The size of real is implementation-defined. You have no guarantee
whatsoever that you even _have_ float80. real is defined to be the largest
floating point type provided by the architecture, or double - whichever is
larger. On x86, that happens to be 80 bits, but it won't necessarily be 80
bits on other architectures.

- Jonathan M Davis
