On Sun, Jan 29, 2012 at 02:26:55PM -0800, Jonathan M Davis wrote:
[...]
> This is one of the many reasons why I think that any language which
> didn't define integers according to their _absolute_ size instead of
> relative size (with the possible exception of some types which vary
> based on the machine so that you're using the most efficient integer
> for that machine or are able to index the full memory space) made a
> huge mistake.
IMNSHO, you need both, and I can't say I'm 100% satisfied with how D
uses 'int' to mean a 32-bit integer no matter what. The problem with C
is that there was no built-in type guaranteeing 32 bits (stdint.h came
a bit too late into the picture--by then, people had already formed too
many bad habits). Sometimes code needs to say "give me the default
fastest int type on this machine", and sometimes it needs to say "give
me an int type with exactly n bits, because I'm relying on specific
properties of n-bit numbers".

> C's type scheme is nothing but trouble as far as integral sizes go
> IMHO. printf in particular is one of the more annoying things to worry
> about with cross-platform development thanks to varying integer size.
> Bleh. Enough of my whining.
[...]

Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu?
C99 actually has a dedicated printf format for size_t (%zu), except
that C++ doesn't include that part of C99, so you end up with a
format-string #ifdef nightmare no matter what you do. I'm so glad that
%s takes care of it all in D. Yet another thing D has done right.


T

-- 
MSDOS = MicroSoft's Denial Of Service
