Such a statement is technically correct, but it's amazingly pedantic in a world that contains legacy code in other languages, crufty old D code written before there was a 64-bit compiler to test it on, and the occasional need to optimize storage, but very few billion-element arrays.  If anyone has ever in their entire life worked with an array of over 2 billion elements (which would demonstrate that the unsafeness isn't purely theoretical), please speak up.
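
To make the theoretical unsafety concrete, here is a minimal sketch (the names and values are mine, invented for illustration); on 64-bit, where size_t is ulong, narrowing a huge length to int mangles the value:

    void main()
    {
        // Hypothetically more than int.max (2_147_483_647) elements:
        size_t hugeLen = 3_000_000_000UL;
        // D requires the cast here on 64-bit, and the cast truncates.
        int truncated = cast(int) hugeLen;
        assert(truncated != hugeLen); // the value no longer round-trips
    }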

On 2/17/2011 6:31 PM, Andrei Alexandrescu wrote:
On 2/17/11 5:10 PM, David Simcha wrote:
Have you actually tried porting any application code to 64 bits? Phobos and other similarly generic libraries don't count, because code that generic legitimately can't assume that no arrays will be billions of elements long.

Code that uses the unrecommended practice of mixing int and uint with size_t everywhere will indeed be difficult to port to 64 bits. But that's a problem with the code, and giving that unrecommended practice legitimacy by making it look good is aiming at the wrong target.
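
As a minimal sketch of why that practice bites (not code from the thread; the function is invented for illustration), the following compiles on 32-bit, where size_t is uint, but is a compile error on 64-bit, where size_t is ulong:

    void process(int[] arr)
    {
        uint len = arr.length; // OK on 32-bit; compile error on 64-bit
        for (uint i = 0; i < len; ++i)
        {
            // ... use arr[i] ...
        }
    }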

Use size_t for sizes and it's golden; you can't go wrong. On the rare occasions when you want to store arrays of indexes, do the cast by hand; don't ask the standard library to put a nice face on it by making the assumption for you.
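
A sketch of that recommended style (the example data is mine): use size_t throughout, and make the narrowing visible at the one place it happens:

    import std.stdio;

    void main()
    {
        auto arr = [10, 3, 7, 42];
        uint[] evenIndexes; // compact storage is a deliberate choice here
        foreach (size_t i; 0 .. arr.length)
        {
            if (arr[i] % 2 == 0)
                evenIndexes ~= cast(uint) i; // the cast, done by hand
        }
        writeln(evenIndexes); // prints [0, 3]
    }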


Andrei
_______________________________________________
phobos mailing list
[email protected]
http://lists.puremagic.com/mailman/listinfo/phobos

