On Jul 2, 2012, at 3:10 PM, Jens Alfke wrote:

> On Jul 2, 2012, at 12:06 PM, Chris Hanson wrote:
> 
>> NSInteger and NSUInteger also have the advantage of having the same 
>> @encode() on 32-bit and 64-bit, which can be important for binary 
>> compatibility of archives and IPC between architectures, depending on how 
>> you do it.
> 
> How can they? They're different sizes. Even if your protocol prefixes them 
> with a type-code or length to get around that, you have another problem: if 
> you write an NSInteger in 64-bit and read it in 32-bit, you're potentially 
> losing data. (It's the same situation the compiler warns about when you 
> assign to a variable of smaller width.)
> 
> If you want binary compatibility you should definitely be using types that 
> have an explicit guarantee of size, like int32_t or int64_t.

When I’m parsing a binary file format, of course I always use the 
explicitly-sized types. Most of the time, though, that’s not what you’re doing, 
and for general-purpose code NS(U)Integer is the better choice.
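
For the binary case, something like this; the two-field header layout here is 
invented just to show the pattern, but every on-disk field gets an explicit 
width and an explicit byte order (big-endian, by assumption):

#import <Foundation/Foundation.h>
#include <string.h>

/* A hypothetical header: the layout is made up, but each field has a
   fixed width no matter which architecture reads it. */
typedef struct {
    uint32_t magic;
    uint64_t payloadLength;
} FileHeader;

static BOOL ReadHeader(NSData *data, FileHeader *outHeader)
{
    if ([data length] < sizeof(uint32_t) + sizeof(uint64_t))
        return NO;

    const uint8_t *bytes = [data bytes];
    uint32_t magic;
    uint64_t payloadLength;
    memcpy(&magic, bytes, sizeof(magic));
    memcpy(&payloadLength, bytes + sizeof(magic), sizeof(payloadLength));

    /* Assume the format declares its fields big-endian; swap to host
       order instead of trusting whatever the CPU happens to be. */
    outHeader->magic = CFSwapInt32BigToHost(magic);
    outHeader->payloadLength = CFSwapInt64BigToHost(payloadLength);
    return YES;
}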

> I really don't understand the thought behind creating NSInteger. It seems 
> dangerous to have a 'standard' type whose size isn't fixed. It leads to 
> mistakes like storing a file size in an NSUInteger — that's fine in 64-bit, 
> but in a 32-bit app it blows up on large files. I thought we'd already 
> learned this lesson with the old C 'int' type, which has been giving people 
> cross-platform problems since the 1970s.

NSInteger always matches the native word size of the host architecture: 32 
bits in a 32-bit process, 64 bits in a 64-bit one. I would imagine this helps 
performance, since the processor is working with its native integer type. It 
also makes sense for indexes and offsets, things like -[NSData length]: an 
NSUInteger is guaranteed to be able to hold any value that’s legal on the 
architecture. If -[NSData length] had been defined as a uint32_t, that would 
have caused forward-compatibility problems in the move to 64-bit, since a 
64-bit process can have an NSData larger than 4 GB.
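
That’s literally what the typedef does; stripped of a few extra platform 
conditionals, Foundation’s NSObjCRuntime.h boils down to:

/* NSInteger tracks the pointer size of the architecture. */
#if __LP64__
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif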

And really, if you’re storing raw NSUIntegers to disk, you’re just doing it 
wrong. XML is what you should be using for most new formats these days; when 
you do have to read or write a binary format, that’s what the u?int[0-9]+_t 
types are for. Conversely, using a uint32_t where there’s no real reason to 
require that the integer be exactly 32 bits wide doesn’t buy you anything.
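
When you do have to persist a count, the safe pattern is to pick one wire 
width and check before narrowing on the way back in. A rough sketch, assuming 
a big-endian uint64_t on the wire (that choice is mine, not any blessed 
format):

#import <Foundation/Foundation.h>

/* The wire format is always a big-endian uint64_t, regardless of what
   NSUInteger happens to be at runtime. */
static void AppendCount(NSMutableData *output, NSUInteger count)
{
    uint64_t wire = CFSwapInt64HostToBig((uint64_t)count);
    [output appendBytes:&wire length:sizeof(wire)];
}

static BOOL ReadCount(NSData *input, NSUInteger offset, NSUInteger *outCount)
{
    if ([input length] < offset || [input length] - offset < sizeof(uint64_t))
        return NO;

    uint64_t wire;
    [input getBytes:&wire range:NSMakeRange(offset, sizeof(wire))];
    uint64_t value = CFSwapInt64BigToHost(wire);

    /* On 32-bit, NSUIntegerMax is 2^32 - 1, so a count written by a
       64-bit process may not fit; fail cleanly instead of truncating. */
    if (value > NSUIntegerMax)
        return NO;
    *outCount = (NSUInteger)value;
    return YES;
}

That bounds check is exactly the data-loss case Jens describes; the fixed wire 
width just makes the failure detectable instead of silent.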

Charles
