Dilwyn Jones wrote:
In attempting to write routines to convert graphics, I've used some Windows documentation which refers to C structures.
It refers to UINT, DWORD and LONG.
Can someone explain these to a C ignoramus (i.e. me):
UINT - Unsigned integer presumably, how many bytes?
DWORD - Double word? Signed? How many bytes?
LONG - Like a QL 32 bit integer?
It is all very arbitrary. You really need to track back somehow to find out which variation of the standard(sic) include file happens to have been used. You probably need to find a "types.h" that comes with the compiler that you think the code was defined for.
Loosely, you have the right idea. The most probable definitions are:
typedef unsigned int  UINT;
typedef unsigned long DWORD;
typedef signed long   LONG;
However, that really doesn't help a lot.
An "int" doesn't necessarily mean 16 or 32 bit. Older compilers default to 16, newer to 32, and neither is guaranteed. It was supposed to be the "natural" (i.e. most efficient) size for an integer on whatever architecture you were compiling for. I.e. if you had a 20 bit machine, that could be what "int" came out as. However, on 8-bit machines, "int"s are typically 16 bit!
A "short" is only guaranteed to be no bigger than an "int", and an "int" no bigger than a "long" (the standard also requires "short" and "int" to hold at least 16 bits, and "long" at least 32).
Also, the "WORD" in "DWORD" is not always defined as unsigned.
-----------------
However - all is not so bleak - I've just done the obvious(?) and searched Google for "types.h DWORD UINT LONG" and come up with http://ssobjects.sourceforge.net/docs/html/msdefs_8h.html and http://www.minigui.com/api_ref/group__win32__types.html which seem exactly what you're after (and match what I said above).
-----------------
Finally, a search on the MS site got me to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winprog/winprog/windows_data_types.asp which ties it all up.
DWORD - 32 bit unsigned integer
LONG  - 32 bit signed integer
UINT  - unsigned INT
INT   - signed 32 bit integer
Hence, according to Microsnot, we have UINT as an unsigned signed 32 bit integer - I'm not at all sure what that means. (Actually, I am - unsigned takes precedence over signed).
-----------------
Out of amusement, on looking at Micro$oft's table, they have the following:
Single bit (in effect) - BOOL or BOOLEAN (they don't actually say what storage size it really is).
8 bit, but no idea whether it's signed or unsigned - BYTE or CHAR.
8 bit unsigned - UCHAR.
16 bit signed - SHORT.
16 bit unsigned - USHORT or WORD.
32 bit signed - INT, INT32, LONG or LONG32.
32 bit unsigned - UINT, UINT32, ULONG, ULONG32, DWORD or DWORD32. (No less than six synonyms!)
64 bit signed - INT64, LONG64 or LONGLONG.
64 bit unsigned - ULONG64, UINT64, DWORD64 or ULONGLONG.
-----------------
I give up.
-----------------
(I'm being a little cruel about the BYTE/CHAR type - K & R left it open as to whether a "char" was signed or not. On early compilers, before the "signed" attribute was invented, you couldn't actually get a signed 8 bit value at all.)
--
Lau
http://www.bergbland.info
Get a domain from http://oneandone.co.uk/xml/init?k_id=5165217 and I'll get the commission!
