> From: Bart Veer
>
> These types date back to the very early days of eCos, and were
> intended to allow the system to run on 16-bit and 64-bit processors as
> well as 32-bit ones. Consider two processors: a 16-bit one where both
> short and int are 2 bytes, a long is 4 bytes, and 16-bit operations
> are more efficient than 32-bit ones; and a 32-bit processor where a
> short is 2 bytes, both int and long are 4 bytes, and 32-bit
> arithmetic is more efficient than 16-bit.
>
> On the 16-bit processor we will have:
>
> typedef unsigned int cyg_uint16;
> typedef unsigned int cyg_ucount16;
> typedef unsigned long cyg_uint32;
> typedef unsigned long cyg_ucount32;
>
> On the 32-bit processor we will have:
>
> typedef unsigned short cyg_uint16;
> typedef unsigned int cyg_ucount16;
> typedef unsigned int cyg_uint32;
> typedef unsigned int cyg_ucount32;
>
>
> On both processors cyg_uint16 is exactly 16 bits and cyg_uint32 is
> exactly 32 bits. Hence those data types can be used reliably for
> describing hardware, for defining network protocols, etc.
>
> However, cyg_ucount16 is 16 bits on the 16-bit processor and 32 bits on
> the 32-bit processor. In both cases it is the most efficient data type
> that provides at least the specified number of bits. Now consider a
> loop such as:
>
> for (i = 0; i < 32768; i++) {
>     ...
> }
>
> If i has type cyg_uint16 then the loop would be optimal on the 16-bit
> processor but not on the 32-bit one. If i has type cyg_uint32 then it
> would be optimal on the 32-bit processor but not on the 16-bit one. If
> i has type cyg_ucount16 then it would be optimal on both processors.
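
To make the distinction concrete, here is a minimal sketch of how the two
families of types are meant to be used together. The header name
<cyg/infra/cyg_type.h>, the register layout and the buffer size are
illustrative assumptions, not taken from any real driver:

    #include <cyg/infra/cyg_type.h>   /* assumed home of the cyg_* typedefs */

    /* Exact-width types describe things whose size is fixed externally,
     * such as this hypothetical device register block or an on-the-wire
     * protocol header. */
    typedef struct {
        cyg_uint16 status;     /* must be exactly 16 bits */
        cyg_uint16 control;    /* must be exactly 16 bits */
        cyg_uint32 dma_base;   /* must be exactly 32 bits */
    } hypothetical_regs;

    /* "At least this many bits, whatever is fastest" types are for
     * internal counters and indices, where only the range matters. */
    cyg_uint32 sum_buffer(const cyg_uint16 *buf)
    {
        cyg_uint32   sum = 0;
        cyg_ucount16 i;   /* 16 bits on the 16-bit CPU, 32 on the 32-bit CPU */

        for (i = 0; i < 32768; i++) {   /* the loop from the example above */
            sum += buf[i];
        }
        return sum;
    }

On either processor the register description keeps its exact field widths,
while the loop counter silently picks up whichever width the compiler
handles most efficiently.
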
Sounds like a good application for "int" or "unsigned int". Having a type
alias for a 32-bit integer with "16" in its name is pretty confusing,
leading to questions like this.
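
For the quoted loop, that suggestion would look like the sketch below. Note
that in both sets of typedefs shown above, cyg_ucount16 does in fact resolve
to unsigned int, and unsigned int is at least 16 bits on any conforming C
compiler, so the loop terminates correctly even where int is only 16 bits
wide:

    unsigned int i;   /* plain C type, no eCos-specific alias */

    for (i = 0; i < 32768; i++) {
        /* ... */
    }
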
--
Ciao, Paul D. DeRocco
Paul mailto:[EMAIL PROTECTED]