On 31/07/09 21:13, Paul wrote:
> A char, for instance, is always 8 bits or one byte. A Unicode char, if
> implemented, is 16 bits or two bytes.
A char is always one byte (sizeof(char) == 1, by definition), but that byte
need not be 8 bits; CHAR_BIT in <limits.h> tells you how many it actually is.
An int is the natural word size of the CPU architecture, so typically 16, 32,
or 64 bits (the standard only guarantees at least 16).
Hello,

I observe abnormal results with my applications when moving from a 32-bit
to a 64-bit architecture, using C, C++, and Perl. Is there something I
should be aware of, like the sizes of float and integer types?

Thanks
On Fri, 31 Jul 2009 20:12:05 +0100 (BST)
Patrick Dupre pd...@york.ac.uk wrote:
> Hello,
> I observe abnormal results with my applications when moving from a
> 32-bit to a 64-bit architecture, using C, C++, and Perl. Is there
> something I should be aware of, like the sizes of float and integer
> types?
Yes
On 07/31/2009 03:12 PM, Patrick Dupre wrote:
> Hello,
> I observe abnormal results with my applications when moving from a
> 32-bit to a 64-bit architecture, using C, C++, and Perl. Is there
> something I should be aware of, like the sizes of float and integer
> types?
> Thanks.
In the C and C++ languages the sizes of the basic integer types are
implementation-defined, and they do change between data models: on most
64-bit Unix systems (LP64) long and pointers grow from 32 to 64 bits while
int stays at 32, whereas 64-bit Windows (LLP64) keeps both int and long at
32 bits. float and double keep their usual 32- and 64-bit sizes, so the
integers and pointers are the usual suspects. In Perl, the native integer
size depends on how perl was built; `perl -V:ivsize` shows it (typically 8
on a 64-bit build).