Scott Zhong wrote:
Martin,

  From the bug description, it's saying that in theory integer types
might have extra padding bits that contribute to their size but not to
their range. What the padding affects is the result of sizeof().
   For example, on a 64-bit system a strange compiler might decide that
int is 6 bytes wide instead of 8. It would then add 2 bytes of padding
so that int falls on an access boundary and is faster to access. In
that case sizeof(int) returns 8, but the actual size of int is only
6 bytes.
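To make the distinction concrete, here is a small throwaway program
(only an illustration, not part of the patch) that contrasts the storage
size sizeof() reports with the number of value bits a single set bit can
be shifted through. On common hardware the two agree; on a platform with
padding bits they would not:

#include <limits.h>
#include <stdio.h>

// counts the value bits of an unsigned type by shifting a set bit up
// until it falls off the top (well-defined only for unsigned types)
template <class T>
unsigned count_value_bits ()
{
    unsigned bits = 0;
    for (T probe = T (1); probe; probe = T (probe << 1))
        ++bits;
    return bits;
}

int main ()
{
    printf ("storage bits: %u, value bits: %u\n",
            unsigned (sizeof (unsigned int) * CHAR_BIT),
            count_value_bits<unsigned int> ());
    return 0;
}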

My function, compute_byte_size(), assumes that 1 is stored as
00000000 00000000 ... 00000001. In the for loop it shifts that 1 bit
into the next bit position:

First iteration:   00000000 ... 00000010
Second iteration:  00000000 ... 00000100
...
and so on, one bit position per iteration
...
Last iteration:    00000000 00000001 ... 00000000 (the 1 has been
shifted out of the value bits, so the value reads back as 0, which is
less than the previous value)

After each shift the function checks that the new value is greater than
the previous one, so the loop runs only while the shifted bit stays
within the value bits of the type. For the 6-byte int in the example
above, my function would return 6, instead of the 8 that sizeof()
returns.
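Written out as a standalone sketch (it mirrors the loop in the patch at
the end of this message, so treat it as an illustration rather than the
patch itself), the idea is:

#include <stdio.h>

// counts the value bits of T in groups of 8; it relies on the value
// wrapping around once the walking bit leaves the value bits, which is
// formally undefined for signed T (see the overflow discussion below)
template <class T>
unsigned byte_size_without_padding ()
{
    T        max  = T (1);
    unsigned byte = 1;

    for (int i = 1; T (max * 2) > max; max *= 2, ++i) {
        if (i > 8) {    // 8 more value bits seen: one more whole byte
            ++byte;
            i = 1;
        }
    }
    return byte;
}

int main ()
{
    printf ("int: %u value bytes, sizeof (int): %u\n",
            byte_size_without_padding<int> (),
            unsigned (sizeof (int)));
    return 0;
}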

Thanks for the explanation, Scott. The patch makes more sense now.


As I write this, I realize that my function, compute_byte_size(), could
be optimized to shift the bit a whole byte at a time, to the next byte
boundary.

I don't think we need to (or should) worry about optimizing config
tests. What we might want to do is use the template parameter in
the signature of the function for non-conforming compilers that
have trouble with these types of things.
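For instance, something along these lines (just a rough sketch of the
idea, not a tested change):

template <class T>
unsigned compute_byte_size (T*);   // declaration only; body unchanged

// called as compute_byte_size ((int*) 0) rather than
// compute_byte_size<int> (), so a compiler that mishandles explicit
// template argument lists never has to see one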


template <class T>
unsigned compute_byte_size()
{
    T max = T (one);
    unsigned byte = 1;
    for (; T (max * 256) > max; max *= 256) {

FWIW, for signed T the expression T (max * 256) > max has undefined
behavior in the presence of overflow. We've seen at least two (if
not three) compilers exploit this, presumably in aggressive
optimizations (see, for example, STDCXX-482). Since most (all?)
hardware simply wraps around on overflow, we just need to prevent
the compiler optimization here.

        byte++;
    }
    return byte;
}
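One common way to discourage that optimization, offered only as a sketch
and not as the final fix: do the arithmetic through a volatile copy so
the compiler cannot reason about the multiplication at compile time. The
overflow itself is of course still formally undefined for signed T; this
merely leans on the wrap-around behavior of the hardware.

template <class T>
unsigned compute_byte_size ()
{
    volatile T max  = T (1);   // T (one) in the config test above
    unsigned   byte = 1;

    // the volatile accesses keep the compiler from folding away the
    // wraparound test; on wrap-around hardware the loop terminates once
    // the walking value leaves the value bits
    for (; T (max * 256) > max; max *= 256)
        byte++;

    return byte;
}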
From what I read, the only way we can create a type that has padding is
by creating a struct with two members of different sizes, and only
certain compilers will add the padding.

Yes, but this is about fundamental types at the language/hardware
level and unusual word sizes like the one used in the PDP-8 (12
bits), or on the UNIVAC (36 bits). See for example this article:
  http://en.wikipedia.org/wiki/Word_%28computing%29
As I said before, this is mostly of academic interest today since
virtually all modern hardware uses power-of-2 word sizes.

Martin


With gcc on a 32-bit x86 system:

struct new_int {
    int  a;
    char b;
};

int is 4 bytes and char is 1 byte, but gcc pads new_int out to 8 bytes.
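A quick way to see that padding (just a throwaway illustration, not part
of any patch): the struct's size exceeds the sum of its members' sizes.

#include <stdio.h>

struct new_int {
    int  a;   /* 4 bytes */
    char b;   /* 1 byte  */
};

int main ()
{
    /* prints 5 for the members and, with gcc on 32-bit x86, 8 for the
       struct; the difference is the trailing padding */
    printf ("members: %u, struct: %u\n",
            unsigned (sizeof (int) + sizeof (char)),
            unsigned (sizeof (new_int)));
    return 0;
}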

-----Original Message-----
From: Martin Sebor [mailto:[EMAIL PROTECTED] On Behalf Of Martin Sebor
Sent: Tuesday, February 12, 2008 4:17 PM
To: [email protected]
Subject: Re: [PATCH] STDCXX-423

Sorry Scott, I still don't understand what you're trying to do here.

The problem noted in STDCXX-423 is pretty much theoretical. AFAIK,
stdcxx doesn't build or run on any platform with padding bits. In
fact, there may not be any such platforms. Which doesn't mean that
there will never be any (as pointed out in the comp.lang.c thread
I just updated the issue with), just that coming up with a test
case will be tough, as will be testing a patch for the problem.

Martin

Scott Zhong wrote:
--- LIMITS.cpp  (revision 624452)
+++ LIMITS.cpp  (working copy)
@@ -223,13 +223,27 @@
     return bits;
 }
+template <class T>
+unsigned compute_byte_size()
+{
+    T max = T (one);
+    unsigned byte = 1;
+    for (int i = 1; T (max * two) > max; max *= two, i++) {
+        if (i > 8 ) { byte++; i = 1; }
+    }
+    return byte;
+}
+
 // used to compute the size of a pointer to a member function
 struct EmptyStruct { };
// to silence printf() format comaptibility warnings
 #define SIZEOF(T)   unsigned (sizeof (T))
+
+// to not include possible bit padding
+#define CALC_SIZEOF(T)  compute_byte_size<T>()
int main ()
@@ -243,17 +257,17 @@
#ifndef _RWSTD_NO_BOOL
     printf ("#define _RWSTD_BOOL_SIZE   %2u /* sizeof (bool) */\n",
-            SIZEOF (bool));
+            CALC_SIZEOF (bool));
 #endif   // _RWSTD_NO_BOOL
printf ("#define _RWSTD_CHAR_SIZE %2u /* sizeof (char) */\n",
-            SIZEOF (char));
+            CALC_SIZEOF (char));
     printf ("#define _RWSTD_SHRT_SIZE   %2u /* sizeof (short) */\n",
-            SIZEOF (short));
+            CALC_SIZEOF (short));
     printf ("#define _RWSTD_INT_SIZE    %2u /* sizeof (int) */\n",
-            SIZEOF (int));
+            CALC_SIZEOF (int));
     printf ("#define _RWSTD_LONG_SIZE   %2u /* sizeof (long) */\n",
-            SIZEOF (long));
+            CALC_SIZEOF (long));
printf ("#define _RWSTD_FLT_SIZE %2u /* sizeof (float) */\n",
             SIZEOF (float));
@@ -319,7 +333,7 @@
 # define LLong long long
-    printf ("#define _RWSTD_LLONG_SIZE  %2u\n", SIZEOF (LLong));
+    printf ("#define _RWSTD_LLONG_SIZE  %2u\n", CALC_SIZEOF (LLong));
     const char llong_name[] = "long long";
@@ -332,7 +346,7 @@
 # define LLong __int64
-    printf ("#define _RWSTD_LLONG_SIZE  %2u\n", SIZEOF (LLong));
+    printf ("#define _RWSTD_LLONG_SIZE  %2u\n", CALC_SIZEOF (LLong));
     const char llong_name[] = "__int64";
@@ -352,7 +366,7 @@
 #ifndef _RWSTD_NO_WCHAR_T
printf ("#define _RWSTD_WCHAR_SIZE %2u /* sizeof (wchar_t)
*/\n",
-            SIZEOF (wchar_t));
+            CALC_SIZEOF (wchar_t));
const char *suffix = "U";
     if ((wchar_t)~0 < (wchar_t)0)



