Hi all,
VC++ 32 and 64 both return:
t1 struct size: 2
t2 struct size: 8
t3 struct size: 8

Which IMHO is correct. I really don’t understand why gcc returns
t1 struct size: 2
t2 struct size: 4
t3 struct size: 4

To me,
struct t2 {
    uint32_t op_type:1;
    uint8_t op_flags;
};

op_type uses a plain uint32_t to store only one bit. As there is another field 
which is not a bitfield, the C compiler must allocate at least one extra byte in 
this struct. The minimal size I expect is 5, which alignment then pads up to 8!

I changed your sample a little bit to see the offset of op_flags:

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
struct t1 {
    uint8_t op_type:1;
    uint8_t op_flags;
};
struct t2 {
    uint32_t op_type:1;
    uint8_t op_flags;
};
struct t3 {
    unsigned op_type:1;
    char op_flags;
};

int main() {
    printf("t1 struct size: %zu\n", sizeof(struct t1));
    printf("t2 struct size: %zu\n", sizeof(struct t2));
    printf("t2 offset of op_flags: %zu\n", offsetof(struct t2, op_flags));
    printf("t3 struct size: %zu\n", sizeof(struct t3));
    return 0;
}

Visual C++ gives:
t1 struct size: 2
t2 struct size: 8
t2 offset of op_flags: 4
t3 struct size: 8

While gcc gives
c:\tmp>a.exe
t1 struct size: 2
t2 struct size: 4
t2 offset of op_flags: 1
t3 struct size: 4

Which IMHO is wrong. If I later change my mind and write:
struct t2 {
    uint32_t op_type:1;
    uint32_t op_other_field: 12;
    uint8_t op_flags;
};

I expect my struct will have the same size. I don't think we want tcc to mimic 
gcc's behavior: doing so would break Windows code that depends on the MSVC 
layout. The only solution is to add a compiler flag to select a specific 
behavior.

From: Tinycc-devel [mailto:tinycc-devel-bounces+eligis=orange...@nongnu.org] On 
Behalf Of Richard Brett
Sent: Tuesday, 18 October 2016 06:08
To: tinycc-devel@nongnu.org
Subject: Re: [Tinycc-devel] Weird bitfield size handling, discrepancy with gcc

Hello

  Is this really a problem for tcc?  An old version of VC produces the same 
sizes as tcc.  The spec seems to say (not sure I'm reading this right, first 
time I've read the spec)

"An implementation may allocate any addressable storage unit large enough to 
hold a bitfield.....snip..... the order of bitfields within a unit is ( ... 
high to low or .....  low to high) implementation defined.  The alignment of 
the addressable storage unit is undefined"

This seems to suggest that each implementation can do what it wants with 
bitfields and that passing them between different compilers is probably 
undefined.   

Having said all that, I'm not overly worried if it gets changed; it just seems 
like a risk for something that might not be broken. And I am not an expert on C 
compiler internals.   I do pass structures a lot between tcc and other 
compilers, but they are all carefully crafted with PACK directives/pragmas to 
ensure exact memory layout, and I don't use bitfields in the structures shared 
between compilers - hence why I looked at this.
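A portable alternative along those lines (a sketch, not from the thread; the 
names are illustrative): replace the bitfield with a plain fixed-width field 
plus explicit masks and shifts, so the layout no longer depends on any 
compiler's bitfield allocation rules.

```c
#include <stdint.h>

/* Same information as struct t2, but with an explicit layout:
   op_type lives in bit 0 of a plain uint8_t, so every compiler
   agrees on both the size and the field offsets. */
struct t2_portable {
    uint8_t op_bits;   /* bit 0 = op_type, bits 1-7 spare */
    uint8_t op_flags;
};

#define OP_TYPE_MASK 0x01u

static inline uint8_t get_op_type(const struct t2_portable *p) {
    return p->op_bits & OP_TYPE_MASK;
}

static inline void set_op_type(struct t2_portable *p, uint8_t v) {
    /* Clear bit 0, then store the low bit of v there. */
    p->op_bits = (uint8_t)((p->op_bits & ~OP_TYPE_MASK) | (v & OP_TYPE_MASK));
}
```

sizeof(struct t2_portable) is 2 everywhere, at the cost of accessor functions 
instead of direct member access.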

Cheers
.Richard


David Mertens wrote: 
Hello everyone,
I recently uncovered a segfault in code compiled with macros that manipulate 
certain Perl structs on 64-bit Linux. I boiled the problem down to a 
discrepancy between how tcc and gcc determine the size needed by a series of 
bit fields. The tcc-compiled function would get the Perl interpreter struct 
produced by gcc-compiled code, then reach into the wrong memory slot for 
something. A reduced example is provided below.
Question 1: Would anybody be opposed to changing tcc's behavior to match gcc's 
behavior here? This could lead to binary incompatibility with object code 
previously compiled with tcc, but that seems to me highly unlikely to be a real 
problem for anyone.
Question 2: Does anybody know tccgen.c well enough to fix this? I can work on 
it, but if anybody knows exactly where this goes wrong, it would save me a few 
hours.

--------%<--------
#include <stdint.h>
#include <stdio.h>
struct t1 {
    uint8_t op_type:1;
    uint8_t op_flags;
};
struct t2 {
    uint32_t op_type:1;
    uint8_t op_flags;
};
struct t3 {
    unsigned op_type:1;
    char op_flags;
};

int main() {
    printf("t1 struct size: %zu\n", sizeof(struct t1));
    printf("t2 struct size: %zu\n", sizeof(struct t2));
    printf("t3 struct size: %zu\n", sizeof(struct t3));
    return 0;
}
-------->%--------
With tcc, this prints:
t1 struct size: 2
t2 struct size: 8
t3 struct size: 8
With gcc, this prints:
t1 struct size: 2
t2 struct size: 4
t3 struct size: 4
This suggests that with tcc, the number of bytes given to a series of bitfields 
in a struct depends upon the integer type of the bitfield. In particular, plain 
old "unsigned" is interpreted (in 64-bit context) to be "unsigned int", which 
has 32 bits. This is incompatible with gcc's interpretation.
The relevant code is, I think, in tccgen.c's struct_decl. However, I can't 
quite tease apart where the declaration comes in, and how it affects struct 
size calculations.
David


-- 
 "Debugging is twice as hard as writing the code in the first place.
  Therefore, if you write the code as cleverly as possible, you are,
  by definition, not smart enough to debug it." -- Brian Kernighan

________________________________________

_______________________________________________
Tinycc-devel mailing list
Tinycc-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/tinycc-devel
  


