Currently, when we see code that compares a bitfield to a constant, in
some cases we will turn the bitfield reference from a COMPONENT_REF
into a BIT_FIELD_REF.  For example, this test case is based on PR 22563:

struct s
{
  int a : 3;
};

struct s g;

int
foo ()
{
  g.a = 2;
  return g.a != 2;
}

In the .003t.original dump, the function looks like this:

{
  g.a = 2;
  return (BIT_FIELD_REF <g, 8, 0> & 7) != 2;
}
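
In C terms, on a little-endian target like x86, that folded expression
is roughly equivalent to the following (just an illustration of the
semantics, not code the compiler emits):

  return (*(unsigned char *) &g & 7) != 2;

that is, an 8-bit load of the underlying storage, masked down to the
three bits of the field.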

In other words, the comparison in the return statement becomes a
BIT_FIELD_REF rather than a COMPONENT_REF.  This happens in
fold_binary, at the very start of the compilation process.  Because it
is a BIT_FIELD_REF, the tree optimizers never notice that the code is
looking at the value that was just set, and the RTL optimizers never
notice either.  So we wind up generating this (-O2
-momit-leaf-frame-pointer):

foo:
        movl    g, %eax         # load the word containing g.a
        andl    $-8, %eax       # clear the low 3 bits
        orl     $2, %eax        # set g.a = 2
        movl    %eax, g         # store back
        movzbl  g, %eax         # reload the byte just stored
        andl    $7, %eax        # re-extract the field
        cmpb    $2, %al         # compare against 2
        setne   %al
        movzbl  %al, %eax
        ret

With a trivial patch to make optimize_bit_field_compare do nothing, we
get this:

.003t.original:

{
  g.a = 2;
  return g.a != 2;
}

foo:
        movl    g, %eax
        andl    $-8, %eax
        orl     $2, %eax
        movl    %eax, g
        xorl    %eax, %eax      # the != comparison folds to 0
        ret
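
For reference, the "trivial patch" can be nothing more than an early
return from optimize_bit_field_compare in fold-const.c.  A sketch (the
exact signature may differ between versions):

static tree
optimize_bit_field_compare (enum tree_code code, tree compare_type,
			    tree lhs, tree rhs)
{
  /* Experiment: never fold a bitfield comparison into a
     BIT_FIELD_REF.  Returning 0 tells fold_binary that no
     optimization was done, so the COMPONENT_REF survives.  */
  return 0;
  /* ... original body, now unreachable ... */
}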

Now, this is obviously a contrived example.  But, in general, the tree
optimizers do not optimize BIT_FIELD_REF, whereas they do optimize
COMPONENT_REF.  The only advantage I can see of generating
BIT_FIELD_REF early is that we will pick it up in tree-level PRE; but
of course we also have RTL-level CSE/PRE.
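
To make the PRE point concrete, consider a made-up example like this:

int
bar (void)
{
  /* Both tests read g.a.  Lowered early, they share the expression
     (BIT_FIELD_REF <g, 8, 0> & 7), which tree PRE can eliminate.
     Left as COMPONENT_REFs, the redundant load can still be caught
     by RTL CSE/PRE.  */
  return (g.a == 1) || (g.a == 2);
}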

I think it would be appropriate to move the optimizations in
fold-const.c which generate BIT_FIELD_REF so that they run just before
we convert to RTL, rather than applying them at the start.

Any thoughts on whether this is a good idea?

Ian
