On 12/05/16 12:37, Richard Biener wrote:
On Thu, May 12, 2016 at 12:17 PM, Richard Biener
<richard.guent...@gmail.com> wrote:
On Thu, May 12, 2016 at 12:10 PM, Claudiu Zissulescu
<claudiu.zissule...@synopsys.com> wrote:
Hi,

I've been trying the following simple test case on the latest gcc, and it seems to
produce unwanted unaligned accesses for bit-fields.

Test case:

struct lock_chain {
   unsigned int irq_context: 2,
     depth: 6,
     base: 24;
};

struct lock_chain * foo (struct lock_chain *chain)
{
   int i;
   for (i = 0; i < 100; i++)
     {
       chain[i+1].base = chain[i].base;
     }
   return chain;
}

GCC options: -O3 (we need predictive commoning to kick in).

The result for ARM:

         .cpu arm926ej-s
         .eabi_attribute 20, 1
         .eabi_attribute 21, 1
         .eabi_attribute 23, 3
         .eabi_attribute 24, 1
         .eabi_attribute 25, 1
         .eabi_attribute 26, 1
         .eabi_attribute 30, 2
         .eabi_attribute 34, 0
         .eabi_attribute 18, 4
         .file   "t01.c"
         .text
         .align  2
         .global foo
         .syntax unified
         .arm
         .fpu softvfp
         .type   foo, %function
foo:
         @ args = 0, pretend = 0, frame = 0
         @ frame_needed = 0, uses_anonymous_args = 0
         @ link register save eliminated.
         ldr     r1, [r0, #1]                    <<<<<<<<<<<< Unaligned access
         add     r3, r0, #4
         lsl     r1, r1, #8
         add     ip, r0, #404
.L2:
         ldrb    r2, [r3]        @ zero_extendqisi2
         orr     r2, r1, r2
         str     r2, [r3], #4
         cmp     r3, ip
         bne     .L2
         bx      lr
         .size   foo, .-foo
         .ident  "GCC: (GNU) 7.0.0 20160502 (experimental)"

Please observe that the ldr instruction accesses an unaligned address.

I observed this issue on the ARC processor, but since ARM is more popular,
I've used it for this test case.

Is this expected behavior, or am I missing something?

Looks like a bug - please file a bug report.

Try

Index: gcc/tree-predcom.c
===================================================================
--- gcc/tree-predcom.c  (revision 236159)
+++ gcc/tree-predcom.c  (working copy)
@@ -1391,9 +1395,10 @@ ref_at_iteration (data_reference_p dr, i
        && DECL_BIT_FIELD (TREE_OPERAND (DR_REF (dr), 1)))
      {
        tree field = TREE_OPERAND (DR_REF (dr), 1);
+      tree type = build_aligned_type (DECL_BIT_FIELD_TYPE (field),
+                                     BITS_PER_UNIT);
        return build3 (BIT_FIELD_REF, TREE_TYPE (DR_REF (dr)),
-                    build2 (MEM_REF, DECL_BIT_FIELD_TYPE (field),
-                            addr, alias_ptr),
+                    build2 (MEM_REF, type, addr, alias_ptr),
                      DECL_SIZE (field), bitsize_zero_node);
      }
    else


Richard.

Richard.

Thank you,
Claudiu

That looks better. For ARC I still need to optimize this type of access, as it currently uses three loads where two would do. Anyhow, the bug id is 71083.

Thank you,
Claudiu
