https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67701

--- Comment #6 from rguenther at suse dot de <rguenther at suse dot de> ---
On Thu, 24 Sep 2015, ebotcazou at gcc dot gnu.org wrote:

> --- Comment #5 from Eric Botcazou <ebotcazou at gcc dot gnu.org> ---
> I wonder what the motivation to make this change was as, historically, GCC has
> never tried to rescue the programmer in this clearly invalid case.  Some
> obscure situation with SSE on x86?  Do other compilers do the same, e.g. on
> ARM?

Yes, AFAIK this was some obscure situation with SSE on x86.  IIRC
the problem was code doing unaligned scalar accesses (which is ok
on x86) that was then vectorized using peeling for alignment (which
cannot succeed if the element is not naturally aligned), so the
emitted aligned move instructions segfaulted.
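
A minimal sketch of the kind of code involved (a hypothetical
reduction for illustration, not the original testcase; the names
are made up):

/* Scalar int accesses through a misaligned pointer are fine on x86,
   but if this loop is vectorized with peeling for alignment, no
   amount of peeling can ever reach a 16-byte boundary because the
   elements themselves are not 4-byte aligned, so an emitted aligned
   SSE move would fault.  */
char buf[1024];

void scale (void)
{
  int *p = (int *) (buf + 1);  /* int access at a misaligned address */
  for (int i = 0; i < 64; i++)
    p[i] *= 2;                 /* ok as scalar code on x86 */
}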

Of course the pessimistic assumption we make only applies when
the compiler sees the underlying decl, so only a fraction of all
possible cases is handled conservatively that way.

Maybe these days the legacy code has been cleaned up enough that we
can remove that conservative handling again...  I think it also
causes us to handle

char c[4];

int main()
{
  /* Runtime check: the int access only happens when c is known to
     be 4-byte aligned on this path.  */
  if (!((unsigned long)c & 3))
    return *(int *)c;
  return c[0];
}

too conservatively as we expand

  _5 = MEM[(int *)&c];

and thus lose the flow-sensitive alignment info from the guard.

Implementation-wise you'd have to adjust get_object_alignment_2 to
get at a MEM_REF base (get_inner_reference will look through MEM_REFs
with a &decl operand 0), possibly by adjusting get_inner_reference
itself to avoid doing the work twice.
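
As a toy illustration of stopping at the MEM_REF instead of looking
through it (the structures here are invented for the sketch, not
GCC's real tree representation or APIs):

#include <stdio.h>

enum kind { DECL, ADDR_EXPR, MEM_REF, COMPONENT_REF };

struct node {
  enum kind kind;
  struct node *op0;   /* inner reference / address operand */
  unsigned align;     /* known alignment in bits */
  const char *name;
};

/* Walk down to the base of REF.  A get_inner_reference-style walk
   would additionally look through a MEM_REF whose operand is &decl
   and return the decl; stopping at the MEM_REF keeps both the access
   type's alignment and the decl's alignment visible to the caller.  */
static struct node *
base_stop_at_mem_ref (struct node *ref)
{
  while (ref->kind == COMPONENT_REF)
    ref = ref->op0;
  return ref;   /* may be a DECL or a MEM_REF */
}

int main (void)
{
  struct node c   = { DECL, 0, 8, "c" };   /* char c[4]: byte-aligned */
  struct node ac  = { ADDR_EXPR, &c, 0, "&c" };
  struct node mem = { MEM_REF, &ac, 32, "MEM[(int *)&c]" };  /* int access */

  struct node *base = base_stop_at_mem_ref (&mem);
  printf ("base: %s (align %u)\n", base->name, base->align);
  return 0;
}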
