Re: [PATCH][AArch64] Unify vec_set patterns, support floating-point vector modes properly

2018-05-17 Thread James Greenhalgh
On Thu, May 17, 2018 at 09:26:37AM -0500, Kyrill Tkachov wrote:
> 
> On 17/05/18 14:56, Kyrill Tkachov wrote:
> >
> > On 17/05/18 09:46, Kyrill Tkachov wrote:
> >>
> >> On 15/05/18 18:56, Richard Sandiford wrote:
> >>> Kyrill Tkachov writes:
>  Hi all,
> 
>  We've a deficiency in our vec_set family of patterns.  We don't
>  support directly loading a vector lane using LD1 for V2DImode and all
>  the vector floating-point modes.  We do do it correctly for the other
>  integer vector modes (V4SI, V8HI etc) though.
> 
>  The alternatives on the relevant floating-point patterns only allow a
>  register-to-register INS instruction.  That means if we want to load a
>  value into a vector lane we must first load it into a scalar register
>  and then perform an INS, which is wasteful.
> 
>  There is also an explicit V2DI vec_set expander dangling around for no
>  reason that I can see. It seems to do the exact same things as the
>  other vec_set expanders. This patch removes that.  It now unifies all
>  vec_set expansions into a single "vec_set<mode>" define_expand using
>  the catch-all VALL_F16 iterator.
> 
>  I decided to leave two aarch64_simd_vec_set<mode> define_insns. One
>  for the integer vector modes (that now include V2DI) and one for the
>  floating-point vector modes. That is so that we can avoid specifying
>  "w,r" alternatives for floating-point modes in case the
>  register-allocator gets confused and starts gratuitously moving
>  registers between the two banks.  So the floating-point pattern has only
>  two alternatives, one for SIMD-to-SIMD INS and one for LD1.
> >>> Did you see any cases in which this was necessary?  In some ways it
> >>> seems to run counter to Wilco's recent patches, which tended to remove
> >>> the * markers from the "unnatural" register class and trust the register
> >>> allocator to make a sensible decision.
> >>>
> >>> I think our default position should be trust the allocator here.
> >>> If the consumers all require "w" registers then the RA will surely
> >>> try to use "w" registers if at all possible.  But if the consumers
> >>> don't care then it seems reasonable to offer both, since in those
> >>> cases it doesn't really make much difference whether the payload
> >>> happens to be SF or SI (say).
> >>>
> >>> There are also cases in which the consumer could actively require
> >>> an integer register.  E.g. some code uses unions to bitcast floats
> >>> to ints and then do bitwise arithmetic on them.
> >>>
> >>
> >> Thanks, that makes sense. Honestly, it's been a few months since I worked 
> >> on this patch.
> >> I believe my reluctance to specify that alternative was that it would mean 
> >> merging the integer and
> >> floating-point patterns into one (like the attached version) which would 
> >> put the "w, r" alternative
> >> first for the floating-point case. I guess we should be able to trust the 
> >> allocator to pick
> >> the sensible alternative though.
> >>
> >
> > With some help from Wilco I can see how this approach will give us 
> > suboptimal code though.
> > If we modify the example from my original post to be:
> > v4sf
> > foo_v4sf (float *a, float *b, float *c, float *d)
> > {
> > v4sf res = { *a, b[2], *c, *d };
> > return res;
> > }
> >
> > The b[2] load will load into a GP register then do an expensive INS into 
> > the SIMD register
> > instead of loading into an FP S-register and then doing a SIMD-to-SIMD INS.
> > The only way I can get it to use the FP load then is to mark the "w, r" 
> > alternative with a '?'
> >
> 
> That patch would look like the attached. Is this preferable?
> For the above example it generates the desired:
> foo_v4sf:
>  ldr s0, [x0]
>  ldr s1, [x1, 8]
>  ins v0.s[1], v1.s[0]
>  ld1 {v0.s}[2], [x2]
>  ld1 {v0.s}[3], [x3]
>  ret
> 
> 
> rather than loading [x1, 8] into a W-reg.

OK,

Thanks,
James



Re: [PATCH][AArch64] Unify vec_set patterns, support floating-point vector modes properly

2018-05-17 Thread Wilco Dijkstra
Kyrill Tkachov wrote:

> That patch would look like the attached. Is this preferable?
> For the above example it generates the desired:
> foo_v4sf:
>   ldr s0, [x0]
>   ldr s1, [x1, 8]
>   ins v0.s[1], v1.s[0]
>   ld1 {v0.s}[2], [x2]
>   ld1 {v0.s}[3], [x3]
>    ret

Yes, that's what I expect. Also, when all the offsets are non-zero we emit:

foo_v2di:
ldr d0, [x0, 8]
ldr d1, [x1, 16]
ins v0.d[1], v1.d[0]
ret

foo_v4sf:
ldr s0, [x0, 4]
ldr s3, [x1, 20]
ldr s2, [x2, 32]
ldr s1, [x3, 80]
ins v0.s[1], v3.s[0]
ins v0.s[2], v2.s[0]
ins v0.s[3], v1.s[0]
ret
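
For reference, a source sketch that would produce those offsets (the
element indices are inferred from the assembly above; the exact test
Wilco used isn't quoted):

typedef float v4sf __attribute__ ((vector_size (16)));
typedef long long v2di __attribute__ ((vector_size (16)));

v2di
foo_v2di (long long *a, long long *b)
{
  /* Byte offsets 8 and 16: elements a[1] and b[2].  */
  v2di res = { a[1], b[2] };
  return res;
}

v4sf
foo_v4sf (float *a, float *b, float *c, float *d)
{
  /* Byte offsets 4, 20, 32 and 80: a[1], b[5], c[8] and d[20].  */
  v4sf res = { a[1], b[5], c[8], d[20] };
  return res;
}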

The patch looks good now, lots of patterns removed, yet we generate better code!

Wilco

Re: [PATCH][AArch64] Unify vec_set patterns, support floating-point vector modes properly

2018-05-17 Thread Kyrill Tkachov


On 17/05/18 14:56, Kyrill Tkachov wrote:


On 17/05/18 09:46, Kyrill Tkachov wrote:


On 15/05/18 18:56, Richard Sandiford wrote:

Kyrill Tkachov writes:

Hi all,

We've a deficiency in our vec_set family of patterns.  We don't
support directly loading a vector lane using LD1 for V2DImode and all
the vector floating-point modes.  We do do it correctly for the other
integer vector modes (V4SI, V8HI etc) though.

The alternatives on the relevant floating-point patterns only allow a
register-to-register INS instruction.  That means if we want to load a
value into a vector lane we must first load it into a scalar register
and then perform an INS, which is wasteful.

There is also an explicit V2DI vec_set expander dangling around for no
reason that I can see. It seems to do the exact same things as the
other vec_set expanders. This patch removes that.  It now unifies all
vec_set expansions into a single "vec_set<mode>" define_expand using
the catch-all VALL_F16 iterator.

I decided to leave two aarch64_simd_vec_set<mode> define_insns. One
for the integer vector modes (that now include V2DI) and one for the
floating-point vector modes. That is so that we can avoid specifying
"w,r" alternatives for floating-point modes in case the
register-allocator gets confused and starts gratuitously moving
registers between the two banks.  So the floating-point pattern has only
two alternatives, one for SIMD-to-SIMD INS and one for LD1.

Did you see any cases in which this was necessary?  In some ways it
seems to run counter to Wilco's recent patches, which tended to remove
the * markers from the "unnatural" register class and trust the register
allocator to make a sensible decision.

I think our default position should be trust the allocator here.
If the consumers all require "w" registers then the RA will surely
try to use "w" registers if at all possible.  But if the consumers
don't care then it seems reasonable to offer both, since in those
cases it doesn't really make much difference whether the payload
happens to be SF or SI (say).

There are also cases in which the consumer could actively require
an integer register.  E.g. some code uses unions to bitcast floats
to ints and then do bitwise arithmetic on them.



Thanks, that makes sense. Honestly, it's been a few months since I worked on 
this patch.
I believe my reluctance to specify that alternative was that it would mean 
merging the integer and
floating-point patterns into one (like the attached version) which would put the "w, 
r" alternative
first for the floating-point case. I guess we should be able to trust the 
allocator to pick
the sensible alternative though.



With some help from Wilco I can see how this approach will give us suboptimal 
code though.
If we modify the example from my original post to be:
v4sf
foo_v4sf (float *a, float *b, float *c, float *d)
{
v4sf res = { *a, b[2], *c, *d };
return res;
}

The b[2] load will load into a GP register then do an expensive INS into the 
SIMD register
instead of loading into an FP S-register and then doing a SIMD-to-SIMD INS.
The only way I can get it to use the FP load then is to mark the "w, r" 
alternative with a '?'



That patch would look like the attached. Is this preferable?
For the above example it generates the desired:
foo_v4sf:
ldr s0, [x0]
ldr s1, [x1, 8]
ins v0.s[1], v1.s[0]
ld1 {v0.s}[2], [x2]
ld1 {v0.s}[3], [x3]
ret


rather than loading [x1, 8] into a W-reg.

Thanks,
Kyrill



Kyrill



This version is then made even simpler due to all the vec_set patterns being 
merged into one.
Bootstrapped and tested on aarch64-none-linux-gnu.

Is this ok for trunk?

Thanks,
Kyrill

2018-05-17  Kyrylo Tkachov  

* config/aarch64/aarch64-simd.md (vec_set<mode>): Use VALL_F16 mode
iterator.  Delete separate integer-mode vec_set<mode> expander.
(aarch64_simd_vec_setv2di): Delete.
(vec_setv2di): Delete.
(aarch64_simd_vec_set<mode>): Delete all other patterns with that name.
Use VALL_F16 mode iterator.  Add LD1 alternative and use vwcore for
the "w, r" alternative.

2018-05-17  Kyrylo Tkachov  

* gcc.target/aarch64/vect-init-ld1.c: New test.


With this patch we avoid loading values into scalar registers and then
doing an explicit INS on them to move them into the desired vector
lanes. For example for:

typedef float v4sf __attribute__ ((vector_size (16)));
typedef long long v2di __attribute__ ((vector_size (16)));

v2di
foo_v2di (long long *a, long long *b)
{
v2di res = { *a, *b };
return res;
}

v4sf
foo_v4sf (float *a, float *b, float *c, float *d)
{
v4sf res = { *a, *b, *c, *d };
return res;
}

we currently generate:

foo_v2di:
  ldr d0, [x0]
  ldr x0, [x1]
  ins v0.d[1], x0
  ret

foo_v4sf:
  ldr s0, [x0]
  ldr s3, [x1]
  ldr s2, [x2]
  ldr s1, [x3]
  ins v0.s[1], v3.s[0]
  ins v0.s[2], v2.s[0]
  ins v0.s[3], v1.s[0]
  ret

but with this patch we generate the much cleaner:
foo_v2di:
  ldr d0, [x0]
  ld1 {v0.d}[1], [x1]
  ret

foo_v4sf:
  ldr s0, [x0]
  ld1 {v0.s}[1], [x1]
  ld1 {v0.s}[2], [x2]
  ld1 {v0.s}[3], [x3]
  ret

Re: [PATCH][AArch64] Unify vec_set patterns, support floating-point vector modes properly

2018-05-17 Thread Kyrill Tkachov


On 17/05/18 09:46, Kyrill Tkachov wrote:


On 15/05/18 18:56, Richard Sandiford wrote:

Kyrill Tkachov writes:

Hi all,

We've a deficiency in our vec_set family of patterns.  We don't
support directly loading a vector lane using LD1 for V2DImode and all
the vector floating-point modes.  We do do it correctly for the other
integer vector modes (V4SI, V8HI etc) though.

The alternatives on the relevant floating-point patterns only allow a
register-to-register INS instruction.  That means if we want to load a
value into a vector lane we must first load it into a scalar register
and then perform an INS, which is wasteful.

There is also an explicit V2DI vec_set expander dangling around for no
reason that I can see. It seems to do the exact same things as the
other vec_set expanders. This patch removes that.  It now unifies all
vec_set expansions into a single "vec_set<mode>" define_expand using
the catch-all VALL_F16 iterator.

I decided to leave two aarch64_simd_vec_set<mode> define_insns. One
for the integer vector modes (that now include V2DI) and one for the
floating-point vector modes. That is so that we can avoid specifying
"w,r" alternatives for floating-point modes in case the
register-allocator gets confused and starts gratuitously moving
registers between the two banks.  So the floating-point pattern has only
two alternatives, one for SIMD-to-SIMD INS and one for LD1.

Did you see any cases in which this was necessary?  In some ways it
seems to run counter to Wilco's recent patches, which tended to remove
the * markers from the "unnatural" register class and trust the register
allocator to make a sensible decision.

I think our default position should be trust the allocator here.
If the consumers all require "w" registers then the RA will surely
try to use "w" registers if at all possible.  But if the consumers
don't care then it seems reasonable to offer both, since in those
cases it doesn't really make much difference whether the payload
happens to be SF or SI (say).

There are also cases in which the consumer could actively require
an integer register.  E.g. some code uses unions to bitcast floats
to ints and then do bitwise arithmetic on them.



Thanks, that makes sense. Honestly, it's been a few months since I worked on 
this patch.
I believe my reluctance to specify that alternative was that it would mean 
merging the integer and
floating-point patterns into one (like the attached version) which would put the "w, 
r" alternative
first for the floating-point case. I guess we should be able to trust the 
allocator to pick
the sensible alternative though.



With some help from Wilco I can see how this approach will give us suboptimal 
code though.
If we modify the example from my original post to be:
v4sf
foo_v4sf (float *a, float *b, float *c, float *d)
{
v4sf res = { *a, b[2], *c, *d };
return res;
}

The b[2] load will load into a GP register then do an expensive INS into the 
SIMD register
instead of loading into an FP S-register and then doing a SIMD-to-SIMD INS.
The only way I can get it to use the FP load then is to mark the "w, r" 
alternative with a '?'
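
With the '?', the merged define_insn would then have operand 1 looking
something like the following (a sketch only, assuming the same three
alternatives as the integer pattern; the attached patch is the
authoritative version):

(define_insn "aarch64_simd_vec_set<mode>"
  [(set (match_operand:VALL_F16 0 "register_operand" "=w,w,w")
	(vec_merge:VALL_F16
	    (vec_duplicate:VALL_F16
		(match_operand:<VEL> 1 "aarch64_simd_general_operand" "w,?r,Utv"))
	    (match_operand:VALL_F16 3 "register_operand" "0,0,0")
	    (match_operand:SI 2 "immediate_operand" "i,i,i")))]
  "TARGET_SIMD"
  ...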

Kyrill



This version is then made even simpler due to all the vec_set patterns being 
merged into one.
Bootstrapped and tested on aarch64-none-linux-gnu.

Is this ok for trunk?

Thanks,
Kyrill

2018-05-17  Kyrylo Tkachov  

* config/aarch64/aarch64-simd.md (vec_set<mode>): Use VALL_F16 mode
iterator.  Delete separate integer-mode vec_set<mode> expander.
(aarch64_simd_vec_setv2di): Delete.
(vec_setv2di): Delete.
(aarch64_simd_vec_set<mode>): Delete all other patterns with that name.
Use VALL_F16 mode iterator.  Add LD1 alternative and use vwcore for
the "w, r" alternative.

2018-05-17  Kyrylo Tkachov  

* gcc.target/aarch64/vect-init-ld1.c: New test.


With this patch we avoid loading values into scalar registers and then
doing an explicit INS on them to move them into the desired vector
lanes. For example for:

typedef float v4sf __attribute__ ((vector_size (16)));
typedef long long v2di __attribute__ ((vector_size (16)));

v2di
foo_v2di (long long *a, long long *b)
{
v2di res = { *a, *b };
return res;
}

v4sf
foo_v4sf (float *a, float *b, float *c, float *d)
{
v4sf res = { *a, *b, *c, *d };
return res;
}

we currently generate:

foo_v2di:
  ldr d0, [x0]
  ldr x0, [x1]
  ins v0.d[1], x0
  ret

foo_v4sf:
  ldr s0, [x0]
  ldr s3, [x1]
  ldr s2, [x2]
  ldr s1, [x3]
  ins v0.s[1], v3.s[0]
  ins v0.s[2], v2.s[0]
  ins v0.s[3], v1.s[0]
  ret

but with this patch we generate the much cleaner:
foo_v2di:
  ldr d0, [x0]
  ld1 {v0.d}[1], [x1]
  ret

foo_v4sf:
  ldr s0, [x0]
  ld1 {v0.s}[1], [x1]
  ld1 {v0.s}[2], [x2]
  ld1 {v0.s}[3], [x3]
  ret

Re: [PATCH][AArch64] Unify vec_set patterns, support floating-point vector modes properly

2018-05-17 Thread Kyrill Tkachov


On 15/05/18 18:56, Richard Sandiford wrote:

Kyrill Tkachov writes:

Hi all,

We've a deficiency in our vec_set family of patterns.  We don't
support directly loading a vector lane using LD1 for V2DImode and all
the vector floating-point modes.  We do do it correctly for the other
integer vector modes (V4SI, V8HI etc) though.

The alternatives on the relevant floating-point patterns only allow a
register-to-register INS instruction.  That means if we want to load a
value into a vector lane we must first load it into a scalar register
and then perform an INS, which is wasteful.

There is also an explicit V2DI vec_set expander dangling around for no
reason that I can see. It seems to do the exact same things as the
other vec_set expanders. This patch removes that.  It now unifies all
vec_set expansions into a single "vec_set<mode>" define_expand using
the catch-all VALL_F16 iterator.

I decided to leave two aarch64_simd_vec_set<mode> define_insns. One
for the integer vector modes (that now include V2DI) and one for the
floating-point vector modes. That is so that we can avoid specifying
"w,r" alternatives for floating-point modes in case the
register-allocator gets confused and starts gratuitously moving
registers between the two banks.  So the floating-point pattern has only
two alternatives, one for SIMD-to-SIMD INS and one for LD1.

Did you see any cases in which this was necessary?  In some ways it
seems to run counter to Wilco's recent patches, which tended to remove
the * markers from the "unnatural" register class and trust the register
allocator to make a sensible decision.

I think our default position should be trust the allocator here.
If the consumers all require "w" registers then the RA will surely
try to use "w" registers if at all possible.  But if the consumers
don't care then it seems reasonable to offer both, since in those
cases it doesn't really make much difference whether the payload
happens to be SF or SI (say).

There are also cases in which the consumer could actively require
an integer register.  E.g. some code uses unions to bitcast floats
to ints and then do bitwise arithmetic on them.



Thanks, that makes sense. Honestly, it's been a few months since I worked on 
this patch.
I believe my reluctance to specify that alternative was that it would mean 
merging the integer and
floating-point patterns into one (like the attached version) which would put the "w, 
r" alternative
first for the floating-point case. I guess we should be able to trust the 
allocator to pick
the sensible alternative though.

This version is then made even simpler due to all the vec_set patterns being 
merged into one.
Bootstrapped and tested on aarch64-none-linux-gnu.

Is this ok for trunk?

Thanks,
Kyrill

2018-05-17  Kyrylo Tkachov  

* config/aarch64/aarch64-simd.md (vec_set<mode>): Use VALL_F16 mode
iterator.  Delete separate integer-mode vec_set<mode> expander.
(aarch64_simd_vec_setv2di): Delete.
(vec_setv2di): Delete.
(aarch64_simd_vec_set<mode>): Delete all other patterns with that name.
Use VALL_F16 mode iterator.  Add LD1 alternative and use vwcore for
the "w, r" alternative.

2018-05-17  Kyrylo Tkachov  

* gcc.target/aarch64/vect-init-ld1.c: New test.


With this patch we avoid loading values into scalar registers and then
doing an explicit INS on them to move them into the desired vector
lanes. For example for:

typedef float v4sf __attribute__ ((vector_size (16)));
typedef long long v2di __attribute__ ((vector_size (16)));

v2di
foo_v2di (long long *a, long long *b)
{
v2di res = { *a, *b };
return res;
}

v4sf
foo_v4sf (float *a, float *b, float *c, float *d)
{
v4sf res = { *a, *b, *c, *d };
return res;
}

we currently generate:

foo_v2di:
  ldr d0, [x0]
  ldr x0, [x1]
  ins v0.d[1], x0
  ret

foo_v4sf:
  ldr s0, [x0]
  ldr s3, [x1]
  ldr s2, [x2]
  ldr s1, [x3]
  ins v0.s[1], v3.s[0]
  ins v0.s[2], v2.s[0]
  ins v0.s[3], v1.s[0]
  ret

but with this patch we generate the much cleaner:
foo_v2di:
  ldr d0, [x0]
  ld1 {v0.d}[1], [x1]
  ret

foo_v4sf:
  ldr s0, [x0]
  ld1 {v0.s}[1], [x1]
  ld1 {v0.s}[2], [x2]
  ld1 {v0.s}[3], [x3]
  ret

Nice!  The original reason for:

  /* FIXME: At the moment the cost model seems to underestimate the
     cost of using elementwise accesses.  This check preserves the
     traditional behavior until that can be fixed.  */
  if (*memory_access_type == VMAT_ELEMENTWISE
      && !STMT_VINFO_STRIDED_P (stmt_info)
      && !(stmt == GROUP_FIRST_ELEMENT (stmt_info)
	   && !GROUP_NEXT_ELEMENT (stmt_info)
	   && !pow2p_hwi (GROUP_SIZE (stmt_info))))
    {
      if (dump_enabled_p ())
	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
			 "not falling back to elementwise accesses\n");
      return false;
    }

Re: [PATCH][AArch64] Unify vec_set patterns, support floating-point vector modes properly

2018-05-15 Thread Richard Sandiford
Kyrill Tkachov writes:
> Hi all,
>
> We've a deficiency in our vec_set family of patterns.  We don't
> support directly loading a vector lane using LD1 for V2DImode and all
> the vector floating-point modes.  We do do it correctly for the other
> integer vector modes (V4SI, V8HI etc) though.
>
> The alternatives on the relevant floating-point patterns only allow a
> register-to-register INS instruction.  That means if we want to load a
> value into a vector lane we must first load it into a scalar register
> and then perform an INS, which is wasteful.
>
> There is also an explicit V2DI vec_set expander dangling around for no
> reason that I can see. It seems to do the exact same things as the
> other vec_set expanders. This patch removes that.  It now unifies all
> vec_set expansions into a single "vec_set<mode>" define_expand using
> the catch-all VALL_F16 iterator.
>
> I decided to leave two aarch64_simd_vec_set<mode> define_insns. One
> for the integer vector modes (that now include V2DI) and one for the
> floating-point vector modes. That is so that we can avoid specifying
> "w,r" alternatives for floating-point modes in case the
> register-allocator gets confused and starts gratuitously moving
> registers between the two banks.  So the floating-point pattern has only
> two alternatives, one for SIMD-to-SIMD INS and one for LD1.

Did you see any cases in which this was necessary?  In some ways it
seems to run counter to Wilco's recent patches, which tended to remove
the * markers from the "unnatural" register class and trust the register
allocator to make a sensible decision.

I think our default position should be trust the allocator here.
If the consumers all require "w" registers then the RA will surely
try to use "w" registers if at all possible.  But if the consumers
don't care then it seems reasonable to offer both, since in those
cases it doesn't really make much difference whether the payload
happens to be SF or SI (say).

There are also cases in which the consumer could actively require
an integer register.  E.g. some code uses unions to bitcast floats
to ints and then do bitwise arithmetic on them.
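
(A minimal illustration of that idiom, just for concreteness:

static inline float
flip_sign (float x)
{
  /* Bitcast through a union, then operate on the bits; the XOR wants
     the value in a general-purpose register.  */
  union { float f; unsigned int u; } v;
  v.f = x;
  v.u ^= 0x80000000u;	/* Flip the sign bit with an integer op.  */
  return v.f;
}

If a value like that then feeds a vec_set, the "w,r" alternative lets
the payload stay on the GP side.)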

> With this patch we avoid loading values into scalar registers and then
> doing an explicit INS on them to move them into the desired vector
> lanes. For example for:
>
> typedef float v4sf __attribute__ ((vector_size (16)));
> typedef long long v2di __attribute__ ((vector_size (16)));
>
> v2di
> foo_v2di (long long *a, long long *b)
> {
>v2di res = { *a, *b };
>return res;
> }
>
> v4sf
> foo_v4sf (float *a, float *b, float *c, float *d)
> {
>v4sf res = { *a, *b, *c, *d };
>return res;
> }
>
> we currently generate:
>
> foo_v2di:
>  ldr d0, [x0]
>  ldr x0, [x1]
>  ins v0.d[1], x0
>  ret
>
> foo_v4sf:
>  ldr s0, [x0]
>  ldr s3, [x1]
>  ldr s2, [x2]
>  ldr s1, [x3]
>  ins v0.s[1], v3.s[0]
>  ins v0.s[2], v2.s[0]
>  ins v0.s[3], v1.s[0]
>  ret
>
> but with this patch we generate the much cleaner:
> foo_v2di:
>  ldr d0, [x0]
>  ld1 {v0.d}[1], [x1]
>  ret
>
> foo_v4sf:
>  ldr s0, [x0]
>  ld1 {v0.s}[1], [x1]
>  ld1 {v0.s}[2], [x2]
>  ld1 {v0.s}[3], [x3]
>  ret

Nice!  The original reason for:

  /* FIXME: At the moment the cost model seems to underestimate the
     cost of using elementwise accesses.  This check preserves the
     traditional behavior until that can be fixed.  */
  if (*memory_access_type == VMAT_ELEMENTWISE
      && !STMT_VINFO_STRIDED_P (stmt_info)
      && !(stmt == GROUP_FIRST_ELEMENT (stmt_info)
	   && !GROUP_NEXT_ELEMENT (stmt_info)
	   && !pow2p_hwi (GROUP_SIZE (stmt_info))))
    {
      if (dump_enabled_p ())
	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
			 "not falling back to elementwise accesses\n");
      return false;
    }

was that we seemed to be too optimistic about how cheap it was to
construct a vector from scalars.  Maybe this patch brings the code
closer to the cost (for AArch64 only of course).

FWIW, the patch looks good to me bar the GPR/FPR split.

Thanks,
Richard


[PATCH][AArch64] Unify vec_set patterns, support floating-point vector modes properly

2018-05-15 Thread Kyrill Tkachov

Hi all,

We've a deficiency in our vec_set family of patterns.
We don't support directly loading a vector lane using LD1 for V2DImode and all 
the vector floating-point modes.
We do do it correctly for the other integer vector modes (V4SI, V8HI etc) 
though.

The alternatives on the relevant floating-point patterns only allow a
register-to-register INS instruction.
That means if we want to load a value into a vector lane we must first load it 
into a scalar register and then
perform an INS, which is wasteful.

There is also an explicit V2DI vec_set expander dangling around for no reason 
that I can see. It seems to do the
exact same things as the other vec_set expanders. This patch removes that.
It now unifies all vec_set expansions into a single "vec_set<mode>"
define_expand using the catch-all VALL_F16 iterator.
I decided to leave two aarch64_simd_vec_set<mode> define_insns. One for the
integer vector modes (that now include V2DI)
and one for the floating-point vector modes. That is so that we can avoid specifying
"w,r" alternatives for floating-point
modes in case the register-allocator gets confused and starts gratuitously
moving registers between the two banks.
So the floating-point pattern has only two alternatives, one for SIMD-to-SIMD INS
and one for LD1.
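
Concretely, the unified expander then has roughly this shape (a sketch
only; the attached patch is the authoritative version):

(define_expand "vec_set<mode>"
  [(match_operand:VALL_F16 0 "register_operand" "+w")
   (match_operand:<VEL> 1 "register_operand" "w")
   (match_operand:SI 2 "immediate_operand" "")]
  "TARGET_SIMD"
  {
    HOST_WIDE_INT elem = (HOST_WIDE_INT) 1 << INTVAL (operands[2]);
    emit_insn (gen_aarch64_simd_vec_set<mode> (operands[0], operands[1],
					       GEN_INT (elem), operands[0]));
    DONE;
  }
)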

With this patch we avoid loading values into scalar registers and then doing an 
explicit INS on them to move them into
the desired vector lanes. For example for:

typedef float v4sf __attribute__ ((vector_size (16)));
typedef long long v2di __attribute__ ((vector_size (16)));

v2di
foo_v2di (long long *a, long long *b)
{
  v2di res = { *a, *b };
  return res;
}

v4sf
foo_v4sf (float *a, float *b, float *c, float *d)
{
  v4sf res = { *a, *b, *c, *d };
  return res;
}

we currently generate:

foo_v2di:
ldr d0, [x0]
ldr x0, [x1]
ins v0.d[1], x0
ret

foo_v4sf:
ldr s0, [x0]
ldr s3, [x1]
ldr s2, [x2]
ldr s1, [x3]
ins v0.s[1], v3.s[0]
ins v0.s[2], v2.s[0]
ins v0.s[3], v1.s[0]
ret

but with this patch we generate the much cleaner:
foo_v2di:
ldr d0, [x0]
ld1 {v0.d}[1], [x1]
ret

foo_v4sf:
ldr s0, [x0]
ld1 {v0.s}[1], [x1]
ld1 {v0.s}[2], [x2]
ld1 {v0.s}[3], [x3]
ret


Bootstrapped and tested on aarch64-none-linux-gnu and also tested on 
aarch64_be-none-elf.

Ok for trunk?
Thanks,
Kyrill

2018-05-15  Kyrylo Tkachov  

* config/aarch64/aarch64-simd.md (vec_set<mode>): Use VALL_F16 mode
iterator.  Delete separate integer-mode vec_set<mode> expander.
(aarch64_simd_vec_setv2di): Delete.
(vec_setv2di): Delete.
(aarch64_simd_vec_set<mode>, VDQF_F16): Move earlier in file.
Add second memory alternative to emit an LD1.
(aarch64_simd_vec_set<mode>, VDQ_BHSI): Change mode iterator to
VDQ_I.  Use vwcore mode attribute in first alternative for operand 1
constraint.

2018-05-15  Kyrylo Tkachov  

* gcc.target/aarch64/vect-init-ld1.c: New test.
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 1154fc3d58deaa33413ea3050ff7feec37f092a6..df3fad2d71ed4096accdfdf725e194bf555d40d2 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -694,11 +694,11 @@ (define_insn "one_cmpl<mode>2"
 )
 
 (define_insn "aarch64_simd_vec_set<mode>"
-  [(set (match_operand:VDQ_BHSI 0 "register_operand" "=w,w,w")
-        (vec_merge:VDQ_BHSI
-	    (vec_duplicate:VDQ_BHSI
+  [(set (match_operand:VDQ_I 0 "register_operand" "=w,w,w")
+	(vec_merge:VDQ_I
+	    (vec_duplicate:VDQ_I
 		(match_operand:<VEL> 1 "aarch64_simd_general_operand" "r,w,Utv"))
-	    (match_operand:VDQ_BHSI 3 "register_operand" "0,0,0")
+	    (match_operand:VDQ_I 3 "register_operand" "0,0,0")
 	    (match_operand:SI 2 "immediate_operand" "i,i,i")))]
   "TARGET_SIMD"
   {
@@ -707,7 +707,7 @@ (define_insn "aarch64_simd_vec_set<mode>"
    switch (which_alternative)
      {
      case 0:
-	return "ins\\t%0.<Vetype>[%p2], %w1";
+	return "ins\\t%0.<Vetype>[%p2], %<vwcore>1";
      case 1:
 	return "ins\\t%0.<Vetype>[%p2], %1.<Vetype>[0]";
      case 2:
@@ -719,6 +719,30 @@ (define_insn "aarch64_simd_vec_set<mode>"
   [(set_attr "type" "neon_from_gp<q>, neon_ins<q>, neon_load1_one_lane<q>")]
 )
 
+(define_insn "aarch64_simd_vec_set<mode>"
+  [(set (match_operand:VDQF_F16 0 "register_operand" "=w,w")
+	(vec_merge:VDQF_F16
+	    (vec_duplicate:VDQF_F16
+		(match_operand:<VEL> 1 "aarch64_simd_general_operand" "w,Utv"))
+	    (match_operand:VDQF_F16 3 "register_operand" "0,0")
+	    (match_operand:SI 2 "immediate_operand" "i,i")))]
+  "TARGET_SIMD"
+  {
+   int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
+   operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
+   switch (which_alternative)
+     {
+     case 0:
+	return "ins\\t%0.<Vetype>[%p2], %1.<Vetype>[0]";
+     case 1:
+	return "ld1\\t{%0.<Vetype>}[%p2], %1";
+     default:
+	gcc_unreachable ();
+     }
+  }
+  [(set_attr "type" "neon_ins<q>, neon_load1_one_lane<q>")]
+)