> On Mar 20, 2018, at 8:11 PM, Kyrill Tkachov <kyrylo.tkac...@foss.arm.com> 
> wrote:
> 
> Hi all,
> 
> This PR shows that we get the load/store_lanes logic wrong for arm big-endian.
> It is tricky to get right. AArch64 handles it by adding the appropriate
> lane-swapping operations during expansion.
> 
> I'd like to do the same on arm eventually, but we'd need to port and
> validate the VTBL-generating code and add it to all the right places,
> and I'm not comfortable doing that for GCC 8. I am, however, keen to
> get the wrong-code bug fixed now.
> As I say in the PR, vectorisation on armeb is already severely restricted
> (we disable many patterns on BYTES_BIG_ENDIAN), and the load/store_lanes
> patterns were not working correctly at all, so disabling them as well is
> not a radical approach.
> 
> The way to do that is to have ARRAY_MODE_SUPPORTED_P return false for
> BYTES_BIG_ENDIAN.
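> 
> Concretely, the hook change looks roughly like this (a sketch from
> memory; the exact guard in arm.c also checks the NEON modes, and the
> macro names may differ slightly):
> 
>   /* Implement TARGET_ARRAY_MODE_SUPPORTED_P: say no on big-endian so
>      the vectoriser never picks the load/store_lanes patterns there.  */
>   static bool
>   arm_array_mode_supported_p (machine_mode mode,
>                               unsigned HOST_WIDE_INT nelems)
>   {
>     if (TARGET_NEON && !BYTES_BIG_ENDIAN
>         && (VALID_NEON_DREG_MODE (mode) || VALID_NEON_QREG_MODE (mode))
>         && (nelems >= 2 && nelems <= 4))
>       return true;
> 
>     return false;
>   }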
> 
> Bootstrapped and tested on arm-none-linux-gnueabihf.
> Also tested on armeb-none-eabi.
> 
> Committing to trunk.
...
> --- a/gcc/testsuite/lib/target-supports.exp
> +++ b/gcc/testsuite/lib/target-supports.exp
> @@ -6609,7 +6609,8 @@ proc check_effective_target_vect_load_lanes { } {
>       verbose "check_effective_target_vect_load_lanes: using cached result" 2
>      } else {
>       set et_vect_load_lanes 0
> -     if { ([istarget arm*-*-*] && [check_effective_target_arm_neon_ok])
> +     # We don't support load_lanes correctly on big-endian arm.
> +     if { ([istarget arm-*-*] && [check_effective_target_arm_neon_ok])
>            || [istarget aarch64*-*-*] } {
>           set et_vect_load_lanes 1
>       }
> 

Hi Kyrill,

This part makes the armv8l-linux-gnueabihf target fail a few of the
slp-perm-* tests: the tightened [istarget arm-*-*] glob no longer matches
armv8l-* triplets, so vect_load_lanes is now disabled there even though the
target is little-endian.  Using
[check_effective_target_arm_little_endian] instead should fix this.
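I.e., something along these lines (an untested sketch):

    set et_vect_load_lanes 0
    # Load/store-lanes only work correctly on little-endian arm.
    if { ([istarget arm*-*-*]
          && [check_effective_target_arm_neon_ok]
          && [check_effective_target_arm_little_endian])
         || [istarget aarch64*-*-*] } {
        set et_vect_load_lanes 1
    }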

Would you please fix this on master and gcc-7-branch?

Thanks!

--
Maxim Kuvyrkov
www.linaro.org