Hi,

As per the subject, this patch adds a new RTL simplification for the
case of a VEC_SELECT selecting the low part of a vector. The
simplification returns a SUBREG.
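
As a rough illustration (my own example, not lifted from the patch),
on a little-endian target an extraction of the low 64 bits of a
128-bit vector:

  (vec_select:DI (reg:V2DI R) (parallel [(const_int 0)]))

can now be rewritten as:

  (subreg:DI (reg:V2DI R) 0)

where R stands for any V2DI register.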

The primary goal of this patch is to enable better combination of
Neon RTL patterns - specifically, allowing generation of
'write-to-high-half' narrowing instructions.
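
To make the intent concrete, here is a hand-written sketch (the
function and the expected codegen are my own illustration, not taken
from the patch) of the kind of source that should now be able to use
a write-to-high-half instruction such as ADDHN2:

  #include <arm_neon.h>

  /* vget_low_s8 is conceptually a VEC_SELECT of the low half of
     'lo'.  Once that is simplified to a SUBREG, combine has a better
     chance of matching the write-to-high-half pattern for the
     narrowing add.  */
  int8x16_t
  foo (int8x16_t lo, int16x8_t a, int16x8_t b)
  {
    return vcombine_s8 (vget_low_s8 (lo), vaddhn_s16 (a, b));
  }

The hope is a single ADDHN2 writing the top half of the result
register rather than an ADDHN followed by a separate insert.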

Adding this RTL simplification means that the expected results for a
number of tests need to be updated:
* aarch64 Neon: Update the scan-assembler regex for intrinsics tests
  to expect a scalar register instead of lane 0 of a vector.
* aarch64 SVE: Likewise.
* arm MVE: Use lane 1 instead of lane 0 for the lane-extraction
  intrinsics tests, as the move instructions for lane 0 now get
  optimized away (see the sketch after this list).
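
For example, a minimal sketch of the shape of those MVE tests
(assuming the usual intrinsics from arm_mve.h; not copied from the
patch):

  #include <arm_mve.h>

  /* Extracting lane 0 is now just a SUBREG, so no separate move is
     needed; extracting lane 1 still requires a lane move for the
     test's scan-assembler pattern to find.  */
  float32_t
  foo (float32x4_t a)
  {
    return vgetq_lane_f32 (a, 1);
  }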

Regression tested and bootstrapped on aarch64-none-linux-gnu,
x86_64-unknown-linux-gnu, arm-none-linux-gnueabihf and
aarch64_be-none-linux-gnu - no issues.

Ok for master?

Thanks,
Jonathan

---

gcc/ChangeLog:

2021-06-08  Jonathan Wright  <jonathan.wri...@arm.com>

        * combine.c (combine_simplify_rtx): Add vec_select -> subreg
        simplification.
        * config/aarch64/aarch64.md (*zero_extend<SHORT:mode><GPI:mode>2_aarch64):
        Add Neon to general purpose register case for zero-extend
        pattern.
        * config/arm/vfp.md (*arm_movsi_vfp): Remove "*" from *t -> r
        case to prevent some cases opting to go through memory.
        * cse.c (fold_rtx): Add vec_select -> subreg simplification.
        * simplify-rtx.c (simplify_context::simplify_binary_operation_1):
        Likewise.

gcc/testsuite/ChangeLog:

        * gcc.target/aarch64/extract_zero_extend.c: Remove dump scan
        for RTL pattern match.
        * gcc.target/aarch64/simd/vmulx_laneq_f64_1.c: Update
        scan-assembler regex to look for a scalar register instead of
        lane 0 of a vector.
        * gcc.target/aarch64/simd/vmulxd_laneq_f64_1.c: Likewise.
        * gcc.target/aarch64/simd/vmulxs_lane_f32_1.c: Likewise.
        * gcc.target/aarch64/simd/vmulxs_laneq_f32_1.c: Likewise.
        * gcc.target/aarch64/simd/vqdmlalh_lane_s16.c: Likewise.
        * gcc.target/aarch64/simd/vqdmlals_lane_s32.c: Likewise.
        * gcc.target/aarch64/simd/vqdmlslh_lane_s16.c: Likewise.
        * gcc.target/aarch64/simd/vqdmlsls_lane_s32.c: Likewise.
        * gcc.target/aarch64/simd/vqdmullh_lane_s16.c: Likewise.
        * gcc.target/aarch64/simd/vqdmullh_laneq_s16.c: Likewise.
        * gcc.target/aarch64/simd/vqdmulls_lane_s32.c: Likewise.
        * gcc.target/aarch64/simd/vqdmulls_laneq_s32.c: Likewise.
        * gcc.target/aarch64/sve/dup_lane_1.c: Likewise.
        * gcc.target/aarch64/sve/live_1.c: Update scan-assembler regex
        cases to look for 'b' and 'h' registers instead of 'w'.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_f16.c: Extract
        lane 1 as the moves for lane 0 now get optimized away.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_f32.c: Likewise.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_s16.c: Likewise.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_s32.c: Likewise.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_s8.c: Likewise.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_u16.c: Likewise.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_u32.c: Likewise.
        * gcc.target/arm/mve/intrinsics/vgetq_lane_u8.c: Likewise.

Attachment: rb14526.patch