https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111754

--- Comment #14 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jakub Jelinek <ja...@gcc.gnu.org>:

https://gcc.gnu.org/g:e6c01334ccfca8bc748c8de90ba2a636d1662490

commit r14-5902-ge6c01334ccfca8bc748c8de90ba2a636d1662490
Author: Jakub Jelinek <ja...@redhat.com>
Date:   Tue Nov 28 10:16:47 2023 +0100

    testsuite: Fix up pr111754.c test

    On Tue, Nov 28, 2023 at 03:56:47PM +0800, juzhe.zh...@rivai.ai wrote:
    > Hi, there is a regression in RISC-V caused by this patch:
    >
    > FAIL: gcc.dg/vect/pr111754.c -flto -ffat-lto-objects  scan-tree-dump optimized "return { 0.0, 9.0e\\+0, 0.0, 0.0 }"
    > FAIL: gcc.dg/vect/pr111754.c scan-tree-dump optimized "return { 0.0, 9.0e\\+0, 0.0, 0.0 }"
    >
    > I have checked, and the dump is:
    > F foo (F a, F b)
    > {
    >   <bb 2> [local count: 1073741824]:
    >   <retval> = { 0.0, 9.0e+0, 0.0, 0.0 };
    >   return <retval>;
    >
    > }
    >
    > The dump IR seems reasonable to me.
    > I wonder whether we should work around this in the RISC-V backend to generate the same IR as ARM SVE?
    > Or should we adjust the test?

    Note, the test also FAILs on i686-linux (but not e.g. on x86_64-linux):
    /home/jakub/src/gcc/obj67/gcc/xgcc -B/home/jakub/src/gcc/obj67/gcc/ /home/jakub/src/gcc/gcc/testsuite/gcc.dg/vect/pr111754.c -fdiagnostics-plain-output -O2 -fdump-tree-optimized -S -o pr111754.s
    /home/jakub/src/gcc/gcc/testsuite/gcc.dg/vect/pr111754.c: In function 'foo':
    /home/jakub/src/gcc/gcc/testsuite/gcc.dg/vect/pr111754.c:7:1: warning: SSE vector return without SSE enabled changes the ABI [-Wpsabi]
    /home/jakub/src/gcc/gcc/testsuite/gcc.dg/vect/pr111754.c:6:3: note: the ABI for passing parameters with 16-byte alignment has changed in GCC 4.6
    /home/jakub/src/gcc/gcc/testsuite/gcc.dg/vect/pr111754.c:6:3: warning: SSE vector argument without SSE enabled changes the ABI [-Wpsabi]
    FAIL: gcc.dg/vect/pr111754.c (test for excess errors)
    Excess errors:
    /home/jakub/src/gcc/gcc/testsuite/gcc.dg/vect/pr111754.c:7:1: warning: SSE vector return without SSE enabled changes the ABI [-Wpsabi]
    /home/jakub/src/gcc/gcc/testsuite/gcc.dg/vect/pr111754.c:6:3: warning: SSE vector argument without SSE enabled changes the ABI [-Wpsabi]

    PASS: gcc.dg/vect/pr111754.c scan-tree-dump-not optimized "VEC_PERM_EXPR"
    FAIL: gcc.dg/vect/pr111754.c scan-tree-dump optimized "return { 0.0, 9.0e\\+0, 0.0, 0.0 }"

    So I think it is wrong to specify
    /* { dg-options "-O2 -fdump-tree-optimized" } */
    in the test; it should be dg-additional-options instead, so that the test
    gets the implied vector compilation options, e.g. at least -msse2 for
    i686-linux.  The question is whether -Wno-psabi should be added as well,
    and certainly the scan-tree-dump needs to be guarded by an appropriate
    vect_* effective target (though it is not clear which one; it would need
    to assert support for V4SFmode and for returning it).
    Alternatively, perhaps don't check the optimized dump but some earlier one
    before generic vector lowering, which could then hopefully match on all
    targets, maybe with both the <retval> = ... and return ... variants.

    2023-11-28  Jakub Jelinek  <ja...@redhat.com>

            PR middle-end/111754
            * gcc.dg/vect/pr111754.c: Use dg-additional-options rather than
            dg-options, add -Wno-psabi and use -fdump-tree-forwprop1 rather
            than -fdump-tree-optimized.  Scan forwprop1 dump rather than
            optimized and scan for either direct return or setting of
            <retval> to the vector.
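
For reference, a minimal sketch of what the adjusted directives described in the ChangeLog above might look like; the testcase body is unchanged and omitted here, and the exact option order and regex in the committed test may differ:

    /* { dg-do compile } */
    /* { dg-additional-options "-O2 -Wno-psabi -fdump-tree-forwprop1" } */

    /* ... testcase body unchanged ... */

    /* Scan the forwprop1 dump instead of optimized, and accept either a
       direct return of the vector constant or an assignment to <retval>.  */
    /* { dg-final { scan-tree-dump-not "VEC_PERM_EXPR" "forwprop1" } } */
    /* { dg-final { scan-tree-dump "(?:return|<retval> =) \{ 0.0, 9.0e\\+0, 0.0, 0.0 \}" "forwprop1" } } */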
