Re: [Linaro-TCWG-CI] gcc-15-874-g9bda2c4c81b: Failure on arm

2024-06-06 Thread Maxim Kuvyrkov
Hi David,

Your patch below breaks the Linux kernel build on 32-bit ARM -- see [1].  Would 
you please investigate?

[1] 
https://ci.linaro.org/job/tcwg_kernel--gnu-master-arm-stable-allmodconfig-build/144/artifact/artifacts/06-build_linux/console.log.xz

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org

> On Jun 6, 2024, at 09:46, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1242 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In CI config tcwg_kernel/gnu-master-arm-stable-allmodconfig after:
> 
>  | commit gcc-15-874-g9bda2c4c81b
>  | Author: David Malcolm 
>  | Date:   Tue May 28 15:55:24 2024 -0400
>  | 
>  | libcpp: move label_text to its own header
>  | 
>  | No functional change intended.
>  | 
>  | libcpp/ChangeLog:
>  | * Makefile.in (TAGS_SOURCES): Add include/label-text.h.
>  | * include/label-text.h: New file.
>  | ... 4 lines of the commit log omitted.
> 
> Results changed to
> # reset_artifacts:
> -10
> # build_abe binutils:
> -9
> # build_abe stage1:
> -5
> # build_abe qemu:
> -2
> # linux_n_obj:
> 33
> 
> From
> # reset_artifacts:
> -10
> # build_abe binutils:
> -9
> # build_abe stage1:
> -5
> # build_abe qemu:
> -2
> # linux_n_obj:
> 34005
> # linux build successful:
> all
> # linux boot successful:
> boot
> 
> The configuration of this build is:
> CI config tcwg_kernel/gnu-master-arm-stable-allmodconfig
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_kernel--gnu-master-arm-stable-allmodconfig-build/144/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_kernel--gnu-master-arm-stable-allmodconfig-build/143/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/9bda2c4c81b668b1d9abbb58cc4e805ac955a639/tcwg_kernel/gnu-master-arm-stable-allmodconfig/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/9bda2c4c81b668b1d9abbb58cc4e805ac955a639
> 
> List of configurations that regressed due to this commit :
> * tcwg_kernel
> ** gnu-master-arm-stable-allmodconfig
> *** Failure
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/9bda2c4c81b668b1d9abbb58cc4e805ac955a639/tcwg_kernel/gnu-master-arm-stable-allmodconfig/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_kernel--gnu-master-arm-stable-allmodconfig-build/144/artifact/artifacts


___
linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org


Re: gcc patch #88922: FAIL: 1 regressions on aarch64

2024-04-24 Thread Maxim Kuvyrkov
> On Apr 24, 2024, at 09:55, Paul Richard Thomas 
>  wrote:
> 
> Hi,
> 
> Executing on host:
> /home/tcwg-build/workspace/tcwg_gnu_4/abe/builds/destdir/aarch64-unknown-linux-gnu/bin/aarch64-unknown-linux-gnu-gfortran
> 
> /home/tcwg-build/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/gcc/testsuite/gfortran.dg/pr89462.f90
>   -fdiagnostics-plain-output -fdiagnostics-plain-output -O
> -pedantic-errors -S -o pr89462.s    (timeout = 600)
> spawn -ignore SIGHUP
> /home/tcwg-build/workspace/tcwg_gnu_4/abe/builds/destdir/aarch64-unknown-linux-gnu/bin/aarch64-unknown-linux-gnu-gfortran
> /home/tcwg-build/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/gcc/testsuite/gfortran.dg/pr89462.f90
> -fdiagnostics-plain-output -fdiagnostics-plain-output -O -pedantic-errors
> -S -o pr89462.s
> /home/tcwg-build/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/gcc/testsuite/gfortran.dg/pr89462.f90:6:14:
> Warning: Obsolescent feature: Old-style character length at (1)
> /home/tcwg-build/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/gcc/testsuite/gfortran.dg/pr89462.f90:7:17:
> Warning: Obsolescent feature: Old-style character length at (1)
> 
> As far as I can see, adding -pedantic-errors and the warning fixes the
> regression.

Hi Paul,

I don't quite understand the above comment.  Could you elaborate, please?

> 
> Do you only run the pre-commit regression tests with -pedantic-errors?

We run both post-commit and pre-commit tests with the same flags -- the default 
ones.  I'm guessing the -pedantic-errors is coming from 
gcc/testsuite/gfortran.dg/dg.exp via DEFAULT_FFLAGS.
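
If you want to confirm where the flag comes from, here is a hedged sketch.  The 
dg.exp excerpt below is an approximation written for illustration and reproduced 
on a stand-in file so the commands run anywhere; in a real GCC checkout you would 
grep gcc/testsuite/gfortran.dg/dg.exp directly.

```shell
# gfortran.dg's dg.exp defaults DEFAULT_FFLAGS to include -pedantic-errors
# when the variable is not set externally.  We emulate the relevant lines
# on a temporary stand-in file and grep them.
sample=$(mktemp)
cat > "$sample" <<'EOF'
# Excerpt approximating gcc/testsuite/gfortran.dg/dg.exp
if ![info exists DEFAULT_FFLAGS] then {
    set DEFAULT_FFLAGS " -pedantic-errors"
}
EOF
found=$(grep -n "pedantic-errors" "$sample")
echo "$found"
rm -f "$sample"
```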

Does this answer your question?

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [Linaro-TCWG-CI] 3 patches in gcc: FAIL: 260 regressions on arm

2024-04-23 Thread Maxim Kuvyrkov
Hi Qing,

> On Apr 22, 2024, at 17:53, Qing Zhao  wrote:
> 
> Hi, Maxim, 
> 
> Thanks for your quick reply.
> 
> Yes, I see now.
> 
> However, my patch set includes 4 patches, and the last one is the testing 
> case adjustment patch that should fix all the testing failures. I am not sure 
> whether the last patch was applied or not (from my understanding, the last 
> patch was not applied). 

Correct.  The report says ...
 | 3 patches in gcc
 | Patchwork URL: https://patchwork.sourceware.org/patch/88759
 | da63cf36d84 Add testing cases for flexible array members in unions and alone 
in structures.
 | 9a83cd642a0 C and C++ FE changes to support flexible array members in unions 
and alone in structures.
 | 513291ec443 Documentation change
 | ... applied on top of baseline commit:
 | 9f10005dbc9 RISC-V: Add xfail test case for wv insn register overlap
... which means the results are for the first 3 patches.

You can also see at [1] that the results are OK when all 4 patches are applied.

It's fine for review purposes to separate test-case adjustments into a 
standalone patch, but please squash them with the main patch when merging, so 
that GCC history does not include unnecessary regressions.
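
A hedged sketch of that squash step (the repository, file names, and commit 
messages below are invented for illustration; `git reset --soft` is one of 
several ways to fold a standalone test-case commit into the main patch):

```shell
# Build a throwaway repo with a "main" patch followed by a standalone
# test-case patch, then fold the two into one commit so the project
# history never contains a point where the tests regress.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "baseline"
echo fe > fe.c
git add fe.c
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "C FE changes"
echo test > test.c
git add test.c
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "testsuite adjustments"
# Squash the last two commits into one before pushing:
git reset -q --soft HEAD~2
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "C FE changes (with tests)"
log=$(git log --oneline)
echo "$log"
```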

> 
> Another question is, are all the patches posted to GNU toolchain mailing 
> lists tested by Linaro?

We aim to test all patches processed by patchwork.  However, patchwork can 
process only patches that follow standard git patch rules.  Also, some patches 
fail to apply to current mainline, so we skip them.  See [2] for more details.

[1] https://patchwork.sourceware.org/project/gcc/list/?series=33005
[2] https://patchwork.sourceware.org/project/gcc/list/

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> Qing
> 
>> On Apr 22, 2024, at 09:45, Maxim Kuvyrkov  wrote:
>> 
>> Hi Qing,
>> 
>> Linaro runs pre-commit CI in which we test patches posted to GNU toolchain 
>> mailing lists.  Your patch series showed regressions in our pre-commit 
>> testing, and CI sent you the report below.
>> 
>> The goal of pre-commit CI is to catch problematic patches before they are 
>> merged.
>> 
>> Does this answer your question?
>> 
>> Kind regards,
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>>> On Apr 22, 2024, at 17:33, Qing Zhao  wrote:
>>> 
>>> Hi,
>>> 
>>> I am wondering why I got the following message?
>>> 
>>> I only sent patch review request to 
>>> gcc-patc...@gcc.gnu.org<mailto:gcc-patc...@gcc.gnu.org>, never committed 
>>> the patches to any public repository.
>>> Are there anyone else applied the patches to sourceware and tested them?
>>> I have posted many patch review request to 
>>> gcc-patc...@gcc.gnu.org<mailto:gcc-patc...@gcc.gnu.org>, this is the first 
>>> time I got such message.
>>> 
>>> Thanks a lot for your help.
>>> 
>>> Qing
>>> 
>>> 
>>> On Apr 20, 2024, at 00:27, ci_not...@linaro.org wrote:
>>> 
>>> Dear contributor, our automatic CI has detected problems related to your 
>>> patch(es).  Please find some details below.  If you have any questions, 
>>> please follow up on linaro-toolchain@lists.linaro.org mailing list, 
>>> Libera's #linaro-tcwg channel, or ping your favourite Linaro toolchain 
>>> developer on the usual project channel.
>>> 
>>> We appreciate that it might be difficult to find the necessary logs or 
>>> reproduce the issue locally. If you can't get what you need from our CI 
>>> within minutes, let us know and we will be happy to help.
>>> 
>>> In gcc_check master-arm after:
>>> 
>>> | 3 patches in gcc
>>> | Patchwork URL: https://patchwork.sourceware.org/patch/88759
>>> | da63cf36d84 Add testing cases for flexible array members in unions and 
>>> alone in structures.
>>> | 9a83cd642a0 C and C++ FE changes to support flexible array members in 
>>> unions and alone in structures.
>>> | 513291ec443 Documentation change
>>> | ... applied on top of baseline commit:
>>> | 9f10005dbc9 RISC-V: Add xfail test case for wv insn register overlap
>>> 
>>> FAIL: 260 regressions
>>> 
>>> regressions.sum:
>>> === g++ tests ===
>>> 
>>> Running g++:g++.dg/dg.exp ...
>>> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++14  (test for 
>>> errors, line 5)
>>> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++14 (test for excess 
>>> errors)
>>> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++17  (test for 
> ...

Re: [Linaro-TCWG-CI] 3 patches in gcc: FAIL: 260 regressions on arm

2024-04-22 Thread Maxim Kuvyrkov
Hi Qing,

Linaro runs pre-commit CI in which we test patches posted to GNU toolchain 
mailing lists.  Your patch series showed regressions in our pre-commit testing, 
and CI sent you the report below.

The goal of pre-commit CI is to catch problematic patches before they are merged.

Does this answer your question?

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Apr 22, 2024, at 17:33, Qing Zhao  wrote:
> 
> Hi,
> 
> I am wondering why I got the following message?
> 
> I only sent patch review request to 
> gcc-patc...@gcc.gnu.org<mailto:gcc-patc...@gcc.gnu.org>, never committed the 
> patches to any public repository.
> Are there anyone else applied the patches to sourceware and tested them?
> I have posted many patch review request to 
> gcc-patc...@gcc.gnu.org<mailto:gcc-patc...@gcc.gnu.org>, this is the first 
> time I got such message.
> 
> Thanks a lot for your help.
> 
> Qing
> 
> 
> On Apr 20, 2024, at 00:27, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gcc_check master-arm after:
> 
> | 3 patches in gcc
> | Patchwork URL: https://patchwork.sourceware.org/patch/88759
> | da63cf36d84 Add testing cases for flexible array members in unions and 
> alone in structures.
> | 9a83cd642a0 C and C++ FE changes to support flexible array members in 
> unions and alone in structures.
> | 513291ec443 Documentation change
> | ... applied on top of baseline commit:
> | 9f10005dbc9 RISC-V: Add xfail test case for wv insn register overlap
> 
> FAIL: 260 regressions
> 
> regressions.sum:
> === g++ tests ===
> 
> Running g++:g++.dg/dg.exp ...
> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++14  (test for errors, 
> line 5)
> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++14 (test for excess 
> errors)
> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++17  (test for errors, 
> line 5)
> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++17 (test for excess 
> errors)
> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++20  (test for errors, 
> line 5)
> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++20 (test for excess 
> errors)
> FAIL: c-c++-common/builtin-clear-padding-3.c -std=gnu++98  (test for errors, 
> line 5)
> ... and 259 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6897/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6897/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6897/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6897/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1997/artifact/artifacts
> 
> Warning: we do not enable maintainer-mode nor automatically update
> generated files, which may lead to failures if the patch modifies the
> master files.
> 
> ___
> linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
> To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org



Re: Pre-commit execution test for pr113363.f90

2024-04-10 Thread Maxim Kuvyrkov
[CC: Thiago, for GDB crash]

Hi Paul,

The test crashes immediately; it doesn't time out.  You see "timeout" in the 
output because we run all tests under the "timeout" utility.
The backtrace from the crash is:
===
Program received signal SIGSEGV, Segmentation fault.
0xf7da9070 in arena_for_chunk (ptr=0x26208) at arena.c:156
156 arena.c: No such file or directory.
(gdb) bt
#0  0xf7da9070 in arena_for_chunk (ptr=0x26208) at arena.c:156
#1  arena_for_chunk (ptr=0x26208) at arena.c:160
#2  __GI___libc_free (mem=) at malloc.c:3390
#3  0x000125e8 in p ()
at 
/home/maxim.kuvyrkov/tcwg_gnu/abe/snapshots/gcc.git~master/gcc/testsuite/gfortran.dg/pr113363.f90:49
===
which is the line ...

(gdb) up
#3  0x000125e8 in p ()
at 
/home/maxim.kuvyrkov/tcwg_gnu/abe/snapshots/gcc.git~master/gcc/testsuite/gfortran.dg/pr113363.f90:49
49        deallocate (x, y)
(gdb) p y
$1 = ( _data = 0x26238, _vptr = 0x24090 <__vtab_CHARACTER_1_.7>, _len = 10 )

However, if I try to print "x", GDB crashes:
===
(gdb) p x
$2 = ( _data = (0x6568, 
/build/gdb-aPmCGS/gdb-12.1/gdb/value.c:856: internal-error: 
value_contents_bits_eq: Assertion `offset1 + length <= TYPE_LENGTH 
(val1->enclosing_type) * TARGET_CHAR_BIT' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
- Backtrace -
0x911165 ???
0xb69e3b ???
0xb69fdd ???
0xc9b135 ???
0xb78c97 ???
0x9d6d65 ???
0x9d7513 ???
0x9d6813 ???
0xb74e23 ???
0x9d66e7 ???
0xb74e23 ???
0x92b193 ???
0xb74f5b ???
0xa81f3f ???
0xa820f9 ???
0x935699 ???
0xb41929 ???
0x9c7023 ???
0x9c72e9 ???
0x9c7973 ???
-
/build/gdb-aPmCGS/gdb-12.1/gdb/value.c:856: internal-error: 
value_contents_bits_eq: Assertion `offset1 + length <= TYPE_LENGTH 
(val1->enclosing_type) * TARGET_CHAR_BIT' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
===

So try looking at the dumps of how "x" is created/freed.

The testcase is at https://people.linaro.org/~maxim.kuvyrkov/pr113363.exe ; it 
was compiled with:

/home/maxim.kuvyrkov/tcwg_gnu/abe/builds/destdir/armv8l-unknown-linux-gnueabihf/bin/armv8l-unknown-linux-gnueabihf-gfortran
 
/home/maxim.kuvyrkov/tcwg_gnu/abe/snapshots/gcc.git~master/gcc/testsuite/gfortran.dg/pr113363.f90
 -fdiagnostics-plain-output -fdiagnostics-plain-output -g -pedantic-errors 
-L/home/maxim.kuvyrkov/tcwg_gnu/abe/builds/armv8l-unknown-linux-gnueabihf/armv8l-unknown-linux-gnueabihf/gcc-gcc.git~master-stage2/armv8l-unknown-linux-gnueabihf/./libgfortran/.libs
 
-L/home/maxim.kuvyrkov/tcwg_gnu/abe/builds/armv8l-unknown-linux-gnueabihf/armv8l-unknown-linux-gnueabihf/gcc-gcc.git~master-stage2/armv8l-unknown-linux-gnueabihf/./libatomic/.libs
 -lm -o ./pr113363.exe

It should run on any stock Ubuntu 22.04 rootfs for the armhf architecture.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Apr 10, 2024, at 18:31, Paul Richard Thomas 
>  wrote:
> 
> Hi there,
> 
> Thanks for the heads-up on gcc patch #88281: 6 regressions on arm.
> 
> I see from the log that the test timed-out and the core was dumped. I
> cannot reproduce this and can see nothing in the tree-dump that might cause
> a time out. I would appreciate some help on where the fault lies and what
> the cause might be.
> 
> Best regards
> 
> Paul Thomas
> ___
> linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
> To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org



Re: [Linaro-TCWG-CI] gdb patch #87686: FAIL: 7 regressions: 16 progressions on arm

2024-03-29 Thread Maxim Kuvyrkov
Hi Christina,

This is a false-positive report -- see 
https://sourceware.org/bugzilla/show_bug.cgi?id=31575 for details.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Mar 29, 2024, at 10:22, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gdb_check master-arm after:
> 
>  | gdb patch https://patchwork.sourceware.org/patch/87686
>  | Author: Christina Schimpe 
>  | Date:   Wed Mar 27 07:47:37 2024 +
>  | 
>  | gdb: Make tagged pointer support configurable.
>  | 
>  | The gdbarch function gdbarch_remove_non_address_bits adjusts addresses 
> to
>  | enable debugging of programs with tagged pointers on Linux, for 
> instance for
>  | ARM's feature top byte ignore (TBI).
>  | Once the function is implemented for an architecture, it adjusts 
> addresses for
>  | memory access, breakpoints and watchpoints.
>  | ... 12 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | 1678a15b694 Automatic date update in version.in
> 
> FAIL: 7 regressions: 16 progressions
> 
> regressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/verylong.exp ...
> FAIL: gdb.ada/verylong.exp: print (x / 4) * 2
> FAIL: gdb.ada/verylong.exp: print +x
> FAIL: gdb.ada/verylong.exp: print -x
> FAIL: gdb.ada/verylong.exp: print x
> FAIL: gdb.ada/verylong.exp: print x - 99 + 1
> FAIL: gdb.ada/verylong.exp: print x / 2
> FAIL: gdb.ada/verylong.exp: print x = 170141183460469231731687303715884105727
> ... and 1 more entries
> 
> progressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/convvar_comp.exp ...
> FAIL: gdb.ada/convvar_comp.exp: print $item.started
> FAIL: gdb.ada/convvar_comp.exp: print item.started
> FAIL: gdb.ada/convvar_comp.exp: set variable $item := item
> 
> Running gdb:gdb.ada/enum_idx_packed.exp ...
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: print small
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: print multi_multi(1,3)
> ... and 18 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2024/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2024/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2024/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gdb_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2024/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/984/artifact/artifacts
> 
> Warning: we do not enable maintainer-mode nor automatically update
> generated files, which may lead to failures if the patch modifies the
> master files.




Re: [Linaro-TCWG-CI] gdb patch #87793: FAIL: 7 regressions: 16 progressions on arm

2024-03-29 Thread Maxim Kuvyrkov
Hi Kevin,

This is a false-positive report -- see 
https://sourceware.org/bugzilla/show_bug.cgi?id=31575 for details.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Mar 29, 2024, at 10:48, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gdb_check master-arm after:
> 
>  | gdb patch https://patchwork.sourceware.org/patch/87793
>  | Author: Kevin Buettner 
>  | Date:   Thu Mar 28 15:53:14 2024 -0700
>  | 
>  | New test: gdb.base/check-errno.exp
>  | 
>  | Printing the value of 'errno' from GDB is sometimes problematic.  The
>  | situation has improved in recent years, though there are still
>  | scenarios for which "print errno" doesn't work.
>  | 
>  | The test, gdb.base/check-errno.exp, introduced by this commit,
>  | ... 174 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | 1678a15b694 Automatic date update in version.in
> 
> FAIL: 7 regressions: 16 progressions
> 
> regressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/verylong.exp ...
> FAIL: gdb.ada/verylong.exp: print (x / 4) * 2
> FAIL: gdb.ada/verylong.exp: print +x
> FAIL: gdb.ada/verylong.exp: print -x
> FAIL: gdb.ada/verylong.exp: print x
> FAIL: gdb.ada/verylong.exp: print x - 99 + 1
> FAIL: gdb.ada/verylong.exp: print x / 2
> FAIL: gdb.ada/verylong.exp: print x = 170141183460469231731687303715884105727
> ... and 1 more entries
> 
> progressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/convvar_comp.exp ...
> FAIL: gdb.ada/convvar_comp.exp: print item.started
> FAIL: gdb.ada/convvar_comp.exp: set variable $item := item
> FAIL: gdb.ada/convvar_comp.exp: print $item.started
> 
> Running gdb:gdb.ada/enum_idx_packed.exp ...
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: print small
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: print multi
> ... and 18 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2028/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2028/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2028/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gdb_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2028/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/984/artifact/artifacts
> 
> Warning: we do not enable maintainer-mode nor automatically update
> generated files, which may lead to failures if the patch modifies the
> master files.




Re: [Linaro-TCWG-CI] 8 patches in gdb: FAIL: 7 regressions: 16 progressions on arm

2024-03-29 Thread Maxim Kuvyrkov
Hi Abdul,
Hi Nils,

This is a false-positive report -- see 
https://sourceware.org/bugzilla/show_bug.cgi?id=31575 for details.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Mar 29, 2024, at 10:52, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gdb_check master-arm after:
> 
>  | 8 patches in gdb
>  | Patchwork URL: https://patchwork.sourceware.org/patch/87768
>  | 96ab853e5d4 gdb, mi: Skip trampoline functions for the -stack-list-frames 
> command.
>  | e2c217f4226 gdb: Skip trampoline functions for the return command.
>  | 687b0373e1e gdb: Skip trampoline functions for the up command.
>  | 5f8cc05162b gdb: Skip trampoline functions for the finish and 
> reverse-finish commands.
>  | 287546c56c8 gdb: Skip trampoline frames for the backtrace command.
>  | ... and 3 more patches in gdb
>  | ... applied on top of baseline commit:
>  | 1678a15b694 Automatic date update in version.in
> 
> FAIL: 7 regressions: 16 progressions
> 
> regressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/verylong.exp ...
> FAIL: gdb.ada/verylong.exp: print (x / 4) * 2
> FAIL: gdb.ada/verylong.exp: print +x
> FAIL: gdb.ada/verylong.exp: print -x
> FAIL: gdb.ada/verylong.exp: print x
> FAIL: gdb.ada/verylong.exp: print x - 99 + 1
> FAIL: gdb.ada/verylong.exp: print x / 2
> FAIL: gdb.ada/verylong.exp: print x = 170141183460469231731687303715884105727
> ... and 1 more entries
> 
> progressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/convvar_comp.exp ...
> FAIL: gdb.ada/convvar_comp.exp: set variable $item := item
> FAIL: gdb.ada/convvar_comp.exp: print $item.started
> FAIL: gdb.ada/convvar_comp.exp: print item.started
> 
> Running gdb:gdb.ada/enum_idx_packed.exp ...
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: ptype multi
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: ptype small
> ... and 18 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2026/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2026/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2026/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gdb_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2026/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/984/artifact/artifacts
> 
> Warning: we do not enable maintainer-mode nor automatically update
> generated files, which may lead to failures if the patch modifies the
> master files.




Re: [Linaro-TCWG-CI] 4 patches in gdb: FAIL: 7 regressions: 16 progressions on arm

2024-03-29 Thread Maxim Kuvyrkov
Hi Gustavo,

This is a false-positive report -- see 
https://sourceware.org/bugzilla/show_bug.cgi?id=31575 for details.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Mar 29, 2024, at 10:54, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gdb_check master-arm after:
> 
>  | 4 patches in gdb
>  | Patchwork URL: https://patchwork.sourceware.org/patch/87792
>  | 81f55c5f65e gdb: Add new remote packet to check if address is tagged
>  | c72332af687 gdb: aarch64: Remove MTE address checking from memtag_matches_p
>  | d1ba7d95516 gdb: aarch64: Move MTE address check out of set_memtag
>  | 9769943c7d4 gdb: aarch64: Remove MTE address checking from get_memtag
>  | ... applied on top of baseline commit:
>  | 1678a15b694 Automatic date update in version.in
> 
> FAIL: 7 regressions: 16 progressions
> 
> regressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/verylong.exp ...
> FAIL: gdb.ada/verylong.exp: print (x / 4) * 2
> FAIL: gdb.ada/verylong.exp: print +x
> FAIL: gdb.ada/verylong.exp: print -x
> FAIL: gdb.ada/verylong.exp: print x
> FAIL: gdb.ada/verylong.exp: print x - 99 + 1
> FAIL: gdb.ada/verylong.exp: print x / 2
> FAIL: gdb.ada/verylong.exp: print x = 170141183460469231731687303715884105727
> ... and 1 more entries
> 
> progressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/convvar_comp.exp ...
> FAIL: gdb.ada/convvar_comp.exp: print item.started
> FAIL: gdb.ada/convvar_comp.exp: print $item.started
> FAIL: gdb.ada/convvar_comp.exp: set variable $item := item
> 
> Running gdb:gdb.ada/enum_idx_packed.exp ...
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: ptype small
> FAIL: gdb.ada/enum_idx_packed.exp: scenario=minimal: print multi_multi(2)
> ... and 18 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2027/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2027/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2027/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gdb_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/2027/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/984/artifact/artifacts
> 
> Warning: we do not enable maintainer-mode or automatically update
> generated files, which may lead to failures if the patch modifies the
> master files.


___
linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org


Re: [Linaro-TCWG-CI] gcc-14-9157-gff442719cdb: slowed down by 23% - 549.fotonik3d_r on aarch64 O3

2024-03-25 Thread Maxim Kuvyrkov
Hi Richard,

Heads up, our benchmarking CI flagged your commit as causing a 23% regression in 
549.fotonik3d_r on Cortex-A57 at -O3.

Do you have internal benchmarks for this change?  

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org

> On Mar 24, 2024, at 03:43, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1181 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In CI config tcwg_bmk-code_speed-cpu2017rate/gnu-aarch64-master-O3 after:
> 
>  | commit gcc-14-9157-gff442719cdb
>  | Author: Richard Sandiford 
>  | Date:   Fri Feb 23 14:12:55 2024 +
>  | 
>  | aarch64: Spread out FPR usage between RA regions [PR113613]
>  | 
>  | early-ra already had code to do regrename-style "broadening"
>  | of the allocation, to promote scheduling freedom.  However,
>  | the pass divides the function into allocation regions
>  | and this broadening only worked within a single region.
>  | This meant that if a basic block contained one subblock
>  | ... 30 lines of the commit log omitted.
> 
> the following benchmarks slowed down by more than 3%:
> - slowed down by 23% - 549.fotonik3d_r - from 16467 to 20213 perf samples
> the following hot functions slowed down by more than 15% (but their 
> benchmarks slowed down by less than 3%):
> - slowed down by 88% - 549.fotonik3d_r:[.] __material_mod_MOD_mat_updatee - 
> from 4373 to 8204 perf samples
> 
> The configuration of this build is:
> The reproducer instructions below can be used to rebuild both the "first_bad" 
> and "last_good" cross-toolchains used in this bisection.  Naturally, the 
> scripts will fail when triggering benchmarking jobs if you don't have access 
> to Linaro TCWG CI.
> 
> Configuration:
> - Benchmark: SPEC CPU2017
> - Toolchain: GCC + Glibc + GNU Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: O3
> - Hardware: NVidia TX1 4x Cortex-A57
> 
> This benchmarking CI is a work in progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-cpu2017rate--gnu-aarch64-master-O3-build/199/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-cpu2017rate--gnu-aarch64-master-O3-build/198/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/ff442719cdb64c9df9d069af88e90d51bee6fb56/tcwg_bmk-code_speed-cpu2017rate/gnu-aarch64-master-O3/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/ff442719cdb64c9df9d069af88e90d51bee6fb56
> 
> List of configurations that regressed due to this commit :
> * tcwg_bmk-code_speed-cpu2017rate
> ** gnu-aarch64-master-O3
> *** slowed down by 23% - 549.fotonik3d_r
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/ff442719cdb64c9df9d069af88e90d51bee6fb56/tcwg_bmk-code_speed-cpu2017rate/gnu-aarch64-master-O3/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-cpu2017rate--gnu-aarch64-master-O3-build/199/artifact/artifacts




Re: [Linaro-TCWG-CI] gdb-14-branchpoint-1411-g033bc67bdb0: FAIL: 2 regressions on arm

2024-03-12 Thread Maxim Kuvyrkov
> On Mar 11, 2024, at 20:52, Jonathan Wakely  wrote:
> 
> On Mon, 11 Mar 2024 at 16:38, Maxim Kuvyrkov  
> wrote:
>> 
>>> On Jan 30, 2024, at 23:03, ci_not...@linaro.org wrote:
>>> 
>>> Dear contributor, our automatic CI has detected problems related to your 
>>> patch(es).  Please find some details below.  If you have any questions, 
>>> please follow up on linaro-toolchain@lists.linaro.org mailing list, 
>>> Libera's #linaro-tcwg channel, or ping your favourite Linaro toolchain 
>>> developer on the usual project channel.
>>> 
>>> We appreciate that it might be difficult to find the necessary logs or 
>>> reproduce the issue locally. If you can't get what you need from our CI 
>>> within minutes, let us know and we will be happy to help.
>>> 
>>> We track this report status in https://linaro.atlassian.net/browse/GNU-1136 
>>> , please let us know if you are looking at the problem and/or when you have 
>>> a fix.
>>> 
>>> In  master-arm after:
>>> 
>>> | commit gdb-14-branchpoint-1411-g033bc67bdb0
>>> | Author: Tom Tromey 
>>> | Date:   Tue Sep 19 17:39:31 2023 -0600
>>> |
>>> | Only search types in cp_lookup_rtti_type
>>> |
>>> | This changes cp_lookup_rtti_type to only search for types -- not
>>> | functions or variables.  Due to the symbol-matching hack, this could
>>> | just use SEARCH_TYPE_DOMAIN, but I think it's better to be clear; also
>>> | I hold on to some hope that perhaps the hack can someday be removed.
>>> 
>>> FAIL: 2 regressions
>>> 
>>> regressions.sum:
>>> === libstdc++ tests ===
>>> 
>>> Running libstdc++:libstdc++-prettyprinters/prettyprinters.exp ...
>>> FAIL: libstdc++-prettyprinters/cxx11.cc print ecmiaow
>>> FAIL: libstdc++-prettyprinters/cxx11.cc print emiaow
>> 
>> Hi Tom,
>> Hi Jonathan,
>> 
>> After the above GDB patch I see 2 new failures both for aarch64-linux-gnu 
>> and arm-linux-gnueabihf in GCC's libstdc++ testsuite.  The log [1] says:
>> ===
>> $35 = warning: RTTI symbol not found for class 'main::custom_cat'
>> warning: RTTI symbol not found for class 'main::custom_cat'
>> got: $35 = warning: RTTI symbol not found for class 'main::custom_cat'
>> FAIL: libstdc++-prettyprinters/cxx11.cc print emiaow
>> skipping: warning: RTTI symbol not found for class 'main::custom_cat'
>> std::error_code = {std::_V2::error_category: 42}
>> skipping: std::error_code = {std::_V2::error_category: 42}
>> $36 = warning: RTTI symbol not found for class 'main::custom_cat'
>> warning: RTTI symbol not found for class 'main::custom_cat'
>> got: $36 = warning: RTTI symbol not found for class 'main::custom_cat'
>> FAIL: libstdc++-prettyprinters/cxx11.cc print ecmiaow
>> ===
>> 
>> Which way should I dig -- GDB or libstdc++?  Does this look like libstdc++ 
>> testcase needs an update?
> 
> 
> Just a guess, but maybe making the type global instead of a local type
> (with no linkage) will solve it:
> 
> --- a/libstdc++-v3/testsuite/libstdc++-prettyprinters/cxx11.cc
> +++ b/libstdc++-v3/testsuite/libstdc++-prettyprinters/cxx11.cc
> @@ -63,6 +63,11 @@ struct datum
> 
> std::unique_ptr global;
> 
> +struct custom_cat : std::error_category {
> +  const char* name() const noexcept { return "miaow"; }
> +  std::string message(int) const { return ""; }
> +};
> +
> int
> main()
> {
> @@ -179,10 +184,7 @@ main()
>    std::error_condition ecinval =
>      std::make_error_condition(std::errc::invalid_argument);
>    // { dg-final { note-test ecinval {std::error_condition = {"generic": EINVAL}} } }
> 
> -  struct custom_cat : std::error_category {
> -    const char* name() const noexcept { return "miaow"; }
> -    std::string message(int) const { return ""; }
> -  } cat;
> +  custom_cat cat;
>    std::error_code emiaow(42, cat);
>    // { dg-final { note-test emiaow {std::error_code = {custom_cat: 42}} } }
>    std::error_condition ecmiaow(42, cat);
> 
> 
> If this works, I think this change to the test is reasonable. A local
> type as an error_category probably doesn't make sense in real code.
> 
> But I don't know if this is revealing some issue with Tom's patch and
> how it handles local types (or any types without linkage).

Hi Jonathan,

Your above change to cxx11.cc fixes the failures [1].  Would you please commit 
it?

Thanks!

[1] 
https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-precommit/1/artifact/artifacts/artifacts.precommit/notify/mail-body.txt/*view*/


--
Maxim Kuvyrkov
https://www.linaro.org




Re: [Linaro-TCWG-CI] gdb-14-branchpoint-1356-g7737b133640: FAIL: 1 regressions on arm

2024-03-12 Thread Maxim Kuvyrkov
> On Mar 12, 2024, at 00:14, Tom Tromey  wrote:
> 
>>>>>> Maxim Kuvyrkov  writes:
> 
>>> | commit gdb-14-branchpoint-1356-g7737b133640
>>> | Author: Tom Tromey 
>>> | Date:   Tue Jan 9 11:47:17 2024 -0700
>>> | 
>>> | Handle DW_AT_endianity on enumeration types
>>> | 
>>> | A user found that gdb would not correctly print a field from an Ada
>>> | record using the scalar storage order feature.  We tracked this down
>>> | to a combination of problems.
>>> | 
>>> | First, GCC did not emit DW_AT_endianity on the enumeration type.
>>> | ... 14 lines of the commit log omitted.
> 
>> I see the above failure for both aarch64-linux-gnu and
>> arm-linux-gnueabihf in our testing.  The log shows ([1]):
> 
>> (gdb) PASS: gdb.ada/scalar_storage.exp: print V_LE
>> get_compiler_info: gcc-14-0-1
> 
>> Any idea what can be causing this?
> 
>> This failure happens in CI configurations where we track tip-of-trunk GCC.
> 
> This failure is what I would expect if your compiler does not have the
> fix.  Can you see if your gcc includes this change?
> 
> commit 5d8b60effc7268448a94fbbbad923ab6871252cd
> Author: Eric Botcazou 
> Date:   Wed Jan 10 13:23:46 2024 +0100
> 
>Fix debug info for enumeration types with reverse Scalar_Storage_Order

Ah, now I understand.  While we do have the above commit in our GCC sources (we 
build tip-of-trunk GCC in this CI configuration), we don't enable the Ada 
language.  So the testsuite harness detects the GCC version as 14.0 and enables 
the test, but the actual gnat compiler comes from the distro package, which is 
much older.

We will consider enabling the Ada language in our CI builds, which should fix this.

Thanks for helping troubleshoot this!

--
Maxim Kuvyrkov
https://www.linaro.org




Re: [Linaro-TCWG-CI] gcc-14-8680-g2f14c0dbb78: FAIL: 3 regressions on arm

2024-03-11 Thread Maxim Kuvyrkov
> On Feb 1, 2024, at 16:07, ci_notify--- via Gcc-regression 
>  wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1140 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In gcc_check master-arm after:
> 
>  | commit gcc-14-8680-g2f14c0dbb78
>  | Author: Roger Sayle 
>  | Date:   Thu Feb 1 06:10:42 2024 +
>  | 
>  | PR target/113560: Enhance is_widening_mult_rhs_p.
>  | 
>  | This patch resolves PR113560, a code quality regression from GCC12
>  | affecting x86_64, by enhancing the middle-end's tree-ssa-math-opts.cc
>  | to recognize more instances of widening multiplications.
>  | 
>  | The widening multiplication perception code identifies cases like:
>  | ... 116 lines of the commit log omitted.
> 
> FAIL: 3 regressions
> 
> regressions.sum:
> === gcc tests ===
> 
> Running gcc:gcc.target/arm/arm.exp ...
> FAIL: gcc.target/arm/wmul-5.c scan-assembler umlal
> FAIL: gcc.target/arm/wmul-6.c scan-assembler smlalbb
> FAIL: gcc.target/arm/wmul-7.c scan-assembler umlal

Hi Roger,

Your patch seems to regress the above 3 tests for all 32-bit ARM targets (see 
[1]).  Would you please check if the regressions can be avoided?

For reference, here are configure options we use for arm-linux-gnueabihf 
cross-toolchain: [2].

[1] https://linaro.atlassian.net/browse/GNU-1140
[2] 
https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-arm-build/1303/artifact/artifacts/notify/configure-make.txt/*view*/
 

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1650/artifact/artifacts/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1650/artifact/artifacts/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1650/artifact/artifacts/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1650/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1649/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/2f14c0dbb789852947cb58fdf7d3162413f053fa/tcwg_gcc_check/master-arm/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/2f14c0dbb789852947cb58fdf7d3162413f053fa
> 
> List of configurations that regressed due to this commit :
> * tcwg_gcc_check
> ** master-arm
> *** FAIL: 3 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/2f14c0dbb789852947cb58fdf7d3162413f053fa/tcwg_gcc_check/master-arm/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1650/artifact/artifacts



Re: [Linaro-TCWG-CI] gdb-14-branchpoint-1411-g033bc67bdb0: FAIL: 2 regressions on arm

2024-03-11 Thread Maxim Kuvyrkov
> On Jan 30, 2024, at 23:03, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1136 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In  master-arm after:
> 
>  | commit gdb-14-branchpoint-1411-g033bc67bdb0
>  | Author: Tom Tromey 
>  | Date:   Tue Sep 19 17:39:31 2023 -0600
>  | 
>  | Only search types in cp_lookup_rtti_type
>  | 
>  | This changes cp_lookup_rtti_type to only search for types -- not
>  | functions or variables.  Due to the symbol-matching hack, this could
>  | just use SEARCH_TYPE_DOMAIN, but I think it's better to be clear; also
>  | I hold on to some hope that perhaps the hack can someday be removed.
> 
> FAIL: 2 regressions
> 
> regressions.sum:
> === libstdc++ tests ===
> 
> Running libstdc++:libstdc++-prettyprinters/prettyprinters.exp ...
> FAIL: libstdc++-prettyprinters/cxx11.cc print ecmiaow
> FAIL: libstdc++-prettyprinters/cxx11.cc print emiaow

Hi Tom,
Hi Jonathan,

After the above GDB patch I see 2 new failures both for aarch64-linux-gnu and 
arm-linux-gnueabihf in GCC's libstdc++ testsuite.  The log [1] says:
===
$35 = warning: RTTI symbol not found for class 'main::custom_cat'
warning: RTTI symbol not found for class 'main::custom_cat'
got: $35 = warning: RTTI symbol not found for class 'main::custom_cat'
FAIL: libstdc++-prettyprinters/cxx11.cc print emiaow
skipping: warning: RTTI symbol not found for class 'main::custom_cat'
std::error_code = {std::_V2::error_category: 42}
skipping: std::error_code = {std::_V2::error_category: 42}
$36 = warning: RTTI symbol not found for class 'main::custom_cat'
warning: RTTI symbol not found for class 'main::custom_cat'
got: $36 = warning: RTTI symbol not found for class 'main::custom_cat'
FAIL: libstdc++-prettyprinters/cxx11.cc print ecmiaow
===

Which way should I dig -- GDB or libstdc++?  Does this look like libstdc++ 
testcase needs an update?

[1] 
https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-build/lastSuccessfulBuild/artifact/artifacts/00-sumfiles/libstdc++.log.xz
 

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-arm-build/987/artifact/artifacts/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-arm-build/987/artifact/artifacts/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-arm-build/987/artifact/artifacts/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gnu_native_check_gcc master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-arm-build/987/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-arm-build/986/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gdb/sha1/033bc67bdb0c74d1c63a1998a7e9679a408ba6e4/tcwg_gnu_native_check_gcc/master-arm/reproduction_instructions.txt
> 
> Full commit : 
> https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=033bc67bdb0c74d1c63a1998a7e9679a408ba6e4
> 
> List of configurations that regressed due to this commit :
> * tcwg_gnu_native_check_gcc
> ** master-arm
> *** FAIL: 2 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gdb/sha1/033bc67bdb0c74d1c63a1998a7e9679a408ba6e4/tcwg_gnu_native_check_gcc/master-arm/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-arm-build/987/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc-14-8492-g1a8261e047f: FAIL: 3 regressions on arm

2024-03-11 Thread Maxim Kuvyrkov
> On Jan 30, 2024, at 00:35, ci_notify--- via Gcc-regression 
>  wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1132 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In gcc_check master-arm after:
> 
>  | commit gcc-14-8492-g1a8261e047f
>  | Author: Richard Sandiford 
>  | Date:   Mon Jan 29 12:33:08 2024 +
>  | 
>  | vect: Tighten vect_determine_precisions_from_range [PR113281]
>  | 
>  | This was another PR caused by the way that
>  | vect_determine_precisions_from_range handles shifts.  We tried to
>  | narrow 32768 >> x to a 16-bit shift based on range information for
>  | the inputs and outputs, with vect_recog_over_widening_pattern
>  | (after PR110828) adjusting the shift amount.  But this doesn't
>  | ... 36 lines of the commit log omitted.
> 
> FAIL: 3 regressions
> 
> regressions.sum:
> === gcc tests ===
> 
> Running gcc:gcc.target/arm/simd/simd.exp ...
> FAIL: gcc.target/arm/simd/mve-vshr.c scan-assembler-times 
> vneg.s[0-9]+\\tq[0-9]+, q[0-9]+ 6
> FAIL: gcc.target/arm/simd/mve-vshr.c scan-assembler-times 
> vshl.s[0-9]+\\tq[0-9]+, q[0-9]+ 3
> FAIL: gcc.target/arm/simd/mve-vshr.c scan-assembler-times 
> vshl.u[0-9]+\\tq[0-9]+, q[0-9]+ 3

Hi Richard,

Could you please check whether the above tests need an update after your patch? 
 We see these tests now consistently failing across all 32-bit ARM 
configurations that we track (see [1]).

As an example, our configure options for arm-linux-gnueabihf that show the 
failure are at [2].

[1] https://linaro.atlassian.net/browse/GNU-1132

[2] 
https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/lastSuccessfulBuild/artifact/artifacts/notify/configure-make.txt/*view*/

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> === Results Summary ===
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1636/artifact/artifacts/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1636/artifact/artifacts/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1636/artifact/artifacts/sumfiles/xfails.xfail
>  .
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1636/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1635/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/1a8261e047f7a2c2b0afb95716f7615cba718cd1/tcwg_gcc_check/master-arm/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/1a8261e047f7a2c2b0afb95716f7615cba718cd1
> 
> List of configurations that regressed due to this commit :
> * tcwg_gcc_check
> ** master-arm
> *** FAIL: 3 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/1a8261e047f7a2c2b0afb95716f7615cba718cd1/tcwg_gcc_check/master-arm/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1636/artifact/artifacts



Re: [Linaro-TCWG-CI] gdb-14-branchpoint-1356-g7737b133640: FAIL: 1 regressions on arm

2024-03-11 Thread Maxim Kuvyrkov
> On Jan 27, 2024, at 17:25, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1121 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In  master-arm after:
> 
>  | commit gdb-14-branchpoint-1356-g7737b133640
>  | Author: Tom Tromey 
>  | Date:   Tue Jan 9 11:47:17 2024 -0700
>  | 
>  | Handle DW_AT_endianity on enumeration types
>  | 
>  | A user found that gdb would not correctly print a field from an Ada
>  | record using the scalar storage order feature.  We tracked this down
>  | to a combination of problems.
>  | 
>  | First, GCC did not emit DW_AT_endianity on the enumeration type.
>  | ... 14 lines of the commit log omitted.
> 
> FAIL: 1 regressions
> 
> regressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.ada/scalar_storage.exp ...
> FAIL: gdb.ada/scalar_storage.exp: print V_BE

Hi Tom,

I see the above failure for both aarch64-linux-gnu and arm-linux-gnueabihf in 
our testing.  The log shows ([1]):
===
Breakpoint 1, storage () at 
/home/tcwg-buildslave/workspace/tcwg_gnu_2/gdb/gdb/testsuite/gdb.ada/scalar_storage/storage.adb:53
53 Do_Nothing (V_LE'Address);  --  START
(gdb) print V_LE
$1 = (value => 126, another_value => 12, color => green)
(gdb) PASS: gdb.ada/scalar_storage.exp: print V_LE
get_compiler_info: gcc-14-0-1
print V_BE
$2 = (value => 126, another_value => 12, color => red)
(gdb) FAIL: gdb.ada/scalar_storage.exp: print V_BE
===

Any idea what can be causing this?

This failure happens in CI configurations where we track tip-of-trunk GCC.


[1] 
https://ci.linaro.org/job/tcwg_gnu_native_check_gdb--master-aarch64-build/lastSuccessfulBuild/artifact/artifacts/00-sumfiles/gdb.log.xz

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> === Results Summary ===
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gdb--master-arm-build/1076/artifact/artifacts/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gdb--master-arm-build/1076/artifact/artifacts/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gdb--master-arm-build/1076/artifact/artifacts/sumfiles/xfails.xfail
>  .
> 
> The configuration of this build is:
> CI config tcwg_gnu_native_check_gdb master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gdb--master-arm-build/1076/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gdb--master-arm-build/1075/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gdb/sha1/7737b1336402cd4682538620ab996bdb7ad0ea79/tcwg_gnu_native_check_gdb/master-arm/reproduction_instructions.txt
> 
> Full commit : 
> https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=7737b1336402cd4682538620ab996bdb7ad0ea79
> 
> List of configurations that regressed due to this commit :
> * tcwg_gnu_native_check_gdb
> ** master-arm
> *** FAIL: 1 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gdb/sha1/7737b1336402cd4682538620ab996bdb7ad0ea79/tcwg_gnu_native_check_gdb/master-arm/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gdb--master-arm-build/1076/artifact/artifacts




Re: [Linaro-TCWG-CI] gdb patch #85948: FAIL: 7 regressions: 1 progressions on arm

2024-02-21 Thread Maxim Kuvyrkov
> On Feb 21, 2024, at 12:44, Tiezhu Yang  wrote:
> 
> 
> 
> On 02/21/2024 03:16 PM, Maxim Kuvyrkov wrote:
>>> On Feb 21, 2024, at 05:46, Tiezhu Yang  wrote:
>>> 
>>> 
>>> 
>>> On 02/21/2024 03:52 AM, ci_not...@linaro.org wrote:
>>>> If you can't get what you need from our CI within minutes, let us know and 
>>>> we will be happy to help.
>>> 
>>> We can see "Operation not permitted" in the log info,
>>> please try one of the following processes to test:
>>> (1) set ptrace_scope as 0
>>>   $ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
>>>   $ make check-gdb TESTS="gdb.threads/attach-many-short-lived-threads.exp"
>> 
>> Hi Tiezhu,
>> 
>> We already use the above approach for testing.  Also, our CI reports only 
>> regressions, not all failures, and the environment, generally, does not 
>> change whether the test passes or fails.
>> 
>> The problem appears to be the fact that 
>> gdb.threads/attach-many-short-lived-threads.exp tests are flaky, and 
>> detected as such in [1] -- search for "delete all breakpoints".
>> However, because your patch renames the tests, the flaky entries do not 
>> match, and failures are seen as regressions.
> 
> OK, I see. Are there any regressions tested with the following change
> on top of the patch?

Hi Tiezhu,

What I meant is that there are no real regressions from your patch and you can 
ignore the report.

--
Maxim Kuvyrkov
https://www.linaro.org

> 
> diff --git a/gdb/testsuite/lib/gdb.exp b/gdb/testsuite/lib/gdb.exp
> index 7357d56f89a..7e14de44609 100644
> --- a/gdb/testsuite/lib/gdb.exp
> +++ b/gdb/testsuite/lib/gdb.exp
> @@ -373,7 +373,7 @@ proc delete_breakpoints {} {
> #
> set timeout 100
> 
> -    set msg "delete all breakpoints, watchpoints, tracepoints, and catchpoints in delete_breakpoints"
> +    set msg "delete all breakpoints in delete_breakpoints"
>      set deleted 0
>      gdb_test_multiple "delete breakpoints" "$msg" {
> 	-re "Delete all breakpoints, watchpoints, tracepoints, and catchpoints.*y or n.*$" {
> 
> If it is OK for you to avoid regressions, I will squash the above change in 
> the patch and then send a new version.
> 
> Thanks,
> Tiezhu
> 



Re: [Linaro-TCWG-CI] gdb patch #85948: FAIL: 7 regressions: 1 progressions on arm

2024-02-20 Thread Maxim Kuvyrkov
> On Feb 21, 2024, at 05:46, Tiezhu Yang  wrote:
> 
> 
> 
> On 02/21/2024 03:52 AM, ci_not...@linaro.org wrote:
>> If you can't get what you need from our CI within minutes, let us know and 
>> we will be happy to help.
> 
> We can see "Operation not permitted" in the log info,
> please try one of the following processes to test:
> (1) set ptrace_scope as 0
>$ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
>$ make check-gdb TESTS="gdb.threads/attach-many-short-lived-threads.exp"

Hi Tiezhu,

We already use the above approach for testing.  Also, our CI reports only 
regressions, not all failures, and the environment generally does not change 
whether the test passes or fails.

The problem appears to be that the 
gdb.threads/attach-many-short-lived-threads.exp tests are flaky and are 
detected as such in [1] -- search for "delete all breakpoints".  However, 
because your patch renames the tests, the flaky entries no longer match, and 
the failures show up as regressions.
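
The name-based matching failure can be illustrated with a minimal sketch; the entry text and matching logic below are hypothetical, and Linaro's actual xfail machinery differs:

```python
# Hypothetical sketch: a flaky-test list that matches result lines verbatim.
# The entry text is invented for illustration.
flaky_entries = {
    "FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 1: "
    "delete all breakpoints, watchpoints, tracepoints, and catchpoints "
    "in delete_breakpoints",
}

def is_known_flaky(result_line: str) -> bool:
    # Matching is literal, so any change to a test's name or message
    # makes the entry miss and the failure surface as a regression.
    return result_line in flaky_entries

old_line = ("FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 1: "
            "delete all breakpoints, watchpoints, tracepoints, and catchpoints "
            "in delete_breakpoints")
new_line = ("FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 1: "
            "delete all breakpoints in delete_breakpoints")

print(is_known_flaky(old_line))  # True: the flaky entry still matches
print(is_known_flaky(new_line))  # False: the renamed test reads as a regression
```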

Do the "gdb.threads/attach-many-short-lived-threads.exp: iter " tests pass 
reliably for you?

Thanks,

[1] 
https://ci.linaro.org/job/tcwg_gdb_check--master-arm-precommit/1725/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail/*view*/

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [Linaro-TCWG-CI] gcc patch #85713: FAIL: 19 regressions on arm

2024-02-15 Thread Maxim Kuvyrkov
Hi Richard,

This is a false positive.  We had a bit of instability in our CI yesterday, and 
it should be all fixed now.

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Feb 14, 2024, at 23:00, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gcc_check master-arm after:
> 
>  | gcc patch https://patchwork.sourceware.org/patch/85713
>  | Author: Richard Biener 
>  | Date:   Wed Feb 14 13:02:56 2024 +0100
>  | 
>  | tree-optimization/113910 - huge compile time during PTA
>  | 
>  | For the testcase in PR113910 we spend a lot of time in PTA comparing
>  | bitmaps for looking up equivalence class members.  This points to
>  | the very weak bitmap_hash function which effectively hashes set
>  | and a subset of not set bits.  The following improves it by mixing
>  | that weak result with the population count of the bitmap, reducing
>  | ... 19 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | a032c319cb9 testsuite: gdc: Require ucn in gdc.test/runnable/mangle.d etc. 
> [PR104739]
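
The hashing idea described in the commit message (mixing a weak hash with the bitmap's population count) might look roughly like this; a hypothetical illustration, not GCC's actual bitmap_hash, with the weak hash and mixing constant invented:

```python
def weak_hash(bitmap: int) -> int:
    # Stand-in for a weak hash that effectively looks only at a subset of bits.
    return bitmap & 0xFFFF

def mixed_hash(bitmap: int) -> int:
    # Mix the weak result with the number of set bits, so bitmaps that agree
    # on the low bits but differ elsewhere are less likely to collide.
    popcount = bin(bitmap).count("1")
    return (weak_hash(bitmap) * 0x9E3779B1 + popcount) & 0xFFFFFFFF

a = (0b1 << 20) | 0b1010   # same low 16 bits as b ...
b = (0b11 << 20) | 0b1010  # ... but different high bits

print(weak_hash(a) == weak_hash(b))    # True: the weak hash collides
print(mixed_hash(a) == mixed_hash(b))  # False: popcount breaks the collision
```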
> 
> FAIL: 19 regressions
> 
> regressions.sum:
> === g++ tests ===
> 
> Running g++:g++.dg/dg.exp ...
> FAIL: c-c++-common/pr44832.c -std=gnu++17 (test for excess errors)
> FAIL: c-c++-common/pr44832.c -std=gnu++98 (test for excess errors)
> FAIL: g++.dg/opt/pr100541-2.C -std=gnu++14 (test for excess errors)
> FAIL: g++.dg/opt/pr100541-2.C -std=gnu++17 (test for excess errors)
> FAIL: g++.dg/opt/pr100541-2.C -std=gnu++20 (test for excess errors)
> 
> Running g++:g++.dg/pch/pch.exp ...
> ... and 29 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6118/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6118/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6118/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6118/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1717/artifact/artifacts




Re: [Linaro-TCWG-CI] glibc patch #85585: FAIL: 1 regressions on arm

2024-02-15 Thread Maxim Kuvyrkov
> On Feb 15, 2024, at 03:54, H.J. Lu  wrote:
> 
> FAIL: elf/tst-gnu2-tls2
> 
> indicates that your _dl_tlsdesc_dynamic may not preserve all caller-saved
> registers.  Please find out how the test fails.

Hi H.J.,

See below.

...
> FAIL: 1 regressions
> 
> regressions.sum:
>=== glibc tests ===
> 
> Running glibc:elf ...
> FAIL: elf/tst-gnu2-tls2
> 
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/1460/artifact/artifacts/artifacts.precommit/00-sumfiles/

tests.log.1.xz contains output of failed tests --
https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/1460/artifact/artifacts/artifacts.precommit/00-sumfiles/tests.log.1.xz
===
FAIL: elf/tst-gnu2-tls2
original exit status 1
open tst-gnu2-tls2mod0.so
open tst-gnu2-tls2mod1.so
open tst-gnu2-tls2mod2.so
close tst-gnu2-tls2mod0.so
close tst-gnu2-tls2mod1.so
open tst-gnu2-tls2mod0.so
open tst-gnu2-tls2mod1.so
Didn't expect signal from child: got `Segmentation fault'
===

Let me know if you need any help investigating this.

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [Linaro-TCWG-CI] gcc patch #85693: FAIL: 33 regressions on arm

2024-02-14 Thread Maxim Kuvyrkov
Hi Nathaniel,

We enabled guality tests in our CI setup yesterday, and this is part of the 
fallout.  Please ignore this report.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Feb 14, 2024, at 09:55, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gcc_check master-arm after:
> 
>  | gcc patch https://patchwork.sourceware.org/patch/85693
>  | Author: Nathaniel Shead 
>  | Date:   Wed Feb 14 12:34:51 2024 +1100
>  | 
>  | c++: Defer emitting inline variables [PR113708]
>  | 
>  | On Tue, Feb 13, 2024 at 06:08:42PM -0500, Jason Merrill wrote:
>  | > On 2/11/24 08:26, Nathaniel Shead wrote:
>  | > >
>  | > > Currently inline vars imported from modules aren't correctly 
> finalised,
>  | > > which means that import_export_decl gets called at the end of TU
>  | ... 44 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | 5f2cd521347 libstdc++: C++ item p2442 is version 1 only
> 
> FAIL: 33 regressions
> 
> regressions.sum:
> === g++ tests ===
> 
> Running g++:g++.dg/debug/dwarf2/dwarf2.exp ...
> FAIL: g++.dg/debug/dwarf2/inline-var-1.C -std=gnu++17  scan-assembler-times 
> 0x3[^\n\r]* DW_AT_inline 6
> FAIL: g++.dg/debug/dwarf2/inline-var-1.C -std=gnu++20  scan-assembler-times 
> 0x3[^\n\r]* DW_AT_inline 6
> FAIL: g++.dg/debug/dwarf2/inline-var-3.C -std=gnu++17  scan-assembler-times 
> 0x3[^\n\r]* DW_AT_inline 4
> FAIL: g++.dg/debug/dwarf2/inline-var-3.C -std=gnu++20  scan-assembler-times 
> 0x3[^\n\r]* DW_AT_inline 4
> 
> Running g++:g++.dg/goacc/goacc.exp ...
> FAIL: c-c++-common/goacc/routine-nohost-2.c -std=c++20  (test for errors, 
> line 10)
> ... and 35 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6108/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6108/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6108/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6108/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1714/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc patch #85681: FAIL: 3 regressions on arm

2024-02-14 Thread Maxim Kuvyrkov
Hi H.J.,

We enabled guality tests in our CI setup yesterday, and this is part of the 
fallout.  Please ignore this report.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Feb 14, 2024, at 09:36, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gcc_check master-arm after:
> 
>  | gcc patch https://patchwork.sourceware.org/patch/85681
>  | Author: H.J. Lu 
>  | Date:   Tue Feb 13 13:32:44 2024 -0800
>  | 
>  | x86-64: Generate push2/pop2 only if the incoming stack is 16-byte 
> aligned
>  | 
>  | Since push2/pop2 requires 16-byte stack alignment, don't generate them
>  | if the incoming stack isn't 16-byte aligned.
>  | 
>  | gcc/
>  | 
>  | ... 12 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | 5f2cd521347 libstdc++: C++ item p2442 is version 1 only
> 
> FAIL: 3 regressions
> 
> regressions.sum:
> === g++ tests ===
> 
> Running g++:g++.dg/guality/guality.exp ...
> FAIL: g++.dg/guality/pr55665.C -O2 -flto -fno-use-linker-plugin 
> -flto-partition=none  line 23 p == 40
> 
> Running g++:g++.target/arm/arm.exp ...
> XPASS: g++.target/arm/bfloat_cpp_typecheck.C (test for bogus messages, line 
> 10)
> XPASS: g++.target/arm/bfloat_cpp_typecheck.C (test for bogus messages, line 
> 11)
> 
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6104/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6104/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6104/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6104/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1714/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc patch #85687: FAIL: 3 regressions on arm

2024-02-14 Thread Maxim Kuvyrkov
Hi Andrew,

We enabled guality tests in our CI setup yesterday, and this is part of the 
fallout.  Please ignore this report.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Feb 14, 2024, at 09:39, Andrew Pinski (QUIC)  
> wrote:
> 
> This does not make sense at all. The patch only touches aarch64 code and does 
> NOT even touch arm code so there can't be any regressions with arm.
> 
> Thanks,
> Andrew Pinski
> 
>> -Original Message-
>> From: ci_not...@linaro.org 
>> Sent: Tuesday, February 13, 2024 9:34 PM
>> To: Andrew Pinski (QUIC) 
>> Subject: [Linaro-TCWG-CI] gcc patch #85687: FAIL: 3 regressions on arm
>> 
>> Dear contributor, our automatic CI has detected problems related to your
>> patch(es).  Please find some details below.  If you have any questions, 
>> please
>> follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
>> #linaro-tcwg
>> channel, or ping your favourite Linaro toolchain developer on the usual 
>> project
>> channel.
>> 
>> We appreciate that it might be difficult to find the necessary logs or 
>> reproduce
>> the issue locally. If you can't get what you need from our CI within 
>> minutes, let
>> us know and we will be happy to help.
>> 
>> In gcc_check master-arm after:
>> 
>>  | gcc patch https://patchwork.sourceware.org/patch/85687
>>  | Author: Andrew Pinski 
>>  | Date:   Tue Feb 13 15:22:32 2024 -0800
>>  |
>>  | aarch64: Reword error message for mismatch guard size and probing
>> interval [PR90155]
>>  |
>>  | The error message is not clear about which options are being talked
>>  | about when it says the values need to match; plus there is a wrong
>>  | quotation dealing with the diagnostic.
>>  | So this changes the error message to be exactly talking about the
>>  | param options that are being talked about, and now with the options,
>>  | it needs the quoting.
>>  |
>>  | ... 8 lines of the commit log omitted.
>>  | ... applied on top of baseline commit:
>>  | 5f2cd521347 libstdc++: C++ item p2442 is version 1 only
>> 
>> FAIL: 3 regressions
>> 
>> regressions.sum:
>> === g++ tests ===
>> 
>> Running g++:g++.dg/guality/guality.exp ...
>> FAIL: g++.dg/guality/pr55665.C -O2 -flto -fno-use-linker-plugin -flto-
>> partition=none  line 23 p == 40
>> 
>> Running g++:g++.target/arm/arm.exp ...
>> XPASS: g++.target/arm/bfloat_cpp_typecheck.C (test for bogus messages, line
>> 10)
>> XPASS: g++.target/arm/bfloat_cpp_typecheck.C (test for bogus messages, line
>> 11)
>> 
>> 
>> You can find the failure logs in *.log.1.xz files in
>> - https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6107/artifact/artifacts/artifacts.precommit/00-sumfiles/
>> The full lists of regressions and progressions as well as configure and make
>> commands are in
>> - https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6107/artifact/artifacts/artifacts.precommit/notify/
>> The list of [ignored] baseline and flaky failures are in
>> - https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6107/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
>> 
>> The configuration of this build is:
>> CI config tcwg_gcc_check master-arm
>> 
>> -8<--8<--8<--
>> The information below can be used to reproduce a debug environment:
>> 
>> Current build   : https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6107/artifact/artifacts
>> Reference build : https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1714/artifact/artifacts



Re: [Linaro-TCWG-CI] gcc patch #85664: FAIL: 29 regressions on arm

2024-02-14 Thread Maxim Kuvyrkov
Hi Robin,

Please ignore this report.  We had a bit of instability in CI testing yesterday.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Feb 14, 2024, at 09:11, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gcc_check master-arm after:
> 
>  | gcc patch https://patchwork.sourceware.org/patch/85664
>  | Author: Robin Dapp 
>  | Date:   Tue Feb 13 14:42:50 2024 +0100
>  | 
>  | RISC-V: Adjust vec unit-stride load/store costs.
>  | 
>  | Hi,
>  | 
>  | scalar loads provide offset addressing while unit-stride vector
>  | instructions cannot.  The offset must be loaded into a general-purpose
>  | register before it can be used.  In order to account for this, this
>  | ... 35 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | 5f2cd521347 libstdc++: C++ item p2442 is version 1 only
> 
> FAIL: 29 regressions
> 
> regressions.sum:
> === g++ tests ===
> 
> Running g++:g++.dg/dg.exp ...
> FAIL: g++.dg/ext/has_nothrow_copy-5.C -std=c++98 (test for excess errors)
> FAIL: g++.dg/ext/is_base_of_diagnostic.C -std=c++17  (test for errors, line 
> 13)
> FAIL: g++.dg/ext/is_base_of_diagnostic.C -std=c++17  (test for warnings, line 
> 4)
> FAIL: g++.dg/ext/is_base_of_diagnostic.C -std=c++17 (test for excess errors)
> FAIL: g++.dg/ext/packed4.C -std=gnu++17 execution test
> FAIL: g++.dg/ext/vector27.C -std=c++17 (test for excess errors)
> FAIL: g++.dg/ext/vector41.C -std=gnu++17  (test for errors, line 11)
> ... and 27 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6102/artifact/artifacts/artifacts.precommit/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6102/artifact/artifacts/artifacts.precommit/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6102/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/6102/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1714/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc-14-8948-g21de3391e4c: FAIL: 33 regressions on aarch64

2024-02-14 Thread Maxim Kuvyrkov
Hi Jakub,

Please ignore this.  I'm going to investigate, but most likely this is due to 
instability of guality tests.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Feb 14, 2024, at 01:43, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1152 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In gcc_check master-aarch64 after:
> 
>  | commit gcc-14-8948-g21de3391e4c
>  | Author: Jakub Jelinek 
>  | Date:   Tue Feb 13 10:32:01 2024 +0100
>  | 
>  | hwint: Fix up preprocessor conditions for GCC_PRISZ/fmt_size_t
>  | 
>  | Using unsigned long long int for fmt_size_t and "ll" for GCC_PRISZ
>  | as broke the gengtype on i686-linux before the libiberty fix is 
> certainly
>  | unexpected.  size_t is there unsigned int, so expected fmt_size_t is
>  | unsigned int (or some other 32-bit type).
>  | 
>  | ... 8 lines of the commit log omitted.
> 
> FAIL: 33 regressions
> 
> regressions.sum:
> === gcc tests ===
> 
> Running gcc:gcc.dg/guality/guality.exp ...
> FAIL: gcc.dg/guality/example.c -O1  -DPREVENT_OPTIMIZATION  execution test
> FAIL: gcc.dg/guality/pr43051-1.c -O1  -DPREVENT_OPTIMIZATION  line 34 c == 
> [0]
> FAIL: gcc.dg/guality/pr43051-1.c -O1  -DPREVENT_OPTIMIZATION  line 39 c == 
> [0]
> FAIL: gcc.dg/guality/pr43051-1.c -O2  -DPREVENT_OPTIMIZATION  line 34 c == 
> [0]
> FAIL: gcc.dg/guality/pr43051-1.c -O2  -DPREVENT_OPTIMIZATION  line 39 c == 
> [0]
> FAIL: gcc.dg/guality/pr43051-1.c -O2 -flto -fno-use-linker-plugin 
> -flto-partition=none  -DPREVENT_OPTIMIZATION line 34 c == [0]
> FAIL: gcc.dg/guality/pr43051-1.c -O2 -flto -fno-use-linker-plugin 
> -flto-partition=none  -DPREVENT_OPTIMIZATION line 39 c == [0]
> ... and 27 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/1595/artifact/artifacts/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/1595/artifact/artifacts/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/1595/artifact/artifacts/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-aarch64
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/1595/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/1592/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/21de3391e4cecfef6ad1b60772cb55616c1bf7bd/tcwg_gcc_check/master-aarch64/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/21de3391e4cecfef6ad1b60772cb55616c1bf7bd
> 
> List of configurations that regressed due to this commit :
> * tcwg_gcc_check
> ** master-aarch64
> *** FAIL: 33 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/21de3391e4cecfef6ad1b60772cb55616c1bf7bd/tcwg_gcc_check/master-aarch64/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/1595/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc-14-8949-g2ca373b7e8a: FAIL: 1 regressions: 11 progressions on arm

2024-02-14 Thread Maxim Kuvyrkov
> On Feb 13, 2024, at 22:03, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1151 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In gcc_check master-arm after:
> 
>  | commit gcc-14-8949-g2ca373b7e8a
>  | Author: Jakub Jelinek 
>  | Date:   Tue Feb 13 10:33:08 2024 +0100
>  | 
>  | libgcc: Fix UB in FP_FROM_BITINT
>  | 
>  | As I wrote earlier, I was seeing
>  | FAIL: gcc.dg/torture/bitint-24.c   -O0  execution test
>  | FAIL: gcc.dg/torture/bitint-24.c   -O2  execution test
>  | with the ia32 _BitInt enablement patch on i686-linux.  I thought
>  | floatbitintxf.c was miscompiled with -O2 -march=i686 -mtune=generic, 
> but it
>  | ... 34 lines of the commit log omitted.
> 
> FAIL: 1 regressions: 11 progressions
> 
> regressions.sum:
> === gcc tests ===
> 
> Running gcc:gcc.dg/vect/vect.exp ...
> FAIL: gcc.dg/vect/tsvc/vect-tsvc-s1281.c execution test

Hi Jakub,

The failure is due to the timeout.  I'm going to investigate whether this is a 
legitimate failure or your change just pushed testcase execution time over the 
threshold.

Running on tcwg-local: timeout -k 30s 330s ./vect-tsvc-s1281.exe 
spawn [open ...]
value: inf, expected: inf
timeout: the monitored command dumped core
FAIL: gcc.dg/vect/tsvc/vect-tsvc-s1281.c execution test
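
To tell a genuine hang from a test that merely crossed the time threshold, one option is to rerun the binary under a larger limit than the harness's 330 s. A hypothetical sketch, with `sleep 2` standing in for the real test binary:

```python
import subprocess

def rerun_with_limit(cmd, limit_s):
    # Rerun a test command with a larger time limit than the harness used.
    # Completing under the new limit suggests an execution-time regression;
    # timing out again suggests a real hang.
    try:
        subprocess.run(cmd, check=True, timeout=limit_s)
        return "passes with a larger limit: execution-time regression"
    except subprocess.TimeoutExpired:
        return "still times out: likely a real hang"

# "sleep 2" stands in for ./vect-tsvc-s1281.exe here.
print(rerun_with_limit(["sleep", "2"], 5))
```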

> 
> 
> progressions.sum:
> === gcc tests ===
> 
> Running gcc:gcc.dg/guality/guality.exp ...
> UNRESOLVED: c-c++-common/guality/Og-static-wo-1.c -O0  compilation failed to 
> produce executable
> FAIL: c-c++-common/guality/Og-static-wo-1.c -O0  (test for excess errors)
> FAIL: gcc.dg/guality/pr41447-1.c -O0  execution test
> 
> Running gcc:gcc.dg/ipa/ipa.exp ...
> UNRESOLVED: gcc.dg/ipa/iinline-4.c scan-ipa-dump inline "hooray4[^\\n]*inline 
> copy in test4"
> UNRESOLVED: gcc.dg/ipa/iinline-4.c scan-ipa-dump inline "hooray1[^\\n]*inline 
> copy in test1"
> ... and 7 more entries

Ignore these "progressions".  We are having a bit of instability after enabling 
guality tests in our setup.

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1710/artifact/artifacts/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1710/artifact/artifacts/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1710/artifact/artifacts/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1710/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1709/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/2ca373b7e8adf9cc0c17aecab5e1cc6c76a92f4c/tcwg_gcc_check/master-arm/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/2ca373b7e8adf9cc0c17aecab5e1cc6c76a92f4c
> 
> List of configurations that regressed due to this commit :
> * tcwg_gcc_check
> ** master-arm
> *** FAIL: 1 regressions: 11 progressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/2ca373b7e8adf9cc0c17aecab5e1cc6c76a92f4c/tcwg_gcc_check/master-arm/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1710/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc-14-8887-gd9459129ea8: FAIL: 29 regressions on master-thumb_m33_eabi

2024-02-12 Thread Maxim Kuvyrkov
Hi Richard,

Ack.  Thanks for the follow up!

--
Maxim Kuvyrkov
https://www.linaro.org

> On Feb 12, 2024, at 18:46, Richard Earnshaw  
> wrote:
> 
> I think all of these actually fall under
> 
> "I suspect there are still some further issues to address here, since
> the framework does not correctly test that the multilibs and startup
> code enable alternative format; but this is still an improvement over
> what we had before."
> 
> All the failures are execution test failures due to the fact that we don't 
> check the available hardware/multilibs for running the test; so blindly 
> adding options and then running the test is incorrect.  But we currently lack 
> such a test in the framework.
> 
> It's also less than clear exactly what these tests are checking and which 
> part of what they are checking that really requires the options they add.  I 
> suspect that they previously passed only by accident (they didn't really add 
> enough flags to enable what they author thought they were checking).
> 
> R.
> 
> On 10/02/2024 02:43, ci_not...@linaro.org wrote:
>> Dear contributor, our automatic CI has detected problems related to your 
>> patch(es).  Please find some details below.  If you have any questions, 
>> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
>> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
>> the usual project channel.
>> We appreciate that it might be difficult to find the necessary logs or 
>> reproduce the issue locally. If you can't get what you need from our CI 
>> within minutes, let us know and we will be happy to help.
>> We track this report status in https://linaro.atlassian.net/browse/GNU-1149 , 
>> please let us know if you are looking at the problem and/or when you have a 
>> fix.
>> In  arm-eabi cortex-m33 hard after:
>>   | commit gcc-14-8887-gd9459129ea8
>>   | Author: Richard Earnshaw 
>>   | Date:   Mon Feb 5 17:16:45 2024 +
>>   |
>>   | arm: testsuite: fix issues relating to fp16 alternative testing
>>   |
>>   | The v*_fp16_xN_1.c tests on Arm have been unstable since they were
>>   | added.  This is not a problem with the tests themselves, or even the
>>   | patches that were added, but with the testsuite infrastructure.  It
>>   | turned out that another set of dg- tests for fp16 were corrupting the
>>   | cached set of options used by the new tests, leading to running the
>>   | ... 45 lines of the commit log omitted.
>> FAIL: 29 regressions
>> regressions.sum:
>> === g++ tests ===
>> Running g++:g++.dg/dg.exp ...
>> FAIL: g++.dg/ext/arm-fp16/arm-fp16-ops-3.C -std=c++14 execution test
>> FAIL: g++.dg/ext/arm-fp16/arm-fp16-ops-3.C -std=c++17 execution test
>> FAIL: g++.dg/ext/arm-fp16/arm-fp16-ops-3.C -std=c++20 execution test
>> FAIL: g++.dg/ext/arm-fp16/arm-fp16-ops-3.C -std=c++98 execution test
>> FAIL: g++.dg/ext/arm-fp16/arm-fp16-ops-4.C -std=gnu++14 execution test
>> FAIL: g++.dg/ext/arm-fp16/arm-fp16-ops-4.C -std=gnu++17 execution test
>> FAIL: g++.dg/ext/arm-fp16/arm-fp16-ops-4.C -std=gnu++20 execution test
>> ... and 26 more entries
>> You can find the failure logs in *.log.1.xz files in
>>  - 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-thumb_m33_eabi-build/363/artifact/artifacts/00-sumfiles/
>> The full lists of regressions and progressions as well as configure and make 
>> commands are in
>>  - 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-thumb_m33_eabi-build/363/artifact/artifacts/notify/
>> The list of [ignored] baseline and flaky failures are in
>>  - 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-thumb_m33_eabi-build/363/artifact/artifacts/sumfiles/xfails.xfail
>> The configuration of this build is:
>> CI config tcwg_gnu_embed_check_gcc arm-eabi -mthumb 
>> -march=armv8-m.main+dsp+fp -mtune=cortex-m33 -mfloat-abi=hard -mfpu=auto
>> -8<--8<--8<--
>> The information below can be used to reproduce a debug environment:
>> Current build   :

Re: [Linaro-TCWG-CI] glibc-2.38.9000-528-g6bd0e4efcc: FAIL: 1 regressions on arm

2024-01-30 Thread Maxim Kuvyrkov
Hi Arjun,

This is not a real regression.  We have a problem in our CI that causes 
container tests to fail on 32-bit ARM.  Therefore, any new container test shows 
up as a regression.

We are working on fixing this.

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Jan 31, 2024, at 06:42, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1138 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In glibc_check master-arm after:
> 
>  | commit glibc-2.38.9000-528-g6bd0e4efcc
>  | Author: Arjun Shankar 
>  | Date:   Mon Jan 15 17:44:43 2024 +0100
>  | 
>  | syslog: Fix heap buffer overflow in __vsyslog_internal (CVE-2023-6246)
>  | 
>  | __vsyslog_internal did not handle a case where printing a SYSLOG_HEADER
>  | containing a long program name failed to update the required buffer
>  | size, leading to the allocation and overflow of a too-small buffer on
>  | the heap.  This commit fixes that.  It also adds a new regression test
>  | that uses glibc.malloc.check.
>  | ... 4 lines of the commit log omitted.
> 
> FAIL: 1 regressions
> 
> regressions.sum:
> === glibc tests ===
> 
> Running glibc:misc ...
> FAIL: misc/tst-syslog-long-progname 
> 
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/892/artifact/artifacts/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/892/artifact/artifacts/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/892/artifact/artifacts/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_glibc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/892/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/890/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/glibc/sha1/6bd0e4efcc78f3c0115e5ea9739a1642807450da/tcwg_glibc_check/master-arm/reproduction_instructions.txt
> 
> Full commit : 
> https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=6bd0e4efcc78f3c0115e5ea9739a1642807450da
> 
> List of configurations that regressed due to this commit :
> * tcwg_glibc_check
> ** master-arm
> *** FAIL: 1 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/glibc/sha1/6bd0e4efcc78f3c0115e5ea9739a1642807450da/tcwg_glibc_check/master-arm/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/892/artifact/artifacts


___
linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org


Re: [Linaro-TCWG-CI] gdb-14-branchpoint-1426-gb960445a459: FAIL: 1 regressions on arm

2024-01-30 Thread Maxim Kuvyrkov
Hi All,

This is a false positive, obviously.  We do our best to filter out flaky tests, 
but in this case "gdb.threads/staticthreads.exp: up 10" PASSed twice in the 
previous run, and then FAILed twice in the next run.  Sneaky!

Re. the FAIL, the testcase expects to be " in main .*" after "up 10", but ends 
up in pthread_join() instead:
===
up 10
#4  0x0001b864 in pthread_join ()
(gdb) FAIL: gdb.threads/staticthreads.exp: up 10
===
See [1] for details.

[1] 
https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/752/artifact/artifacts/00-sumfiles/gdb.log.1.xz
 .

Hi Thiago,

Would you please investigate whether ending up in pthread_join() is 
expected/reasonable for 32-bit ARM?  In other words, whether we have a GDB bug 
exposed by staticthreads.exp or the testcase needs to be generalized a bit.

Thank you,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Jan 31, 2024, at 01:30, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1137 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In gdb_check master-arm after:
> 
>  | commit gdb-14-branchpoint-1426-gb960445a459
>  | Author: GDB Administrator 
>  | Date:   Tue Jan 30 00:00:26 2024 +
>  | 
>  | Automatic date update in version.in
> 
> FAIL: 1 regressions
> 
> regressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.threads/staticthreads.exp ...
> FAIL: gdb.threads/staticthreads.exp: up 10
> 
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/752/artifact/artifacts/00-sumfiles/
> The full lists of regressions and progressions as well as configure and make 
> commands are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/752/artifact/artifacts/notify/
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/752/artifact/artifacts/sumfiles/xfails.xfail
> 
> The configuration of this build is:
> CI config tcwg_gdb_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/752/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/751/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gdb/sha1/b960445a45981873c5b1718824ea9d3b5749433a/tcwg_gdb_check/master-arm/reproduction_instructions.txt
> 
> Full commit : 
> https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=b960445a45981873c5b1718824ea9d3b5749433a
> 
> List of configurations that regressed due to this commit :
> * tcwg_gdb_check
> ** master-arm
> *** FAIL: 1 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gdb/sha1/b960445a45981873c5b1718824ea9d3b5749433a/tcwg_gdb_check/master-arm/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/752/artifact/artifacts




Re: [Linaro-TCWG-CI] gdb-14-branchpoint-1354-g8669a8b6740: FAIL: 2 regressions on arm

2024-01-25 Thread Maxim Kuvyrkov
> On Jan 25, 2024, at 19:04, Guinevere Larsen  wrote:
> 
> On 25/01/2024 10:10, Maxim Kuvyrkov wrote:
>>> On Jan 25, 2024, at 04:08, ci_not...@linaro.org wrote:
>>> 
>>> Dear contributor, our automatic CI has detected problems related to your 
>>> patch(es). Please find some details below.  If you have any questions, 
>>> please follow up on linaro-toolchain@lists.linaro.org mailing list, 
>>> Libera's #linaro-tcwg channel, or ping your favourite Linaro toolchain 
>>> developer on the usual project channel.
>>> 
>>> We appreciate that it might be difficult to find the necessary logs or 
>>> reproduce the issue locally. If you can't get what you need from our CI 
>>> within minutes, let us know and we will be happy to help.
>>> 
>>> We track this report status in https://linaro.atlassian.net/browse/GNU-1120 
>>> , please let us know if you are looking at the problem and/or when you have 
>>> a fix.
>>> 
>>> In gdb_check master-arm after:
>>> 
>>>  | commit gdb-14-branchpoint-1354-g8669a8b6740
>>>  | Author: Guinevere Larsen 
>>>  | Date:   Thu Aug 24 11:00:35 2023 +0200
>>>  |
>>>  | gdb/testsuite: add test for backtracing for threaded inferiors from 
>>> a corefile
>>>  |
>>>  | This patch is based on an out-of-tree patch that fedora has been
>>>  | carrying for a while. It tests if GDB is able to properly unwind a
>>>  | threaded program in the following situations:
>>>  | * regular threads
>>>  | * in a signal handler
>>>  | ... 14 lines of the commit log omitted.
>>> 
>>> FAIL: 2 regressions
>>> 
>>> regressions.sum:
>>> === gdb tests ===
>>> 
>>> Running gdb:gdb.threads/threadcrash.exp ...
>>> FAIL: gdb.threads/threadcrash.exp: test_gcore: $thread_count == 7
>>> FAIL: gdb.threads/threadcrash.exp: test_gcore: $thread_count == [llength 
>>> $test_list]
>> Hi Guinevere,
>> 
>> The failures seem to be due to "LWP" output (instead of "Thread") in 
>> test_gcore.
>> 
>> I.e., test_corefile succeeds with
>> 
>> ===
>> (gdb) PASS: gdb.threads/threadcrash.exp: test_corefile: loading_corefile
>> info threads
>>   Id   Target Id  Frame
>> * 1Thread 0xf7dbe7e0 (LWP 476389) 0x00830cea in crash_function () at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:381
>>   2Thread 0xf7c6f3a0 (LWP 476390) do_spin_task (location=NORMAL) at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
>>   3Thread 0xf746e3a0 (LWP 476391) do_spin_task (location=SIGNAL_HANDLER) 
>> at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
>>   4Thread 0xf6c6d3a0 (LWP 476392) do_spin_task 
>> (location=SIGNAL_ALT_STACK) at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
>>   5Thread 0xf52fe3a0 (LWP 476395) __libc_do_syscall () at 
>> ../sysdeps/unix/sysv/linux/arm/libc-do-syscall.S:46
>>   6Thread 0xf646c3a0 (LWP 476393) __libc_do_syscall () at 
>> ../sysdeps/unix/sysv/linux/arm/libc-do-syscall.S:46
>>   7Thread 0xf5aff3a0 (LWP 476394) __libc_do_syscall () at 
>> ../sysdeps/unix/sysv/linux/arm/libc-do-syscall.S:46
>> (gdb) PASS: gdb.threads/threadcrash.exp: test_corefile: $thread_count == 7
>> ===
>> 
>> and then test_gcore fails with
>> 
>> ===
>> (gdb) PASS: gdb.threads/threadcrash.exp: test_gcore: loading_corefile
>> info threads
>>   Id   Target Id Frame
>> * 1LWP 4764400x00400cea in crash_function () at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:381
>>   2LWP 476442do_spin_task (location=NORMAL) at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
>>   3LWP 476443do_spin_task (location=SIGNAL_HANDLER) at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
>>   4LWP 476444do_spin_task (location=SIGNAL_ALT_STACK) at 
>> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
>>   5LWP 4764450xf7eadb04 in ??

Re: [Linaro-TCWG-CI] gdb-14-branchpoint-1354-g8669a8b6740: FAIL: 2 regressions on arm

2024-01-25 Thread Maxim Kuvyrkov
> On Jan 25, 2024, at 04:08, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1120 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In gdb_check master-arm after:
> 
>  | commit gdb-14-branchpoint-1354-g8669a8b6740
>  | Author: Guinevere Larsen 
>  | Date:   Thu Aug 24 11:00:35 2023 +0200
>  | 
>  | gdb/testsuite: add test for backtracing for threaded inferiors from a 
> corefile
>  | 
>  | This patch is based on an out-of-tree patch that fedora has been
>  | carrying for a while. It tests if GDB is able to properly unwind a
>  | threaded program in the following situations:
>  | * regular threads
>  | * in a signal handler
>  | ... 14 lines of the commit log omitted.
> 
> FAIL: 2 regressions
> 
> regressions.sum:
> === gdb tests ===
> 
> Running gdb:gdb.threads/threadcrash.exp ...
> FAIL: gdb.threads/threadcrash.exp: test_gcore: $thread_count == 7
> FAIL: gdb.threads/threadcrash.exp: test_gcore: $thread_count == [llength 
> $test_list]

Hi Guinevere,

The failures seem to be due to "LWP" output (instead of "Thread") in test_gcore.

I.e., test_corefile succeeds with

===
(gdb) PASS: gdb.threads/threadcrash.exp: test_corefile: loading_corefile
info threads
  Id   Target Id  Frame 
* 1Thread 0xf7dbe7e0 (LWP 476389) 0x00830cea in crash_function () at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:381
  2Thread 0xf7c6f3a0 (LWP 476390) do_spin_task (location=NORMAL) at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
  3Thread 0xf746e3a0 (LWP 476391) do_spin_task (location=SIGNAL_HANDLER) at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
  4Thread 0xf6c6d3a0 (LWP 476392) do_spin_task (location=SIGNAL_ALT_STACK) 
at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
  5Thread 0xf52fe3a0 (LWP 476395) __libc_do_syscall () at 
../sysdeps/unix/sysv/linux/arm/libc-do-syscall.S:46
  6Thread 0xf646c3a0 (LWP 476393) __libc_do_syscall () at 
../sysdeps/unix/sysv/linux/arm/libc-do-syscall.S:46
  7Thread 0xf5aff3a0 (LWP 476394) __libc_do_syscall () at 
../sysdeps/unix/sysv/linux/arm/libc-do-syscall.S:46
(gdb) PASS: gdb.threads/threadcrash.exp: test_corefile: $thread_count == 7
===

and then test_gcore fails with

===
(gdb) PASS: gdb.threads/threadcrash.exp: test_gcore: loading_corefile
info threads
  Id   Target Id Frame 
* 1LWP 4764400x00400cea in crash_function () at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:381
  2LWP 476442do_spin_task (location=NORMAL) at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
  3LWP 476443do_spin_task (location=SIGNAL_HANDLER) at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
  4LWP 476444do_spin_task (location=SIGNAL_ALT_STACK) at 
/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gdb.git~master/gdb/testsuite/gdb.threads/threadcrash.c:139
  5LWP 4764450xf7eadb04 in ?? ()
  6LWP 4764460xf7eadb04 in ?? ()
  7LWP 4764470xf7eadb04 in ?? ()
(gdb) FAIL: gdb.threads/threadcrash.exp: test_gcore: $thread_count == 7
===

Could you please look into fixing the testcase?  [I assume "LWP" output is 
expected, but I'm not an expert in GDB.]

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> === Results Summary ===
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/726/artifact/artifacts/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_gdb_check--master-arm-build/726/artifact/artifacts/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linar

Re: [Linaro-TCWG-CI] gcc patch #83662: FAIL: 8 regressions on arm

2024-01-19 Thread Maxim Kuvyrkov
> On Jan 19, 2024, at 17:31, H.J. Lu  wrote:
> 
> On Thu, Jan 18, 2024 at 11:15 PM Maxim Kuvyrkov
>  wrote:
>> 
>> Hi H.J.,
>> 
>> Did the email below make it to your inbox?  I wonder if some of our 
>> precommit CI emails are not reaching developers.
> 
> It has been fixed before the email.

Linaro pre-commit CI sent you the regression report below on Jan 10.  You then 
committed the patch on Jan 18, and later that day Linaro post-commit CI sent 
you another report.

My question is whether the first report from precommit CI reached your inbox.  
If not, then I'll troubleshoot our email configuration.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org


> 
>> Kind regards,
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>>> On Jan 10, 2024, at 02:24, ci_not...@linaro.org wrote:
>>> 
>>> Dear contributor, our automatic CI has detected problems related to your 
>>> patch(es).  Please find some details below.  If you have any questions, 
>>> please follow up on linaro-toolchain@lists.linaro.org mailing list, 
>>> Libera's #linaro-tcwg channel, or ping your favourite Linaro toolchain 
>>> developer on the usual project channel.
>>> 
>>> We appreciate that it might be difficult to find the necessary logs or 
>>> reproduce the issue locally. If you can't get what you need from our CI 
>>> within minutes, let us know and we will be happy to help.
>>> 
>>> In gcc_check master-arm after:
>>> 
>>> | gcc patch https://patchwork.sourceware.org/patch/83662
>>> | Author: H.J. Lu 
>>> | Date:   Tue Jan 9 08:46:59 2024 -0800
>>> |
>>> | hwasan: Check if Intel LAM_U57 is enabled
>>> |
>>> | When -fsanitize=hwaddress is used, libhwasan will try to enable 
>>> LAM_U57
>>> | in the startup code.  Update the target check to enable hwaddress 
>>> tests
>>> | if LAM_U57 is enabled.  Also compile hwaddress tests with -mlam=u57 on
>>> | x86-64 since hwasan requires LAM_U57 on x86-64.
>>> |
>>> | ... 3 lines of the commit log omitted.
>>> | ... applied on top of baseline commit:
>>> | 9f7afa99c67 [committed] Adding missing prototype for __clzhi2 to xstormy 
>>> port
>>> 
>>> FAIL: 8 regressions
>>> 
>>> regressions.sum:
>>> === g++ tests ===
>>> 
>>> Running g++:g++.dg/hwasan/hwasan.exp ...
>>> ERROR: can't read "target_hwasan_flags": no such variable
>>> ERROR: tcl error code TCL LOOKUP VARNAME target_hwasan_flags
>>> ERROR: tcl error sourcing g++.dg/hwasan/hwasan.exp.
>>> UNRESOLVED: testcase g++.dg/hwasan/hwasan.exp' aborted due to Tcl error
>>> === gcc tests ===
>>> 
>>> Running gcc:gcc.dg/hwasan/hwasan.exp ...
>>> ... and 6 more entries
>>> 
>>> You can find the failure logs in *.log.1.xz files in
>>> - 
>>> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts/artifacts.precommit/00-sumfiles/
>>>  .
>>> The full lists of regressions and progressions are in
>>> - 
>>> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts/artifacts.precommit/notify/
>>>  .
>>> The list of [ignored] baseline and flaky failures are in
>>> - 
>>> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
>>>  .
>>> 
>>> The configuration of this build is:
>>> CI config tcwg_gcc_check master-arm
>>> 
>>> -8<--8<--8<--
>>> The information below can be used to reproduce a debug environment:
>>> 
>>> Current build   : 
>>> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts
>>> Reference build : 
>>> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1542/artifact/artifacts
>> 
>> 
> 
> 
> -- 
> H.J.



Re: [Linaro-TCWG-CI] gcc patch #83662: FAIL: 8 regressions on arm

2024-01-18 Thread Maxim Kuvyrkov
Hi H.J.,

Did the email below make it to your inbox?  I wonder if some of our precommit 
CI emails are not reaching developers.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Jan 10, 2024, at 02:24, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> In gcc_check master-arm after:
> 
>  | gcc patch https://patchwork.sourceware.org/patch/83662
>  | Author: H.J. Lu 
>  | Date:   Tue Jan 9 08:46:59 2024 -0800
>  | 
>  | hwasan: Check if Intel LAM_U57 is enabled
>  | 
>  | When -fsanitize=hwaddress is used, libhwasan will try to enable LAM_U57
>  | in the startup code.  Update the target check to enable hwaddress tests
>  | if LAM_U57 is enabled.  Also compile hwaddress tests with -mlam=u57 on
>  | x86-64 since hwasan requires LAM_U57 on x86-64.
>  | 
>  | ... 3 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | 9f7afa99c67 [committed] Adding missing prototype for __clzhi2 to xstormy 
> port
> 
> FAIL: 8 regressions
> 
> regressions.sum:
> === g++ tests ===
> 
> Running g++:g++.dg/hwasan/hwasan.exp ...
> ERROR: can't read "target_hwasan_flags": no such variable
> ERROR: tcl error code TCL LOOKUP VARNAME target_hwasan_flags
> ERROR: tcl error sourcing g++.dg/hwasan/hwasan.exp.
> UNRESOLVED: testcase g++.dg/hwasan/hwasan.exp' aborted due to Tcl error
> === gcc tests ===
> 
> Running gcc:gcc.dg/hwasan/hwasan.exp ...
> ... and 6 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts/artifacts.precommit/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts/artifacts.precommit/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
>  .
> 
> The configuration of this build is:
> CI config tcwg_gcc_check master-arm
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-precommit/5612/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/1542/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc-14-8168-g14338386970: FAIL: 3 regressions on arm

2024-01-18 Thread Maxim Kuvyrkov
Hi Nathaniel,

> On Jan 18, 2024, at 13:02, Nathaniel Shead via Gcc-regression 
>  wrote:
> 
> Thanks for the notification! I took a look and the error seems to be
> something specific to the TLS implementation for ARM requiring
> additional flags to link correctly, maybe?

Looking at [1], it seems that only the bare-metal configurations are affected.  
And looking at [2], the tests are failing with
===
undefined reference to `__aeabi_read_tp'
===
.

Given that we are configuring GCC for arm-none-eabi with "--enable-threads=no", 
the failure is not a surprise.

[1] https://linaro.atlassian.net/browse/GNU-1112
[2] 
https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-arm_eabi-build/572/artifact/artifacts/00-sumfiles/g++.log.1.xz

> 
> I'm unable to test locally (I don't have access to an ARM machine) but
> from looking at other testsuite examples which make use of thread
> locals, adding the following two lines to the testcases may resolve the
> failures:
> 
>  // { dg-add-options tls }
>  // { dg-require-effective-target tls_runtime }

Yes, this should skip the testcase for targets without thread/TLS support.

> 
> Please let me know if the issue is something else and I can take
> another look.

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org

> 
> Yours,
> Nathaniel.
> 
> On Thu, Jan 18, 2024 at 07:12:12AM +, ci_not...@linaro.org wrote:
>> Dear contributor, our automatic CI has detected problems related to your 
>> patch(es).  Please find some details below.  If you have any questions, 
>> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
>> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
>> the usual project channel.
>> 
>> We appreciate that it might be difficult to find the necessary logs or 
>> reproduce the issue locally. If you can't get what you need from our CI 
>> within minutes, let us know and we will be happy to help.
>> 
>> We track this report status in https://linaro.atlassian.net/browse/GNU-1112 
>> , please let us know if you are looking at the problem and/or when you have 
>> a fix.
>> 
>> In  master-arm_eabi after:
>> 
>>  | commit gcc-14-8168-g14338386970
>>  | Author: Nathaniel Shead 
>>  | Date:   Thu Jan 11 16:49:39 2024 +1100
>>  | 
>>  | c++: Support thread_local statics in header modules [PR113292]
>>  | 
>>  | Currently, thread_locals in header modules cause ICEs. This patch 
>> makes
>>  | the required changes for them to work successfully.
>>  | 
>>  | This requires additionally writing the DECL_TLS_MODEL for thread-local
>>  | variables to the module interface, and the TLS wrapper function needs 
>> to
>>  | ... 24 lines of the commit log omitted.
>> 
>> FAIL: 3 regressions
>> 
>> regressions.sum:
>> === g++ tests ===
>> 
>> Running g++:g++.dg/modules/modules.exp ...
>> FAIL: g++.dg/modules/pr113292 -std=c++17 link
>> FAIL: g++.dg/modules/pr113292 -std=c++2a link
>> FAIL: g++.dg/modules/pr113292 -std=c++2b link
>> 
>> === Results Summary ===
>> 
>> You can find the failure logs in *.log.1.xz files in
>> - 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-arm_eabi-build/572/artifact/artifacts/00-sumfiles/
>>  .
>> The full lists of regressions and progressions are in
>> - 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-arm_eabi-build/572/artifact/artifacts/notify/
>>  .
>> The list of [ignored] baseline and flaky failures are in
>> - 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-arm_eabi-build/572/artifact/artifacts/sumfiles/xfails.xfail
>>  .
>> 
>> The configuration of this build is:
>> CI config tcwg_gnu_embed_check_gcc master-arm_eabi
>> 
>> -8<--8<--8<--
>> The information below can be used to reproduce a debug environment:
>> 
>> Current build   : 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-arm_eabi-build/572/artifact/artifacts
>> Reference build : 
>> https://ci.linaro.org/job/tcwg_gnu_embed_check_gcc--master-arm_eabi-build/571/artifact/artifacts
>> 
>> Reproduce last good and first bad builds: 
>> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/14338386970bc6c2d46b81181f48622fdf25d705/tcwg_gnu_embed_check_gcc/master-arm_eabi/reproduction_instructions.txt
>> 
>> Full commit : 
>> https://github.com/gcc-mirror/gcc/commit/14338386970bc6c2d46b81181f48622fdf25d705
>> 
>>

Re: some help with reproducing a ci fail

2024-01-15 Thread Maxim Kuvyrkov
Hi Ian,

[Apologies for the late reply; your email got caught in the moderation queue.]

Do you still need help reproducing the build?

On our side, we are working on including the configure/make lines in these 
reports to simplify reproduction.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Oct 22, 2023, at 23:46, Iain Sandoe  wrote:
> 
> Hi
> 
> So, I have a CI fail email, and it seems likely to be a valid complaint - 
> but the reproduction instructions do not work on my platform (bash is not 
> new enough) and also do not work on cfarm186 
> (../jenkins-scripts/jenkins-helpers.sh: line 1762: ts: command not found)
> 
> For the record, the patch that is flagged as failing *was* tested on 
> aarch64-linux-gnu (cfarm185)
> 
> So, I am trying to figure out if the target is different, or some other 
> configure argument.
> 
> .. but I cannot work out the failing configure line at present - nor can I 
> see a place to download the console log from the actual failing GCC build 
> (which would presumably have that configure line).
> 
> any help much appreciated.
> Iain
> 



Re: [Linaro-TCWG-CI] 7 patches in gcc: Failure on arm

2024-01-15 Thread Maxim Kuvyrkov
Hi Lehua,

[Apologies for the late reply; your email got caught in the moderation queue.]

Do you still need help reproducing the build?

On our side, we are working on including the configure/make lines in these 
reports to simplify reproduction.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Nov 12, 2023, at 11:41, Lehua Ding  wrote:
> 
> Hi,
> 
> I received an error reported by CI for my patches.  I would like to ask how 
> I can reproduce it locally.  I looked at the logs and could not find how it 
> is compiled.  Thanks in advance.
> 
> On 2023/11/8 16:59, ci_not...@linaro.org wrote:
>> Dear contributor, our automatic CI has detected problems related to your 
>> patch(es).  Please find some details below.  If you have any questions, 
>> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
>> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
>> the usual project channel.
>> In gcc_build master-arm after:
>>   | 7 patches in gcc
>>   | Patchwork URL: https://patchwork.sourceware.org/patch/79366
>>   | ef64c12c9f0 lra: Support subreg live range track and conflict detect
>>   | a871f544e2b lra: Apply live_subreg df_problem to lra pass
>>   | 6437747d6f3 ira: Add all nregs >= 2 pseudos to tracke subreg list
>>   | 1a2da1ad5f0 ira: Support subreg copy
>>   | 4f8d8e764e0 ira: Support subreg live range track
>>   | ... and 2 more patches in gcc
>>   | ... applied on top of baseline commit:
>>   | ca281a7b971 [i386] APX: Fix ICE due to movti postreload splitter 
>> [PR112394]
>> Results changed to
>> # reset_artifacts:
>> -10
>> # true:
>> 0
>> # build_abe gcc:
>> # FAILED
>> # First few build errors in logs:
>> # 00:03:49 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/libgcc2.c:2700:1:
>>  internal compiler error: Aborted
>> # 00:03:49 make[2]: *** [Makefile:505: _muldc3.o] Error 1
>> # 00:03:49 make[1]: *** [Makefile:14486: all-target-libgcc] Error 2
>> # 00:03:49 make: *** [Makefile:1056: all] Error 2
>> # 00:03:41 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:72:1:
>>  internal compiler error: Aborted
>> # 00:03:41 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/libgcc2.c:2700:1:
>>  internal compiler error: Aborted
>> # 00:03:41 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:72:1:
>>  internal compiler error: Aborted
>> # 00:03:41 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:72:1:
>>  internal compiler error: Aborted
>> # 00:03:41 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:72:1:
>>  internal compiler error: Aborted
>> # 00:03:42 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/libgcc2.c:2865:1:
>>  internal compiler error: Aborted
>> # 00:03:42 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:143:1:
>>  internal compiler error: Aborted
>> # 00:03:42 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:143:1:
>>  internal compiler error: Aborted
>> # 00:03:42 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:143:1:
>>  internal compiler error: Aborted
>> # 00:03:42 
>> /home/tcwg-build/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgcc/fixed-bit.c:143:1:
>>  internal compiler error: Aborted
>> From
>> # reset_artifacts:
>> -10
>> # true:
>> 0
>> # build_abe gcc:
>> 1
>> The configuration of this build is:
>> CI config tcwg_gcc_build/master-arm
>> -8<--8<--8<--
>> The information below can be used to reproduce a debug environment:
>> Current build   : 
>> https://ci.linaro.org/job/tcwg_gcc_build--master-arm-precommit/4076/artifact/artifacts
>> Reference build : 
>> https://ci.linaro.org/job/tcwg_gcc_build--master-arm-build/1364/artifact/artifacts
> 
> -- 
> Best,
> Lehua (RiVAI)
> lehua.d...@rivai.ai
> 
> ___
> linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
> To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org



Re: [Linaro-TCWG-CI] gcc patch #80969: Failure on arm

2024-01-15 Thread Maxim Kuvyrkov
Hi Rainer,

[Apologies for the late reply; your reply got caught in the moderation queue.]

We have considered automatically regenerating autoconf, etc. files, but have 
decided against that, at least for now.  My logic is that developers should 
receive feedback for the verbatim patches they posted, not for a version of 
their patch that has regenerated or otherwise edited parts.  I appreciate that 
this means that the developer will get a nag from our CI, but, at the very 
least, this nag might remind the developer to regenerate the necessary parts 
when committing the patch into mainline.
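For context, the failure mode in the quoted report below — a literal 
'@HWCAP_CFLAGS@' reaching gcc as if it were a file name — is the classic 
symptom of an unregenerated configure.  A toy illustration (stand-in file and 
variable names, not the real libiberty build):

```shell
# configure's job (among others) is to substitute '@VAR@' placeholders in
# *.in templates.  If a patch changes configure.ac but the regenerated
# configure is not included, a new placeholder survives verbatim into the
# Makefile and gets passed to the compiler as an argument.
printf 'CFLAGS = @HWCAP_CFLAGS@\n' > Makefile.in

# What a regenerated configure would do with the template:
sed 's/@HWCAP_CFLAGS@/-O2/' Makefile.in > Makefile

cat Makefile   # CFLAGS = -O2
```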

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Nov 29, 2023, at 19:20, Rainer Orth  wrote:
> 
> ci_not...@linaro.org writes:
> 
>> Dear contributor, our automatic CI has detected problems related to your
>> patch(es).  Please find some details below.  If you have any questions,
>> please follow up on linaro-toolchain@lists.linaro.org mailing list,
>> Libera's #linaro-tcwg channel, or ping your favourite Linaro toolchain
>> developer on the usual project channel.
>> 
>> In gcc_build master-arm after:
>> 
>>  | gcc patch https://patchwork.sourceware.org/patch/80969
>>  | Author: Rainer Orth 
>>  | Date:   Wed Nov 29 15:10:00 2023 +0100
>>  | 
>>  | libiberty: Disable hwcaps for sha1.o
>>  | 
>>  | This patch
>>  | 
>>  | commit bf4f40cc3195eb7b900bf5535cdba1ee51fdbb8e
>>  | Author: Jakub Jelinek 
>>  | Date:   Tue Nov 28 13:14:05 2023 +0100
>>  | ... 26 lines of the commit log omitted.
>>  | ... applied on top of baseline commit:
>>  | 4c909c6ee38 In 'libgomp.c/target-simd-clone-{1,2,3}.c', restrict
>>  | 'scan-offload-ipa-dump's to 'only_for_offload_target amdgcn-amdhsa'
>> 
>> Results changed to
>> # reset_artifacts:
>> -10
>> # true:
>> 0
>> # build_abe gcc:
>> # FAILED
>> # First few build errors in logs:
>> # 00:02:15 gccgo: fatal error: Killed signal terminated program go1
>> # 00:02:23 gcc: error: @HWCAP_CFLAGS@: linker input file not found: No such
>> file or directory
>> # 00:02:23 make[2]: *** [Makefile:1219: regex.o] Error 1
>> # 00:02:23 make[1]: *** [Makefile:8370: all-libiberty] Error 2
>> # 00:02:23 make: *** [Makefile:1057: all] Error 2
>> # 00:01:06 gccgo: fatal error: Killed signal terminated program go1
>> # 00:02:10 gcc: error: @HWCAP_CFLAGS@: linker input file not found: No such
>> file or directory
>> # 00:02:10 make[2]: *** [Makefile:776: fdmatch.o] Error 1
>> # 00:02:10 gcc: error: @HWCAP_CFLAGS@: linker input file not found: No such
>> file or directory
>> # 00:02:10 make[2]: *** [Makefile:805: filedescriptor.o] Error 1
>> # 00:02:10 gcc: error: @HWCAP_CFLAGS@: linker input file not found: No such
>> file or directory
>> # 00:02:10 make[2]: *** [Makefile:837: fnmatch.o] Error 1
>> # 00:02:10 gcc: error: @HWCAP_CFLAGS@: linker input file not found: No such
>> file or directory
>> # 00:02:10 make[2]: *** [Makefile:817: filename_cmp.o] Error 1
>> # 00:02:10 gcc: error: @HWCAP_CFLAGS@: linker input file not found: No such
>> file or directory
>> 
>> From
>> # reset_artifacts:
>> -10
>> # true:
>> 0
>> # build_abe gcc:
>> 1
>> 
>> The configuration of this build is:
>> CI config tcwg_gcc_build master-arm
>> 
>> -8<--8<--8<--
>> The information below can be used to reproduce a debug environment:
>> 
>> Current build :
>> https://ci.linaro.org/job/tcwg_gcc_build--master-arm-precommit/4887/artifact/artifacts
>> Reference build :
>> https://ci.linaro.org/job/tcwg_gcc_build--master-arm-build/1444/artifact/artifacts
>> 
> 
> As is customary for gcc patches, this patch didn't include the generated
> files to simplify review.  Thus, to test it in any meaningful way, one
> needs to run aclocal and autoconf before configuring/building.  Not
> doing so just produces meaningless mails from the CI.
> 
> Rainer
> ___
> linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
> To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org



Re: [Linaro-TCWG-CI] gcc-14-6861-g200531d5b9f: FAIL: 1 regressions on arm

2024-01-05 Thread Maxim Kuvyrkov
> On Dec 30, 2023, at 06:24, Andrew Pinski (QUIC) via Gcc-regression 
>  wrote:
> 
>> -Original Message-
>> From: ci_not...@linaro.org 
>> Sent: Friday, December 29, 2023 7:40 AM
>> To: Andrew Pinski (QUIC) 
>> Cc: gcc-regress...@gcc.gnu.org
>> Subject: [Linaro-TCWG-CI] gcc-14-6861-g200531d5b9f: FAIL: 1 regressions
>> on arm
>> 
>> Dear contributor, our automatic CI has detected problems related to your
>> patch(es).  Please find some details below.  If you have any questions, 
>> please
>> follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
>> #linaro-tcwg
>> channel, or ping your favourite Linaro toolchain developer on the usual 
>> project
>> channel.
>> 
>> We appreciate that it might be difficult to find the necessary logs or 
>> reproduce
>> the issue locally. If you can't get what you need from our CI within 
>> minutes, let
>> us know and we will be happy to help.
>> 
>> We track this report status in https://linaro.atlassian.net/browse/GNU-1091 ,
>> please let us know if you are looking at the problem and/or when you have a
>> fix.
> 
> First I suspect this was failing before r14-6822-g01f4251b8775c8 and I just 
> returned it to that state.
> 
> The big ask I have for reports like this is to include the exact gcc 
> configure line that was used.
> In this case, is GCC configured to include neon by default? If so, then the 
> testcase needs to be updated to add an option to disable neon.

Hi Andrew,

We'll soon have configure and make info in the report.

> If not, then someone else will need to look into why the testcase is failing.
> Basically, the update I did was to disable vectorization on a loop which was 
> not being vectorized before r14-6822-g01f4251b8775c8.

Ack.

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> Thanks,
> Andrew Pinski
> 
>> 
>> In  master-arm after:
>> 
>>  | commit gcc-14-6861-g200531d5b9f
>>  | Author: Andrew Pinski 
>>  | Date:   Thu Dec 28 20:26:01 2023 -0800
>>  |
>>  | Fix gen-vect-26.c testcase after loops with multiple exits [PR113167]
>>  |
>>  | This fixes the gcc.dg/tree-ssa/gen-vect-26.c testcase by adding
>>  | `#pragma GCC novector` in front of the loop that is doing the checking
>>  | of the result. We only want to test the first loop to see if it can be
>>  | vectorize.
>>  |
>>  | ... 9 lines of the commit log omitted.
>> 
>> FAIL: 1 regressions
>> 
>> regressions.sum:
>> === gcc tests ===
>> 
>> Running gcc:gcc.dg/tree-ssa/tree-ssa.exp ...
>> FAIL: gcc.dg/tree-ssa/gen-vect-26.c scan-tree-dump-times vect "Alignment of
>> access forced using peeling" 1
>> 
>> === Results Summary ===
>> 
>> You can find the failure logs in *.log.1.xz files in
>> - https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-arm-
>> build/1147/artifact/artifacts/00-sumfiles/ .
>> The full lists of regressions and progressions are in
>> - https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-arm-
>> build/1147/artifact/artifacts/notify/ .
>> The list of [ignored] baseline and flaky failures are in
>> - https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-arm-
>> build/1147/artifact/artifacts/sumfiles/xfails.xfail .
>> 
>> The configuration of this build is:
>> CI config tcwg_gnu_cross_check_gcc master-arm
>> 
>> -8<--8<--8<
>> --
>> The information below can be used to reproduce a debug environment:
>> 
>> Current build   : https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-
>> arm-build/1147/artifact/artifacts
>> Reference build : https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--
>> master-arm-build/1146/artifact/artifacts
>> 
>> Reproduce last good and first bad builds: https://git-
>> us.linaro.org/toolchain/ci/interesting-
>> commits.git/plain/gcc/sha1/200531d5b9fb99eca2b0d6b8d1e42d17641322
>> 5f/tcwg_gnu_cross_check_gcc/master-arm/reproduction_instructions.txt
>> 
>> Full commit : https://github.com/gcc-
>> mirror/gcc/commit/200531d5b9fb99eca2b0d6b8d1e42d176413225f
>> 
>> List of configurations that regressed due to this commit :
>> * tcwg_gnu_cross_check_gcc
>> ** master-arm
>> *** FAIL: 1 regressions
>> *** https://git-us.linaro.org/toolchain/ci/interesting-
>> commits.git/plain/gcc/sha1/200531d5b9fb99eca2b0d6b8d1e42d17641322
>> 5f/tcwg_gnu_cross_check_gcc/master-arm/details.txt
>> *** https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-arm-
>> build/1147/artifact/artifacts




Re: [Linaro-TCWG-CI] glibc-2.38.9000-367-g667f277c78: FAIL: 1 regressions on aarch64

2023-12-22 Thread Maxim Kuvyrkov
Hi Szabolcs,
Hi Joe,

This report really shows the power of Linaro TCWG CI: a glibc patch causing a 
testsuite regression in GCC!  Catching such problems by hand is difficult.

This report is not a fluke; it has been confirmed on 2 independent 
configurations (see https://linaro.atlassian.net/browse/GNU-1084).

Would you please investigate this?  And don't hesitate to ask for our 
assistance in reproducing this.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Dec 21, 2023, at 03:20, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1084 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In  master-aarch64 after:
> 
>  | commit glibc-2.38.9000-367-g667f277c78
>  | Author: Joe Ramsay 
>  | Date:   Mon Dec 18 15:51:16 2023 +
>  | 
>  | aarch64: Add SIMD attributes to math functions with vector versions
>  | 
>  | Added annotations for autovec by GCC and GFortran - this enables GCC
>  | >= 9 to autovectorise math calls at -Ofast.
>  | 
>  | Reviewed-by: Szabolcs Nagy 
> 
> FAIL: 1 regressions
> 
> regressions.sum:
> === gfortran tests ===
> 
> Running gfortran:gfortran.dg/vect/vect.exp ...
> FAIL: gfortran.dg/vect/vect-8.f90 -O   scan-tree-dump-times vect "vectorized 
> 24 loops" 1
> 
> === Results Summary ===
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-build/821/artifact/artifacts/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-build/821/artifact/artifacts/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-build/821/artifact/artifacts/sumfiles/xfails.xfail
>  .
> 
> The configuration of this build is:
> CI config tcwg_gnu_native_check_gcc master-aarch64
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-build/821/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-build/820/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/glibc/sha1/667f277c782f4457603e6d192bac294e5f2c5186/tcwg_gnu_native_check_gcc/master-aarch64/reproduction_instructions.txt
> 
> Full commit : 
> https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=667f277c782f4457603e6d192bac294e5f2c5186
> 
> List of configurations that regressed due to this commit :
> * tcwg_gnu_native_check_gcc
> ** master-aarch64
> *** FAIL: 1 regressions
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/glibc/sha1/667f277c782f4457603e6d192bac294e5f2c5186/tcwg_gnu_native_check_gcc/master-aarch64/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_gnu_native_check_gcc--master-aarch64-build/821/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc-14-6741-ge7dd72aefed: Failure on arm

2023-12-22 Thread Maxim Kuvyrkov
Hi kernel folks,

It seems a new gcc patch uncovered a potential problem in the btrfs code; see 
the warning/error below.

Does this look like a legit kernel problem?

--
Maxim Kuvyrkov
https://www.linaro.org

> On Dec 22, 2023, at 06:54, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> We appreciate that it might be difficult to find the necessary logs or 
> reproduce the issue locally. If you can't get what you need from our CI 
> within minutes, let us know and we will be happy to help.
> 
> We track this report status in https://linaro.atlassian.net/browse/GNU-1087 , 
> please let us know if you are looking at the problem and/or when you have a 
> fix.
> 
> In CI config tcwg_kernel/gnu-master-arm-stable-allmodconfig after:
> 
>  | commit gcc-14-6741-ge7dd72aefed
>  | Author: Jakub Jelinek 
>  | Date:   Wed Dec 20 11:31:18 2023 +0100
>  | 
>  | c: Split -Wcalloc-transposed-args warning from -Walloc-size, 
> -Walloc-size fixes
>  | 
>  | The following patch changes -Walloc-size warning to no longer warn
>  | about int *p = calloc (1, sizeof (int));, because as discussed earlier,
>  | the size is IMNSHO sufficient in that case, for alloc_size with 2
>  | arguments warns if the product of the 2 arguments is insufficiently 
> small.
>  | 
>  | ... 37 lines of the commit log omitted.
> 
> Results changed to
> # reset_artifacts:
> -10
> # build_abe binutils:
> -9
> # build_abe stage1:
> -5
> # build_abe qemu:
> -2
> # linux_n_obj:
> 23978
> # First few build errors in logs:
> 
> # 00:33:29 fs/btrfs/send.c:8208:44: error: ‘kvcalloc’ sizes specified with 
> ‘sizeof’ in the earlier argument and not in the later argument 
> [-Werror=calloc-transposed-args]
> # 00:33:44 make[4]: *** [scripts/Makefile.build:243: fs/btrfs/send.o] Error 1
> # 00:35:42 make[3]: *** [scripts/Makefile.build:480: fs/btrfs] Error 2
> # 00:37:40 make[2]: *** [scripts/Makefile.build:480: fs] Error 2
> # 00:47:05 make[1]: *** 
> [/home/tcwg-buildslave/workspace/tcwg_kernel_1/linux/Makefile:1913: .] Error 2
> # 00:47:05 make: *** [Makefile:234: __sub-make] Error 2
> 
> From
> # reset_artifacts:
> -10
> # build_abe binutils:
> -9
> # build_abe stage1:
> -5
> # build_abe qemu:
> -2
> # linux_n_obj:
> 33156
> # linux build successful:
> all
> # linux boot successful:
> boot
> 
> The configuration of this build is:
> CI config tcwg_kernel/gnu-master-arm-stable-allmodconfig
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_kernel--gnu-master-arm-stable-allmodconfig-build/83/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_kernel--gnu-master-arm-stable-allmodconfig-build/82/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/e7dd72aefed851d11655aa301d6e394ec9805e0d/tcwg_kernel/gnu-master-arm-stable-allmodconfig/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/e7dd72aefed851d11655aa301d6e394ec9805e0d
> 
> List of configurations that regressed due to this commit :
> * tcwg_kernel
> ** gnu-master-arm-stable-allmodconfig
> *** Failure
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/e7dd72aefed851d11655aa301d6e394ec9805e0d/tcwg_kernel/gnu-master-arm-stable-allmodconfig/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_kernel--gnu-master-arm-stable-allmodconfig-build/83/artifact/artifacts




Re: [Linaro-TCWG-CI] gcc-14-5673-g33c2b70dbab: FAIL: 1 regressions: 8 progressions on aarch64

2023-12-06 Thread Maxim Kuvyrkov
> On Dec 4, 2023, at 12:25, Tamar Christina  wrote:
...

>> - tcwg_bmk-fujitsu_speed-cpu2017speed
>>  - gnu-aarch64-master-O2: slowed down by 19% - 644.nab_s:[.]
>> exp@@GLIBC_2.29
> 
> This is suspect: it says the slowdown is in exp in glibc, which would be
> unrelated to my patch.

Hi Tamar,

Linaro benchmarking builds the whole sysroot with the "new" compiler, including 
glibc.  It may be interesting to double-check code-gen differences in glibc's 
exp() and make sure there are no obvious bad choices.

--
Maxim Kuvyrkov
https://www.linaro.org

> 
> Tamar
> 
>> 
>> The link above has the full results.
>> 
>> ci_not...@linaro.org writes:
>> 
>>> Dear contributor, our automatic CI has detected problems related to your
>> patch(es). Please
>>> find some details below. If you have any questions, please follow up on
>>> linaro-toolchain@lists.linaro.org mailing list, Libera's #linaro-tcwg 
>>> channel, or
>> ping
>>> your favourite Linaro toolchain developer on the usual project channel.
>>> 
>>> In  master-aarch64 after:
>>> 
>>>  | commit gcc-14-5673-g33c2b70dbab
>>>  | Author: Tamar Christina 
>>>  | Date:   Tue Nov 21 13:20:39 2023 +
>>>  |
>>>  | AArch64: Add new generic-armv8-a CPU and make it the default.
>>>  |
>>>  | This patch adds a new generic scheduling model "generic-armv8-a" and
>> makes it
>>>  | the default for all Armv8 architectures.
>>>  |
>>>  | -mcpu=generic and -mtune=generic is kept around for those that 
>>> really want
>> the
>>>  | previous cost model.
>>>  | ... 34 lines of the commit log omitted.
>>> 
>>> FAIL: 1 regressions: 8 progressions
>>> 
>>> regressions.sum:
>>>=== gcc tests ===
>>> 
>>> Running gcc:gcc.target/aarch64/sve/aarch64-sve.exp ...
>>> FAIL: gcc.target/aarch64/sve/mask_struct_load_3_run.c execution test
>>> 
>>>=== Results Summary ===
>>> 
>>> progressions.sum:
>>>=== gcc tests ===
>>> 
>>> Running gcc:gcc.dg/vect/vect.exp ...
>>> FAIL: gcc.dg/vect/vect-reduc-pattern-1b-big-array.c -flto -ffat-lto-objects 
>>> (test
>> for excess errors)
>>> UNRESOLVED: gcc.dg/vect/vect-reduc-pattern-1b-big-array.c compilation failed
>> to produce executable
>>> UNRESOLVED: gcc.dg/vect/vect-reduc-pattern-1b-big-array.c -flto -ffat-lto-
>> objects compilation failed to produce executable
>>> FAIL: gcc.dg/vect/vect-reduc-pattern-1b-big-array.c (test for excess errors)
>>> UNRESOLVED: gcc.dg/vect/vect-reduc-pattern-1b.c compilation failed to
>> produce executable
>>> UNRESOLVED: gcc.dg/vect/vect-reduc-pattern-1b.c -flto -ffat-lto-objects
>> compilation failed to produce executable
>>> FAIL: gcc.dg/vect/vect-reduc-pattern-1b.c (test for excess errors)
>>> ... and 3 more entries
>>> 
>>> You can find the failure logs in *.log.1.xz files in
>>> - https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-aarch64-
>> build/1102/artifact/artifacts/00-sumfiles/ .
>>> The full lists of regressions and progressions are in
>>> - https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-aarch64-
>> build/1102/artifact/artifacts/notify/ .
>>> The list of [ignored] baseline and flaky failures are in
>>> - https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-aarch64-
>> build/1102/artifact/artifacts/sumfiles/xfails.xfail .
>>> 
>>> The configuration of this build is:
>>> CI config tcwg_gnu_cross_check_gcc master-aarch64
>>> 
>>> -8<--8<--8<---
>> ---
>>> The information below can be used to reproduce a debug environment:
>>> 
>>> Current build   : 
>>> https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-
>> aarch64-build/1102/artifact/artifacts
>>> Reference build : 
>>> https://ci.linaro.org/job/tcwg_gnu_cross_check_gcc--master-
>> aarch64-build/1101/artifact/artifacts
>>> 
>>> Reproduce last good and first bad builds: https://git-
>> us.linaro.org/toolchain/ci/interesting-
>> commits.git/plain/gcc/sha1/33c2b70dbabc02788caabcbc66b7baeafeb95bcf/tcw
>> g_gnu_cross_check_gcc/master-aarch64/reproduction_instructions.txt
>>> 
>>> Full commit : https://github.com/gcc-
>> mirror/gcc/commit/33c2b70dbabc02788caabcbc66b7baeafeb95bcf
>

Re: [Linaro-TCWG-CI] v6.6-rc1-17-g1c6fdbd8f246: Failure on arm

2023-11-01 Thread Maxim Kuvyrkov
> On Nov 1, 2023, at 22:22, Nick Desaulniers  wrote:
> 
> On Wed, Nov 1, 2023 at 11:02 AM Maxim Kuvyrkov
>  wrote:
>> 
>> Hi Nick,
>> Hi Nathan,
>> 
>> I don't see mistakes from CI here.  Are you using tip-of-trunk LLVM?
>> 
>> This report was generated for LLVM revision 
>> llvmorg-18-init-10263-g1abd8d1a8d96 and linux revision 
>> v6.6-rc1-17-g1c6fdbd8f246 .  The build log with errors is at [1].
>> 
>> It seems that a later commit in Linux kernel fixed some of the errors in 
>> [1], but still with the current linux.git:master 2 errors remain (see [2]):
>> 
>> 00:28:15 fs/bcachefs/chardev.c:655:6: error: stack frame size (1032) exceeds 
>> limit (1024) in 'bch2_fs_ioctl' [-Werror,-Wframe-larger-than]
>> 00:28:15   655 | long bch2_fs_ioctl(struct bch_fs *c, unsigned cmd, void 
>> __user *arg)
>> 00:28:15   |  ^
>> 00:28:15 1 error generated.
>> 00:28:15 make[4]: *** [scripts/Makefile.build:243: fs/bcachefs/chardev.o] 
>> Error 1
>> 00:29:39 fs/bcachefs/fs-common.c:356:5: error: stack frame size (1128) 
>> exceeds limit (1024) in 'bch2_rename_trans' [-Werror,-Wframe-larger-than]
>> 00:29:39   356 | int bch2_rename_trans(struct btree_trans *trans,
>> 00:29:39   | ^
>> 00:29:39 1 error generated.
> 
> These are different warnings in different object files than from the
> initial report.

Oh, indeed.

> 
> Maybe bisection started due to those, but didn't notice different
> warnings going further back, because of -Werror, and reported the
> initial commit that was problematic (even if the warnings differed and
> were since fixed).

This is exactly right.

--
Maxim Kuvyrkov
https://www.linaro.org


> 
>> 
>> These errors are near-misses, so if you are using a different LLVM revision, 
>> they can disappear.
>> 
>> [1] 
>> https://ci.linaro.org/job/tcwg_kernel--llvm-master-arm-mainline-allmodconfig-build/110/artifact/artifacts/06-build_linux/console.log.xz
>> 
>> [2] 
>> https://ci.linaro.org/job/tcwg_kernel--llvm-master-arm-mainline-allmodconfig-build/111/artifact/artifacts/06-build_linux/console.log.xz
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>>> On Nov 1, 2023, at 21:22, Nathan Chancellor  wrote:
>>> 
>>> On Wed, Nov 01, 2023 at 08:54:26AM -0700, Nick Desaulniers wrote:
>>>> On Wed, Nov 1, 2023 at 7:42 AM  wrote:
>>>>> 
>>>>> Dear contributor, our automatic CI has detected problems related to your 
>>>>> patch(es).  Please find some details below.  If you have any questions, 
>>>>> please follow up on linaro-toolchain@lists.linaro.org mailing list, 
>>>>> Libera's #linaro-tcwg channel, or ping your favourite Linaro toolchain 
>>>>> developer on the usual project channel.
>>>>> 
>>>>> In CI config tcwg_kernel/llvm-master-arm-mainline-allmodconfig after:
>>>> 
>>>> ok, so ARCH=arm allmodconfig on mainline...
>>>> 
>>>>> 
>>>>> | commit v6.6-rc1-17-g1c6fdbd8f246
>>>>> | Author: Kent Overstreet 
>>>>> | Date:   Thu Mar 16 22:18:50 2017 -0800
>>>>> |
>>>>> | bcachefs: Initial commit
>>>>> |
>>>>> | Initially forked from drivers/md/bcache, bcachefs is a new 
>>>>> copy-on-write
>>>>> | filesystem with every feature you could possibly want.
>>>>> |
>>>>> | Website: https://bcachefs.org
>>>>> |
>>>>> | ... 1 lines of the commit log omitted.
>>>>> 
>>>>> Results changed to
>>>>> # reset_artifacts:
>>>>> -10
>>>>> # build_abe binutils:
>>>>> -9
>>>>> # build_kernel_llvm:
>>>>> -5
>>>>> # build_abe qemu:
>>>>> -2
>>>>> # linux_n_obj:
>>>>> 23730
>>>>> # First few build errors in logs:
>>>>> 
>>>>> # 00:23:16 fs/bcachefs/btree_cache.h:45:43: error: array index 0 is past 
>>>>> the end of the array (that has type 'const __u64[0]' (aka 'const unsigned 
>>>>> long long[0]')) [-Werror,-Warray-bounds]
>>>>> # 00:23:17 fs/bcachefs/alloc.c:332:9: error: call to undeclared function 
>>>>> 'COUNT_ARGS'; ISO C99 and later do not support implicit function 
>>>>> declarations [-Wimplicit-function-declaration]
>>>> 
>>>> ^
>>>> $ file fs/bcachefs/alloc.c
>>>> fs/bcachefs/

Re: [Linaro-TCWG-CI] gcc-14-5032-ge3da1d7bb28: Failure on arm

2023-11-01 Thread Maxim Kuvyrkov
Hi Richard,

This patch also breaks profiled_bootstrap on aarch64-linux-gnu, which may be 
easier to reproduce; see [1].

[1] 
https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/e3da1d7bb288c8c864f0284bc4bc5877b466a2f7/tcwg_bootstrap_build/master-aarch64-bootstrap_profiled/details.txt

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Nov 1, 2023, at 15:46, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> In bootstrap_build master-arm-bootstrap_profiled_lto_lean after:
> 
>  | commit gcc-14-5032-ge3da1d7bb28
>  | Author: Richard Biener 
>  | Date:   Tue Oct 31 10:13:13 2023 +0100
>  | 
>  | tree-optimization/112305 - SCEV cprop and conditional undefined 
> overflow
>  | 
>  | The following adjusts final value replacement to also rewrite the
>  | replacement to defined overflow behavior if there's conditionally
>  | evaluated stmts (with possibly undefined overflow), not only when
>  | we "folded casts".  The patch hooks into expression_expensive for
>  | this.
>  | ... 10 lines of the commit log omitted.
> 
> Results changed to
> # reset_artifacts:
> -10
> # true:
> 0
> # build_abe bootstrap_profiled_lto_lean:
> # FAILED
> # First few build errors in logs:
> # 01:55:54 
> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/libiberty/regex.c:5549:
>  internal compiler error: Segmentation fault
> # 01:55:54 make[4]: *** [/tmp/ccj2LJs3.mk:380: 
> /tmp/ccIWXeRO.ltrans126.ltrans.o] Error 1
> # 01:57:45 lto-wrapper: fatal error: make returned 2 exit status
> # 01:57:46 /usr/bin/ld: error: lto-wrapper failed
> # 01:57:46 collect2: error: ld returned 1 exit status
> # 01:57:46 make[3]: *** 
> [/home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/gcc/cp/Make-lang.in:145:
>  cc1plus] Error 1
> # 01:57:46 make[2]: *** [Makefile:5260: all-stagefeedback-gcc] Error 2
> # 01:57:46 make[1]: *** [Makefile:26669: stagefeedback-bubble] Error 2
> # 01:57:46 make: *** [Makefile:26689: profiledbootstrap] Error 2
> # 00:06:15 make[3]: [Makefile:1822: 
> armv8l-unknown-linux-gnueabihf/bits/largefile-config.h] Error 1 (ignored)
> # 00:47:56 make[3]: [Makefile:1822: 
> armv8l-unknown-linux-gnueabihf/bits/largefile-config.h] Error 1 (ignored)
> # 01:09:35 make[3]: [Makefile:1822: 
> armv8l-unknown-linux-gnueabihf/bits/largefile-config.h] Error 1 (ignored)
> # 01:14:05 
> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/libiberty/regex.c:5549:1:
>  internal compiler error: Segmentation fault
> # 01:14:05 
> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/libiberty/regex.c:5549:1:
>  internal compiler error: Segmentation fault
> # 01:14:05 
> /home/tcwg-buildslave/workspace/tcwg_gnu_4/abe/snapshots/gcc.git~master/libiberty/regex.c:5549:1:
>  internal compiler error: Segmentation fault
> # 01:14:05 make[4]: *** [/tmp/ccCI7mJj.mk:20: /tmp/ccH1y1BV.ltrans6.ltrans.o] 
> Error 1
> # 01:14:05 make[4]: *** [/tmp/cciXJOsB.mk:20: /tmp/ccaXf4f5.ltrans6.ltrans.o] 
> Error 1
> # 01:14:05 make[4]: *** [/tmp/cccYndDm.mk:20: /tmp/cc6tlPuq.ltrans6.ltrans.o] 
> Error 1
> # 01:14:07 lto-wrapper: fatal error: make returned 2 exit status
> 
> From
> # reset_artifacts:
> -10
> # true:
> 0
> # build_abe bootstrap_profiled_lto_lean:
> 1
> 
> The configuration of this build is:
> CI config tcwg_bootstrap_build/master-arm-bootstrap_profiled_lto_lean
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_bootstrap_build--master-arm-bootstrap_profiled_lto_lean-build/264/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_bootstrap_build--master-arm-bootstrap_profiled_lto_lean-build/263/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/e3da1d7bb288c8c864f0284bc4bc5877b466a2f7/tcwg_bootstrap_build/master-arm-bootstrap_profiled_lto_lean/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/e3da1d7bb288c8c864f0284bc4bc5877b466a2f7
> 
> Latest bug report status : https://linaro.atlassian.net/browse/GNU-989
> 
> List of configurations that regressed due to t

Re: [Linaro-TCWG-CI] v6.6-rc1-17-g1c6fdbd8f246: Failure on arm

2023-11-01 Thread Maxim Kuvyrkov
Hi Nick,
Hi Nathan,

I don't see mistakes from CI here.  Are you using tip-of-trunk LLVM?

This report was generated for LLVM revision llvmorg-18-init-10263-g1abd8d1a8d96 
and linux revision v6.6-rc1-17-g1c6fdbd8f246 .  The build log with errors is at 
[1].

It seems that a later commit in the Linux kernel fixed some of the errors in 
[1], but 2 errors still remain with the current linux.git master (see [2]):
00:28:15 fs/bcachefs/chardev.c:655:6: error: stack frame size (1032) exceeds 
limit (1024) in 'bch2_fs_ioctl' [-Werror,-Wframe-larger-than]
00:28:15   655 | long bch2_fs_ioctl(struct bch_fs *c, unsigned cmd, void __user 
*arg)
00:28:15   |  ^
00:28:15 1 error generated.
00:28:15 make[4]: *** [scripts/Makefile.build:243: fs/bcachefs/chardev.o] Error 
1
00:29:39 fs/bcachefs/fs-common.c:356:5: error: stack frame size (1128) exceeds 
limit (1024) in 'bch2_rename_trans' [-Werror,-Wframe-larger-than]
00:29:39   356 | int bch2_rename_trans(struct btree_trans *trans,
00:29:39   | ^
00:29:39 1 error generated.

These errors are near-misses, so they may disappear if you are using a 
different LLVM revision.

[1] 
https://ci.linaro.org/job/tcwg_kernel--llvm-master-arm-mainline-allmodconfig-build/110/artifact/artifacts/06-build_linux/console.log.xz

[2] 
https://ci.linaro.org/job/tcwg_kernel--llvm-master-arm-mainline-allmodconfig-build/111/artifact/artifacts/06-build_linux/console.log.xz

--
Maxim Kuvyrkov
https://www.linaro.org

> On Nov 1, 2023, at 21:22, Nathan Chancellor  wrote:
> 
> On Wed, Nov 01, 2023 at 08:54:26AM -0700, Nick Desaulniers wrote:
>> On Wed, Nov 1, 2023 at 7:42 AM  wrote:
>>> 
>>> Dear contributor, our automatic CI has detected problems related to your 
>>> patch(es).  Please find some details below.  If you have any questions, 
>>> please follow up on linaro-toolchain@lists.linaro.org mailing list, 
>>> Libera's #linaro-tcwg channel, or ping your favourite Linaro toolchain 
>>> developer on the usual project channel.
>>> 
>>> In CI config tcwg_kernel/llvm-master-arm-mainline-allmodconfig after:
>> 
>> ok, so ARCH=arm allmodconfig on mainline...
>> 
>>> 
>>>  | commit v6.6-rc1-17-g1c6fdbd8f246
>>>  | Author: Kent Overstreet 
>>>  | Date:   Thu Mar 16 22:18:50 2017 -0800
>>>  |
>>>  | bcachefs: Initial commit
>>>  |
>>>  | Initially forked from drivers/md/bcache, bcachefs is a new 
>>> copy-on-write
>>>  | filesystem with every feature you could possibly want.
>>>  |
>>>  | Website: https://bcachefs.org
>>>  |
>>>  | ... 1 lines of the commit log omitted.
>>> 
>>> Results changed to
>>> # reset_artifacts:
>>> -10
>>> # build_abe binutils:
>>> -9
>>> # build_kernel_llvm:
>>> -5
>>> # build_abe qemu:
>>> -2
>>> # linux_n_obj:
>>> 23730
>>> # First few build errors in logs:
>>> 
>>> # 00:23:16 fs/bcachefs/btree_cache.h:45:43: error: array index 0 is past 
>>> the end of the array (that has type 'const __u64[0]' (aka 'const unsigned 
>>> long long[0]')) [-Werror,-Warray-bounds]
>>> # 00:23:17 fs/bcachefs/alloc.c:332:9: error: call to undeclared function 
>>> 'COUNT_ARGS'; ISO C99 and later do not support implicit function 
>>> declarations [-Wimplicit-function-declaration]
>> 
>> ^
>> $ file fs/bcachefs/alloc.c
>> fs/bcachefs/alloc.c: cannot open `fs/bcachefs/alloc.c' (No such file
>> or directory)
>> 
>>> # 00:23:17 make[4]: *** [scripts/Makefile.build:243: fs/bcachefs/alloc.o] 
>>> Error 1
>>> # 00:23:29 fs/bcachefs/btree_cache.h:45:43: error: array index 0 is past 
>>> the end of the array (that has type 'const __u64[0]' (aka 'const unsigned 
>>> long long[0]')) [-Werror,-Warray-bounds]
>>> # 00:23:30 make[4]: *** [scripts/Makefile.build:243: fs/bcachefs/bset.o] 
>>> Error 1
>> 
>> ^
>> $ make LLVM=1 ARCH=arm allmodconfig fs/bcachefs/bset.o
>>  CC [M]  fs/bcachefs/bset.o
>> $
>> 
>>> # 00:23:33 fs/bcachefs/btree_cache.h:45:43: error: array index 0 is past 
>>> the end of the array (that has type 'const __u64[0]' (aka 'const unsigned 
>>> long long[0]')) [-Werror,-Warray-bounds]
>>> # 00:23:33 fs/bcachefs/btree_cache.h:45:43: error: array index 0 is past 
>>> the end of the array (that has type 'const __u64[0]' (aka 'const unsigned 
>>> long long[0]')) [-Werror,-Warray-bounds]
>>> # 00:23:33 fs/bcachefs/btree_cache.c:67:9: error: array index 0 is past the 
>>> end of the array (that has type 'const __u64[0]' (aka 'const unsigned long 
>>>

Fwd: [Linaro-TCWG-CI] llvmorg-18-init-7933-ge13bed4c5f35: slowed down by 6% - 464.h264ref on aarch64 O2

2023-10-09 Thread Maxim Kuvyrkov
Hi Dmitriy,

Linaro Benchmarking CI has flagged several interesting code-speed and code-size 
regressions for your patch -- see [1].

In particular, could you check whether the regressions below can be avoided:
- grew in size by 21% - 473.astar:[.] _ZN7way2obj12releasepointEii
- slowed down by 61% - 505.mcf_r:[.] price_out_impl

Both of these are for 32-bit ARM, but AArch64 also has code-speed and code-size 
regressions.

Let me know if you need any assistance in reproducing these problems.

[1] https://linaro.atlassian.net/browse/LLVM-1001

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> Begin forwarded message:
> 
> From: ci_not...@linaro.org
> Subject: [Linaro-TCWG-CI] llvmorg-18-init-7933-ge13bed4c5f35: slowed down by 
> 6% - 464.h264ref on aarch64 O2
> Date: October 8, 2023 at 04:26:39 GMT+4
> To: maxim.kuvyr...@linaro.org
> Reply-To: linaro-toolchain@lists.linaro.org
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> In CI config tcwg_bmk-code_speed-spec2k6/llvm-aarch64-master-O2 after:
> 
>  | commit llvmorg-18-init-7933-ge13bed4c5f35
>  | Author: Dmitriy Smirnov 
>  | Date:   Fri Oct 6 11:15:00 2023 +0100
>  | 
>  | [PATCH] [llvm] [InstCombine] Canonicalise ADD+GEP
>  | 
>  | This patch tries to canonicalise add + gep to gep + gep.
>  | 
>  | Co-authored-by: Paul Walker 
>  | 
>  | Reviewed By: nikic
>  | ... 2 lines of the commit log omitted.
> 
> the following benchmarks slowed down by more than 3%:
> - slowed down by 6% - 464.h264ref - from 11126 to 11766 perf samples
> the following hot functions slowed down by more than 15% (but their 
> benchmarks slowed down by less than 3%):
> - slowed down by 44% - 464.h264ref:[.] FastFullPelBlockMotionSearch - from 
> 1531 to 2206 perf samples
> 
> The configuration of this build is:
> The reproducer instructions below can be used to re-build both "first_bad" 
> and "last_good" cross-toolchains used in this bisection.  Naturally, the 
> scripts will fail when triggering benchmarking jobs if you don't have access 
> to Linaro TCWG CI.
> 
> Configuration:
> - Benchmark: 
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: O2
> - Hardware: NVidia TX1 4x Cortex-A57
> 
> This benchmarking CI is a work in progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--llvm-aarch64-master-O2-build/142/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--llvm-aarch64-master-O2-build/141/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/llvm/sha1/e13bed4c5f3544c076ce57e36d9a11eefa5a7815/tcwg_bmk-code_speed-spec2k6/llvm-aarch64-master-O2/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/llvm/llvm-project/commit/e13bed4c5f3544c076ce57e36d9a11eefa5a7815
> 
> Latest bug report status : https://linaro.atlassian.net/browse/LLVM-1001
> 
> List of configurations that regressed due to this commit :
> * tcwg_bmk-code_speed-spec2k6
> ** llvm-aarch64-master-O2
> *** slowed down by 6% - 464.h264ref
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/llvm/sha1/e13bed4c5f3544c076ce57e36d9a11eefa5a7815/tcwg_bmk-code_speed-spec2k6/llvm-aarch64-master-O2/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--llvm-aarch64-master-O2-build/142/


___
linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org


Re: [Linaro-TCWG-CI] 13 patches in glibc: FAIL: 7 regressions

2023-10-04 Thread Maxim Kuvyrkov
Hi Arjun,

Please ignore this report.  We had a new machine added to the testing pool, and 
it behaved differently than the others.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Oct 4, 2023, at 22:30, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> In CI config tcwg_glibc_check/master-arm after:
> 
>  | 13 patches in glibc
>  | Patchwork URL: https://patchwork.sourceware.org/patch/76970
>  | 72a0ec189c Move 'rpc' routines from 'inet' into 'nss'
>  | e9e10ad39b Move 'protocols' routines from 'inet' into 'nss'
>  | e0694af485 Move 'networks' routines from 'inet' into 'nss'
>  | 547bcf6d44 Move 'netgroup' routines from 'inet' into 'nss'
>  | 6c43eb641a Move 'hosts' routines from 'inet' into 'nss'
>  | ... and 8 more patches in glibc
>  | ... applied on top of baseline commit:
>  | 1056e5b4c3 tunables: Terminate if end of input is reached (CVE-2023-4911)
> 
> FAIL: 7 regressions
> 
> regressions.sum:
> === glibc tests ===
> 
> Running glibc:io ...
> FAIL: io/tst-close_range 
> 
> Running glibc:misc ...
> FAIL: misc/tst-epoll 
> FAIL: misc/tst-epoll-time64 
> FAIL: misc/tst-mount 
> FAIL: misc/tst-process_mrelease 
> ... and 6 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/802/artifact/artifacts/artifacts.precommit/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/802/artifact/artifacts/artifacts.precommit/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/802/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
>  .
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/802/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/650/artifact/artifacts




Re: [Linaro-TCWG-CI] glibc patch #77093: FAIL: 5 regressions

2023-10-04 Thread Maxim Kuvyrkov
Hi Joe,

Please ignore this report.  We had a new machine added to the testing pool, and 
it behaved differently than the others.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Oct 4, 2023, at 22:32, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> In CI config tcwg_glibc_check/master-arm after:
> 
>  | glibc patch https://patchwork.sourceware.org/patch/77093
>  | Author: Joe Ramsay 
>  | Date:   Wed Oct 4 11:58:09 2023 +0100
>  | 
>  | aarch64: Improve vecmath sin routines
>  | 
>  | * Update ULP comment reflecting a new observed max in [-pi/2, pi/2]
>  | * Use the same polynomial in AdvSIMD and SVE, rather than FTRIG 
> instructions
>  | * Improve register use near special-case branch
>  | 
>  | Also use overloaded intrinsics for SVE.
>  | ... applied on top of baseline commit:
>  | 1056e5b4c3 tunables: Terminate if end of input is reached (CVE-2023-4911)
> 
> FAIL: 5 regressions
> 
> regressions.sum:
> === glibc tests ===
> 
> Running glibc:io ...
> FAIL: io/tst-close_range 
> 
> Running glibc:misc ...
> FAIL: misc/tst-epoll 
> FAIL: misc/tst-epoll-time64 
> FAIL: misc/tst-mount 
> FAIL: misc/tst-process_mrelease 
> ... and 2 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/810/artifact/artifacts/artifacts.precommit/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/810/artifact/artifacts/artifacts.precommit/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/810/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
>  .
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/810/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/650/artifact/artifacts




Re: [Linaro-TCWG-CI] glibc patch #77054: FAIL: 5 regressions

2023-10-04 Thread Maxim Kuvyrkov
Hi Siddhesh,

Please ignore this report.  We had a new machine added to the testing pool, and 
it behaved differently than the others.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Oct 4, 2023, at 22:33, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> In CI config tcwg_glibc_check/master-arm after:
> 
>  | glibc patch https://patchwork.sourceware.org/patch/77054
>  | Author: Siddhesh Poyarekar 
>  | Date:   Tue Oct 3 16:11:50 2023 -0400
>  | 
>  | Make all malloc tunables SXID_ERASE
>  | 
>  | The malloc tunables were made SXID_IGNORE to mimic the environment
>  | variables they aliased, in order to maintain compatibility.  This
>  | allowed alteration of allocator behaviour across setuid boundaries,
>  | where a setuid program may ignore the tunable but its non-setuid child
>  | can read it and adjust allocator behaviour accordingly.
>  | ... 10 lines of the commit log omitted.
>  | ... applied on top of baseline commit:
>  | 1056e5b4c3 tunables: Terminate if end of input is reached (CVE-2023-4911)
> 
> FAIL: 5 regressions
> 
> regressions.sum:
> === glibc tests ===
> 
> Running glibc:io ...
> FAIL: io/tst-close_range 
> 
> Running glibc:misc ...
> FAIL: misc/tst-epoll 
> FAIL: misc/tst-epoll-time64 
> FAIL: misc/tst-mount 
> FAIL: misc/tst-process_mrelease 
> ... and 2 more entries
> 
> You can find the failure logs in *.log.1.xz files in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/806/artifact/artifacts/artifacts.precommit/00-sumfiles/
>  .
> The full lists of regressions and progressions are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/806/artifact/artifacts/artifacts.precommit/notify/
>  .
> The list of [ignored] baseline and flaky failures are in
> - 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/806/artifact/artifacts/artifacts.precommit/sumfiles/xfails.xfail
>  .
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-precommit/806/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/650/artifact/artifacts




Re: Regarding installation

2023-10-04 Thread Maxim Kuvyrkov
Hi Ashu,

Go ahead, this is the right place to ask.

--
Maxim Kuvyrkov
https://www.linaro.org

> On Sep 19, 2023, at 11:32, Ashu Jain  wrote:
> 
> We are installing SDK we need some details regarding linaro toolchain
> ___
> linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
> To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org



Re: [PATCH v2] ARM: Block predication on atomics [PR111235]

2023-10-03 Thread Maxim Kuvyrkov
> On Oct 1, 2023, at 00:36, Ramana Radhakrishnan  
> wrote:
> 
> + linaro-toolchain as I don't understand the CI issues on patchwork.
> 
> 
...
> Ok if no regressions but as you might get nagged by the post commit CI ...

I don't see any pre-commit failures for this patch, but regardless of what 
results are for pre-commit CI, there's always a chance to identify problems in 
post-commit CI -- simply because we test wa-a-ay more configurations in 
post-commit CI than in pre-commit CI.

> 
> While it is not policy yet to look at these bots but given the
> enthusiasm at Cauldron for patchwork and pre-commit CI and because all
> my armhf boxes are boxed up, I decided to do something a bit novel !
> 
> I tried reviewing this via patchwork
> 
> https://patchwork.sourceware.org/project/gcc/patch/pawpr08mb8982a6aa40749b74cad14c5783...@pawpr08mb8982.eurprd08.prod.outlook.com/
> 
> and notice that
> 
> https://ci.linaro.org/job/tcwg_gcc_build--master-arm-precommit/2393/artifact/artifacts/artifacts.precommit/notify/mail-body.txt
> says nothing could be built.

Um, no.  This says ...
===
Results changed to
# reset_artifacts:
-10
# true:
0
# build_abe gcc:
1

From
# reset_artifacts:
-10
# true:
0
# build_abe gcc:
1
===
... i.e., build succeeded both before and after patch.  We'll change the 
boilerplate intro for successful builds from ...
"Dear contributor, our automatic CI has detected problems related to your 
patch(es)."
... to ...
"Dear contributor, you are awesome, no CI failures related to your patch(es)".

One thing that is strange -- testsuite builds were not triggered: we have only 
2 reports from build tests, but are missing another 2 reports from testsuite 
tests.

> 
> Possibly worth double checking the status for it being a false
> negative as to why the build failed.

Pre-commit CI is happy with the patch, albeit testsuite checks didn't run for 
some reason.  Regardless, we'll quickly catch and report any fallout in the 
post-commit CI once the patch is merged.

> 
> It was green on patchwork but remembering that Green is not Green for
> CI in patchwork I clicked on the afore mentioned ci.linaro.org link
> and see that it's actually broken.

Unfortunately, I seem to have confused developers about green and red at my 
Cauldron presentation.  "Green/Red" in patchwork means the usual PASS/FAIL.  
It's only in the post-commit CI Jenkins interface that green and red mean 
something different.

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [Linaro-TCWG-CI] gcc-14-4111-g6e92a6a2a72: 483.xalancbmk failed to build

2023-09-27 Thread Maxim Kuvyrkov
Oh, I see this is tracked in 
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111544 .

--
Maxim Kuvyrkov
https://www.linaro.org

> On Sep 27, 2023, at 13:47, Maxim Kuvyrkov  wrote:
> 
> Hi Patrick,
> 
> Did you already get any bug reports for gcc-14-4111-g6e92a6a2a72 ?
> 
> In our benchmarking we see that 483.xalancbmk (from SPEC CPU2006) fails to 
> build on 32-bit ARM.  Let me know if you need any help in reproducing and 
> troubleshooting this.
> 
> Thanks!
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
>> On Sep 27, 2023, at 11:05, ci_not...@linaro.org wrote:
>> 
>> Dear contributor, our automatic CI has detected problems related to your 
>> patch(es).  Please find some details below.  If you have any questions, 
>> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
>> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
>> the usual project channel.
>> 
>> In CI config tcwg_bmk-code_speed-spec2k6/gnu-arm-master-O3 after:
>> 
>> | commit gcc-14-4111-g6e92a6a2a72
>> | Author: Patrick Palka 
>> | Date:   Mon Sep 18 14:47:52 2023 -0400
>> | 
>> | c++: non-dependent assignment checking [PR63198, PR18474]
>> | 
>> | This patch makes us recognize and check non-dependent simple assigments
>> | ahead of time, like we already do for compound assignments.  This means
>> | the templated representation of such assignments will now usually have
>> | an implicit INDIRECT_REF (due to the reference return type), which the
>> | -Wparentheses code needs to handle.  As a drive-by improvement, this
>> | ... 51 lines of the commit log omitted.
>> 
>> the following benchmarks slowed down by more than 3%:
>> - 483.xalancbmk failed to build
>> 
>> The reproducer instructions below can be used to re-build both "first_bad" 
>> and "last_good" cross-toolchains used in this bisection.  Naturally, the 
>> scripts will fail when triggering benchmarking jobs if you don't have access 
>> to Linaro TCWG CI.
>> 
>> Configuration:
>> - Benchmark: 
>> - Toolchain: GCC + Glibc + GNU Linker
>> - Version: all components were built from their tip of trunk
>> - Target: arm-linux-gnueabihf
>> - Compiler flags: O3
>> - Hardware: NVidia TK1 4x Cortex-A15
>> 
>> This benchmarking CI is a work in progress, and we welcome feedback and 
>> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
>> include adding support for SPEC CPU2017 benchmarks and providing "perf 
>> report/annotate" data behind these reports.
>> 
>> -8<--8<--8<--
>> The information below can be used to reproduce a debug environment:
>> 
>> Current build   : 
>> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--gnu-arm-master-O3-build/141/artifact/artifacts
>> Reference build : 
>> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--gnu-arm-master-O3-build/140/artifact/artifacts
>> 
>> Reproduce last good and first bad builds: 
>> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/6e92a6a2a72d3b7a5e1b29042d8a6a43fe1085aa/tcwg_bmk-code_speed-spec2k6/gnu-arm-master-O3/reproduction_instructions.txt
>> 
>> Full commit : 
>> https://github.com/gcc-mirror/gcc/commit/6e92a6a2a72d3b7a5e1b29042d8a6a43fe1085aa
>> 
>> Latest bug report status : https://linaro.atlassian.net/browse/GNU-951
>> 
>> List of configurations that regressed due to this commit :
>> * tcwg_bmk-code_speed-spec2k6
>> ** gnu-arm-master-O3
>> *** 483.xalancbmk failed to build
>> *** 
>> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/6e92a6a2a72d3b7a5e1b29042d8a6a43fe1085aa/tcwg_bmk-code_speed-spec2k6/gnu-arm-master-O3/details.txt
>> *** 
>> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--gnu-arm-master-O3-build/141/
> 
> 



Re: [Linaro-TCWG-CI] gcc-14-4111-g6e92a6a2a72: 483.xalancbmk failed to build

2023-09-27 Thread Maxim Kuvyrkov
Hi Patrick,

Did you already get any bug reports for gcc-14-4111-g6e92a6a2a72 ?

In our benchmarking we see that 483.xalancbmk (from SPEC CPU2006) fails to 
build on 32-bit ARM.  Let me know if you need any help in reproducing and 
troubleshooting this.

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org

> On Sep 27, 2023, at 11:05, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch(es).  Please find some details below.  If you have any questions, 
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
> the usual project channel.
> 
> In CI config tcwg_bmk-code_speed-spec2k6/gnu-arm-master-O3 after:
> 
>  | commit gcc-14-4111-g6e92a6a2a72
>  | Author: Patrick Palka 
>  | Date:   Mon Sep 18 14:47:52 2023 -0400
>  | 
>  | c++: non-dependent assignment checking [PR63198, PR18474]
>  | 
>  | This patch makes us recognize and check non-dependent simple assigments
>  | ahead of time, like we already do for compound assignments.  This means
>  | the templated representation of such assignments will now usually have
>  | an implicit INDIRECT_REF (due to the reference return type), which the
>  | -Wparentheses code needs to handle.  As a drive-by improvement, this
>  | ... 51 lines of the commit log omitted.
> 
> the following benchmarks slowed down by more than 3%:
> - 483.xalancbmk failed to build
> 
> The reproducer instructions below can be used to re-build both "first_bad" 
> and "last_good" cross-toolchains used in this bisection.  Naturally, the 
> scripts will fail when triggering benchmarking jobs if you don't have access 
> to Linaro TCWG CI.
> 
> Configuration:
> - Benchmark: 
> - Toolchain: GCC + Glibc + GNU Linker
> - Version: all components were built from their tip of trunk
> - Target: arm-linux-gnueabihf
> - Compiler flags: O3
> - Hardware: NVidia TK1 4x Cortex-A15
> 
> This benchmarking CI is a work in progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--gnu-arm-master-O3-build/141/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--gnu-arm-master-O3-build/140/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/6e92a6a2a72d3b7a5e1b29042d8a6a43fe1085aa/tcwg_bmk-code_speed-spec2k6/gnu-arm-master-O3/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/6e92a6a2a72d3b7a5e1b29042d8a6a43fe1085aa
> 
> Latest bug report status : https://linaro.atlassian.net/browse/GNU-951
> 
> List of configurations that regressed due to this commit :
> * tcwg_bmk-code_speed-spec2k6
> ** gnu-arm-master-O3
> *** 483.xalancbmk failed to build
> *** 
> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/6e92a6a2a72d3b7a5e1b29042d8a6a43fe1085aa/tcwg_bmk-code_speed-spec2k6/gnu-arm-master-O3/details.txt
> *** 
> https://ci.linaro.org/job/tcwg_bmk-code_speed-spec2k6--gnu-arm-master-O3-build/141/




Re: [EXT] [Linaro-TCWG-CI] basepoints/gcc-14-4038-gb975c0dc3be: Failure

2023-09-18 Thread Maxim Kuvyrkov
> On Sep 17, 2023, at 00:10, Andrew Pinski via Gcc-regression 
>  wrote:
> 
> On Sat, Sep 16, 2023 at 12:26 PM Andrew Pinski  wrote:
>> 
>> I could not reproduce the bootstrap failure at -O3 on x86_64.
>> I used --with-build-config=bootstrap-O3 .
>> Maybe this is an arm (32?) only issue.
> 
> It looks like it is only reproducible with ILP32.
> And reported as https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111435 now.
> And I have a fix.

Hi Andrew,

The problem also reproduces on AArch64 when building linux kernel -- see [1].  
FYI, Linaro CI sends out emails notification only for the first configuration 
that caught a regression, and all subsequent configurations are recorded in 
jira cards -- see [2]. 

[1] 
https://ci.linaro.org/job/tcwg_kernel--gnu-master-aarch64-mainline-allmodconfig-build/72/artifact/artifacts/notify/mail-body.txt/*view*/
[2] https://linaro.atlassian.net/browse/GNU-942

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> Thanks,
> Andrew
> 
>> 
>> Thanks,
>> Andrew
>> 
>> 
>> From: ci_not...@linaro.org 
>> Sent: Saturday, September 16, 2023 5:33 AM
>> To: Andrew Pinski
>> Cc: gcc-regress...@gcc.gnu.org
>> Subject: [EXT] [Linaro-TCWG-CI] basepoints/gcc-14-4038-gb975c0dc3be: Failure
>> 
>> External Email
>> 
>> --
>> Dear contributor, our automatic CI has detected problems related to your 
>> patch(es).  Please find some details below.  If you have any questions, 
>> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's 
>> #linaro-tcwg channel, or ping your favourite Linaro toolchain developer on 
>> the usual project channel.
>> 
>> In CI config tcwg_bootstrap_build/master-arm-bootstrap_O3 after:
>> 
>>  | commit basepoints/gcc-14-4038-gb975c0dc3be
>>  | Author: Andrew Pinski 
>>  | Date:   Thu Sep 14 14:47:04 2023 -0700
>>  |
>>  | MATCH: Improve zero_one_valued_p for cases without range information
>>  |
>>  | I noticed we sometimes lose range information in forwprop due to a few
>>  | match and simplify patterns optimizing away casts. So the easier way
>>  | to these cases is to add a match for zero_one_valued_p wich mathes
>>  | a cast from another zero_one_valued_p.
>>  | This also adds the case of `x & zero_one_valued_p` as being 
>> zero_one_valued_p
>>  | ... 13 lines of the commit log omitted.
>> 
>> Results changed to
>> # reset_artifacts:
>> -10
>> # true:
>> 0
>> # build_abe bootstrap_O3:
>> # FAILED
>> # First few build errors in logs:
>> # 00:30:42 xg++: internal compiler error: Segmentation fault signal 
>> terminated program cc1plus
>> # 00:30:42 make[3]: *** [Makefile:1184: tree-ssa-loop-niter.o] Error 4
>> # 00:30:42 make[2]: *** [Makefile:5051: all-stage2-gcc] Error 2
>> # 00:30:42 make[1]: *** [Makefile:25871: stage2-bubble] Error 2
>> # 00:30:42 make: *** [Makefile:1090: all] Error 2
>> # 00:07:25 make[3]: [Makefile:1822: 
>> armv8l-unknown-linux-gnueabihf/bits/largefile-config.h] Error 1 (ignored)
>> # 00:25:31 xg++: internal compiler error: Segmentation fault signal 
>> terminated program cc1plus
>> # 00:25:31 make[3]: *** [Makefile:1184: tree-ssa-loop-niter.o] Error 4
>> # 00:30:14 make[2]: *** [Makefile:5051: all-stage2-gcc] Error 2
>> # 00:30:14 make[1]: *** [Makefile:25871: stage2-bubble] Error 2
>> # 00:30:14 make: *** [Makefile:1090: all] Error 2
>> 
>> From
>> # reset_artifacts:
>> -10
>> # true:
>> 0
>> # build_abe bootstrap_O3:
>> 1
>> 
>> 
>> 
>> -8<--8<--8<--
>> The information below can be used to reproduce a debug environment:
>> 
>> Current build   : 
>> https://urldefense.proofpoint.com/v2/url?u=https-3A__ci.linaro.org_job_tcwg-5Fbootstrap-5Fbuild-2D-2Dmaster-2Darm-2Dbootstrap-5FO3-2Dbuild_211_artifact_artifacts=DwICaQ=nKjWec2b6R0mOyPaz7xtfQ=L_uAQMgirzaBwiEk05NHY-AMcNfJzugOS_xTjrtS94k=nfk6uFrO_t-wguOEbA32pDyvGuUhTwdn9_uQ8Gblwaazik2TcSd17GQcH0o2o8O6=WYoaCjYI6xWmbewg03bGLUGOIAecJfgBHCaZiZC1JZk=
>> Reference build : 
>> https://urldefense.proofpoint.com/v2/url?u=https-3A__ci.linaro.org_job_tcwg-5Fbootstrap-5Fbuild-2D-2Dmaster-2Darm-2Dbootstrap-5FO3-2Dbuild_210_artifact_artifacts=DwICaQ=nKjWec2b6R0mOyPaz7xtfQ=L_uAQMgirzaBwiEk05NHY-AMcNfJzugOS_xTjrtS94k=nfk6uFrO_t-wguOEbA32pDyvGuUhTwdn9_uQ8Gblwaazik2TcSd17GQcH0o2o8O6=hG-1jEiAihzbhIeXE_9N6A65VKNtSbURkRK1Bp2ccm4=
>> 
>> Reproduce la

Re: [Linaro-TCWG-CI] glibc patch #75959: FAIL: 1 regressions

2023-09-15 Thread Maxim Kuvyrkov
Hi All,

As Siddhesh has pointed out, 32-bit ARM has other similar nss/* tests failing 
with "original exit status 127".  We do not have these failures for 64-bit 
AArch64.

This seems to be a problem unrelated to Siddhesh's patch.
Adhemerval, would you please investigate what's causing it?  I see plenty of 
failures in AArch32 results that do not appear in AArch64 results.  See the 
non-flaky entries at the bottom of [1] and [2].

For background, both aarch64 and aarch32 testing is done in docker containers 
with very similar-looking rootfs images constructed from the same Dockerfile 
template [3].  Therefore, it's unlikely that we forgot to include a tool or 
binary in the aarch32 image.

[1] 
https://ci.linaro.org/job/tcwg_glibc_check--master-arm-build/lastStableBuild/artifact/artifacts/sumfiles/xfails.xfail/*view*/

[2] 
https://ci.linaro.org/job/tcwg_glibc_check--master-aarch64-build/lastStableBuild/artifact/artifacts/sumfiles/xfails.xfail/*view*/

[3] https://git.linaro.org/ci/dockerfiles.git/tree/tcwg-base/Dockerfile.in

--
Maxim Kuvyrkov
https://www.linaro.org

> On Sep 15, 2023, at 10:16, Andreas Schwab  wrote:
> 
> On Sep 14 2023, Siddhesh Poyarekar wrote:
> 
>> I'm looking at the logs and all it has is:
>> 
>> original exit status 127
>> running post-clean rsync
>> 
>> for the new test.
> 
> 127 is command not found in the shell.
> 
> -- 
> Andreas Schwab, sch...@linux-m68k.org
> GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
> "And now for something completely different."
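Andreas's point about exit status 127 is easy to verify in any POSIX shell; the command name below is invented for illustration:

```shell
# POSIX shells use exit status 127 to mean "command not found" -- the same
# status the glibc test harness reported above.  The command name is made up.
sh -c 'definitely_not_a_real_command' 2>/dev/null
echo "exit status: $?"   # prints "exit status: 127"
```

The related status 126 means the command exists but is not executable.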

___
linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org


Re: [Linaro-TCWG-CI] FAIL: 1 regressions after glibc-2.38.9000-71-g7086332e06 elf: Check that --list-diagnostics output has the expected syntax

2023-08-26 Thread Maxim Kuvyrkov
> On Aug 25, 2023, at 21:02, Florian Weimer  wrote:
> 
> * Maxim Kuvyrkov:
> 
>>> On Aug 25, 2023, at 19:18, ci_not...@linaro.org wrote:
>>> 
>>> Dear contributor, our automatic CI has detected problems related to your 
>>> patch.
>>> Please find below some details about it.  If you have any questions, please
>>> follow up on linaro-toolchain@lists.linaro.org mailing list.
>>> 
>>> In CI config tcwg_glibc_check/master-aarch64 after:
>>> 
>>> | Patchwork URL: 
>>> https://patchwork.sourceware.org/project/glibc/patch/871qfr9te6@oldenburg.str.redhat.com/
>>> | commit 7086332e068cbe778cb47a9baf23cd1d2401444a
>>> | Author: Florian Weimer 
>>> | Date:   Fri Aug 25 14:52:01 2023 +0200
>>> | 
>>> | elf: Check that --list-diagnostics output has the expected syntax
>>> | 
>>> | Parts of elf/tst-rtld-list-diagnostics.py have been copied from
>>> | scripts/tst-ld-trace.py.
>>> | 
>>> | The abnf module is entirely optional and used to verify the
>>> | ... 3 lines of the commit log omitted.
>>> 
>>> FAIL: 1 regressions
>>> 
>>> regressions.sum:
>>> === glibc tests ===
>>> 
>>> Running glibc:elf ...
>>> FAIL: elf/tst-rtld-list-diagnostics 
>> 
>> 
>> Hi Florian,
>> 
>> Output of failed test is in [1].
>> 
>> [1] 
>> https://ci.linaro.org/job/tcwg_glibc_check--master-aarch64-precommit/597/artifact/artifacts/artifacts.precommit/00-sumfiles/tests.log.1.xz
>>  .
> 
> Nope:
> 
> | FAIL: elf/tst-rtld-list-diagnostics
> | original exit status 1
> | info: skipping ABNF validation because the abnf module is missing
> 
> The failure is in the “make check” logs:
> 
> | Traceback (most recent call last):
> |   File 
> "/home/tcwg-build/workspace/tcwg_gnu_1/glibc/elf/tst-rtld-list-diagnostics.py",
>  line 303, in 
> | main(sys.argv[1:])
> |   File 
> "/home/tcwg-build/workspace/tcwg_gnu_1/glibc/elf/tst-rtld-list-diagnostics.py",
>  line 294, in main
> | check_consistency_with_manual(opts.manual)
> |   File 
> "/home/tcwg-build/workspace/tcwg_gnu_1/glibc/elf/tst-rtld-list-diagnostics.py",
>  line 188, in check_consistency_with_manual
> | manual_abnf = extract_lines(manual_path,
> |   File 
> "/home/tcwg-build/workspace/tcwg_gnu_1/glibc/elf/tst-rtld-list-diagnostics.py",
>  line 172, in extract_lines
> | raise ValueError('{!r} not found in {!r}'.format(start_line, path))
> | ValueError: '@c ABNF-START' not found in '../manual/dynlink.texi'
> 
> Arguably this is a problem in the test/test machinery (we do not
> redirect standard error with the Python exceptions).
> 
> This likely means that
> 
> commit f21962ddfc8bb23e92597da1f98e313dbde11cc1
> Author: Florian Weimer 
> Date:   Fri Aug 25 14:15:28 2023 +0200
> 
>manual: Document ld.so --list-diagnostics output
> 
>Reviewed-by: Adhemerval Zanella  
> 
> was missing during the build.

Right.  The scenario was:
1. Post-commit build starts against glibc:master == glibc:abc123
2. Developer (you in this case) commits patch:123 and posts it to libc-alpha@.
3. Post-commit build completes and sets baseline for pre-commit testing to 
glibc:abc123, which is now 30+ minutes old.
4. Post-commit build triggers pre-commit testing, including patch:123.
5. Pre-commit testing applies patch:123 to baseline glibc:abc123, which 
succeeds.

The problem occurs if a committed patch depends on another patch outside of its 
patch series.

>  The original notification said that it
> was against this commit:
> 
> | Full commit : 
> https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=7086332e068cbe778cb47a9baf23cd1d2401444a
> 
> But that's not a commit hash I can find anywhere else.

The "Full commit" above is what was tested: your patch applied on top of the 
pre-commit baseline.  I need to remove that line from the email.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org




Re: [Linaro-TCWG-CI] FAIL: 1 regressions after glibc-2.38.9000-71-g7086332e06 elf: Check that --list-diagnostics output has the expected syntax

2023-08-25 Thread Maxim Kuvyrkov
> On Aug 25, 2023, at 19:18, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch.
> Please find below some details about it.  If you have any questions, please
> follow up on linaro-toolchain@lists.linaro.org mailing list.
> 
> In CI config tcwg_glibc_check/master-aarch64 after:
> 
>  | Patchwork URL: 
> https://patchwork.sourceware.org/project/glibc/patch/871qfr9te6@oldenburg.str.redhat.com/
>  | commit 7086332e068cbe778cb47a9baf23cd1d2401444a
>  | Author: Florian Weimer 
>  | Date:   Fri Aug 25 14:52:01 2023 +0200
>  | 
>  | elf: Check that --list-diagnostics output has the expected syntax
>  | 
>  | Parts of elf/tst-rtld-list-diagnostics.py have been copied from
>  | scripts/tst-ld-trace.py.
>  | 
>  | The abnf module is entirely optional and used to verify the
>  | ... 3 lines of the commit log omitted.
> 
> FAIL: 1 regressions
> 
> regressions.sum:
> === glibc tests ===
> 
> Running glibc:elf ...
> FAIL: elf/tst-rtld-list-diagnostics 


Hi Florian,

Output of failed test is in [1].

[1] 
https://ci.linaro.org/job/tcwg_glibc_check--master-aarch64-precommit/597/artifact/artifacts/artifacts.precommit/00-sumfiles/tests.log.1.xz
 .

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> 
> === Results Summary ===
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-aarch64-precommit/597/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_glibc_check--master-aarch64-build/622/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git.linaro.org/toolchain/ci/interesting-commits.git/plain/glibc/sha1/7086332e068cbe778cb47a9baf23cd1d2401444a/reproduction_instructions.txt
> 
> Full commit : 
> https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=7086332e068cbe778cb47a9baf23cd1d2401444a
> 
> Latest bug report status : https://linaro.atlassian.net/browse/GNU-692
> 
> List of configurations that regressed due to this commit :
> * tcwg_glibc_check
> ** master-aarch64
> *** FAIL: 1 regressions
> *** https://ci.linaro.org/job/tcwg_glibc_check--master-aarch64-precommit/597/




Re: [Linaro-TCWG-CI] FAIL: 6 regressions after basepoints/gcc-14-3441-ga1558e9ad85 tree-optimization/111115 - SLP of masked stores

2023-08-24 Thread Maxim Kuvyrkov
Hi Richard,

Your patch below ICEs on aarch64-linux-gnu.  It should reproduce easily with a 
native or cross aarch64-linux-gnu build.

Let me know if you need any assistance in reproducing this.

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org

> On Aug 24, 2023, at 22:03, ci_not...@linaro.org wrote:
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch.
> Please find below some details about it.  If you have any questions, please
> follow up on linaro-toolchain@lists.linaro.org mailing list.
> 
> In CI config tcwg_gcc_check/master-aarch64 after:
> 
>  | commit a1558e9ad856938f165f838733955b331ebbec09
>  | Author: Richard Biener 
>  | Date:   Wed Aug 23 14:28:26 2023 +0200
>  | 
>  | tree-optimization/15 - SLP of masked stores
>  | 
>  | The following adds the capability to do SLP on .MASK_STORE, I do not
>  | plan to add interleaving support.
>  | 
>  | PR tree-optimization/15
>  | ... 21 lines of the commit log omitted.
> 
> FAIL: 6 regressions
> 
> regressions.sum:
> === gcc tests ===
> 
> Running gcc:gcc.target/aarch64/sve/aarch64-sve.exp ...
> FAIL: gcc.target/aarch64/sve/mask_struct_store_4.c (internal compiler error: 
> in get_group_load_store_type, at tree-vect-stmts.cc:2121)
> FAIL: gcc.target/aarch64/sve/mask_struct_store_4.c (test for excess errors)
> UNRESOLVED: gcc.target/aarch64/sve/mask_struct_store_4.c scan-assembler-not 
> \\tst2b\\t.z[0-9]
> UNRESOLVED: gcc.target/aarch64/sve/mask_struct_store_4.c scan-assembler-not 
> \\tst2d\\t.z[0-9]
> UNRESOLVED: gcc.target/aarch64/sve/mask_struct_store_4.c scan-assembler-not 
> \\tst2h\\t.z[0-9]
> UNRESOLVED: gcc.target/aarch64/sve/mask_struct_store_4.c scan-assembler-not 
> \\tst2w\\t.z[0-9]
> 
> ... and 1 more entries
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/857/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/856/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/a1558e9ad856938f165f838733955b331ebbec09/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/a1558e9ad856938f165f838733955b331ebbec09
> 
> Latest bug report status : https://linaro.atlassian.net/browse/GNU-893
> 
> List of configurations that regressed due to this commit :
> * tcwg_gcc_check
> ** master-aarch64
> *** FAIL: 6 regressions
> *** https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/857/




Re: [Linaro-TCWG-CI] FAIL: 10 regressions after gcc commit: 5 commits in gcc

2023-08-21 Thread Maxim Kuvyrkov
> On Aug 21, 2023, at 15:14, Julian Brown  wrote:
> 
> On Sat, 19 Aug 2023 10:47:45 +0400
> Maxim Kuvyrkov  wrote:
> 
>> Hi Julian,
>> 
>> Your patch series causes regressions on aarch64-linux-gnu.  Would you
>> please investigate?
>> 
>> Let me know if you need any assistance in reproducing these.
> 
> Hi Maxim!
> 
> I'm a little confused -- to be clear, those patches aren't committed to
> mainline yet, are they? If not, thanks for the proactive testing! If
> so, oops, they've not been reviewed yet (!).

Hi Julian,

We (Linaro) are working on pre-commit testing for AArch64 and AArch32, and your 
patch was flagged [1].

[1] 
https://patchwork.sourceware.org/project/gcc/patch/b32f791688b577bf57cefb38ad16594d17975c6c.1692398074.git.jul...@codesourcery.com/

> 
> I'll take a look.

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [Linaro-TCWG-CI] FAIL: 6 regressions after gcc commit: basepoints/gcc-14-3331-gcddc26e0274 aarch64: Fine-grained ldp and stp policies with test-cases.

2023-08-19 Thread Maxim Kuvyrkov
Hi Manos,

New tests in your patch [1] fail on aarch64-linux-gnu build in our CI.  Would 
you please investigate why?  Testing logs are at [2].

[1] 
https://patchwork.sourceware.org/project/gcc/patch/20230818074943.41754-1-manos.anagnosta...@vrull.eu/
[2] 
https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-precommit/1602/artifact/artifacts/artifacts.precommit/00-sumfiles/

--
Maxim Kuvyrkov
https://www.linaro.org

> On Aug 19, 2023, at 08:37, ci_not...@linaro.org wrote:
> 
> [Linaro-TCWG-CI] FAIL: 6 regressions after gcc commit: 
> basepoints/gcc-14-3331-gcddc26e0274 aarch64: Fine-grained ldp and stp 
> policies with test-cases.
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch.
> Please find below some details about it.
> 
> In CI config tcwg_gcc_check/master-aarch64 after:
> 
>  | commit cddc26e0274b51e775929e497f89d203211689d2
>  | Author: Manos Anagnostakis 
>  | Date:   Fri Aug 18 10:49:43 2023 +0300
>  | 
>  | aarch64: Fine-grained ldp and stp policies with test-cases.
>  | 
>  | This patch implements the following TODO in 
> gcc/config/aarch64/aarch64.cc
>  | to provide the requested behaviour for handling ldp and stp:
>  | 
>  |   /* Allow the tuning structure to disable LDP instruction formation
>  | ... 47 lines of the commit log omitted.
> 
> FAIL: 6 regressions
> 
> regressions.sum:
> === gcc tests ===
> 
> Running gcc:gcc.target/aarch64/aarch64.exp ...
> FAIL: gcc.target/aarch64/ldp_aligned.c scan-assembler-times ldp\tq[0-9]+, 
> q[0-9] 1
> FAIL: gcc.target/aarch64/ldp_aligned.c scan-assembler-times ldp\tw[0-9]+, 
> w[0-9] 3
> FAIL: gcc.target/aarch64/ldp_aligned.c scan-assembler-times ldp\tx[0-9]+, 
> x[0-9] 3
> FAIL: gcc.target/aarch64/ldp_always.c scan-assembler-times ldp\tq[0-9]+, 
> q[0-9] 2
> FAIL: gcc.target/aarch64/ldp_always.c scan-assembler-times ldp\tw[0-9]+, 
> w[0-9] 6
> FAIL: gcc.target/aarch64/ldp_always.c scan-assembler-times ldp\tx[0-9]+, 
> x[0-9] 6
> 
> ... and 1 more entries
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-precommit/1602/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/836/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/cddc26e0274b51e775929e497f89d203211689d2/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/cddc26e0274b51e775929e497f89d203211689d2
> 
> Latest bug report status : https://linaro.atlassian.net/browse/GNU-692
> 
> List of configurations that regressed due to this commit :
> * tcwg_gcc_check
> ** master-aarch64
> *** FAIL: 6 regressions
> *** https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-precommit/1602/




Re: [Linaro-TCWG-CI] FAIL: 10 regressions after gcc commit: 5 commits in gcc

2023-08-19 Thread Maxim Kuvyrkov
Hi Julian,

Your patch series causes regressions on aarch64-linux-gnu.  Would you please 
investigate?

Let me know if you need any assistance in reproducing these.

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org

> On Aug 19, 2023, at 09:32, ci_not...@linaro.org wrote:
> 
> [Linaro-TCWG-CI] FAIL: 10 regressions after gcc commit: 5 commits in gcc
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch.
> Please find below some details about it.
> 
> In CI config tcwg_gcc_check/master-aarch64 after:
> 
>  | gcc commits:
>  | dce6c135fb52fd631c2fc82d8048d32ce41ece21 OpenMP/OpenACC: Reorganise OMP 
> map clause handling in gimplify.cc
>  | dd49dd178e3eac8e9925baa3d71325d8d5f69215 OpenMP/OpenACC: 
> Unordered/non-constant component offset runtime diagnostic
>  | bd5a53e6b47907d05672cbb603af363a665b45a4 OpenMP: Pointers and member 
> mappings
>  | 4e0359d8a659c8abdca3297fc9b0e20ff89f7f82 OpenMP/OpenACC: Rework clause 
> expansion and nested struct handling
>  | a855174e5461d2b423af7f892fd31dfb10ce09ec OpenMP/OpenACC: Reindent 
> TO/FROM/_CACHE_ stanza in {c_}finish_omp_clause
> 
> FAIL: 10 regressions
> 
> regressions.sum:
> === libgomp tests ===
> 
> Running libgomp:libgomp.c++/c++.exp ...
> FAIL: libgomp.c++/../libgomp.c-c++-common/map-arrayofstruct-2.c output 
> pattern test
> FAIL: libgomp.c++/../libgomp.c-c++-common/map-arrayofstruct-3.c output 
> pattern test
> 
> Running libgomp:libgomp.c/c.exp ...
> FAIL: libgomp.c/../libgomp.c-c++-common/map-arrayofstruct-2.c output pattern 
> test
> FAIL: libgomp.c/../libgomp.c-c++-common/map-arrayofstruct-3.c output pattern 
> test
> 
> ... and 9 more entries
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-precommit/1593/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_check--master-aarch64-build/836/artifact/artifacts




Re: [Linaro-TCWG-CI] Failure after gcc commit: basepoints/gcc-14-3309-g360cabf45a0 arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns

2023-08-18 Thread Maxim Kuvyrkov
Hi Stamatis,

Let us know if you need any help in troubleshooting the failures in 
https://patchwork.sourceware.org/project/gcc/list/?series=23558 .

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org


> On Aug 18, 2023, at 01:13, ci_not...@linaro.org wrote:
> 
> [Linaro-TCWG-CI] Failure after gcc commit: 
> basepoints/gcc-14-3309-g360cabf45a0 arm: Add define_attr to to create a 
> mapping between MVE predicated and unpredicated insns
> 
> Dear contributor, our automatic CI has detected problems related to your 
> patch.
> Please find below some details about it.
> 
> In CI config tcwg_gcc_build/master-arm after:
> 
>  | commit 360cabf45a05d5d2d5a98f0e081cb8f53e01e8aa
>  | Author: Stamatis Markianos-Wright 
>  | Date:   Thu Aug 17 11:30:58 2023 +0100
>  | 
>  | arm: Add define_attr to to create a mapping between MVE predicated and 
> unpredicated insns
>  | 
>  | Hi all,
>  | 
>  | I'd like to submit two patches that add support for Arm's MVE
>  | Tail Predicated Low Overhead Loop feature.
> 
> Results changed to
> # reset_artifacts:
> -10
> # true:
> 0
> # build_abe gcc:
> # FAILED
> # First few build errors in logs:
> # 00:02:46 
> /home/tcwg-build/workspace/tcwg_gnu_1/abe/snapshots/gcc.git~master/gcc/config/arm/mve.md:7018:8:
>  error: invalid decimal constant 
> # 00:02:46 make[2]: *** [Makefile:2647: s-preds-h] Error 1
> # 00:02:46 make[1]: *** [Makefile:4655: all-gcc] Error 2
> # 00:02:46 make: *** [Makefile:1051: all] Error 2
> # 00:02:28 
> /home/tcwg-build/workspace/tcwg_gnu_1/abe/snapshots/gcc.git~master/gcc/config/arm/mve.md:7018:8:
>  error: invalid decimal constant 
> # 00:02:28 make[2]: *** [Makefile:2586: s-conditions] Error 1
> # 00:02:28 
> /home/tcwg-build/workspace/tcwg_gnu_1/abe/snapshots/gcc.git~master/gcc/config/arm/mve.md:7018:8:
>  error: invalid decimal constant 
> # 00:02:28 
> /home/tcwg-build/workspace/tcwg_gnu_1/abe/snapshots/gcc.git~master/gcc/config/arm/mve.md:7018:8:
>  error: invalid decimal constant 
> # 00:02:28 make[2]: *** [Makefile:2647: s-preds-h] Error 1
> # 00:02:28 make[2]: *** [Makefile:2652: s-constrs-h] Error 1
> # 00:02:28 
> /home/tcwg-build/workspace/tcwg_gnu_1/abe/snapshots/gcc.git~master/gcc/config/arm/mve.md:7018:8:
>  error: invalid decimal constant 
> # 00:02:28 make[2]: *** [Makefile:2642: s-preds] Error 1
> # 00:02:35 make[1]: *** [Makefile:4655: all-gcc] Error 2
> # 00:02:35 make: *** [Makefile:1051: all] Error 2
> 
> From
> # reset_artifacts:
> -10
> # true:
> 0
> # build_abe gcc:
> 1
> 
> 
> 
> -8<--8<--8<--
> The information below can be used to reproduce a debug environment:
> 
> Current build   : 
> https://ci.linaro.org/job/tcwg_gcc_build--master-arm-precommit/1787/artifact/artifacts
> Reference build : 
> https://ci.linaro.org/job/tcwg_gcc_build--master-arm-build/1060/artifact/artifacts
> 
> Reproduce last good and first bad builds: 
> https://git.linaro.org/toolchain/ci/interesting-commits.git/plain/gcc/sha1/360cabf45a05d5d2d5a98f0e081cb8f53e01e8aa/reproduction_instructions.txt
> 
> Full commit : 
> https://github.com/gcc-mirror/gcc/commit/360cabf45a05d5d2d5a98f0e081cb8f53e01e8aa
> 
> Latest bug report status : https://linaro.atlassian.net/browse/GNU-692
> 
> List of configurations that regressed due to this commit :
> * tcwg_gcc_build
> ** master-arm
> *** Failure
> *** https://ci.linaro.org/job/tcwg_gcc_build--master-arm-precommit/1787/




Re: LLVM bots down for extended period

2023-08-16 Thread Maxim Kuvyrkov
Hi Aaron,

This email from April got caught in moderation.  I believe the problem with 
Linaro bots was already addressed.

I'll try to monitor moderation requests better.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org




> On Apr 3, 2023, at 23:05, Aaron Ballman  wrote:
> 
> Hello! You are the point of contact for some build bots in the LLVM
> build lab that have been down for an extended period of time. If
> you're not the correct point of contact, can you please CC the right
> individual (if you know who they are)?
> 
> The following bot(s) appear to have been down for multiple weeks:
> 
> https://lab.llvm.org/buildbot/#/builders/clang-armv7-global-isel
> https://lab.llvm.org/buildbot/#/builders/clang-native-arm-lnt-perf
> https://lab.llvm.org/buildbot/#/builders/clang-aarch64-sve-vla-2stage
> https://lab.llvm.org/buildbot/#/builders/clang-aarch64-sve-vls-2stage
> https://lab.llvm.org/buildbot/#/builders/clang-aarch64-sve-vls
> https://lab.llvm.org/buildbot/#/builders/clang-armv7-vfpv3-full-2stage
> https://lab.llvm.org/buildbot/#/builders/clang-thumbv7-full-2stage
> 
> Do you have plans to bring the bot back online? If not, that's okay --
> just let me know and I can remove the bots from the lab for you. Thank
> you!
> 
> ~Aaron
> ___
> linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
> To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org



Re: LLVM buildbot timing out

2023-05-15 Thread Maxim Kuvyrkov
Thanks, Andrzej!

Hi Antoine, would you please take a look at this?

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org




> On May 15, 2023, at 22:55, Andrzej Warzynski  
> wrote:
> 
> Hi,
> 
> Has anyone noticed that https://lab.llvm.org/buildbot/#/builders/198 
> (clang-aarch64-sve-vla-2stage<https://lab.llvm.org/buildbot/#/builders/198>) 
> has been timing out for the past few days? "Duration" is often less than 1hr, 
> so that's odd. And all Flang buildbots are green, so it's unlikely caused by 
> changes to that sub-project 
> (https://lab.llvm.org/buildbot/#/builders/198/builds/1804). Would anyone be 
> able to take a look?
> 
> Best regards,
> 
> Andrzej
> 
> IMPORTANT NOTICE: The contents of this email and any attachments are 
> confidential and may also be privileged. If you are not the intended 
> recipient, please notify the sender immediately and do not disclose the 
> contents to any other person, use it for any purpose, or store or copy the 
> information in any medium. Thank you.
> ___
> linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
> To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org



Re: LLVM bots down for extended period

2023-04-12 Thread Maxim Kuvyrkov
> On Apr 11, 2023, at 7:55 PM, Aaron Ballman  wrote:
> 
> Hello! I noticed that some of the build bots in the LLVM build lab
> have been down for an extended period of time. I tried emailing the
> Linaro Toolchain Working Group address as they're listed as the admin
> for the machine, but I've not heard a response in a week. Renato
> mentioned you might be a good person to contact instead.

Hi Aaron,

Sorry, your email might've been caught in mailing-list moderation -- I'll 
check what happened.

> 
> The following bot(s) appear to have been down for multiple weeks:
> 
> https://lab.llvm.org/buildbot/#/builders/clang-armv7-global-isel
> https://lab.llvm.org/buildbot/#/builders/clang-native-arm-lnt-perf
> https://lab.llvm.org/buildbot/#/builders/clang-aarch64-sve-vla-2stage
> https://lab.llvm.org/buildbot/#/builders/clang-aarch64-sve-vls-2stage
> https://lab.llvm.org/buildbot/#/builders/clang-aarch64-sve-vls
> https://lab.llvm.org/buildbot/#/builders/clang-armv7-vfpv3-full-2stage
> https://lab.llvm.org/buildbot/#/builders/clang-thumbv7-full-2stage
> 
> Do you have plans to bring the bots back online? If not, that's okay
> -- just let me know and I can remove the bots from the lab for you.
> Thank you!

All these bots are currently connected to the silent master.  They have problems of 
varying complexity, and we are working on fixing them.  In particular, the 
*-sve-* bots should return to the production master in the next few weeks.

The armv7 bots are more complicated to put back into production, but we 
are working on a solution.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org






[TCWG CI] 602.gcc_s fails to run after llvmorg-17-init-4880-gf242291f59bf

2023-04-06 Thread Maxim Kuvyrkov
Hi Alexandros,

Linaro's benchmarking CI flagged this patch.  After it, clang seems to miscompile 
602.gcc_s from SPEC CPU2017 with "-O3 -flto" on aarch64-linux-gnu.  Also, 
600.perlbench_s appears to slow down by 9%.

Could you investigate, please?  Let me know if you need any assistance in 
reproducing the problem.

Our Benchmarking CI is still in active development, and there are false 
positives, but this report seems to be legit [1].

Kind regards,

[1] 
https://ci.linaro.org/job/tcwg_bmk-code_speed-cpu2017speed--llvm-aarch64-master-O3_LTO-build/21/artifact/artifacts/mail/mail-body.txt/*view*/

--
Maxim Kuvyrkov
https://www.linaro.org






Re: Seeking toolchain-arm_cortex-a7_gcc-4.8-linaro_uClibc-1.0.14_eabi

2023-02-14 Thread Maxim Kuvyrkov
Hi Bryan,


> On Feb 7, 2023, at 9:13 PM, Bryan Phillippe  wrote:
> 
...
> -rwxr-xr-x1 config   root   2765178 Dec  9  2018 
> /lib/libuClibc-1.0.14.so
> /root # strings /lib/libuClibc-1.0.14.so |grep -i linaro|head -n 1
> GCC: (OpenWrt/Linaro GCC 4.8-2014.04 r35193) 4.8.3

This indicates that it was built with an OpenWrt toolchain, and the OpenWrt project 
maintainers used the Linaro GCC 4.8 source release instead of the FSF GCC 4.8 source 
release.  In the days of GCC 4.8 it was very common to use Linaro GCC source 
releases instead of FSF ones when building compilers for 32-bit and 64-bit ARM.

Try searching the OpenWrt archives for a copy of a GCC 4.8-based toolchain.

> /root # strings /lib/libuClibc-1.0.14.so |grep -i gcc-4.8|head -n 1
> /home/test/work/sudhan-qsdk/qsdk/build_dir/toolchain-arm_cortex-a7_gcc-4.8-linaro_uClibc-1.0.14_eabi/uClibc-ng-1.0.14
> /root #
> 
> I only need to rebuild a single binary on this platform, and I don't have the 
> source or the toolchain for the existing binaries. If I have to recreate a 
> toolchain based on the versions only, it should be possible, but will be a 
> good deal of work and effort. If you know where I can find this toolchain - 
> or have any advice on how I can build my own compatible version - I would be 
> very grateful.

If you want to rebuild a single executable, it may be easier to use a 
modern toolchain for arm-linux-gnueabihf [1] and build a static binary of 
your package (add "-static" to the compiler flags).  This way the binary will 
include all the necessary bits of the system libraries.  This is a good approach if 
your package has no dependencies outside of the C library; otherwise you 
would need to find static versions of all the other libraries.

--
Maxim Kuvyrkov
https://www.linaro.org

> 
> Thank you so much!
> 
> --
> -bp
> 
>> On Feb 7, 2023, at 04:24, Maxim Kuvyrkov  wrote:
>> 
>> [CC: linaro-toolchain@]
>> 
>> Hi Bryan,
>> 
>> I don't think that Linaro has ever released a toolchain with uClibc, but I 
>> may be wrong.  Could you provide additional information about the target, 
>> rootfs and your setup?
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>> 
>> 
>> 
>>> On Feb 7, 2023, at 10:31 AM, Bryan Phillippe  wrote:
>>> 
>>> 
>>> Hello! I know this is a long shot, but I have a few devices with code that 
>>> was built using this toolchain: 
>>> toolchain-arm_cortex-a7_gcc-4.8-linaro_uClibc-1.0.14_eabi
>>> 
>>> I'm trying to find a copy of that so I can rebuild 1 binary/package on the 
>>> system without blowing everything up. Do you have any idea where I can find 
>>> this toolchain? Thank you so much in advance!
>>> 
>>> --
>>> -bp
>>> 
>> 
> 



Re: Seeking toolchain-arm_cortex-a7_gcc-4.8-linaro_uClibc-1.0.14_eabi

2023-02-07 Thread Maxim Kuvyrkov
[CC: linaro-toolchain@]

Hi Bryan,

I don't think that Linaro has ever released a toolchain with uClibc, but I may 
be wrong.  Could you provide additional information about the target, rootfs 
and your setup?

--
Maxim Kuvyrkov
https://www.linaro.org




> On Feb 7, 2023, at 10:31 AM, Bryan Phillippe  wrote:
> 
> 
> Hello! I know this is a long shot, but I have a few devices with code that 
> was built using this toolchain: 
> toolchain-arm_cortex-a7_gcc-4.8-linaro_uClibc-1.0.14_eabi
> 
> I'm trying to find a copy of that so I can rebuild 1 binary/package on the 
> system without blowing everything up. Do you have any idea where I can find 
> this toolchain? Thank you so much in advance!
> 
> --
> -bp
> 



Re: [TCWG CI] 403.gcc failed to run after working-3971-g5f7f484ee54e: [AArch64] Add GPR rr instructions to isAssociativeAndCommutative

2022-11-22 Thread Maxim Kuvyrkov
Hi David,

Also happens for -O2 -flto.  Other affected configurations will be 
automatically added to [1].

[1] 
https://git.linaro.org/toolchain/ci/interesting-commits.git/tree/llvm/sha1/5f7f484ee54ebbf702ee4c5fe9852502dc237121/tcwg_bmk_llvm_tx1

--
Maxim Kuvyrkov
https://www.linaro.org




> On Nov 22, 2022, at 3:38 PM, Maxim Kuvyrkov  wrote:
> 
> Hi David,
> 
> Our CI flagged your commit; it seems it miscompiles 403.gcc from SPEC CPU2006 
> at -O3 -flto for aarch64-linux-gnu.  Would you please investigate?
> 
> Let me know if you need any assistance in reproducing this.
> 
> Thanks!
> 
> ===
> 
> After working-3971-g5f7f484ee54e commit 
> 5f7f484ee54ebbf702ee4c5fe9852502dc237121
> Author: David Green 
> 
>[AArch64] Add GPR rr instructions to isAssociativeAndCommutative
> 
> the following benchmarks slowed down by more than 3%:
> - 403.gcc failed to run
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -O3 -flto
> - Hardware: NVidia TX1 4x Cortex-A57
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
> 
> 
> 



[TCWG CI] 403.gcc failed to run after working-3971-g5f7f484ee54e: [AArch64] Add GPR rr instructions to isAssociativeAndCommutative

2022-11-22 Thread Maxim Kuvyrkov
Hi David,

Our CI flagged your commit; it seems it miscompiles 403.gcc from SPEC CPU2006 
at -O3 -flto for aarch64-linux-gnu.  Would you please investigate?

Let me know if you need any assistance in reproducing this.

Thanks!

===

After working-3971-g5f7f484ee54e commit 5f7f484ee54ebbf702ee4c5fe9852502dc237121
Author: David Green 

[AArch64] Add GPR rr instructions to isAssociativeAndCommutative

the following benchmarks slowed down by more than 3%:
- 403.gcc failed to run

Configuration:
- Benchmark: SPEC CPU2006
- Toolchain: Clang + Glibc + LLVM Linker
- Version: all components were built from their tip of trunk
- Target: aarch64-linux-gnu
- Compiler flags: -O3 -flto
- Hardware: NVidia TX1 4x Cortex-A57

--
Maxim Kuvyrkov
https://www.linaro.org






Re: Enabling CCache on LLVM bots

2022-06-29 Thread Maxim Kuvyrkov
To add to David's answer, here is the logic that enables ccache in 
Linaro-maintained buildbots: 
https://git.linaro.org/ci/dockerfiles.git/tree/tcwg-base/tcwg-llvmbot/run.sh#n48
 .

We experimented with zorg's CCACHE settings a few years back, and it turned 
out to be more robust to configure ccache at the level of the default system 
(well, container) compiler.

One thing to check is whether the default 5 GB cache limit fits us well.  IIUC, 
flang builds are particularly big, and they may overflow the cache size.
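For anyone replicating the setup, the wrapper approach amounts to putting a 
tiny script ahead of the real compiler on PATH.  A minimal sketch (the paths 
here are hypothetical, not the actual run.sh logic):

```shell
# Create a wrapper that routes every "gcc" invocation through ccache
# (hypothetical locations; the real logic lives in tcwg-base/tcwg-llvmbot/run.sh)
mkdir -p /tmp/cc-wrap
cat > /tmp/cc-wrap/gcc <<'EOF'
#!/bin/sh
# Forward all arguments through ccache to the real compiler
exec ccache /usr/bin/gcc "$@"
EOF
chmod +x /tmp/cc-wrap/gcc
# Prepending the wrapper directory to PATH makes builds pick it up:
#   export PATH=/tmp/cc-wrap:$PATH
cat /tmp/cc-wrap/gcc
```

The point of doing it this way, rather than via zorg's CCACHE knobs, is that 
the wrapper works uniformly for every build configuration inside the container.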

--
Maxim Kuvyrkov
https://www.linaro.org

> On 29 Jun 2022, at 16:33, David Spickett  wrote:
> 
> While it's not visible in the zorg config, we are using ccache -- except
> we do it by setting the compiler to a script that runs the expected
> clang/gcc via ccache. We can certainly look at using the ccache option
> in zorg instead (for the first attempt it was easier to do it in a way
> we could control on our end).
> 
> Looking at our flang bots, overall 2 hours seems to be the average
> (out-of-tree is an outlier); I don't know anything about non-Linaro
> flang bots. We will check if there is some obvious bottleneck here, but
> we have resource constraints that limit how fast we can go even with
> perfect caching. Are there any other bots you were interested in? We
> can check those too.
> 
> What build times were you expecting to see? It is useful for us to
> know what expectations are even if, unfortunately, we don't meet them
> at this time.
> 
> Thanks,
> David Spickett.
> 
> 
> On Tue, 28 Jun 2022 at 18:05, Mehdi AMINI  wrote:
>> 
>> Hi,
>> 
>> I noticed that bots like flang-aarch64-latest-gcc are quite slow and could
>> benefit from enabling ccache. Could you make it available on the system so
>> it could be turned on for all these builds?
>> 
>> Thanks,
>> 
>> --
>> Mehdi



Re: [TCWG CI] Regression caused by gcc: rs6000: Harden mma_init_builtins

2022-06-01 Thread Maxim Kuvyrkov
> On 31 May 2022, at 17:43, Peter Bergner  wrote:
> 
> On 5/31/22 8:09 AM, ci_not...@linaro.org wrote:
>> [TCWG CI] Regression caused by gcc: rs6000: Harden mma_init_builtins:
>> commit 6278065af07634278ba30029d92a82b089969baa
>> Author: Peter Bergner 
>> 
>>rs6000: Harden mma_init_builtins
>> 
>> Results regressed to
>> # reset_artifacts:
>> -10
>> # build_abe binutils:
>> -9
>> # build_abe stage1:
>> -5
>> # build_abe qemu:
>> -2
>> # linux_n_obj:
>> 33
>> # First few build errors in logs:
>> # 00:03:13 cc1plus: fatal error: ./include/generated/utsrelease.h: No such 
>> file or directory
>> # 00:03:13 make[2]: *** [scripts/gcc-plugins/latent_entropy_plugin.so] Error 
>> 1
>> # 00:03:13 cc1plus: fatal error: ./include/generated/utsrelease.h: No such 
>> file or directory
>> # 00:03:13 make[2]: *** [scripts/gcc-plugins/stackleak_plugin.so] Error 1
>> # 00:03:13 cc1plus: fatal error: ./include/generated/utsrelease.h: No such 
>> file or directory
>> # 00:03:13 make[2]: *** [scripts/gcc-plugins/randomize_layout_plugin.so] 
>> Error 1
>> # 00:03:13 make[1]: *** [scripts/gcc-plugins] Error 2
>> # 00:03:14 make: *** [scripts] Error 2
> 
> It seems your CI tester really doesn't like me! ;-)
> Given my patch above could not have affected the existence of that
> header file, I'll ignore this one too.

Hi Peter,

I suspect a makefile bug in the Linux kernel that makes the build process 
unreliable.  It seems there is a missing dependency between 
latent_entropy_plugin.so and the generated/utsrelease.h header.
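For illustration, the missing edge in kbuild terms would be roughly of this 
shape (a hypothetical fragment to show the idea, not the actual fix):

```make
# scripts/gcc-plugins/Makefile (hypothetical sketch): make the plugin
# objects depend on the generated header so it is guaranteed to exist
# before the plugins are compiled
$(obj)/latent_entropy_plugin.so: include/generated/utsrelease.h
```

Without such an edge, a sufficiently parallel make can compile the plugins 
before the header-generation step has run, which matches the intermittent 
"No such file or directory" failures above.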

I'm continuing to investigate; meanwhile, [1] should stop the spamming of 
upstream developers.

[1] https://review.linaro.org/c/toolchain/jenkins-scripts/+/41433

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [TCWG CI] Regression caused by gcc: rs6000: MMA test case ICEs using -O3 [PR99842]

2022-05-30 Thread Maxim Kuvyrkov
Hi Peter,

This is, obviously, a fluke.  We'll investigate.

Sorry for the noise,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 30 May 2022, at 09:56, ci_not...@linaro.org wrote:
> 
> [TCWG CI] Regression caused by gcc: rs6000: MMA test case ICEs using -O3 
> [PR99842]:
> commit df4e0359dad239854af0ea9eacb8e7e3719557d0
> Author: Peter Bergner 
> 
>rs6000: MMA test case ICEs using -O3 [PR99842]
> 
> Results regressed to
> # reset_artifacts:
> -10
> # build_abe binutils:
> -9
> # build_abe stage1:
> -5
> # build_abe qemu:
> -2
> # linux_n_obj:
> 33
> # First few build errors in logs:
> # 00:00:57 cc1plus: fatal error: ./include/generated/utsrelease.h: No such 
> file or directory
> # 00:00:57 cc1plus: fatal error: ./include/generated/utsrelease.h: No such 
> file or directory
> # 00:00:57 cc1plus: fatal error: ./include/generated/utsrelease.h: No such 
> file or directory
> # 00:00:57 make[2]: *** [scripts/gcc-plugins/stackleak_plugin.so] Error 1
> # 00:00:57 make[2]: *** [scripts/gcc-plugins/latent_entropy_plugin.so] Error 1
> # 00:00:57 make[2]: *** [scripts/gcc-plugins/randomize_layout_plugin.so] 
> Error 1
> # 00:00:57 make[1]: *** [scripts/gcc-plugins] Error 2
> # 00:00:59 make: *** [scripts] Error 2
> 
> from
> # reset_artifacts:
> -10
> # build_abe binutils:
> -9
> # build_abe stage1:
> -5
> # build_abe qemu:
> -2
> # linux_n_obj:
> 21071
> # linux build successful:
> all
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_kernel/gnu-release-aarch64-next-allyesconfig
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_kernel-gnu-bisect-gnu-release-aarch64-next-allyesconfig/140/artifact/artifacts/build-df4e0359dad239854af0ea9eacb8e7e3719557d0/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_kernel-gnu-bisect-gnu-release-aarch64-next-allyesconfig/140/artifact/artifacts/build-e21e93407202e62a10c372595076c593c561bb11/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_kernel-gnu-bisect-gnu-release-aarch64-next-allyesconfig/140/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_kernel-gnu-bisect-gnu-release-aarch64-next-allyesconfig/140/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-gcc-df4e0359dad239854af0ea9eacb8e7e3719557d0
> cd investigate-gcc-df4e0359dad239854af0ea9eacb8e7e3719557d0
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_kernel-gnu-bisect-gnu-release-aarch64-next-allyesconfig/140/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_kernel-gnu-bisect-gnu-release-aarch64-next-allyesconfig/140/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_kernel-gnu-bisect-gnu-release-aarch64-next-allyesconfig/140/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_kernel-build.sh @@ 
> artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /gcc/ ./ ./bisect/baseline/
> 
> cd gcc
> 
> # Reproduce first_bad build
> git checkout --detach df4e0359dad239854af0ea9eacb8e7e3719557d0
> ../artifacts/test.sh
> 
> # Reproduce last_good build
> git checkout --detach e21e93407202e62a10c372595076c593c561bb11
> ../artifacts/test.sh
> 
> cd ..
> 
> 
> Full commit (up to 1000 lines):
> 
> commit df4e0359dad239854af0ea9eacb8e7e3719557d0
> Author: Peter Bergner 
> Date:   Sun May 30 22:45:55 2021 -0500
> 
>rs6000: MMA test case ICEs using -O3 [PR99842]
> 
>The mma_assemble_input_operand predicate does not accept reg+reg indexed
>addresses which can lead to ICEs.  The lxv and lxvp instructions have
>indexed forms (lxvx and lxvpx), so the simple solution is to just allow
>indexed addresses in the predicate.
> 
>2021-05-30  Peter Bergner  
> 
>gcc/
>PR target/99842
>* config/rs6000/predicates.md(mma_assemble_input_operand): Allow
>indexed form addresses.
> 
>gcc/testsuite/
>PR target/99842
>* g++.target/powerpc/pr99842.C: New.
> ---

Re: [TCWG CI] 401.bzip2 grew in size by 11% after llvm: [MachineSink] Disable if there are any irreducible cycles

2022-03-29 Thread Maxim Kuvyrkov
Hi Nikita,

Your patch seems to increase the code size of 401.bzip2 by 11% at -Oz, due 
to the BZ2_decompress() function growing by 56%.

Would you please investigate and see if this regression can be avoided?

Please let us know if you need help reproducing or analyzing the problem.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org




> On Mar 27, 2022, at 11:26 AM, ci_not...@linaro.org wrote:
> 
> After llvm commit 6fde0439512580df793f3f48f95757b47de40d2b
> Author: Nikita Popov 
> 
>[MachineSink] Disable if there are any irreducible cycles
> 
> the following benchmarks grew in size by more than 1%:
> - 401.bzip2 grew in size by 11% from 36213 to 40325 bytes
>  - 401.bzip2:[.] BZ2_decompress grew in size by 56% from 7400 to 11560 bytes
> 
> Below reproducer instructions can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggerring benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/build-6fde0439512580df793f3f48f95757b47de40d2b/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/build-eb27da7dec67f1a36505b589b786ba1a499c274a/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: arm-linux-gnueabihf
> - Compiler flags: -Oz -mthumb
> - Hardware: APM Mustang 8x X-Gene1
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_apm/llvm-master-arm-spec2k6-Oz
> - tcwg_bmk_llvm_apm/llvm-master-arm-spec2k6-Oz_LTO
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/build-6fde0439512580df793f3f48f95757b47de40d2b/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/build-eb27da7dec67f1a36505b589b786ba1a499c274a/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-6fde0439512580df793f3f48f95757b47de40d2b
> cd investigate-llvm-6fde0439512580df793f3f48f95757b47de40d2b
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Oz/20/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /llvm/ ./ ./bisect/baseline/
> 
> cd llvm
> 
> # Reproduce first_bad build
> git checkout --detach 6fde0439512580df793f3f48f95757b47de40d2b
> ../artifacts/test.sh
> 
> # Reproduce last_good build
> git checkout --detach eb27da7dec67f1a36505b589b786ba1a499c274a
> ../artifacts/test.sh
> 
> cd ..
> 
> 
> 

Re: [TCWG CI] 456.hmmer grew in size by 9% after llvm: Extend the `uwtable` attribute with unwind table kind

2022-03-29 Thread Maxim Kuvyrkov
Hi Momchil,

Thanks for looking into this!

Looking at the binaries (attached), the text size has increased ...

$ size 456.hmmer-before.elf
   text    data     bss     dec     hex filename
 104960    3221   81752  189933   2e5ed ./456.hmmer-before.elf
$ size 456.hmmer-after.elf
   text    data     bss     dec     hex filename
 113912    3221   81752  198885   308e5 ./456.hmmer-after.elf

... due to .eh_frame_hdr and .eh_frame sections:

  [Nr] Name          Type      Address   Off    Size   ES Flg Lk Inf Al
BEFORE:
  [12] .eh_frame_hdr PROGBITS  00204848  004848 0004cc 00   A  0   0  4
  [13] .eh_frame     PROGBITS  00204d18  004d18 0015bc 00   A  0   0  8
AFTER:
  [12] .eh_frame_hdr PROGBITS  00204848  004848 0004cc 00   A  0   0  4
  [13] .eh_frame     PROGBITS  00204d18  004d18 0015bc 00   A  0   0  8

.
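For what it's worth, the unwind-section overhead can be totalled mechanically 
from `readelf -S -W` output.  A rough sketch (the two sample rows mirror the 
dump above; the Size column is hex and sits in field 6 of this layout):

```shell
# Sum the sizes of the .eh_frame* sections from readelf -S -W style rows
# (rows inlined for illustration; with a real binary, pipe in
#  `readelf -S -W 456.hmmer.elf` instead)
cat > /tmp/sections.txt <<'EOF'
[12] .eh_frame_hdr PROGBITS 00204848 004848 0004cc 00 A 0 0 4
[13] .eh_frame PROGBITS 00204d18 004d18 0015bc 00 A 0 0 8
EOF
total=0
while read -r _idx name _type _addr _off size _rest; do
  case "$name" in
    # Only count the unwind-table sections; sizes are hex strings
    .eh_frame*) total=$(( total + $(printf '%d' "0x$size") ));;
  esac
done < /tmp/sections.txt
echo "unwind sections: $total bytes"
```

Running the same loop over the before/after binaries gives the exact number 
of bytes the unwind tables contribute to the regression.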

This problem seems to occur only at -Oz -flto, and outlined functions are the 
most affected.  CC'ing Yvan (who worked on the outliner recently), who may 
have some insight into the inner workings of the outliner, LTO, and unwind info.
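As a quick way to quantify how many such degenerate entries a binary carries, 
one could scan `readelf --debug-dump=frames` output for FDEs whose only 
instructions are DW_CFA_nop.  A rough sketch (the inlined sample is the entry 
quoted below in this thread; the file path is hypothetical):

```shell
# Count "empty" FDEs (nothing but DW_CFA_nop) in a readelf frames dump
cat > /tmp/frames.txt <<'EOF'
00bc 0010 00c0 FDE cie= pc=0c34..0c3c
  DW_CFA_nop
  DW_CFA_nop
  DW_CFA_nop
EOF
# An "FDE" header line starts an entry; the entry counts as empty if
# every instruction line that follows it is DW_CFA_nop.
n=$(awk '/FDE/ { if (fde && empty) n++; fde = $0; empty = 1; next }
         /DW_CFA/ && $1 != "DW_CFA_nop" { empty = 0 }
         END { if (fde && empty) n++; print n + 0 }' /tmp/frames.txt)
echo "$n empty FDE(s)"
```

Comparing that count before and after the patch would show how much of the 
.eh_frame growth is pure no-op entries for outlined functions.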

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 28 Mar 2022, at 19:46, Momchil Velikov  wrote:
> 
>> Your patch seems to significantly increase code-size of several benchmarks — 
>> by up to 9%.  Would
>> you please investigate whether this can be avoided?
> 
> Could you, please, confirm if the size increase is due to having bigger
> `.eh_frame`/`.debug_frame` sections?
> 
> It looks like the reason is that we generate a bunch of nonsensical unwind-info
> entries for outlined functions, e.g.:
> 
>00bc 0010 00c0 FDE cie= pc=0c34..0c3c
>  DW_CFA_nop
>  DW_CFA_nop
>  DW_CFA_nop
> 
> I'm working on a patch to not emit .cfi_startproc/.cfi_endproc if a function 
> does not contain any CFI instructions.
> 
> ~chill
> IMPORTANT NOTICE: The contents of this email and any attachments are 
> confidential and may also be privileged. If you are not the intended 
> recipient, please notify the sender immediately and do not disclose the 
> contents to any other person, use it for any purpose, or store or copy the 
> information in any medium. Thank you.


Re: [TCWG CI] 456.hmmer grew in size by 9% after llvm: Extend the `uwtable` attribute with unwind table kind

2022-03-25 Thread Maxim Kuvyrkov
Hi Momchil,

Your patch seems to significantly increase the code size of several 
benchmarks, by up to 9%.  Would you please investigate whether this can be 
avoided?

Please let us know if you need assistance with reproducing the regressions.

Thank you,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 25 Mar 2022, at 14:45, ci_not...@linaro.org wrote:
> 
> After llvm commit 6398903ac8c141820a84f3063b7956abe1742500
> Author: Momchil Velikov 
> 
>Extend the `uwtable` attribute with unwind table kind
> 
> the following benchmarks grew in size by more than 1%:
> - 456.hmmer grew in size by 9% from 104960 to 113912 bytes
> - 403.gcc grew in size by 9% from 2191632 to 2394404 bytes
> - 400.perlbench grew in size by 9% from 803690 to 879478 bytes
> - 458.sjeng grew in size by 5% from 102355 to 107719 bytes
> - 401.bzip2 grew in size by 3% from 43428 to 44772 bytes
> 
> Below reproducer instructions can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/build-6398903ac8c141820a84f3063b7956abe1742500/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/build-48f18845846bba4cf3e5e9fa2150d4c0253b/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -Oz -flto
> - Hardware: APM Mustang 8x X-Gene1
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_apm/llvm-master-aarch64-spec2k6-Oz_LTO
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/build-6398903ac8c141820a84f3063b7956abe1742500/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/build-48f18845846bba4cf3e5e9fa2150d4c0253b/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-6398903ac8c141820a84f3063b7956abe1742500
> cd investigate-llvm-6398903ac8c141820a84f3063b7956abe1742500
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz_LTO/13/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /llvm/ ./ ./bisect/baseline/
> 
> cd llvm
> 
> # Reproduce first_bad build
> git checkout --detach 6398903ac8c141820a84f3063b7956abe1742500
> ../artifacts/test.sh
> 
&

Re: [TCWG CI] 401.bzip2 grew in size by 9% after llvm: [LV] Remove `LoopVectorizationCostModel::useEmulatedMaskMemRefHack()`

2022-02-10 Thread Maxim Kuvyrkov
Hi Roman,

Your patch below increased the code size of 401.bzip2 by 9% on 32-bit ARM 
when compiled with -Os.  That's quite a lot; would you please investigate 
whether this regression can be avoided?
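For narrowing down which functions account for the growth, a per-symbol size 
diff between the two builds can be scripted from `nm` output.  A rough sketch 
(the listings are inlined stand-ins; with real binaries you would feed in 
`nm --print-size --size-sort <elf>`, whose size column is hex):

```shell
# Per-symbol size diff between two nm listings (addr size type name)
cat > /tmp/before.syms <<'EOF'
00001000 00001df0 T BZ2_decompress
00003000 00000200 T main
EOF
cat > /tmp/after.syms <<'EOF'
00001000 00002a70 T BZ2_decompress
00004000 00000200 T main
EOF
# Join on the symbol name (field 4), emit before/after sizes, and
# report only the symbols whose size actually changed
join -1 4 -2 4 -o 0,1.2,2.2 /tmp/before.syms /tmp/after.syms |
while read -r name b a; do
  [ "$b" = "$a" ] && continue
  printf '%s: %d -> %d bytes\n' "$name" "0x$b" "0x$a"
done > /tmp/size-diff.txt
cat /tmp/size-diff.txt
```

On the save-temps artifacts linked in the report, this immediately singles 
out BZ2_decompress as the dominant contributor.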

Please let me know if this doesn’t reproduce for you and I’ll try to help.

Thank you,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 9 Feb 2022, at 17:10, ci_not...@linaro.org wrote:
> 
> After llvm commit 77a0da926c9ea86afa9baf28158d79c7678fc6b9
> Author: Roman Lebedev 
> 
>[LV] Remove `LoopVectorizationCostModel::useEmulatedMaskMemRefHack()`
> 
> the following benchmarks grew in size by more than 1%:
> - 401.bzip2 grew in size by 9% from 37909 to 41405 bytes
>  - 401.bzip2:[.] BZ2_decompress grew in size by 42% from 7664 to 10864 bytes
> - 429.mcf grew in size by 2% from 7732 to 7908 bytes
> 
> Below reproducer instructions can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/build-77a0da926c9ea86afa9baf28158d79c7678fc6b9/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/build-f59787084e09aeb787cb3be3103b2419ccd14163/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: arm-linux-gnueabihf
> - Compiler flags: -Os -mthumb
> - Hardware: APM Mustang 8x X-Gene1
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_apm/llvm-master-aarch64-spec2k6-Os_LTO
> - tcwg_bmk_llvm_apm/llvm-master-arm-spec2k6-Os
> - tcwg_bmk_llvm_apm/llvm-master-arm-spec2k6-Os_LTO
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/build-77a0da926c9ea86afa9baf28158d79c7678fc6b9/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/build-f59787084e09aeb787cb3be3103b2419ccd14163/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-77a0da926c9ea86afa9baf28158d79c7678fc6b9
> cd investigate-llvm-77a0da926c9ea86afa9baf28158d79c7678fc6b9
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-arm-spec2k6-Os/2/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /llvm/ ./ ./bisect/baseline/
> 
> cd llvm
> 
> # Reproduce first_bad build
> git checkout --detach 77a0da926c9ea86afa9baf28158d79c7678fc6b9
> ../artifacts/test.sh
> 
> # Reproduce last_good build
> git checkout --detac

Re: [TCWG CI] 453.povray failed to build after llvm: [SLP]Fix reused extracts cost.

2021-12-07 Thread Maxim Kuvyrkov
Cool, thanks!
--
Maxim Kuvyrkov
https://www.linaro.org

> On 7 Dec 2021, at 15:10, Alexey Bataev  wrote:
> 
> I committed a fix yesterday, should be fixed. Another one planning to commit 
> later today or tomorrow. 
> 
> Best regards,
> Alexey Bataev
> 
>> 7 дек. 2021 г., в 07:08, Maxim Kuvyrkov  
>> написал(а):
>> 
>> Hi Alexey,
>> 
>> After your patch Clang crashes while building 453.povray for 
>> aarch64-linux-gnu.  Apparently, this happens only with LTO enabled at -O2 
>> and -O3.
>> 
>> Did you get any bug reports against this patch already?
>> 
>> Thanks,
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>>> On 5 Dec 2021, at 02:55, ci_not...@linaro.org wrote:
>>> 
>>> After llvm commit ba74bb3a226e1b4660537f274627285b1bf41ee1
>>> Author: Alexey Bataev 
>>> 
>>>  [SLP]Fix reused extracts cost.
>>> 
>>> the following benchmarks slowed down by more than 2%:
>>> - 453.povray failed to build
>>> 
>>> Below reproducer instructions can be used to re-build both "first_bad" and 
>>> "last_good" cross-toolchains used in this bisection.  Naturally, the 
>>> scripts will fail when triggering benchmarking jobs if you don't have 
>>> access to Linaro TCWG CI.
>>> 
>>> For your convenience, we have uploaded tarballs with pre-processed source 
>>> and assembly files at:
>>> - First_bad save-temps: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-ba74bb3a226e1b4660537f274627285b1bf41ee1/save-temps/
>>> - Last_good save-temps: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-78cc133c63173a4b5b7a43750cc507d4cff683cf/save-temps/
>>> - Baseline save-temps: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-baseline/save-temps/
>>> 
>>> Configuration:
>>> - Benchmark: SPEC CPU2006
>>> - Toolchain: Clang + Glibc + LLVM Linker
>>> - Version: all components were built from their tip of trunk
>>> - Target: aarch64-linux-gnu
>>> - Compiler flags: -O3 -flto
>>> - Hardware: NVidia TX1 4x Cortex-A57
>>> 
>>> This benchmarking CI is work-in-progress, and we welcome feedback and 
>>> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement 
>>> plans include adding support for SPEC CPU2017 benchmarks and providing 
>>> "perf report/annotate" data behind these reports.
>>> 
>>> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
>>> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
>>> 
>>> This commit has regressed these CI configurations:
>>> - tcwg_bmk_llvm_tx1/llvm-master-aarch64-spec2k6-O3_LTO
>>> 
>>> First_bad build: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-ba74bb3a226e1b4660537f274627285b1bf41ee1/
>>> Last_good build: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-78cc133c63173a4b5b7a43750cc507d4cff683cf/
>>> Baseline build: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-baseline/
>>> Even more details: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/
>>> 
>>> Reproduce builds:
>>> 
>>> mkdir investigate-llvm-ba74bb3a226e1b4660537f274627285b1bf41ee1
>>> cd investigate-llvm-ba74bb3a226e1b4660537f274627285b1bf41ee1
>>> 
>>> # Fetch scripts
>>> git clone https://git.linaro.org/toolchain/jenkins-scripts
>>> 
>>> # Fetch manifests and test.sh script
>>> mkdir -p artifacts/manifests
>>> curl -o artifacts/manifests/build-baseline.sh 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/manifests/build-baseline.sh
>>>  --fail
>>> curl -o artifacts/manifests/build-parameters.sh 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/manifests/build-parameters.sh
>>>  --fail
>>> curl -o artifacts/test.sh 
>>> https://c

Re: [TCWG CI] 453.povray failed to build after llvm: [SLP]Fix reused extracts cost.

2021-12-07 Thread Maxim Kuvyrkov
Hi Alexey,

After your patch Clang crashes while building 453.povray for aarch64-linux-gnu. 
 Apparently, this happens only with LTO enabled at -O2 and -O3.

Did you get any bug reports against this patch already?

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 5 Dec 2021, at 02:55, ci_not...@linaro.org wrote:
> 
> After llvm commit ba74bb3a226e1b4660537f274627285b1bf41ee1
> Author: Alexey Bataev 
> 
>[SLP]Fix reused extracts cost.
> 
> the following benchmarks slowed down by more than 2%:
> - 453.povray failed to build
> 
> The reproducer instructions below can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-ba74bb3a226e1b4660537f274627285b1bf41ee1/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-78cc133c63173a4b5b7a43750cc507d4cff683cf/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -O3 -flto
> - Hardware: NVidia TX1 4x Cortex-A57
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_tx1/llvm-master-aarch64-spec2k6-O3_LTO
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-ba74bb3a226e1b4660537f274627285b1bf41ee1/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-78cc133c63173a4b5b7a43750cc507d4cff683cf/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-ba74bb3a226e1b4660537f274627285b1bf41ee1
> cd investigate-llvm-ba74bb3a226e1b4660537f274627285b1bf41ee1
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3_LTO/39/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /llvm/ ./ ./bisect/baseline/
> 
> cd llvm
> 
> # Reproduce first_bad build
> git checkout --detach ba74bb3a226e1b4660537f274627285b1bf41ee1
> ../artifacts/test.sh
> 
> # Reproduce last_good build
> git checkout --detach 78cc133c63173a4b5b7a43750cc507d4cff683cf
> ../artifacts/test.sh
> 
> cd ..
> 
> 
> Full commit (up to 1000 lines):
> 
> commit ba74bb3a226e1b4660537f274627285b1bf41ee1
> Author: Alexey Bataev 
> Date:   Thu Dec 2 04:22:55 2021 -0800
> 
>[SLP]Fix r

Re: [TCWG CI] 433.milc slowed down by 4% after llvm: Add missing header

2021-12-07 Thread Maxim Kuvyrkov
Hi David,

This is a false positive, sorry for the noise.  Our CI bisects performance 
regressions down to a single commit and notifies the patch author when the 
regression is significant.

Since this is a benchmarking CI, some noise is expected.  Apparently, 433.milc 
started to show bi-modal performance, and the bisection [mistakenly] converged 
on this patch.  We are working to reduce benchmarking noise and otherwise 
improve the benchmarking CI.
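The failure mode described above can be illustrated with a toy simulation (the numbers and the bisection policy below are hypothetical, not the actual CI algorithm): when run-to-run timings are bimodal, each good/bad comparison in a naive bisection is effectively a coin flip, so the search still terminates but converges on an arbitrary commit.

```python
import random

random.seed(42)

def measure(commit):
    """Simulated perf-sample count for one run at `commit`: bimodal noise
    that is completely independent of the commit being tested."""
    return random.choice([12400, 12900]) + random.randint(-50, 50)

def naive_bisect(first, last, threshold=12650):
    """Treat any run above `threshold` as 'bad'.  With bimodal noise each
    comparison is a coin flip, so the result is an arbitrary commit."""
    while last - first > 1:
        mid = (first + last) // 2
        if measure(mid) > threshold:
            last = mid    # looked 'bad': assume culprit is at or before mid
        else:
            first = mid   # looked 'good': assume culprit is after mid
    return last

# Five independent bisections over the same noise-only commit range
# usually blame several different, unrelated commits.
blamed = {naive_bisect(0, 1000) for _ in range(5)}
print(sorted(blamed))
```

Mitigations amount to making `measure` less noisy or comparing distributions rather than single runs.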

Regards!

--
Maxim Kuvyrkov
https://www.linaro.org

> On 7 Dec 2021, at 06:12, David Blaikie  wrote:
> 
> Seems... unlikely this change had a performance impact.
> 
> Also is this email meant to be sent to public contributors like myself, or
> only intended for some Linaro folks?
> 
> On Sun, Dec 5, 2021 at 6:18 AM  wrote:
> 
>> After llvm commit bd4c6a476fd037fb07a1c484f75d93ee40713d3d
>> Author: David Blaikie 
>> 
>>Add missing header
>> 
>> the following benchmarks slowed down by more than 2%:
>> - 433.milc slowed down by 4% from 12427 to 12916 perf samples
>> 
>> The reproducer instructions below can be used to re-build both "first_bad" and
>> "last_good" cross-toolchains used in this bisection.  Naturally, the
>> scripts will fail when triggering benchmarking jobs if you don't have
>> access to Linaro TCWG CI.
>> 
>> For your convenience, we have uploaded tarballs with pre-processed source
>> and assembly files at:
>> - First_bad save-temps:
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/build-bd4c6a476fd037fb07a1c484f75d93ee40713d3d/save-temps/
>> - Last_good save-temps:
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/build-7d4da4e1ab7f79e51db0d5c2a0f5ef1711122dd7/save-temps/
>> - Baseline save-temps:
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/build-baseline/save-temps/
>> 
>> Configuration:
>> - Benchmark: SPEC CPU2006
>> - Toolchain: Clang + Glibc + LLVM Linker
>> - Version: all components were built from their tip of trunk
>> - Target: aarch64-linux-gnu
>> - Compiler flags: -O2 -flto
>> - Hardware: NVidia TX1 4x Cortex-A57
>> 
>> This benchmarking CI is work-in-progress, and we welcome feedback and
>> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement
>> plans include adding support for SPEC CPU2017 benchmarks and providing "perf
>> report/annotate" data behind these reports.
>> 
>> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS,
>> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
>> 
>> This commit has regressed these CI configurations:
>> - tcwg_bmk_llvm_tx1/llvm-master-aarch64-spec2k6-O2_LTO
>> 
>> First_bad build:
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/build-bd4c6a476fd037fb07a1c484f75d93ee40713d3d/
>> Last_good build:
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/build-7d4da4e1ab7f79e51db0d5c2a0f5ef1711122dd7/
>> Baseline build:
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/build-baseline/
>> Even more details:
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/
>> 
>> Reproduce builds:
>> 
>> mkdir investigate-llvm-bd4c6a476fd037fb07a1c484f75d93ee40713d3d
>> cd investigate-llvm-bd4c6a476fd037fb07a1c484f75d93ee40713d3d
>> 
>> # Fetch scripts
>> git clone https://git.linaro.org/toolchain/jenkins-scripts
>> 
>> # Fetch manifests and test.sh script
>> mkdir -p artifacts/manifests
>> curl -o artifacts/manifests/build-baseline.sh
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/manifests/build-baseline.sh
>> --fail
>> curl -o artifacts/manifests/build-parameters.sh
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/manifests/build-parameters.sh
>> --fail
>> curl -o artifacts/test.sh
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2_LTO/34/artifact/artifacts/test.sh
>> --fail
>> chmod +x artifacts/test.sh
>> 
>> # Reproduce the baseline build (build all pre-requisites)
>> ./jenkins-scripts/tcwg_bmk-build.sh @@
>> artifacts/mani

Re: [TCWG CI] 400.perlbench slowed down by 6% after llvm: [AArch64] Remove redundant ORRWrs which is generated by zero-extend

2021-10-27 Thread Maxim Kuvyrkov
Hi David,

Thanks for looking at this!

I can’t immediately say that this is a false positive; the performance 
difference reproduces in several independent builds.

Looking at the save-temps -- at least 400.perlbench’s regexec.s (which hosts 
S_regmatch()) has 19 extra instructions, which, if I spotted correctly, come 
from a couple of additional stack spills.  To get to the bottom of this we need 
to look at the runtime profiles, which are not automatically generated yet.  
One needs to dig them up from the raw benchmarking data we have stored.
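A per-function instruction count like the one quoted above can be extracted from the save-temps .s files with a few lines of script.  This is an illustrative sketch only -- the sample input below is made up, and real assembly would need more careful filtering:

```python
import re

# Count instruction lines per function in an AArch64 .s file (such as the
# regexec.s found in the save-temps tarballs), so two builds can be
# compared function by function.  The sample input is fabricated.
sample_asm = """\
S_regmatch:
\t.cfi_startproc
\tstp\tx29, x30, [sp, #-16]!
\tmov\tx29, sp
\tldp\tx29, x30, [sp], #16
\tret
helper:
\tret
"""

def insn_counts(asm_text):
    counts, current = {}, None
    for line in asm_text.splitlines():
        label = re.match(r"^([A-Za-z_.$][\w.$]*):", line)
        if label:
            current = label.group(1)
            counts[current] = 0
        # Skip assembler directives (".cfi_*", ".align", ...): count only
        # real instructions, which are indented and do not start with ".".
        elif current and line.startswith("\t") and not line.lstrip().startswith("."):
            counts[current] += 1
    return counts

print(insn_counts(sample_asm))  # {'S_regmatch': 4, 'helper': 1}
```

Running this over the first_bad and last_good trees and diffing the two dictionaries points at the functions that grew.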

--
Maxim Kuvyrkov
https://www.linaro.org

> On 27 Oct 2021, at 17:14, David Spickett  wrote:
> 
> I think this is a false positive/one off disturbance in the
> benchmarking. Based on the contents of the saved temps.
> 
> FastFullPelBlockMotionSearch has not changed at all. (so unless perf
> is saying time spent in that function and its callees went up, it must
> be something other than code change)
> 
> perlbench has assorted changes that all make sense with the llvm
> change. Some redundant movs removed, register numbers shuffled.
> Potentially it hit some slow path for this core, but doesn't seem like
> something to be concerned about for this change.
> 
> @Maxim Kuvyrkov Please correct me if I'm wrong.
> 
> On Wed, 27 Oct 2021 at 14:33,  wrote:
>> 
>> After llvm commit a502436259307f95e9c95437d8a1d2d07294341c
>> Author: Jingu Kang 
>> 
>>[AArch64] Remove redundant ORRWrs which is generated by zero-extend
>> 
>> the following benchmarks slowed down by more than 2%:
>> - 400.perlbench slowed down by 6% from 9792 to 10354 perf samples
>> - 464.h264ref slowed down by 4% from 11023 to 11509 perf samples
>>  - 464.h264ref:[.] FastFullPelBlockMotionSearch slowed down by 33% from 1634 
>> to 2180 perf samples
>> 
>> The reproducer instructions below can be used to re-build both "first_bad" and 
>> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
>> will fail when triggering benchmarking jobs if you don't have access to 
>> Linaro TCWG CI.
>> 
>> For your convenience, we have uploaded tarballs with pre-processed source 
>> and assembly files at:
>> - First_bad save-temps: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/30/artifact/artifacts/build-a502436259307f95e9c95437d8a1d2d07294341c/save-temps/
>> - Last_good save-temps: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/30/artifact/artifacts/build-6fa1b4ff4b05b9b9a432f7310802255c160c8f4f/save-temps/
>> - Baseline save-temps: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/30/artifact/artifacts/build-baseline/save-temps/
>> 
>> Configuration:
>> - Benchmark: SPEC CPU2006
>> - Toolchain: Clang + Glibc + LLVM Linker
>> - Version: all components were built from their tip of trunk
>> - Target: aarch64-linux-gnu
>> - Compiler flags: -O3
>> - Hardware: NVidia TX1 4x Cortex-A57
>> 
>> This benchmarking CI is work-in-progress, and we welcome feedback and 
>> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
>> include adding support for SPEC CPU2017 benchmarks and providing "perf 
>> report/annotate" data behind these reports.
>> 
>> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
>> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
>> 
>> This commit has regressed these CI configurations:
>> - tcwg_bmk_llvm_tx1/llvm-master-aarch64-spec2k6-O3
>> 
>> First_bad build: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/30/artifact/artifacts/build-a502436259307f95e9c95437d8a1d2d07294341c/
>> Last_good build: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/30/artifact/artifacts/build-6fa1b4ff4b05b9b9a432f7310802255c160c8f4f/
>> Baseline build: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/30/artifact/artifacts/build-baseline/
>> Even more details: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/30/artifact/artifacts/
>> 
>> Reproduce builds:
>> 
>> mkdir investigate-llvm-a502436259307f95e9c95437d8a1d2d07294341c
>> cd investigate-llvm-a502436259307f95e9c95437d8a1d2d07294341c
>> 
>> # Fetch scripts
>> git clone https://git.linaro.org/toolchain/jenkins-scripts
>> 
>> # Fetch manifests and test.sh script
>> mkdir -p artifacts/manifests
>> curl -o artifacts/man

Re: [TCWG CI] 433.milc:[.] mult_su3_mat_vec slowed down by 11% after llvm: [AMDGPU] Enable load clustering in the post-RA scheduler

2021-10-26 Thread Maxim Kuvyrkov
Hi Jay,

This is a false positive.  We’ll take a look at why this report was sent out.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 26 Oct 2021, at 22:19, ci_not...@linaro.org wrote:
> 
> After llvm commit 66e13c7f439cf162d7ed1d25883e71a5755ac7ec
> Author: Jay Foad 
> 
>[AMDGPU] Enable load clustering in the post-RA scheduler
> 
> the following hot functions slowed down by more than 10% (but their 
> benchmarks slowed down by less than 2%):
> - 433.milc:[.] mult_su3_mat_vec slowed down by 11% from 2163 to 2391 perf 
> samples
> 
> The reproducer instructions below can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/build-66e13c7f439cf162d7ed1d25883e71a5755ac7ec/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/build-838b4a533e6853d44e0c6d1977bcf0b06557d4ab/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -O2
> - Hardware: NVidia TX1 4x Cortex-A57
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_tx1/llvm-master-aarch64-spec2k6-O2
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/build-66e13c7f439cf162d7ed1d25883e71a5755ac7ec/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/build-838b4a533e6853d44e0c6d1977bcf0b06557d4ab/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-66e13c7f439cf162d7ed1d25883e71a5755ac7ec
> cd investigate-llvm-66e13c7f439cf162d7ed1d25883e71a5755ac7ec
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O2/27/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /llvm/ ./ ./bisect/baseline/
> 
> cd llvm
> 
> # Reproduce first_bad build
> git checkout --detach 66e13c7f439cf162d7ed1d25883e71a5755ac7ec
> ../artifacts/test.sh
> 
> # Reproduce last_good build
> git checkout --detach 838b4a533e6853d44e0c6d1977bcf0b06557d4ab
> ../artifacts/test.sh
> 
> cd ..
> 
> 
> Full commit (up to 1000 lines):
> 
> commit 66e13c7f439cf162d7ed1d25883e71a5755ac7ec
> Author: Jay Foad 
> Date:   Tue Oct 12 15:39:43 2021 +0100
> 
>[AMDGPU] Enable load clustering in the post-RA schedul

Re: [TCWG CI] 444.namd grew in size by 2% after llvm: [SLP]Improve graph reordering.

2021-10-22 Thread Maxim Kuvyrkov
Hi Alexey,

[I’ve only now noticed this CI report.]

Your patch appears to slightly (but consistently) increase code size of 
444.namd at least on aarch64-linux-gnu.  Could you check if this is something 
that triggers a corner case and could be easily fixed?

At -Oz       444.namd grew in size by 2% from 182239 to 184991 bytes
At -Os       444.namd grew in size by 2% from 192302 to 195218 bytes
At -Os -flto 444.namd grew in size by 2% from 152254 to 155346 bytes

Comparing -Oz save-temps [1] before and after I see that several hot functions 
grew in size [2], with

444.namd,[.] ComputeNonbondedUtil::calc_pair,97,105,1001,972,2756,2892

… growing the most at 5% from 2756 to 2892 bytes.  In particular, parts just 
before .LBB25_87 and .LBB25_89 labels look substantially worse (these are in 
ComputeNonbondedUtil.s in the linked save-temps tarballs).
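A row in that format can be checked mechanically.  The sketch below assumes (it is not documented in this thread) that the last two columns of the results-full.csv row are the symbol's size in bytes before and after the culprit commit:

```python
import csv
import io

# One row in the shape of the results-full.csv excerpt quoted above.
row_text = "444.namd,[.] ComputeNonbondedUtil::calc_pair,97,105,1001,972,2756,2892"

def size_growth(row):
    """Return (benchmark, symbol, before, after, percent growth),
    assuming the last two fields are sizes in bytes."""
    benchmark, symbol = row[0], row[1]
    before, after = int(row[-2]), int(row[-1])
    return benchmark, symbol, before, after, 100.0 * (after - before) / before

row = next(csv.reader(io.StringIO(row_text)))
bench, sym, before, after, pct = size_growth(row)
print(f"{bench} {sym}: {before} -> {after} bytes ({pct:+.1f}%)")
# 2756 -> 2892 bytes is +4.9%, consistent with the "5%" quoted above.
```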


[1]
- First_bad save-temps: 
https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz/14/artifact/artifacts/build-bc69dd62c04a70d29943c1c06c7effed150b70e1/save-temps/
- Last_good save-temps: 
https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz/14/artifact/artifacts/build-5661317f864abf750cf893c6a4cc7a977be0995a/save-temps/

[2] 
https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Oz/14/artifact/artifacts/build-bc69dd62c04a70d29943c1c06c7effed150b70e1/12-check_regression/results-full.csv/*view*/
 

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 23 Sep 2021, at 08:21, ci_not...@linaro.org wrote:
> 
> After llvm commit bc69dd62c04a70d29943c1c06c7effed150b70e1
> Author: Alexey Bataev 
> 
>[SLP]Improve graph reordering.
> 
> the following benchmarks grew in size by more than 1%:
> - 444.namd grew in size by 2% from 192302 to 195218 bytes
> 
> The reproducer instructions below can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/build-bc69dd62c04a70d29943c1c06c7effed150b70e1/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/build-5661317f864abf750cf893c6a4cc7a977be0995a/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -Os
> - Hardware: APM Mustang 8x X-Gene1
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_apm/llvm-master-aarch64-spec2k6-Os
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/build-bc69dd62c04a70d29943c1c06c7effed150b70e1/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/build-5661317f864abf750cf893c6a4cc7a977be0995a/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-bc69dd62c04a70d29943c1c06c7effed150b70e1
> cd investigate-llvm-bc69dd62c04a70d29943c1c06c7effed150b70e1
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-aarch64-spec2k6-Os/12/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-p

Re: [CI-NOTIFY]: TCWG Bisect tcwg_bmk_tk1/llvm-master-arm-spec2k6-Os - Build # 9 - Successful!

2021-10-22 Thread Maxim Kuvyrkov
[Better late than never!]

Hi Sanjay,

Your patch seems to have fixed the regression.  Thanks!

On Wed, 14 Jul 2021 at 19:14, Sanjay Patel  wrote:

> I have a hunch about what went wrong. Please see if this commit changes
> anything for you:
> https://reviews.llvm.org/rGca6e117d8634
>
> On Wed, Jul 14, 2021 at 11:12 AM Sanjay Patel 
> wrote:
>
>> Thanks for letting me know. I could use some help to repro because I'm
>> not very familiar with that benchmark or ARM32.
>> 1. Can you provide the unoptimized IR for "BZ2_decompress"?
>> 2. What is the particular flavor/CPU of ARM32 to target?
>> 3. Was there a speed regression in addition to the size regression?
>>
>> If you can file a bugzilla with any of this, that would be best of
>> course. That way, we can pull in ARM or other experts as needed if this is
>> a codegen issue or problem with another IR pass.
>>
>>
>> On Wed, Jul 14, 2021 at 10:55 AM Maxim Kuvyrkov <
>> maxim.kuvyr...@linaro.org> wrote:
>>
>>> Hi Sanjay,
>>>
>>> On 32-bit ARM your patch appears to increase code size of BZ2_decompress
>>> from SPEC2006’s 401.bzip2 by 50% — from 7.5K to 11K.  This increases
>>> overall code size of 401.bzip2 benchmark by 10%.
>>>
>>> Would you please investigate?
>>>
>>> Please let us know if you need help reproducing the problem.
>>>
>>> Regards,
>>>
>>> --
>>> Maxim Kuvyrkov
>>> https://www.linaro.org
>>>
>>> > On 14 Jul 2021, at 17:32, ci_not...@linaro.org wrote:
>>> >
>>> > Successfully identified regression in *llvm* in CI configuration
>>> tcwg_bmk_llvm_tk1/llvm-master-arm-spec2k6-Os.  So far, this commit has
>>> regressed CI configurations:
>>> > - tcwg_bmk_llvm_tk1/llvm-master-arm-spec2k6-Os
>>> >
>>> > Culprit:
>>> > 
>>> > commit 40b752d28d95158e52dba7cfeea92e41b7ccff9a
>>> > Author: Sanjay Patel 
>>> > Date:   Mon Jul 5 09:57:39 2021 -0400
>>> >
>>> >[InstCombine] fold icmp slt/sgt of offset value with constant
>>> >
>>> >This follows up patches for the unsigned siblings:
>>> >0c400e895306
>>> >c7b658aeb526
>>> >
>>> >We are translating an offset signed compare to its
>>> >unsigned equivalent when one end of the range is
>>> >at the limit (zero or unsigned max).
>>> >
>>> >(X + C2) >s C --> X <u (C ^ SMAX) (if C == C2)
>>> >(X + C2) <s C --> X >u (C ^ SMAX) (if C == C2)
>>> >
>>> >This probably does not show up much in IR derived
>>> >from C/C++ source because that would likely have
>>> >'nsw', and we have folds for that already.
>>> >
>>> >As with the previous unsigned transforms, the folds
>>> >could be generalized to handle non-constant patterns:
>>> >
>>> >https://alive2.llvm.org/ce/z/Y8Xrrm
>>> >
>>> >  ; sgt
>>> >  define i1 @src(i8 %a, i8 %c) {
>>> >%c2 = add i8 %c, 1
>>> >%t = add i8 %a, %c2
>>> >%ov = icmp sgt i8 %t, %c
>>> >ret i1 %ov
>>> >  }
>>> >
>>> >  define i1 @tgt(i8 %a, i8 %c) {
>>> >%c_off = sub i8 127, %c ; SMAX
>>> >%ov = icmp ult i8 %a, %c_off
>>> >ret i1 %ov
>>> >  }
>>> >
>>> >https://alive2.llvm.org/ce/z/c8uhnk
>>> >
>>> >  ; slt
>>> >  define i1 @src(i8 %a, i8 %c) {
>>> >%t = add i8 %a, %c
>>> >%ov = icmp slt i8 %t, %c
>>> >ret i1 %ov
>>> >  }
>>> >
>>> >  define i1 @tgt(i8 %a, i8 %c) {
>>> >%c_offnot = xor i8 %c, 127 ; SMAX
>>> >%ov = icmp ugt i8 %a, %c_offnot
>>> >ret i1 %ov
>>> >  }
>>> > 
>>> >
>>> > Results regressed to (for first_bad ==
>>> 40b752d28d95158e52dba7cfeea92e41b7ccff9a)
>>> > # reset_artifacts:
>>> > -10
>>> > # build_abe binutils:
>>> > -9
>>> > # build_abe stage1 -- --set gcc_override_configure=--with-mode=thumb
>>> --set gcc_override_configure=--disable-libsanitizer:
>>> > -8
>>> > # build_abe linux:
>>> > -7
>>> > # build_abe glibc:
>>> > -6
>>> > #

Re: [TCWG CI] 433.milc:[.] mult_su3_mat_vec slowed down by 16% after llvm: [AIX][ZOS] Excluding merge-objc-interface.m from Tests

2021-10-18 Thread Maxim Kuvyrkov
Hi Qiongsi,

This report is a false positive.  We will investigate why noise levels have 
increased in our benchmarking setup.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 13 Oct 2021, at 04:43, ci_not...@linaro.org wrote:
> 
> After llvm commit 75127bce6de78b83b70b898a04473f213451f13e
> Author: Qiongsi Wu 
> 
>[AIX][ZOS] Excluding merge-objc-interface.m from Tests
> 
> the following hot functions slowed down by more than 10% (but their 
> benchmarks slowed down by less than 2%):
> - 433.milc:[.] mult_su3_mat_vec slowed down by 16% from 1615 to 1871 perf 
> samples
> 
> The reproducer instructions below can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/build-75127bce6de78b83b70b898a04473f213451f13e/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/build-d01ae990e1fd6561ed86dc8004a7147dd09fb13c/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -O3
> - Hardware: NVidia TX1 4x Cortex-A57
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_tx1/llvm-master-aarch64-spec2k6-O3
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/build-75127bce6de78b83b70b898a04473f213451f13e/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/build-d01ae990e1fd6561ed86dc8004a7147dd09fb13c/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-75127bce6de78b83b70b898a04473f213451f13e
> cd investigate-llvm-75127bce6de78b83b70b898a04473f213451f13e
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/27/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /llvm/ ./ ./bisect/baseline/
> 
> cd llvm
> 
> # Reproduce first_bad build
> git checkout --detach 75127bce6de78b83b70b898a04473f213451f13e
> ../artifacts/test.sh
> 
> # Reproduce last_good build
> git checkout --detach d01ae990e1fd6561ed86dc8004a7147dd09fb13c
> ../artifacts/test.sh
> 
> cd ..
> 
> 
> Full commit (up to 1000 lines):
> 
> commit 75127bce6de78b83b70b898a04473f213451f13e
> Author: Qiongsi Wu 
> Date:   Fri Oct 8 13:58:32 2021 +
> 
>[AI

Re: [TCWG CI] 471.omnetpp slowed down by 8% after gcc: Avoid invalid loop transformations in jump threading registry.

2021-10-11 Thread Maxim Kuvyrkov
> On 8 Oct 2021, at 13:22, Martin Jambor  wrote:
> 
> Hi,
> 
> On Fri, Oct 01 2021, Gerald Pfeifer wrote:
>> On Wed, 29 Sep 2021, Maxim Kuvyrkov via Gcc wrote:
>>> Configurations that track master branches have 3-day intervals.  
>>> Configurations that track release branches — 6 days.  If a regression is 
>>> detected it is narrowed down to component first — binutils, gcc or glibc 
>>> — and then the commit range of the component is bisected down to a 
>>> specific commit.  All.  Done.  Automatically.
>>> 
>>> I will make a presentation on this CI at the next GNU Tools Cauldron.
>> 
>> Yes, please! :-)
>> 
>> On Fri, 1 Oct 2021, Maxim Kuvyrkov via Gcc wrote:
>>> It’s our next big improvement — to provide a dashboard with current 
>>> performance numbers and historical stats.
>> 
>> Awesome. And then we can even link from gcc.gnu.org.
>> 
> 
> You all are aware of the openSUSE LNT periodic SPEC benchmarker, right?
> Martin may explain better how to move around it, but the two most
> interesting result pages are:
> 
> - https://lnt.opensuse.org/db_default/v4/SPEC/latest_runs_report and
> - https://lnt.opensuse.org/db_default/v4/SPEC/spec_report/branch
> 

Hi Martin,

The novel part of TCWG CI is that it bisects “regressions” down to a single 
commit, thus pinpointing the interesting commit, and can send out 
notifications to patch authors.

We generate a fair amount of benchmarking data for AArch64 and AArch32, and 
I would like to have it plotted somewhere.  I have started to put together an 
LNT instance to do that, but after a couple of days I couldn't figure out the 
setup.  Could you share the configuration of your LNT instance?  Or, perhaps, 
make it open to the community so that others can upload their results?
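For anyone following along: the workflow I was trying to set up follows LNT's documented create/runserver/submit cycle, where results are uploaded as a JSON report.  A rough sketch — the report layout follows LNT's "format_version 2" schema as I understand it, and every name and value below is a made-up placeholder, not actual CI data:

```python
import json

# Minimal LNT submission payload, modelled on LNT's "format_version 2"
# JSON report layout.  All names and numbers are made-up placeholders.
report = {
    "format_version": "2",
    "machine": {"name": "tcwg-bmk-tx1-01"},
    "run": {
        "start_time": "2021-10-11T00:00:00",
        "end_time": "2021-10-11T01:00:00",
    },
    "tests": [
        {"name": "spec2k6.471_omnetpp", "execution_time": [123.4]},
    ],
}

with open("report.json", "w") as f:
    json.dump(report, f, indent=2)

# The instance itself would then be created, served, and fed with:
#   lnt create ~/mylnt
#   lnt runserver ~/mylnt &
#   lnt submit http://localhost:8000/db_default/submitRun report.json
print("wrote report.json with", len(report["tests"]), "test entry")
```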

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org

___
linaro-toolchain mailing list
linaro-toolchain@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/linaro-toolchain


Re: [TCWG CI] 458.sjeng grew in size by 4% after gcc: aarch64: Improve size heuristic for cpymem expansion

2021-10-04 Thread Maxim Kuvyrkov
And another, even bigger, regression at plain -Os:

> After gcc commit a459ee44c0a74b0df0485ed7a56683816c02aae9
> Author: Kyrylo Tkachov 
> 
>aarch64: Improve size heuristic for cpymem expansion
> 
> the following benchmarks grew in size by more than 1%:
> - 470.lbm grew in size by 38% from 10823 to 14936 bytes
> 
> Below reproducer instructions can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection. Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os/3/artifact/artifacts/build-a459ee44c0a74b0df0485ed7a56683816c02aae9/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os/3/artifact/artifacts/build-8f95e3c04d659d541ca4937b3df2f1175a1c5f05/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os/3/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: GCC + Glibc + GNU Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -Os
> - Hardware: APM Mustang 8x X-Gene1
> 

--
Maxim Kuvyrkov
https://www.linaro.org

> On 2 Oct 2021, at 18:45, Maxim Kuvyrkov via Gcc-regression 
>  wrote:
> 
> Hi Kyrill,
> 
> With LTO enabled this patch increases code size on SPEC CPU2006 -- it 
> increases 458.sjeng by 4% and 459.GemsFDTD by 2%.  Could you check if these 
> regressions are avoidable?
> 
> It reduces size of 465.tonto and 400.perlbench by 1%, and everything else is 
> neutral, see [1].  Overall it increases geomean code size at -Os -flto by 
> 0.1%.
> 
> [1] 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-build-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/26/artifact/artifacts/11-check_regression/results.csv/*view*/
> 
> Regards,
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
> 
> 
> 
>> On Oct 2, 2021, at 11:12 AM, ci_not...@linaro.org wrote:
>> 
>> After gcc commit a459ee44c0a74b0df0485ed7a56683816c02aae9
>> Author: Kyrylo Tkachov 
>> 
>>   aarch64: Improve size heuristic for cpymem expansion
>> 
>> the following benchmarks grew in size by more than 1%:
>> - 458.sjeng grew in size by 4% from 105780 to 109944 bytes
>> - 459.GemsFDTD grew in size by 2% from 247504 to 251468 bytes
>> 
>> Below reproducer instructions can be used to re-build both "first_bad" and 
>> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
>> will fail when triggering benchmarking jobs if you don't have access to 
>> Linaro TCWG CI.
>> 
>> For your convenience, we have uploaded tarballs with pre-processed source 
>> and assembly files at:
>> - First_bad save-temps: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-a459ee44c0a74b0df0485ed7a56683816c02aae9/save-temps/
>> - Last_good save-temps: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-8f95e3c04d659d541ca4937b3df2f1175a1c5f05/save-temps/
>> - Baseline save-temps: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-baseline/save-temps/
>> 
>> Configuration:
>> - Benchmark: SPEC CPU2006
>> - Toolchain: GCC + Glibc + GNU Linker
>> - Version: all components were built from their tip of trunk
>> - Target: aarch64-linux-gnu
>> - Compiler flags: -Os -flto
>> - Hardware: APM Mustang 8x X-Gene1
>> 
>> This benchmarking CI is work-in-progress, and we welcome feedback and 
>> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
>> include adding support for SPEC CPU2017 benchmarks and providing "perf 
>> report/annotate" data behind these reports.
>> 
>> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
>> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
>> 
>> This commit has regressed these CI configurations:
>> - tcwg_bmk_gnu_apm/gnu-master-aarch64-spec2k6-Os_LTO
>> 
>> First_bad build: 
>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-a459ee44c0

Re: [TCWG CI] 458.sjeng grew in size by 4% after gcc: aarch64: Improve size heuristic for cpymem expansion

2021-10-02 Thread Maxim Kuvyrkov
Hi Kyrill,

With LTO enabled this patch increases code size on SPEC CPU2006 -- it increases 
458.sjeng by 4% and 459.GemsFDTD by 2%.  Could you check if these regressions 
are avoidable?

It reduces size of 465.tonto and 400.perlbench by 1%, and everything else is 
neutral, see [1].  Overall it increases geomean code size at -Os -flto by 0.1%.

[1] 
https://ci.linaro.org/job/tcwg_bmk_ci_gnu-build-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/26/artifact/artifacts/11-check_regression/results.csv/*view*/
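As an aside, the geomean figure is just the geometric mean of the per-benchmark size ratios.  A minimal sketch — only the 458.sjeng and 459.GemsFDTD sizes come from the report above, while the 465.tonto pair is a made-up -1% example:

```python
from statistics import geometric_mean

# (baseline, patched) code sizes in bytes.  The sjeng and GemsFDTD numbers
# are the ones reported above; the tonto pair is a made-up -1% example.
sizes = {
    "458.sjeng":    (105780, 109944),
    "459.GemsFDTD": (247504, 251468),
    "465.tonto":    (1_000_000, 990_000),
}

ratios = [patched / base for base, patched in sizes.values()]
delta_pct = (geometric_mean(ratios) - 1) * 100
print(f"geomean code-size change: {delta_pct:+.2f}%")
```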

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org




> On Oct 2, 2021, at 11:12 AM, ci_not...@linaro.org wrote:
> 
> After gcc commit a459ee44c0a74b0df0485ed7a56683816c02aae9
> Author: Kyrylo Tkachov 
> 
>aarch64: Improve size heuristic for cpymem expansion
> 
> the following benchmarks grew in size by more than 1%:
> - 458.sjeng grew in size by 4% from 105780 to 109944 bytes
> - 459.GemsFDTD grew in size by 2% from 247504 to 251468 bytes
> 
> Below reproducer instructions can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggering benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-a459ee44c0a74b0df0485ed7a56683816c02aae9/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-8f95e3c04d659d541ca4937b3df2f1175a1c5f05/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: GCC + Glibc + GNU Linker
> - Version: all components were built from their tip of trunk
> - Target: aarch64-linux-gnu
> - Compiler flags: -Os -flto
> - Hardware: APM Mustang 8x X-Gene1
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  Our improvement plans 
> include adding support for SPEC CPU2017 benchmarks and providing "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_gnu_apm/gnu-master-aarch64-spec2k6-Os_LTO
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-a459ee44c0a74b0df0485ed7a56683816c02aae9/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-8f95e3c04d659d541ca4937b3df2f1175a1c5f05/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-gcc-a459ee44c0a74b0df0485ed7a56683816c02aae9
> cd investigate-gcc-a459ee44c0a74b0df0485ed7a56683816c02aae9
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_apm-gnu-master-aarch64-spec2k6-Os_LTO/2/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /gcc/ ./ ./bisect/baseline/
> 
> cd gcc
> 
> # Reproduce first_bad build
> git checkout --detach a459ee44c0a74b0df0485ed7a56683816c02aae9

Re: [TCWG CI] 400.perlbench slowed down by 6% after llvm: [SimplifyCFG] Ignore free instructions when computing cost for folding branch to common dest

2021-10-01 Thread Maxim Kuvyrkov
> On 1 Oct 2021, at 23:37, Arthur Eubanks  wrote:
> 
> Thanks for the flags, I can now reproduce.
> 
> I've basically come to the same conclusion as you. There's only one new 
> instance of this optimization triggering throughout the whole file, in 
> S_reginclass(). It doesn't look out of place,

I was looking at the assembly of S_reginclass() and couldn’t figure out the 
purpose of the extra tst/b.ne instructions.  Sure, it’s a tiny code-size 
increase, but their appearance seems silly (or, quite likely, I just don’t 
understand something).

Do you know why they are now generated?

> and S_regmatch() is identical before and after the patch. So it's likely 
> alignment as you've already mentioned.
> 
> There's not much I can do with that information.
> To further confirm if this is the case, you could try passing -mllvm 
> -align-all-functions=8 (or -align-all-blocks=4, or something along those 
> lines) at head vs with my patch reverted and see how the performance is.

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> On Fri, Oct 1, 2021 at 2:41 AM Maxim Kuvyrkov  
> wrote:
> Hi Arthur,
> 
> Thanks for looking into this!
> 
> The flags to compile regexec.c were:
> -O3 --target=aarch64-linux-gnu -fgnu89-inline
> 
> Clang was configured with (on x86_64-linux-gnu host):
> cmake -G Ninja ../llvm/llvm '-DLLVM_ENABLE_PROJECTS=clang;lld' 
> -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=True 
> -DCMAKE_INSTALL_PREFIX=../llvm-install -DLLVM_TARGETS_TO_BUILD=AArch64
> 
> Please let me know if the above doesn’t work for you.
> 
> Regards,
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
> > On 29 Sep 2021, at 20:47, Arthur Eubanks  wrote:
> > 
> > Do you know the flags passed to Clang to compile the sources? I tried 
> > compiling the preprocessed sources but ran into the below, and couldn't 
> > find the flags in any of the logs.
> > 
> > In file included from regexec.c:93:
> > In file included from ./perl.h:384:
> > In file included from 
> > /home/tcwg-buildslave/workspace/tcwg_bmk_0/abe/builds/destdir/x86_64-pc-linux-gnu/aarch64-linux-gnu/libc/usr/include/sys/types.h:144:
> > /home/tcwg-buildslave/workspace/tcwg_bmk_0/llvm-install/lib/clang/14.0.0/include/stddef.h:46:27:
> >  error: typedef redefinition with different types ('unsigned long' vs 
> > 'unsigned long long')
> > typedef long unsigned int size_t;
> >   ^
> > 1 error generated.
> > 
> > 
> > 
> > And yeah just moving the code around could cause major performance 
> > regressions, I've had other patches do the same for various benchmarks, 
> > there's not much we can do about that if that's actually the root cause. If 
> > I can compile the file I can check if the optimization actually created 
> > worse IR or not.
> > 
> > 
> > On Wed, Sep 29, 2021 at 5:59 AM Maxim Kuvyrkov  
> > wrote:
> > Hi Arthur,
> > 
> > Pre-processed source is in the save-temps tarballs linked below; 
> > S_regmatch() is in regexec.i .
> > 
> > The save-temps also have .s assembly file for before and after your patch, 
> > and the only code-gen difference is in S_reginclass() function — see the 
> > attached screenshot #1.
> > 
> > Looking into profile of S_regmatch(), some of the extra cycles come from 
> > hot loop starting with “cbz w19,...” getting misaligned — before your patch 
> > it was starting at "2bce10", and after it starts at "2bce6c”.
> > 
> > Maybe the added instructions in S_reginclass() pushed the loop in 
> > S_regmatch() in an unfortunate way?
> > 
> > --
> > Maxim Kuvyrkov
> > https://www.linaro.org
> > 
> >> On 27 Sep 2021, at 20:05, Arthur Eubanks  wrote:
> >> 
> >> Could I get the source file with S_regmatch()?
> >> 
> >> On Mon, Sep 27, 2021 at 6:07 AM Maxim Kuvyrkov  
> >> wrote:
> >> Hi Arthur,
> >> 
> >> Your patch seems to be slowing down 400.perlbench by 6% — due to a 14% 
> >> slowdown of its hot function S_regmatch().
> >> 
> >> Could you take a look if this is easily fixable, please?
> >> 
> >> Regards,
> >> 
> >> --
> >> Maxim Kuvyrkov
> >> https://www.linaro.org
> >> 
> >> > On 24 Sep 2021, at 15:07, ci_not...@linaro.org wrote:
> >> > 
> >> > After llvm commit e7249e4acf3cf9438d6d9e02edecebd5b622a4dc
> >> > Author: Arthur Eubanks 
> >> > 
> >> >[SimplifyCFG] Ignore free instructions when computing cost for 
> >> > folding branch to common dest

Re: [TCWG CI] 456.hmmer slowed down by 5% after llvm: Revert "Allow rematerialization of virtual reg uses"

2021-10-01 Thread Maxim Kuvyrkov
> On 1 Oct 2021, at 21:06, Mekhanoshin, Stanislav 
>  wrote:
> 
> [AMD Official Use Only]
> 
>> You mentioned that you saw different results for another ARM target — could 
>> you elaborate please?
> 
> When I was trying to reproduce hmmer asm I was trying to use different ARM 
> targets. I was never able to pick the one you were using apparently, but then 
> got very different results with different targets.

Our benchmarking CI is using the default armhf target 
(--target=armv7a-linux-gnueabihf) with no additional -mcpu=/-march tuning 
flags.  Is it the same in your testing?  If so, then Clang should generate 
exactly the same assembly in both cases, and have the same extra reloads in 
456.hmmer.

The hardware used in benchmarking is Cortex-A15, which is still one of the most 
popular cores.  Which one did you use in your experiments?

Thanks,

--
Maxim Kuvyrkov
https://www.linaro.org

> 
> Stas
> 
> -----Original Message-----
> From: Maxim Kuvyrkov 
> Sent: Friday, October 1, 2021 3:05
> To: Mekhanoshin, Stanislav 
> Cc: linaro-toolchain@lists.linaro.org
> Subject: Re: [TCWG CI] 456.hmmer slowed down by 5% after llvm: Revert "Allow 
> rematerialization of virtual reg uses"
> 
> [CAUTION: External Email]
> 
> Hi Stanislav,
> 
> I fully understand the challenges of compiler optimizations and the fact that 
> a generally-good optimisation can slow down a small number of benchmarks.
> 
> Still, benchmarking your original patch (commit 
> 92c1fd19abb15bc68b1127a26137a69e033cdb39) on arm-linux-gnueabihf results in 
> overall runtime slow-down across C/C++ subset of SPEC CPU2006:
> - 0.25% runtime geomean increase at -O2
> - 0.37% runtime geomean increase at -O3
> 
> See [1] for the numbers.
> 
> You mentioned that you saw different results for another ARM target — could 
> you elaborate please?
> 
> [1] 
> https://docs.google.com/spreadsheets/d/1USWty9Vdx6JLo7TGddbkoKVUCiC4wtneOhhbHf5WXfc/edit?usp=sharing
> 
> Regards,
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
>> On 29 Sep 2021, at 20:13, Mekhanoshin, Stanislav 
>>  wrote:
>> 
>> [AMD Official Use Only]
>> 
>> Maxim,
>> 
>> This is really difficult for me to work on, as I do not have the various 
>> targets and HW affected. I am sure there were quite a lot of progressions, 
>> but, as I said in the beginning, regressions are also inevitable, just like 
>> every time a heuristic is involved. For the hmmer case I was getting quite 
>> different results just by selecting a different ARM target. So without a 
>> good way to measure it, and given the heuristic approach, I cannot satisfy 
>> all the requests from multiple parties. Our target (AMDGPU) has done this 
>> for a long time and I believe it is overall beneficial. It is somewhat a 
>> pity that I cannot make this a universal optimization, but I am also 
>> time-constrained, as there is other work to do too.
>> 
>> Stas
>> 
>> -----Original Message-----
>> From: Maxim Kuvyrkov 
>> Sent: Wednesday, September 29, 2021 4:17
>> To: Mekhanoshin, Stanislav 
>> Cc: linaro-toolchain@lists.linaro.org
>> Subject: Re: [TCWG CI] 456.hmmer slowed down by 5% after llvm: Revert "Allow 
>> rematerialization of virtual reg uses"
>> 
>> [CAUTION: External Email]
>> 
>> I thought the speed up and slow-down from "Allow rematerialization of 
>> virtual reg uses" were for different benchmarks, but they are for the same 
>> benchmark - 456.hmmer - but for different compilation flags.
>> 
>> - At -O2 the patch slows down 456.hmmer by 5% from 751s to 771s.
>> - At -O2 -flto patch speeds up 456.hmmer by 5% from 803s to 765s.
>> 
>> Two observations from this:
>> 1. 456.hmmer is very sensitive to this optimisation
>> 2. LTO screws up on 456.hmmer.
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org

Re: [TCWG CI] 400.perlbench slowed down by 6% after llvm: [SimplifyCFG] Ignore free instructions when computing cost for folding branch to common dest

2021-10-01 Thread Maxim Kuvyrkov
Hi Arthur,

Pre-processed source is in the save-temps tarballs linked below; S_regmatch() 
is in regexec.i .

The save-temps also have .s assembly file for before and after your patch, and 
the only code-gen difference is in S_reginclass() function — see the attached 
screenshot #1.

Looking into the profile of S_regmatch(), some of the extra cycles come from a 
hot loop starting with “cbz w19,...” getting misaligned — before your patch it 
started at "2bce10", and after it starts at "2bce6c”.

Maybe the added instructions in S_reginclass() pushed the loop in S_regmatch() 
in an unfortunate way?
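The shift is easy to confirm from the two addresses alone — a quick sketch (treating the 16-byte boundary as the interesting one is an assumption about the fetch stage, not something I have verified for this core):

```python
# Alignment of the S_regmatch() hot-loop head, using the two addresses
# quoted above (before and after the patch).
before, after = 0x2BCE10, 0x2BCE6C

for boundary in (16, 32, 64):
    print(f"{boundary:2d}-byte boundary: before offset {before % boundary:2d}, "
          f"after offset {after % boundary:2d}")

# Before the patch the loop head was 16-byte aligned; after it, the head
# sits 12 bytes into a 16-byte-aligned fetch window.
assert before % 16 == 0 and after % 16 == 12
```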

--
Maxim Kuvyrkov
https://www.linaro.org

> On 27 Sep 2021, at 20:05, Arthur Eubanks  wrote:
> 
> Could I get the source file with S_regmatch()?
> 
> On Mon, Sep 27, 2021 at 6:07 AM Maxim Kuvyrkov  
> wrote:
> Hi Arthur,
> 
> Your patch seems to be slowing down 400.perlbench by 6% — due to a 14% 
> slowdown of its hot function S_regmatch().
> 
> Could you take a look if this is easily fixable, please?
> 
> Regards,
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
> > On 24 Sep 2021, at 15:07, ci_not...@linaro.org wrote:
> > 
> > After llvm commit e7249e4acf3cf9438d6d9e02edecebd5b622a4dc
> > Author: Arthur Eubanks 
> > 
> >[SimplifyCFG] Ignore free instructions when computing cost for folding 
> > branch to common dest
> > 
> > the following benchmarks slowed down by more than 2%:
> > - 400.perlbench slowed down by 6% from 9730 to 10312 perf samples
> >  - 400.perlbench:[.] S_regmatch slowed down by 14% from 3660 to 4188 perf 
> > samples
> > 
> > Below reproducer instructions can be used to re-build both "first_bad" and 
> > "last_good" cross-toolchains used in this bisection.  Naturally, the 
> > scripts will fail when triggering benchmarking jobs if you don't have 
> > access to Linaro TCWG CI.
> > 
> > For your convenience, we have uploaded tarballs with pre-processed source 
> > and assembly files at:
> > - First_bad save-temps: 
> > https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/23/artifact/artifacts/build-e7249e4acf3cf9438d6d9e02edecebd5b622a4dc/save-temps/
> > - Last_good save-temps: 
> > https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/23/artifact/artifacts/build-32a50078657dd8beead327a3478ede4e9d730432/save-temps/
> > - Baseline save-temps: 
> > https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/23/artifact/artifacts/build-baseline/save-temps/
> > 
> > Configuration:
> > - Benchmark: SPEC CPU2006
> > - Toolchain: Clang + Glibc + LLVM Linker
> > - Version: all components were built from their tip of trunk
> > - Target: aarch64-linux-gnu
> > - Compiler flags: -O3
> > - Hardware: NVidia TX1 4x Cortex-A57
> > 
> > This benchmarking CI is work-in-progress, and we welcome feedback and 
> > suggestions at linaro-toolchain@lists.linaro.org .  Our improvement 
> > plans include adding support for SPEC CPU2017 benchmarks and providing 
> > "perf report/annotate" data behind these reports.



Re: [TCWG CI] 456.hmmer slowed down by 5% after llvm: Revert "Allow rematerialization of virtual reg uses"

2021-10-01 Thread Maxim Kuvyrkov
Hi Stanislav,

I fully understand the challenges of compiler optimizations and the fact that a 
generally-good optimisation can slow down a small number of benchmarks.

Still, benchmarking your original patch (commit 
92c1fd19abb15bc68b1127a26137a69e033cdb39) on arm-linux-gnueabihf results in 
overall runtime slow-down across C/C++ subset of SPEC CPU2006:
- 0.25% runtime geomean increase at -O2
- 0.37% runtime geomean increase at -O3

See [1] for the numbers.

You mentioned that you saw different results for another ARM target — could you 
elaborate please?

[1] 
https://docs.google.com/spreadsheets/d/1USWty9Vdx6JLo7TGddbkoKVUCiC4wtneOhhbHf5WXfc/edit?usp=sharing

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 29 Sep 2021, at 20:13, Mekhanoshin, Stanislav 
>  wrote:
> 
> [AMD Official Use Only]
> 
> Maxim,
> 
> This is really difficult for me to work on, as I do not have the various 
> targets and HW affected. I am sure there were quite a lot of progressions, 
> but, as I said in the beginning, regressions are also inevitable, just like 
> every time a heuristic is involved. For the hmmer case I was getting quite 
> different results just by selecting a different ARM target. So without a good 
> way to measure it, and given the heuristic approach, I cannot satisfy all the 
> requests from multiple parties. Our target (AMDGPU) has done this for a long 
> time and I believe it is overall beneficial. It is somewhat a pity that I 
> cannot make this a universal optimization, but I am also time-constrained, as 
> there is other work to do too.
> 
> Stas
> 
> -----Original Message-----
> From: Maxim Kuvyrkov 
> Sent: Wednesday, September 29, 2021 4:17
> To: Mekhanoshin, Stanislav 
> Cc: linaro-toolchain@lists.linaro.org
> Subject: Re: [TCWG CI] 456.hmmer slowed down by 5% after llvm: Revert "Allow 
> rematerialization of virtual reg uses"
> 
> [CAUTION: External Email]
> 
> I thought the speed up and slow-down from "Allow rematerialization of virtual 
> reg uses" were for different benchmarks, but they are for the same benchmark 
> - 456.hmmer - but for different compilation flags.
> 
> - At -O2 the patch slows down 456.hmmer by 5% from 751s to 771s.
> - At -O2 -flto patch speeds up 456.hmmer by 5% from 803s to 765s.
> 
> Two observations from this:
> 1. 456.hmmer is very sensitive to this optimisation
> 2. LTO screws up on 456.hmmer.
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
>> On 29 Sep 2021, at 14:06, Maxim Kuvyrkov  wrote:
>> 
>> Hi Stanislav,
>> 
>> Just FYI.  Your original patch improved 456.hmmer by 5%, that's a nice speed 
>> up!
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>>> On 28 Sep 2021, at 08:21, ci_not...@linaro.org wrote:
>>> 
>>> After llvm commit 08d7eec06e8cf5c15a96ce11f311f1480291a441
>>> Author: Stanislav Mekhanoshin 
>>> 
>>>  Revert "Allow rematerialization of virtual reg uses"
>>> 
>>> the following benchmarks slowed down by more than 2%:
>>> - 456.hmmer slowed down by 5% from 7649 to 8028 perf samples
>>> 
>>> Below reproducer instructions can be used to re-build both "first_bad" and 
>>> "last_good" cross-toolchains used in this bisection.  Naturally, the 
>>> scripts will fail when triggering benchmarking jobs if you don't have 
>>> access to Linaro TCWG CI.
>>> 
>>> For your convenience, we have uploaded tarballs with pre-processed source 
>>> and assembly files at:
>>> - First_bad save-temps: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/build-08d7eec06e8cf5c15a96ce11f311f1480291a441/save-temps/

Re: [TCWG CI] 400.perlbench slowed down by 6% after llvm: [SimplifyCFG] Ignore free instructions when computing cost for folding branch to common dest

2021-10-01 Thread Maxim Kuvyrkov
Hi Arthur,

Thanks for looking into this!

The flags to compile regexec.c were:
-O3 --target=aarch64-linux-gnu -fgnu89-inline

Clang was configured with (on x86_64-linux-gnu host):
cmake -G Ninja ../llvm/llvm '-DLLVM_ENABLE_PROJECTS=clang;lld' 
-DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=True 
-DCMAKE_INSTALL_PREFIX=../llvm-install -DLLVM_TARGETS_TO_BUILD=AArch64

Please let me know if the above doesn’t work for you.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 29 Sep 2021, at 20:47, Arthur Eubanks  wrote:
> 
> Do you know the flags passed to Clang to compile the sources? I tried 
> compiling the preprocessed sources but ran into the below, and couldn't find 
> the flags in any of the logs.
> 
> In file included from regexec.c:93:
> In file included from ./perl.h:384:
> In file included from 
> /home/tcwg-buildslave/workspace/tcwg_bmk_0/abe/builds/destdir/x86_64-pc-linux-gnu/aarch64-linux-gnu/libc/usr/include/sys/types.h:144:
> /home/tcwg-buildslave/workspace/tcwg_bmk_0/llvm-install/lib/clang/14.0.0/include/stddef.h:46:27:
>  error: typedef redefinition with different types ('unsigned long' vs 
> 'unsigned long long')
> typedef long unsigned int size_t;
>   ^
> 1 error generated.
> 
> 
> 
> And yeah just moving the code around could cause major performance 
> regressions, I've had other patches do the same for various benchmarks, 
> there's not much we can do about that if that's actually the root cause. If I 
> can compile the file I can check if the optimization actually created worse 
> IR or not.
> 
> 
> On Wed, Sep 29, 2021 at 5:59 AM Maxim Kuvyrkov  
> wrote:
> Hi Arthur,
> 
> Pre-processed source is in the save-temps tarballs linked below; S_regmatch() 
> is in regexec.i .
> 
> The save-temps also have .s assembly file for before and after your patch, 
> and the only code-gen difference is in S_reginclass() function — see the 
> attached screenshot #1.
> 
> Looking into profile of S_regmatch(), some of the extra cycles come from hot 
> loop starting with “cbz w19,...” getting misaligned — before your patch it 
> was starting at "2bce10", and after it starts at "2bce6c”.
> 
> Maybe the added instructions in S_reginclass() pushed the loop in 
> S_regmatch() in an unfortunate way?
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
>> On 27 Sep 2021, at 20:05, Arthur Eubanks  wrote:
>> 
>> Could I get the source file with S_regmatch()?
>> 
>> On Mon, Sep 27, 2021 at 6:07 AM Maxim Kuvyrkov  
>> wrote:
>> Hi Arthur,
>> 
>> Your patch seems to be slowing down 400.perlbench by 6% — due to a 14% 
>> slowdown of its hot function S_regmatch().
>> 
>> Could you take a look if this is easily fixable, please?
>> 
>> Regards,
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>> > On 24 Sep 2021, at 15:07, ci_not...@linaro.org wrote:
>> > 
>> > After llvm commit e7249e4acf3cf9438d6d9e02edecebd5b622a4dc
>> > Author: Arthur Eubanks 
>> > 
>> >[SimplifyCFG] Ignore free instructions when computing cost for folding 
>> > branch to common dest
>> > 
>> > the following benchmarks slowed down by more than 2%:
>> > - 400.perlbench slowed down by 6% from 9730 to 10312 perf samples
>> >  - 400.perlbench:[.] S_regmatch slowed down by 14% from 3660 to 4188 perf 
>> > samples
>> > 
>> > Below reproducer instructions can be used to re-build both "first_bad" and 
>> > "last_good" cross-toolchains used in this bisection.  Naturally, the 
>> > scripts will fail when triggerring benchmarking jobs if you don't have 
>> > access to Linaro TCWG CI.
>> > 
>> > For your convenience, we have uploaded tarballs with pre-processed source 
>> > and assembly files at:
>> > - First_bad save-temps: 
>> > https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/23/artifact/artifacts/build-e7249e4acf3cf9438d6d9e02edecebd5b622a4dc/save-temps/
>> > - Last_good save-temps: 
>> > https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/23/artifact/artifacts/build-32a50078657dd8beead327a3478ede4e9d730432/save-temps/
>> > - Baseline save-temps: 
>> > https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tx1-llvm-master-aarch64-spec2k6-O3/23/artifact/artifacts/build-baseline/save-temps/
>> > 
>> > Configuration:
>> > - Benchmark: SPEC CPU2006
>> > - Toolchain: Clang + Glibc + LLVM Linker
>> > - Version: all components were built from their tip of trunk

Re: [TCWG CI] 471.omnetpp slowed down by 8% after gcc: Avoid invalid loop transformations in jump threading registry.

2021-10-01 Thread Maxim Kuvyrkov
> On 29 Sep 2021, at 21:21, Andrew MacLeod  wrote:
> 
> On 9/29/21 7:59 AM, Maxim Kuvyrkov wrote:
>> 
>>>   Does it run like once a day/some-time-period, and if you note a 
>>> regression, narrow it down?
>> Configurations that track master branches have 3-day intervals.  
>> Configurations that track release branches — 6 days.  If a regression is 
>> detected it is narrowed down to component first — binutils, gcc or glibc — 
>> and then the commit range of the component is bisected down to a specific 
>> commit.  All.  Done.  Automatically.
>> 
>> I will make a presentation on this CI at the next GNU Tools Cauldron.
>> 
>>>  Regardless, I think it could be very useful to be able to see the results 
>>> of anything you do run at whatever frequency it happens.
>> Thanks!
>> 
>> --
> 
> One more follow-on question: is this information/summary of the results of 
> every 3-day interval of master published anywhere?  I.e., to a web page or 
> posted somewhere?  That seems like it could be useful, especially with a +/- 
> differential from the previous run (which you obviously calculate to 
> determine if there is a regression).

It’s our next big improvement — to provide a dashboard with current performance 
numbers and historical stats.  Performance summary information is publicly 
available as artifacts in jenkins jobs (e.g., [1]), but one needs to know 
exactly where to look.

We plan to implement the dashboard before the end of the year.

We also have raw perf.data files and benchmark executables stashed for detailed 
inspection.  I /think/ we can publish these for SPEC CPU2xxx benchmarks — they 
are all based on open-source software.  For other benchmarks (EEMBC, CoreMark 
Pro) we can’t publish much beyond time/size metrics.

[1] 
https://ci.linaro.org/view/tcwg_bmk_ci_gnu/job/tcwg_bmk_ci_gnu-build-tcwg_bmk_tx1-gnu-master-aarch64-spec2k6-O2/237/artifact/artifacts/11-check_regression/results.csv/*view*/

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

___
linaro-toolchain mailing list
linaro-toolchain@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/linaro-toolchain


Re: [TCWG CI] 456.hmmer slowed down by 5% after llvm: Revert "Allow rematerialization of virtual reg uses"

2021-09-30 Thread Maxim Kuvyrkov
Hi Stanislav,

Just FYI.  Your original patch improved 456.hmmer by 5%; that’s a nice speed-up!

--
Maxim Kuvyrkov
https://www.linaro.org

> On 28 Sep 2021, at 08:21, ci_not...@linaro.org wrote:
> 
> After llvm commit 08d7eec06e8cf5c15a96ce11f311f1480291a441
> Author: Stanislav Mekhanoshin 
> 
>Revert "Allow rematerialization of virtual reg uses"
> 
> the following benchmarks slowed down by more than 2%:
> - 456.hmmer slowed down by 5% from 7649 to 8028 perf samples
> 
> Below reproducer instructions can be used to re-build both "first_bad" and 
> "last_good" cross-toolchains used in this bisection.  Naturally, the scripts 
> will fail when triggerring benchmarking jobs if you don't have access to 
> Linaro TCWG CI.
> 
> For your convenience, we have uploaded tarballs with pre-processed source and 
> assembly files at:
> - First_bad save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/build-08d7eec06e8cf5c15a96ce11f311f1480291a441/save-temps/
> - Last_good save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/build-e8e2edd8ca88f8b0a7dba141349b2aa83284f3af/save-temps/
> - Baseline save-temps: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/build-baseline/save-temps/
> 
> Configuration:
> - Benchmark: SPEC CPU2006
> - Toolchain: Clang + Glibc + LLVM Linker
> - Version: all components were built from their tip of trunk
> - Target: arm-linux-gnueabihf
> - Compiler flags: -O2 -flto -marm
> - Hardware: NVidia TK1 4x Cortex-A15
> 
> This benchmarking CI is work-in-progress, and we welcome feedback and 
> suggestions at linaro-toolchain@lists.linaro.org .  In our improvement plans 
> is to add support for SPEC CPU2017 benchmarks and provide "perf 
> report/annotate" data behind these reports.
> 
> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
> 
> This commit has regressed these CI configurations:
> - tcwg_bmk_llvm_tk1/llvm-master-arm-spec2k6-O2_LTO
> 
> First_bad build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/build-08d7eec06e8cf5c15a96ce11f311f1480291a441/
> Last_good build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/build-e8e2edd8ca88f8b0a7dba141349b2aa83284f3af/
> Baseline build: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/build-baseline/
> Even more details: 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/
> 
> Reproduce builds:
> 
> mkdir investigate-llvm-08d7eec06e8cf5c15a96ce11f311f1480291a441
> cd investigate-llvm-08d7eec06e8cf5c15a96ce11f311f1480291a441
> 
> # Fetch scripts
> git clone https://git.linaro.org/toolchain/jenkins-scripts
> 
> # Fetch manifests and test.sh script
> mkdir -p artifacts/manifests
> curl -o artifacts/manifests/build-baseline.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/manifests/build-baseline.sh
>  --fail
> curl -o artifacts/manifests/build-parameters.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/manifests/build-parameters.sh
>  --fail
> curl -o artifacts/test.sh 
> https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O2_LTO/16/artifact/artifacts/test.sh
>  --fail
> chmod +x artifacts/test.sh
> 
> # Reproduce the baseline build (build all pre-requisites)
> ./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
> 
> # Save baseline build state (which is then restored in artifacts/test.sh)
> mkdir -p ./bisect
> rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ 
> --exclude /llvm/ ./ ./bisect/baseline/
> 
> cd llvm
> 
> # Reproduce first_bad build
> git checkout --detach 08d7eec06e8cf5c15a96ce11f311f1480291a441
> ../artifacts/test.sh
> 
> # Reproduce last_good build
> git checkout --detach e8e2edd8ca88f8b0a7dba141349b2aa83284f3af
> ../artifacts/test.sh
> 
> cd ..
> 
> 
> Full commit (up to 1000 lines):
> 
> commit 08d7eec06e8cf5c15a96ce11f311f1480291a441
> Author: Stanislav Mekhanoshin 
> Date:   Fri Sep 24 09:53:51 2021 -0700
> 
>Revert "Allow rematerialization of virtual reg uses"
> 
>Reverted 

clang-aarch64-full-2stage buildbot timeout

2021-09-30 Thread Maxim Kuvyrkov
> > On Sep 22, 2021, at 11:23, Florian Hahn wrote:
> >
> > Hi,
> >
> > It looks like a lot of the recent builds of clang-aarch64-full-2stage are 
> > timing out.
> >
> > E.g https://lab.llvm.org/buildbot/#/builders/179/builds/1078 while checking 
> > out sources
> 
> > https://lab.llvm.org/buildbot/#/builders/179/builds/1076 during building 
> > stage2
> >
> > Is there anything that could be done to avoid such timeouts and avoid false 
> > positive failure emails?
> >
> > Cheers,
> > Florian

Hi Florian,

Thanks for the heads up.  We’ve noticed these timeouts too, and have reduced 
the load on the machine.  It appears to have helped.

> 
> Looks like other bots are also hit by timeouts, including 
> clang-arm64-windows-msvc-2stage (
> https://lab.llvm.org/buildbot/#/builders/120/builds/1197
> )

This one looks like a legitimate failure, and appears to have been fixed by 
https://github.com/llvm/llvm-project/commit/c6013f71a4555f6d9ef9c60e6bc4376ad63f1c47
 in build https://lab.llvm.org/buildbot/#/builders/120/builds/1200 .

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow rematerialization of virtual reg uses

2021-09-30 Thread Maxim Kuvyrkov
Thanks, Stanislav,

FWIW, it will probably be easier for you to just rebuild the compiler; it is 
an x86_64-linux-gnu -> arm-linux-gnueabihf cross.  This link has the build log 
[1].

cmake -G Ninja ../llvm/llvm '-DLLVM_ENABLE_PROJECTS=clang;lld' 
-DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=True 
-DCMAKE_INSTALL_PREFIX=../llvm-install -DLLVM_TARGETS_TO_BUILD=ARM

Then compile the pre-processed source with plain -O2 or -O3 optimisation 
settings.

[1] 
https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_tk1-llvm-master-arm-spec2k6-O3/18/artifact/artifacts/build-92c1fd19abb15bc68b1127a26137a69e033cdb39/09-build_llvm-true/
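Putting the steps above together, a rebuild-and-reproduce session might look like the sketch below; the directory layout, the `--target` spelling, and the `fast_algorithms.i` file name are assumptions for illustration, not the CI's exact commands:

```shell
# Build the x86_64 -> ARM cross compiler with the configuration quoted above.
mkdir build && cd build
cmake -G Ninja ../llvm/llvm '-DLLVM_ENABLE_PROJECTS=clang;lld' \
    -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=True \
    -DCMAKE_INSTALL_PREFIX=../llvm-install -DLLVM_TARGETS_TO_BUILD=ARM
ninja clang

# Compile the pre-processed source with plain -O2/-O3 as suggested above;
# -emit-llvm additionally produces the optimized bitcode that can be fed to
# llc, which is what was asked for earlier in the thread.
./bin/clang --target=arm-linux-gnueabihf -O3 -S fast_algorithms.i -o fast_algorithms.s
./bin/clang --target=arm-linux-gnueabihf -O3 -emit-llvm -c fast_algorithms.i -o fast_algorithms.bc
llc -O3 fast_algorithms.bc -o fast_algorithms.from-bc.s
```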

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 24 Sep 2021, at 20:30, Mekhanoshin, Stanislav 
>  wrote:
> 
> [AMD Official Use Only]
> 
> I have reverted the whole change. There was yet another perf regression 
> report.
>  
> Stas
>  
> From: Mekhanoshin, Stanislav 
> Sent: Thursday, September 23, 2021 11:48
> To: Maxim Kuvyrkov 
> Cc: linaro-toolchain 
> Subject: RE: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
> rematerialization of virtual reg uses
>  
> Thanks.  I see the reload.  There should not be extra pressure, since making 
> pressure lower is the whole idea.  However, I see more spills in that specific 
> file, fast_algorithms.s, if I get it right.
> Can I get the IR for it? Something to feed llc.
>  
> Stas
>  
> From: Maxim Kuvyrkov  
> Sent: Thursday, September 23, 2021 2:31
> To: Mekhanoshin, Stanislav 
> Cc: linaro-toolchain 
> Subject: Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
> rematerialization of virtual reg uses
>  
> [CAUTION: External Email]
> 
> Thanks, Stanislav.
> 
> I’ve looked into the profile dumps, and 456.hmmer’s hot loop gets several 
> additional reloads.  E.g., "ldr r1, [sp, #84]” generates 203 additional 
> samples, which translates into 20 seconds of time just for that one 
> instruction.
> 
> See the attached profile dumps and the screenshot with the hot loop 
> highlighted.
> 
> Maybe your patch increases register pressure too much?
> 
> Regards,
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
> > On 22 Sep 2021, at 22:35, Mekhanoshin, Stanislav 
> >  wrote:
> >
> > [AMD Official Use Only]
> >
> > There are actually a couple of things worth trying, if that is easy:
> >
> > https://reviews.llvm.org/D109077
> > https://reviews.llvm.org/differential/diff/374324/
> >
> > Both may slightly change spill weights and then spilling pattern.
> >
> > Stas
> >
> > -Original Message-
> > From: Mekhanoshin, Stanislav
> > Sent: Wednesday, September 22, 2021 12:09
> > To: Maxim Kuvyrkov 
> > Cc: linaro-toolchain 
> > Subject: RE: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
> > rematerialization of virtual reg uses
> >
> > I assume some of the newly rematerialized instructions caused perf drops. 
> > Probably some very specific ones.  I would appreciate it if you could point 
> > them out to me.
> > In addition I believe I would need to have a linked or optimized bitcode to 
> > feed into llc.
> >
> > Stas
> >
> > -Original Message-
> > From: Maxim Kuvyrkov 
> > Sent: Wednesday, September 22, 2021 12:06
> > To: Mekhanoshin, Stanislav 
> > Cc: linaro-toolchain 
> > Subject: Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
> > rematerialization of virtual reg uses
> >
> > [CAUTION: External Email]
> >
> > Hi Stanislav,
> >
> > That's fair; I or someone from Linaro will try to analyze this and follow 
> > up here.
> >
> > On a more general note, what info would you like to see in these 
> > benchmarking regression reports?
> >
> > Thanks,
> >
> > --
> > Maxim Kuvyrkov
> > https://www.linaro.org
> >
> >
> >> On Sep 22, 2021, at 9:40 PM, Mekhanoshin, Stanislav 
> >>  wrote:
> >>
> >> [AMD Official Use Only]
> >>
> >> Hm... I'd really like to help, but I do not think I can do anything with 
> >> megabytes of code in an asm which I do not understand and tons of 
> >> differences in 48 asm files.
> >> What I can see there is overall less spilling code which

Re: [TCWG CI] 471.omnetpp slowed down by 8% after gcc: Avoid invalid loop transformations in jump threading registry.

2021-09-30 Thread Maxim Kuvyrkov
> On 27 Sep 2021, at 16:52, Aldy Hernandez  wrote:
> 
> [CCing Jeff and list for broader audience]
> 
> On 9/27/21 2:53 PM, Maxim Kuvyrkov wrote:
>> Hi Aldy,
>> Your patch seems to slow down 471.omnetpp by 8% at -O3.  Could you please 
>> take a look if this is something that could be easily fixed?
> 
> First of all, thanks for chasing this down.  It's incredibly useful to have 
> these types of bug reports.

Thanks, Aldy, this is music to my ears :-).

We have built this automated benchmarking CI that bisects code-speed and 
code-size regressions down to a single commit.  It is still work-in-progress, 
and I’m forwarding these reports to the patch authors whose patches caused 
regressions.  If the GCC community finds these useful, we can also set up posting to 
one of GCC’s mailing lists.

> 
> Jeff and I have been discussing the repercussions of adjusting the loop 
> crossing restrictions in the various threaders.  He's seen some regressions 
> in embedded targets when disallowing certain corner cases of loop crossing 
> threads causes all sorts of grief.
> 
> Out of curiosity, does the attached (untested) patch fix the regression?

I’ll test the patch and will follow up.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org


> 
> Aldy
> 
>> Regards,
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>>> On 27 Sep 2021, at 02:52, ci_not...@linaro.org wrote:
>>> 
>>> After gcc commit 4a960d548b7d7d942f316c5295f6d849b74214f5
>>> Author: Aldy Hernandez 
>>> 
>>>Avoid invalid loop transformations in jump threading registry.
>>> 
>>> the following benchmarks slowed down by more than 2%:
>>> - 471.omnetpp slowed down by 8% from 6348 to 6828 perf samples
>>> 
>>> Below reproducer instructions can be used to re-build both "first_bad" and 
>>> "last_good" cross-toolchains used in this bisection.  Naturally, the 
>>> scripts will fail when triggerring benchmarking jobs if you don't have 
>>> access to Linaro TCWG CI.
>>> 
>>> For your convenience, we have uploaded tarballs with pre-processed source 
>>> and assembly files at:
>>> - First_bad save-temps: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_tk1-gnu-master-arm-spec2k6-O3/40/artifact/artifacts/build-4a960d548b7d7d942f316c5295f6d849b74214f5/save-temps/
>>> - Last_good save-temps: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_tk1-gnu-master-arm-spec2k6-O3/40/artifact/artifacts/build-29c92857039d0a105281be61c10c9e851aaeea4a/save-temps/
>>> - Baseline save-temps: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_tk1-gnu-master-arm-spec2k6-O3/40/artifact/artifacts/build-baseline/save-temps/
>>> 
>>> Configuration:
>>> - Benchmark: SPEC CPU2006
>>> - Toolchain: GCC + Glibc + GNU Linker
>>> - Version: all components were built from their tip of trunk
>>> - Target: arm-linux-gnueabihf
>>> - Compiler flags: -O3 -marm
>>> - Hardware: NVidia TK1 4x Cortex-A15
>>> 
>>> This benchmarking CI is work-in-progress, and we welcome feedback and 
>>> suggestions at linaro-toolchain@lists.linaro.org .  In our improvement 
>>> plans is to add support for SPEC CPU2017 benchmarks and provide "perf 
>>> report/annotate" data behind these reports.
>>> 
>>> THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, 
>>> REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
>>> 
>>> This commit has regressed these CI configurations:
>>> - tcwg_bmk_gnu_tk1/gnu-master-arm-spec2k6-O3
>>> 
>>> First_bad build: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_tk1-gnu-master-arm-spec2k6-O3/40/artifact/artifacts/build-4a960d548b7d7d942f316c5295f6d849b74214f5/
>>> Last_good build: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_tk1-gnu-master-arm-spec2k6-O3/40/artifact/artifacts/build-29c92857039d0a105281be61c10c9e851aaeea4a/
>>> Baseline build: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_tk1-gnu-master-arm-spec2k6-O3/40/artifact/artifacts/build-baseline/
>>> Even more details: 
>>> https://ci.linaro.org/job/tcwg_bmk_ci_gnu-bisect-tcwg_bmk_tk1-gnu-master-arm-spec2k6-O3/40/artifact/artifacts/
>>> 
>>> Reproduce builds:
>>> 
>>> mkdir investigate-gcc-4a960d548b7d7d942f316c5295f6d849b74214f5
>>> cd investigate-gcc-4a960d548b7d7d942f316c5295f6d849b74214f5
>>> 
>>> # Fetch scripts
>>> git clone https://git.linaro.org/toolchain/jenkins-scripts
>>> 

Re: [TCWG CI] 471.omnetpp slowed down by 8% after gcc: Avoid invalid loop transformations in jump threading registry.

2021-09-30 Thread Maxim Kuvyrkov
> On 27 Sep 2021, at 19:02, Andrew MacLeod  wrote:
> 
> On 9/27/21 11:39 AM, Maxim Kuvyrkov via Gcc wrote:
>>> On 27 Sep 2021, at 16:52, Aldy Hernandez  wrote:
>>> 
>>> [CCing Jeff and list for broader audience]
>>> 
>>> On 9/27/21 2:53 PM, Maxim Kuvyrkov wrote:
>>>> Hi Aldy,
>>>> Your patch seems to slow down 471.omnetpp by 8% at -O3.  Could you please 
>>>> take a look if this is something that could be easily fixed?
>>> First of all, thanks for chasing this down.  It's incredibly useful to have 
>>> these types of bug reports.
>> Thanks, Aldy, this is music to my ears :-).
>> 
>> We have built this automated benchmarking CI that bisects code-speed and 
>> code-size regressions down to a single commit.  It is still 
>> work-in-progress, and I’m forwarding these reports to the patch authors whose 
>> patches caused regressions.  If the GCC community finds these useful, we can 
>> also set up posting to one of GCC’s mailing lists.
> 
> I second that this sort of thing is incredibly useful.  I don't suppose it's 
> easy to do the reverse?... let patch authors know when they've caused a 
> significant improvement? :-)  That would be much less common I suspect, so 
> perhaps not worth it :-)

We do this occasionally, when identifying a regression in a patch revert commit 
:-).  Seriously, though, it’s an easy enough change to the metric, but we 
are maxing out our benchmarking capacity with the current configuration matrix.

> 
> Its certainly very useful when we are making a wholesale change to a pass 
> which we think is beneficial, but aren't sure.
> 
> And a followup question...  Sometimes we have no good way of determining the 
> widespread run-time effects of a change.  You seem to be running SPEC/other 
> things continuously then?

We continuously run SPEC CPU2006 over the {arm,aarch64} × {-Os,-O2,-O3} × 
{no LTO, LTO} matrix for the GNU and LLVM toolchains.

In the GNU toolchain we track master branches and latest-release branches of 
Binutils, GCC and Glibc — and detect code-speed and code-size regressions 
across all toolchain components.

>   Does it run like once a day/some-time-period, and if you note a regression, 
> narrow it down?

Configurations that track master branches have 3-day intervals.  Configurations 
that track release branches — 6 days.  If a regression is detected it is 
narrowed down to component first — binutils, gcc or glibc — and then the commit 
range of the component is bisected down to a specific commit.  All.  Done.  
Automatically.
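The numbers in these reports are raw perf-sample counts, and the regression criterion is the "more than 2%" slowdown quoted in each report.  As a small sketch, using the 471.omnetpp figures from this thread:

```shell
# 471.omnetpp: 6348 baseline perf samples -> 6828 after the offending commit.
base=6348 cur=6828
pct=$(( (100 * (cur - base) + base / 2) / base ))  # slowdown as a rounded percent
echo "slowdown: ${pct}%"                           # prints "slowdown: 8%"
# Anything over the 2% threshold is what kicks off the component-level
# narrowing and per-commit bisection described above.
[ "$pct" -gt 2 ] && echo "regression detected"
```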

I will make a presentation on this CI at the next GNU Tools Cauldron.

>  Regardless, I think it could be very useful to be able to see the results of 
> anything you do run at whatever frequency it happens.

Thanks!

--
Maxim Kuvyrkov
https://www.linaro.org



Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow rematerialization of virtual reg uses

2021-09-30 Thread Maxim Kuvyrkov
Thanks, Stanislav.

I’ve looked into the profile dumps, and 456.hmmer’s hot loop gets several 
additional reloads.  E.g., "ldr r1, [sp, #84]” generates 203 additional samples, 
which translates into 20 seconds of time just for that one instruction.

See the attached profile dumps and the screenshot with the hot loop 
highlighted.

Maybe your patch increases register pressure too much?
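For anyone reproducing this, the per-instruction sample counts quoted above come from the standard perf workflow; the binary and symbol names below are assumptions for illustration:

```shell
# Record the benchmark run, then drill down from function to instruction level.
perf record -o perf.data ./hmmer_base input.hmm
perf report -i perf.data               # per-symbol sample counts
perf annotate -i perf.data P7Viterbi   # per-instruction view; this is where an
                                       # extra reload like "ldr r1, [sp, #84]"
                                       # shows up with its sample count
```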

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org

> On 22 Sep 2021, at 22:35, Mekhanoshin, Stanislav 
>  wrote:
> 
> [AMD Official Use Only]
> 
> There are actually a couple of things worth trying, if that is easy:
> 
> https://reviews.llvm.org/D109077
> https://reviews.llvm.org/differential/diff/374324/
> 
> Both may slightly change spill weights and then spilling pattern.
> 
> Stas
> 
> -Original Message-
> From: Mekhanoshin, Stanislav
> Sent: Wednesday, September 22, 2021 12:09
> To: Maxim Kuvyrkov 
> Cc: linaro-toolchain 
> Subject: RE: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
> rematerialization of virtual reg uses
> 
> I assume some of the newly rematerialized instructions caused perf drops. 
> Probably some very specific ones.  I would appreciate it if you could point 
> them out to me.
> In addition I believe I would need to have a linked or optimized bitcode to 
> feed into llc.
> 
> Stas
> 
> -Original Message-
> From: Maxim Kuvyrkov 
> Sent: Wednesday, September 22, 2021 12:06
> To: Mekhanoshin, Stanislav 
> Cc: linaro-toolchain 
> Subject: Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
> rematerialization of virtual reg uses
> 
> [CAUTION: External Email]
> 
> Hi Stanislav,
> 
> That's fair; I or someone from Linaro will try to analyze this and follow up 
> here.
> 
> On a more general note, what info would you like to see in these benchmarking 
> regression reports?
> 
> Thanks,
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
> 
>> On Sep 22, 2021, at 9:40 PM, Mekhanoshin, Stanislav 
>>  wrote:
>> 
>> [AMD Official Use Only]
>> 
>> Hm... I'd really like to help, but I do not think I can do anything with 
>> megabytes of code in an asm which I do not understand and tons of 
>> differences in 48 asm files.
>> What I can see there is overall less spilling code which was the intent in 
>> the first place: hmmer has 4 less spill opcodes overall and sphinx has 27 
>> less of them.
>> I doubt I could say much more without someone pointing to the actual root 
>> cause.
>> 
>> Stas
>> 
>> -Original Message-
>> From: Maxim Kuvyrkov 
>> Sent: Wednesday, September 22, 2021 5:16
>> To: Mekhanoshin, Stanislav 
>> Cc: linaro-toolchain 
>> Subject: Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
>> rematerialization of virtual reg uses
>> 
>> [CAUTION: External Email]
>> 
>> Hi Stanislav,
>> 
>> Attached is a tarball with -save-temps output (pre-processed source and 
>> generated assembly) for first-bad run (your commit) and last-good run 
>> (immediate parent of your commit).
>> 
>> --
>> Maxim Kuvyrkov
>> https://www.linaro.org
>> 
>>> On 20 Sep 2021, at 23:15, Mekhanoshin, Stanislav 
>>>  wrote:
>>> 
>>> [AMD Official Use Only]
>>> 
>>> Thanks for letting me know. Some regressions are inevitable, however do you 
>>> happen to have any analysis and dumps? I myself do not understand ARM ISA 
>>> well...
>>> 
>>> Stas
>>> 
>>> -Original Message-
>>> From: Maxim Kuvyrkov 
>>> Sent: Wednesday, September 15, 2021 5:52
>>> To: Mekhanoshin, Stanislav 
>>> Cc: linaro-toolchain 
>>> Subject: Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
>>> rematerialization of virtual reg uses
>>> 
>>> [CAUTION: External Email]
>>> 

Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow rematerialization of virtual reg uses

2021-09-30 Thread Maxim Kuvyrkov
Hi Stanislav,

Attached is a tarball with -save-temps output (pre-processed source and 
generated assembly) for first-bad run (your commit) and last-good run 
(immediate parent of your commit).
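A quick way to use such a tarball (file names assumed for illustration) is to unpack the two runs side by side and diff the generated assembly:

```shell
# Unpack the first-bad and last-good -save-temps outputs side by side.
mkdir first_bad last_good
tar xf save-temps-first-bad.tar.xz -C first_bad
tar xf save-temps-last-good.tar.xz -C last_good

# The spill-code differences discussed in this thread show up here:
diff -u last_good/fast_algorithms.s first_bad/fast_algorithms.s | less
```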

--
Maxim Kuvyrkov
https://www.linaro.org

> On 20 Sep 2021, at 23:15, Mekhanoshin, Stanislav 
>  wrote:
> 
> [AMD Official Use Only]
> 
> Thanks for letting me know. Some regressions are inevitable, however do you 
> happen to have any analysis and dumps? I myself do not understand ARM ISA 
> well...
> 
> Stas
> 
> -Original Message-
> From: Maxim Kuvyrkov 
> Sent: Wednesday, September 15, 2021 5:52
> To: Mekhanoshin, Stanislav 
> Cc: linaro-toolchain 
> Subject: Re: [TCWG CI] 456.hmmer slowed down by 6% after llvm: Allow 
> rematerialization of virtual reg uses
> 
> [CAUTION: External Email]
> 
> Hi Stanislav,
> 
> FYI, your patch seems to be slowing down two of SPEC CPU2006 tests on 32-bit 
> ARM at -O2 and -O3 optimization levels.
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 



