On 24/10/2019 11:16, Christophe Lyon wrote:
On 23/10/2019 15:21, Richard Earnshaw (lists) wrote:
On 23/10/2019 09:28, Christophe Lyon wrote:
On 21/10/2019 14:24, Richard Earnshaw (lists) wrote:
On 21/10/2019 12:51, Christophe Lyon wrote:
On 18/10/2019 21:48, Richard Earnshaw wrote:
Each patch should produce a working compiler (it did when it was
originally written), though since the patch set has been re-ordered
slightly there is a possibility that some of the intermediate steps
may have missing test updates that are only cleaned up later.
However, only the end of the series should be considered complete.
I've kept the patch as a series to permit easier regression hunting
should that prove necessary.

Thanks for this information: my validation system is designed to run the GCC testsuite after each of your patches, so I'll keep in mind not to report regressions from the intermediate steps (I've already noticed several).


I can perform a manual validation treating your 29 patches as a single one and compare the results with those of the revision preceding the one where you committed patch #1. Do you think that would be useful?


Christophe



I think it would be best if you could filter out any regressions that are fixed by later patches and report the rest against the patch that actually introduced them.  But I realise that would be more work for you, so a round-up against the combined set would be OK.

BTW, I'm aware of an issue with the compiler now generating

     <alu> reg, reg, shift <reg>

in Thumb2; no need to report that again.
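
For anyone hunting the same thing, here is a minimal sketch (hypothetical source, not taken from any failing test) of the kind of C that typically provokes a register-shifted-register operand.  ARM state can fold the variable shift into the ALU instruction (e.g. add r0, r0, r1, lsl r2), but Thumb2 has no such encoding and needs a separate shift instruction first.

    /* Hypothetical example only: the variable shift amount is what
       forces the "shift by register" operand form.  */
    int add_shifted (int a, int b, int c)
    {
      return a + (b << c);
    }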

Thanks,
R.



Hi Richard,

The validation of the whole set shows one regression, which was also reported by the validation of r277179 ("early split most DImode comparison operations").

When GCC is configured as:
--target arm-none-eabi
--with-mode default
--with-cpu default
--with-fpu default
(that is, no --with-mode, --with-cpu, --with-fpu option)
I'm using binutils-2.28 and newlib-3.1.0

I can see:
FAIL: g++.dg/opt/pr36449.C  -std=gnu++14 execution test
(whichever -std=gnu++XX option is used)

That's strange.  The assembler code generated for that test is unchanged from before the patch series, so I can't see how this could be anything other than a problem in the test itself.  What's more, I can't seem to reproduce this myself.

As you have noticed, I have created PR92207 to help understand this.


Similarly, in my build the code for _Znwj, malloc, malloc_r and free_r is also unchanged, while the malloc_[un]lock functions are empty stubs (not surprising as we aren't multi-threaded).
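
For reference, a minimal sketch of what those stubs amount to in a single-threaded newlib build (simplified; the real newlib hooks are __malloc_lock/__malloc_unlock):

    /* Sketch only: with no threads there is nothing to protect,
       so the malloc lock hooks are effectively no-ops.  */
    #include <reent.h>

    void __malloc_lock (struct _reent *r)   { (void) r; }
    void __malloc_unlock (struct _reent *r) { (void) r; }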

So the only things that look to have really changed are the linker offsets (some of the library code has changed, but I don't think it's really reached in practice, so it shouldn't be relevant).


I'm executing the tests with qemu-4.1.0 -cpu arm926.
The qemu traces show that the code enters main, then _Znwj (operator new), then _malloc_r.
The qemu traces end with:

What do you mean by 'end with'?  What's the failure mode of the test? A crash, or the test exiting with a failure code?

qemu complains with:
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Segmentation fault (core dumped)

I said 'end with' because my automated validation builds do not keep the full execution traces (that would need too much disk space).


As I've said in the PR, this looks like a bug in the qemu+newlib combination: we call sbrk(), which says OK, but the page isn't actually mapped by qemu into the process, so the subsequent access faults.
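
To illustrate, here is a minimal sketch of a libgloss-style bare-metal sbrk (simplified, not the exact newlib source): it just hands out addresses past the linker-provided 'end' symbol, and nothing in it asks qemu to back those addresses with a mapping, so the first store into the newly "allocated" block can fault.

    /* Simplified sketch of a bare-metal sbrk: it "says OK"
       unconditionally, but the returned region may not be mapped
       by the emulator, so the caller faults on first access.  */
    extern char end;            /* linker symbol: end of .bss */
    static char *heap_ptr;

    void *
    _sbrk (int incr)
    {
      char *prev;

      if (heap_ptr == 0)
        heap_ptr = &end;

      prev = heap_ptr;
      heap_ptr += incr;
      return prev;
    }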

So I think these changes are off the hook; it's just bad luck that they expose the issue at this point in time.

R.
