On Fri, 2019-01-11 at 12:30 +0100, Jakub Jelinek wrote:
> > > Yeah, like you say, this was originally posted in stage 1 and is the
> > > last patch in the series. Not committing it would leave the work
> > > incomplete in GCC 9. The problem is that we're now in stage 4 rather
> > > than stage 3.
On Fri, Jan 11, 2019 at 11:22:59AM +, Richard Sandiford wrote:
> Steve Ellcey writes:
> > If this looks good to you can I go ahead and check it in? I know
> > we are in Stage 3 now, but my recollection is that patches that were
> > initially submitted during Stage 1 could go ahead once
Steve Ellcey writes:
> OK, I fixed the issues in your last email. I initially found one
> regression while testing. In lra_create_live_ranges_1 I had removed
> the 'call_p = false' statement but did not replace it with anything.
> This resulted in no regressions on aarch64 but caused a single
> regression on x86
Steve Ellcey writes:
> On Wed, 2019-01-09 at 10:00 +, Richard Sandiford wrote:
>
> Thanks for the quick turnaround on the comments, Richard. Here is a new
> version where I tried to address all the issues you raised. One thing
> I noticed is that I think your calls_have_same_clobbers_p function only
> works if, when
Steve Ellcey writes:
> /* Implement TARGET_HARD_REGNO_CALL_PART_CLOBBERED. The callee only saves
>    the lower 64 bits of a 128-bit register. Tell the compiler the callee
>    clobbers the top 64 bits when restoring the bottom 64 bits. */
>
> static bool
On Mon, 2019-01-07 at 17:38 +, Richard Sandiford wrote:
>
> Yeah, this was the kind of thing I had in mind, thanks.
Here is an updated version of the patch. I bootstrapped and tested
on aarch64 and x86. I didn't test the other platforms where I changed
the arguments to
Steve Ellcey writes:
> On Thu, 2018-12-06 at 12:25 +, Richard Sandiford wrote:
> >
> > Since we're looking at the call insns anyway, we could have a hook that
> > "jousts" two calls and picks the one that preserves *fewer* registers.
> > This would mean the loop produces a single instruction that conservatively
Steve Ellcey writes:
> This is patch 4 in the series to support the Aarch64 SIMD ABI [1] in GCC.
>
> It defines a new target hook, targetm.check_part_clobbered, that
> takes an rtx_insn and checks whether it is a call to a function
> that may clobber partial registers. It returns true by default,
> which results in the current behaviour,