On Thu, Jun 2, 2022 at 11:32 AM Uros Bizjak <ubiz...@gmail.com> wrote:
>
> On Thu, Jun 2, 2022 at 9:20 AM Roger Sayle <ro...@nextmovesoftware.com> wrote:
> >
> > The simple test case below demonstrates an interesting register
> > allocation challenge facing x86_64, imposed by ABI requirements
> > on int128.
> >
> > __int128 foo(__int128 x, __int128 y)
> > {
> >   return x+y;
> > }
> >
> > For which GCC currently generates the unusual sequence:
> >
> >         movq    %rsi, %rax
> >         movq    %rdi, %r8
> >         movq    %rax, %rdi
> >         movq    %rdx, %rax
> >         movq    %rcx, %rdx
> >         addq    %r8, %rax
> >         adcq    %rdi, %rdx
> >         ret
> >
> > The challenge is that the x86_64 ABI requires passing the first __int128,
> > x, in %rsi:%rdi (highpart in %rsi, lowpart in %rdi), whereas internally
> > GCC prefers TImode (double word) integers to be register allocated as
> > %rdi:%rsi (highpart in %rdi, lowpart in %rsi).  So after reload we have
> > four mov instructions: two to move the double word to temporary registers
> > and then two to move them back.
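
(For reference, the argument mapping can be pictured with a plain C
equivalent; this is only a rough model of the psABI argument slots, not
code from the patch.  A function taking the four 64-bit halves
explicitly uses exactly the same registers.)

#include <stdint.h>

/* Rough model of the argument slots used by
   __int128 foo (__int128 x, __int128 y):
   x_lo -> %rdi, x_hi -> %rsi, y_lo -> %rdx, y_hi -> %rcx.  */
__int128
foo_split (uint64_t x_lo, uint64_t x_hi, uint64_t y_lo, uint64_t y_hi)
{
  unsigned __int128 x = ((unsigned __int128) x_hi << 64) | x_lo;
  unsigned __int128 y = ((unsigned __int128) y_hi << 64) | y_lo;
  return (__int128) (x + y);
}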
> >
> > This patch adds a peephole2 to spot this register shuffling, and with
> > -Os generates an xchg instruction, to produce:
> >
> >         xchgq   %rsi, %rdi
> >         movq    %rdx, %rax
> >         movq    %rcx, %rdx
> >         addq    %rsi, %rax
> >         adcq    %rdi, %rdx
> >         ret
> >
> > or, when optimizing for speed, a three-mov sequence using just one of
> > the temporary registers, which ultimately results in the improved sequence:
> >
> >         movq    %rdi, %r8
> >         movq    %rdx, %rax
> >         movq    %rcx, %rdx
> >         addq    %r8, %rax
> >         adcq    %rsi, %rdx
> >         ret
> >
> > I've a follow-up patch which improves things further, but with the
> > output still in flux, I'd like to add the new testcase with part 2,
> > once we're back down to requiring only two movq instructions.
>
> Shouldn't we rather do something about:
>
> (insn 2 9 3 2 (set (reg:DI 85)
>        (reg:DI 5 di [ x ])) "dword-2.c":2:1 82 {*movdi_internal}
>     (nil))
> (insn 3 2 4 2 (set (reg:DI 86)
>        (reg:DI 4 si [ x+8 ])) "dword-2.c":2:1 82 {*movdi_internal}
>     (nil))
> (insn 4 3 5 2 (set (reg:TI 84)
>        (subreg:TI (reg:DI 85) 0)) "dword-2.c":2:1 81 {*movti_internal}
>     (nil))
> (insn 5 4 6 2 (set (subreg:DI (reg:TI 84) 8)
>        (reg:DI 86)) "dword-2.c":2:1 82 {*movdi_internal}
>     (nil))
> (insn 6 5 7 2 (set (reg/v:TI 83 [ x ])
>        (reg:TI 84)) "dword-2.c":2:1 81 {*movti_internal}
>     (nil))
>
> The above is how the function's TImode argument is constructed.
>
> The other problem is that double-word addition gets split only after
> reload, mostly for RA reasons. In the past it was determined that the
> RA creates better code when double-word operations are split late
> (this reasoning probably no longer holds), but nowadays the limitation
> remains only for arithmetic and shifts.
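
To make the split concrete, here is a plain C model (an illustration
only, not what the splitter literally emits) of the add/adc lowering
that the double-word addition eventually turns into:

#include <stdint.h>

typedef struct { uint64_t lo, hi; } u128;

/* Double-word addition as split after reload: add the low halves
   (addq), then add the high halves plus the carry-out of the low
   addition (adcq).  */
u128
add_u128 (u128 a, u128 b)
{
  u128 r;
  r.lo = a.lo + b.lo;                  /* addq */
  r.hi = a.hi + b.hi + (r.lo < a.lo);  /* adcq */
  return r;
}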

FYI, the effect of the patch can be seen with the following testcase:

--cut here--
#include <stdint.h>

void test (int64_t n)
{
  while (1)
    {
      n++;
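      /* Consume the low and high 32-bit halves of n in fixed
         registers (%ebx and %ecx), so both words of the double-word
         increment have to be materialized on every iteration.  */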
      asm volatile ("#"
            :: "b" ((int32_t)n),
               "c" ((int32_t)(n >> 32)));
    }
}
--cut here--

Please compile this with -O2 -m32 with both a patched and an unpatched
compiler and compare the generated code.

Uros.
