On Thu, 5 Nov 2020, Alexander Monakov via Gcc wrote:
> On Thu, 5 Nov 2020, Uros Bizjak via Gcc wrote:
>
> > > No, this is not how LEA operates. It needs a memory input operand. The
> > > above will report "operand type mismatch for 'lea'" error.
> >
> > The following will work:
> >
> > asm volatile ("lea (%1), %0" : "=r"(addr) : "r"((uintptr_t)&x));
>
> This is the same as a plain move though, and the cast to uintptr_t doesn't
> do anything; you can simply pass "r"(&x) to the same effect.
>
> The main advantage of passing a "fake" memory location for use with lea is
> avoiding base+offset computation outside the asm. If you're okay with one
> extra register tied up by the asm, just pass the address to the asm directly:
>
> void foo(__seg_fs int *x)
> {
> asm("# %0 (%1)" :: "m"(x[1]), "r"(&x[1]));
> asm("# %0 (%1)" :: "m"(x[0]), "r"(&x[0]));
> }

Actually, in the original context the asm ties up %rsi no matter what
(because the operand must be in %rsi to make the call), so the code would
simply pass "S"(&var) for the call alternative and "m"(var) for the native
instruction. The only remaining disadvantage is a useless mov/lea into %rsi
on the common path, when the alternative selected at runtime is the native
one.
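
For illustration, a minimal sketch of that operand setup (not the actual
code from the original context; 'var' and the names in the asm comments
are placeholders):

static unsigned long var;

static inline void probe(void)
{
  asm volatile ("# native alternative reads %0 directly\n\t"
                "# call alternative expects the address in %%rsi (%1)"
                :
                : "m" (var),    /* memory operand for the native insn */
                  "S" (&var));  /* pins &var in %rsi for the call    */
}

Here the "m" operand lets the native alternative address var directly,
while the "S" operand guarantees &var is already in %rsi if the call
alternative was patched in; the mov/lea setting up %rsi is the wasted
work on the native path mentioned above.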
Alexander