On Wed, Aug 21, 2019 at 06:56:10AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 21, 2019 at 02:32:48PM +0100, Will Deacon wrote:
> > On Wed, Aug 21, 2019 at 06:23:10AM -0700, Paul E. McKenney wrote:
> > > On Wed, Aug 21, 2019 at 11:32:01AM +0100, Will Deacon wrote:
> > > > void bar(u64 *x)
> > > > {
> > > >         *(volatile u64 *)x = 0xabcdef10abcdef10;
> > > > }
> > > > 
> > > > then I get:
> > > > 
> > > > bar:
> > > >         mov     w1, 61200             // w1 = 0xef10
> > > >         movk    w1, 0xabcd, lsl 16    // w1 = 0xabcdef10
> > > >         str     w1, [x0]              // low 32 bits...
> > > >         str     w1, [x0, 4]           // ...then high 32 bits: the
> > > >                                       // 64-bit store has been torn
> > > >         ret
> > > > 
> > > > so I'm not sure that WRITE_ONCE would even help :/
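
(For context: WRITE_ONCE() bottoms out in just this sort of volatile
store, so it inherits the same tearing. A minimal sketch of the idea
follows; it is not the kernel's exact definition, which has a few more
layers, but the store at the bottom is the same:)

	/*
	 * Sketch only: in essence, WRITE_ONCE() is a volatile store,
	 * much like the cast in bar() above.
	 */
	#define WRITE_ONCE(x, val)				\
	do {							\
		*(volatile typeof(x) *)&(x) = (val);		\
	} while (0)

	/* ...so this would presumably emit the same torn pair of strs: */
	void bar_once(u64 *x)
	{
		WRITE_ONCE(*x, 0xabcdef10abcdef10);
	}
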
> > > 
> > > Well, I can have the LWN article cite your email, then.  So thank you
> > > very much!
> > > 
> > > Is generation of this code for a 64-bit volatile store considered a bug?
> > 
> > I consider it a bug for the volatile case, and the one compiler person I've
> > spoken to also seems to reckon it's a bug, so hopefully it will be fixed.
> > I'm led to believe it's an optimisation in the AArch64 backend of GCC.
> 
> Here is hoping for the fix!
> 
> > > Or does ARMv8 exclude the possibility of 64-bit MMIO registers?  And I
> > > would guess that Thomas and Linus would ask a similar bugginess question
> > > for normal stores.  ;-)
> > 
> > We use inline asm for MMIO, fwiw.
> 
> I should have remembered that, shouldn't I have?  ;-)
> 
> Is that also common practice across other embedded kernels these days?

I think so. Sometimes you care about things like which addressing mode
gets used, so it's easier to roll the accessors by hand, along the lines
of the sketch below.
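
For example, something along these lines (modelled on __raw_writeq() in
arch/arm64/include/asm/io.h; u64 and __iomem as in the kernel headers):

	/*
	 * The asm template pins both the access size (a single 64-bit
	 * str) and the addressing mode (plain [reg], no index or
	 * writeback), so the compiler can neither tear the store nor
	 * pick a different addressing mode behind our backs.
	 */
	static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
	{
		asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
	}

The "rZ" constraint additionally lets the compiler pass the zero
register when val is constant zero, saving a pointless mov.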

Will
