On Mon, Jan 26, 2026 at 08:11:51PM +0200, Andriy Gapon wrote:
> On 26/01/2026 18:34, Konstantin Belousov wrote:
> > On Mon, Jan 26, 2026 at 03:57:45PM +0000, Marius Strobl wrote:
> > > The branch main has been updated by marius:
> > >
> > > URL:
> > > https://cgit.FreeBSD.org/src/commit/?id=e769bc77184312b6137a9b180c97b87c0760b849
> > >
> > > commit e769bc77184312b6137a9b180c97b87c0760b849
> > > Author: Marius Strobl <[email protected]>
> > > AuthorDate: 2026-01-26 13:58:57 +0000
> > > Commit: Marius Strobl <[email protected]>
> > > CommitDate: 2026-01-26 15:54:48 +0000
> > >
> > > sym(4): Employ memory barriers also on x86
> > > In an MP world, it doesn't hold that x86 requires no memory barriers.
> > It does hold. x86 is much more strongly ordered than all other arches
> > we currently support.
> >
> > That said, the use of barriers in drivers is usually not justified
> > (I did not look at this specific case).
> >
> > Even if needed, please stop using rmb/wmb etc. Use atomic_thread_fence()
> > of the appropriate kind; see atomic(9). Then on x86 it does the right thing.
> I understand that this advice is for the "normal" memory access model.
> But does it apply to "special" memory? E.g., to memory-based communication
> with devices?
Even more so, because rmb/wmb etc. are about something very different from
'using special memory'. In this case, you need the 'special memory' properly
set up.
E.g., on x86 UC memory accesses are strongly ordered, so there is absolutely
no need to issue either locked instructions or {L,M,S}FENCE to fence these
accesses; at least, I have a hard time imagining what that would change
except slowing the CPU down.
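
For ordinary (cacheable) host memory shared with the device, a minimal sketch
of what the atomic(9) advice above looks like in a driver follows. The softc
layout, descriptor fields, and register offset are made up for illustration
and are not taken from sym(4):

	#include <sys/param.h>
	#include <sys/bus.h>
	#include <machine/atomic.h>
	#include <machine/bus.h>

	#define	DRV_DOORBELL		0x40	/* made-up register offset */
	#define	DRV_DESC_HW_OWNED	0x01	/* made-up descriptor flag */

	struct drv_desc {
		uint32_t		flags;
	};

	struct drv_softc {
		bus_space_tag_t		sc_bst;
		bus_space_handle_t	sc_bsh;
	};

	static void
	drv_post_desc(struct drv_softc *sc, struct drv_desc *d, uint32_t idx)
	{
		/* Hand the descriptor over to the hardware. */
		d->flags |= DRV_DESC_HW_OWNED;

		/*
		 * Release fence: all earlier stores to the descriptor are
		 * ordered before the doorbell write below.  On x86 this is
		 * only a compiler barrier; on weakly ordered arches it emits
		 * the required barrier instruction.
		 */
		atomic_thread_fence_rel();

		bus_space_write_4(sc->sc_bst, sc->sc_bsh, DRV_DOORBELL, idx);
	}

The point is that atomic_thread_fence_rel() expresses the intended ordering
on every arch we support, while on x86 it costs nothing beyond a compiler
barrier, which is exactly the "does the right thing" case quoted above.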