On Mon, Jan 26, 2026 at 09:30:58PM +0100, Marius Strobl wrote:
> On Mon, Jan 26, 2026 at 06:34:49PM +0200, Konstantin Belousov wrote:
> > On Mon, Jan 26, 2026 at 03:57:45PM +0000, Marius Strobl wrote:
> > > The branch main has been updated by marius:
> > > 
> > > URL: 
> > > https://cgit.FreeBSD.org/src/commit/?id=e769bc77184312b6137a9b180c97b87c0760b849
> > > 
> > > commit e769bc77184312b6137a9b180c97b87c0760b849
> > > Author:     Marius Strobl <[email protected]>
> > > AuthorDate: 2026-01-26 13:58:57 +0000
> > > Commit:     Marius Strobl <[email protected]>
> > > CommitDate: 2026-01-26 15:54:48 +0000
> > > 
> > >     sym(4): Employ memory barriers also on x86
> > >     
> > >     In an MP world, it doesn't hold that x86 requires no memory barriers.
> > It does hold.  x86 is much more strongly ordered than all other arches
> > we currently support.
> 
> If it does hold, then why is atomic_thread_fence_seq_cst() employing
> a StoreLoad barrier even on amd64?
> I agree that x86 is more strongly ordered than the other supported
> architectures, though.
Well, it depends on the purpose.

Can you please explain what the purpose of this specific barrier is, and
where the reciprocal barrier for it is?

Drivers for advanced devices often do need fences.  For instance, from
my experience with the Mellanox networking cards, there are some structures
that are located in regular cacheable memory.  The readiness of a structure
for consumption by the card is indicated by a write to some location.  If
this location is in a BAR, then at least on x86 we do not need any barriers.
But if it is also in regular memory, the visibility of the writes to the
structure before the write to the signalling variable must be enforced.

This is normally done by atomic_thread_fence_rel(), which on x86 becomes
just a compiler barrier, since the ordering is guaranteed by the CPU (but
not by the compiler).

In this situation, using rmb() (which is a fence instruction) really
degrades performance at high rates.
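To illustrate the pattern described above, here is a minimal sketch in
portable C11 atomics (FreeBSD's atomic_thread_fence_rel() corresponds to a
release fence).  All names here (struct desc, post_desc, the "owner" flag)
are hypothetical, not taken from any real driver; the point is only that the
release fence compiles to a pure compiler barrier on x86 while still keeping
the descriptor writes ordered before the flag write:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/*
 * Hypothetical descriptor in regular cacheable memory.  The device polls
 * the "owner" flag via DMA and consumes the descriptor once it is set.
 */
struct desc {
	uint64_t		addr;	/* buffer physical address */
	uint32_t		len;	/* buffer length */
	_Atomic uint32_t	owner;	/* 1 = owned by the device */
};

static void
post_desc(struct desc *d, uint64_t addr, uint32_t len)
{
	d->addr = addr;
	d->len = len;

	/*
	 * Make the descriptor contents visible before the ownership flag.
	 * On x86 this release fence emits no instruction at all: the CPU
	 * already orders stores, so only compiler reordering must be
	 * prevented.  On weakly ordered architectures it emits a real
	 * store barrier.
	 */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&d->owner, 1, memory_order_relaxed);
}
```

The reciprocal side (the consumer) would pair this with an acquire fence
after observing owner == 1, which is why the question about the matching
barrier matters.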


> 
> The panic seen matches the typical scenario of even x86 requiring a
> StoreLoad barrier. For the actual usage of these macros, the use of
> bus_{space,9}_barrier(9) would be more appropriate, however. On x86,
> this translates to a "lock addl $0,mem" for BUS_SPACE_BARRIER_READ,
> which probably would also achieve the intended order. I'd much
> prefer to just do what Linux still does up until today and be done
> with it, though.
> 
> Marius
