On Thu, 26 Jun 2025 12:34:45 -0400 Steven Rostedt <rost...@goodmis.org> wrote:
> On Thu, 26 Jun 2025 18:04:59 +0200
> Nam Cao <nam...@linutronix.de> wrote:
>
> > I think you have it inverted? I assume you meant:
> >
> > "Without the barriers, the tr->buffer_disabled = 1 can be set on one
> > CPU, and the other CPU can think the buffer is still enabled and do
> > work that will end up doing nothing."
> >
> > Your scenario can still happen despite the memory barrier:
>
> Yes, but the point isn't really to prevent the race. It's more about
> making the race window smaller.
>
> When we disable it, if something is currently using it then it may or
> may not get in. That's fine as this isn't critical.
>
> But from my understanding, without the barriers, some architectures may
> never see the update. That is, the write from one CPU may not get to
> memory for a long time and new incoming readers will still see the old
> data. I'm more concerned with new readers than ones that are currently
> racing with the updates.

I'm not an expert here, but I don't think the barriers necessarily do
anything to force writes out of the 'store buffer' (so that the data gets
into the cache, from where it will be snooped).

An implementation of 'wmb' might wait for the store buffer (or whatever
other scheme holds pending writes) to empty, but it only has to insert a
marker to ensure ordering.

The actual writes of data to the data cache are also likely to happen 'in
their own time' regardless of the code the CPU is executing (although
cache-line reads from main memory for loads may take preference over
those for stores).

Thinks...
A plausible model is that write data is buffered on a cache-line basis
while waiting for the old cache line to be read from memory. While that
is happening, later writes can be written into other cache lines. So a
'wmb' might just stall the CPU that executes it without having any real
effect on the timing of the memory updates seen by another CPU.
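To make the distinction concrete, here is a minimal kernel-style sketch
of the publish/consume pattern under discussion (the variable names are
mine, not the actual trace code): smp_wmb()/smp_rmb() constrain the order
in which the two stores become visible to the reader, but say nothing
about how quickly either store leaves the writer's store buffer.

int payload;	/* hypothetical data written before the flag */
int flag;	/* hypothetical "published" indicator, akin to buffer_disabled */

/* CPU 0 */
void writer(void)
{
	payload = 42;		/* may linger in CPU 0's store buffer */
	smp_wmb();		/* orders the two stores; does not flush them */
	WRITE_ONCE(flag, 1);	/* cannot become visible before payload */
}

/* CPU 1 */
void reader(void)
{
	if (READ_ONCE(flag)) {	/* no bound on *when* this sees 1 ... */
		smp_rmb();	/* pairs with the writer's smp_wmb() */
		BUG_ON(payload != 42);	/* ... but once it does, payload is valid */
	}
}

If that model of the hardware is right, the guarantee is purely one of
ordering: a reader that arrives late still sees the old flag until the
cache line finally makes it out, barriers or not.

	David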