On Mon, Aug 28, 2017 at 03:05:46AM +0000, Mathieu Desnoyers wrote:
> ----- On Aug 27, 2017, at 3:53 PM, Andy Lutomirski l...@amacapital.net wrote:
>
> >> On Aug 27, 2017, at 1:50 PM, Mathieu Desnoyers
> >> <mathieu.desnoy...@efficios.com> wrote:
> >>
> >> Add a new MEMBARRIER_CMD_REGISTER_SYNC_CORE command to the membarrier
> >> system call. It allows processes to register their intent to have their
> >> threads issue core serializing barriers in addition to memory barriers
> >> whenever a membarrier command is performed.
> >
> > Why is this stateful? That is, why not just have a new membarrier
> > command to sync every thread's icache?
>
> If we'd do it on every CPU icache, it would be as trivial as you say. The
> concern here is sending IPIs only to CPUs running threads that belong to
> the same process, so we don't disturb unrelated processes.
>
> If we could just grab each CPU's runqueue lock, it would be fairly simple
> to do. But we want to avoid hitting each runqueue with the exclusive atomic
> access associated with grabbing the lock (cache-line bouncing).
I'm still trying to get my head around this for arm64, where we have the
following properties:

  * Return to userspace is context-synchronizing
  * We have a heavy barrier in switch_to

so it would seem to me that we could avoid taking RQ locks if the
mm_cpumask was kept up to date. The problematic case is where a CPU is not
observed in the mask (maybe the write is buffered), but it is running in
userspace. However, that can't occur with the barrier in switch_to.

So we only need to IPI those CPUs that were in userspace for this task at
the point when the syscall was made, and the mm_cpumask should reflect
that. What am I missing?

Will