* Waiman Long <waiman.l...@hp.com> wrote:

> > Furthermore, since you are seeing this effect so profoundly, have you
> > considered using another approach, such as queueing all the
> > poll-waiters in some fashion?
> >
> > That would optimize your workload additionally: removing the
> > 'stampede' of trylock attempts when an unlock happens - only a single
> > wait-poller would get the lock.
>
> The mutex code in the slowpath has already put the waiters into a sleep
> queue and wakes them up only one at a time.
Yes - but I'm talking about spin/poll-waiters.

> [...] However, there are 2 additional sources of mutex lockers besides
> those in the sleep queue:
>
> 1. New tasks trying to acquire the mutex and currently in the fast path.
> 2. Mutex spinners (CONFIG_MUTEX_SPIN_ON_OWNER on) who are spinning
>    on the owner field and ready to acquire the mutex once the owner
>    field changes.
>
> The 2nd and 3rd patches are my attempts to limit the second type of
> mutex lockers.

Even the 1st patch seems to do that: it limits the impact of spin-loopers,
right?

I'm fine with patch #1 [your numbers are proof enough that it helps, while
the low client count effect seems to be in the noise] - the questions that
seem open to me are:

 - Could the approach in patch #1 be further improved by an additional
   patch that adds queueing to the _spinners_ in some fashion - like
   ticket spin locks try to do in essence? Not queueing the blocked
   waiters (they are already queued), but the active spinners. This would
   have additional benefits, especially with a high CPU count and a high
   NUMA factor, by removing the stampede effect as owners get switched.

 - Why does patch #2 have an effect? (it shouldn't, at first glance) It
   has a non-trivial cost: it increases the size of 'struct mutex' by 8
   bytes, and that structure is embedded in numerous kernel data
   structures. When doing comparisons I'd suggest comparing it not just
   to vanilla, but to a patch that only extends the struct mutex data
   structure (and changes no code) - this allows the isolation of cache
   layout change effects.

 - Patch #3 is rather ugly - and my hope would be that if spinners are
   queued in some fashion it becomes unnecessary.

Thanks,

	Ingo
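For illustration, here is a minimal userspace sketch of the kind of spinner
queueing suggested above - MCS-style, one queue node per active spinner,
each spinning on its own cache line so that an unlock hands the spin slot
to exactly one successor. The names (mcs_node, mcs_spin_lock, ...) and the
C11-atomics form are illustrative assumptions only, not the kernel's actual
mutex code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* One queue node per spinning task; lives on the spinner's stack. */
struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;		/* true while this spinner must wait */
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;
};

static void mcs_spin_lock(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, true);

	/* Join the spinner queue; only the tail word is globally shared. */
	prev = atomic_exchange(&lock->tail, node);
	if (!prev)
		return;			/* queue was empty: we spin first */

	/* Link behind the previous spinner and spin on our own node only. */
	atomic_store(&prev->next, node);
	while (atomic_load(&node->locked))
		;			/* cpu_relax() in kernel terms */
}

static void mcs_spin_unlock(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load(&node->next);

	if (!next) {
		/* No known successor: try to reset the queue to empty. */
		struct mcs_node *expected = node;

		if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
			return;
		/* A successor is enqueueing; wait for its link to appear. */
		while (!(next = atomic_load(&node->next)))
			;
	}
	/* Hand off to exactly one waiting spinner. */
	atomic_store(&next->locked, false);
}

A spinner would declare an mcs_node on its stack, call mcs_spin_lock()
before entering the spin-on-owner loop and mcs_spin_unlock() when it either
takes the mutex or gives up and sleeps. The point of the design is that the
owner-release event becomes a one-to-one handoff instead of a broadcast to
all spinners, which is what removes the stampede on high CPU counts and
high NUMA factors.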