On Thursday, November 05, 2015 04:26:28 PM Konstantin Belousov wrote:
> On Thu, Nov 05, 2015 at 12:32:18AM +0100, Mateusz Guzik wrote:
> > mtx_lock will unconditionally try to grab the lock and if that fails,
> > will call __mtx_lock_sleep which will immediately try to do the same
> > atomic op again.
> > So, the obvious microoptimization is to check the state in
> > __mtx_lock_sleep and avoid the operation if the lock is not free.
> > This gives me ~40% speedup in a microbenchmark of 40 find processes
> > traversing tmpfs and contending on mount mtx (only used as an easy
> > benchmark, I have WIP patches to get rid of it).
> > Second part of the patch is optional and just checks the state of the
> > lock prior to doing any atomic operations, but it gives a very modest
> > speed up when applied on top of the __mtx_lock_sleep change. As such,
> > I'm not going to defend this part.
> Shouldn't the same consideration be applied to all spinning loops, i.e.
> also to the spin/thread mutexes, and to the spinning parts of sx and
> lockmgr ?
I agree. I think both changes are good and worth doing in our other
locking primitives as well.