Excerpts from Tejun Heo's message of 2011-03-23 06:46:14 -0400:
> Hello, Chris.
> 
> On Tue, Mar 22, 2011 at 07:13:09PM -0400, Chris Mason wrote:
> > Ok, the impact of this is really interesting.  If we have very short
> > waits where there is no IO at all, this patch tends to lose.  I ran with
> > dbench 10 and got about 20% lower throughput.
> > 
> > But, if we do any IO at all it wins by at least that much or more.  I
> > think we should take this patch and just work on getting rid of the
> > scheduling with the mutex held where possible.
> 
> I see.
> 
> > Tejun, could you please send the mutex_tryspin stuff in?  If we can get
> > a sob for that I can send the whole thing.
> 
> I'm not sure whether mutex_tryspin() is justified at this point, and,
> even if so, how to proceed with it.  Maybe we want to make
> mutex_trylock() perform owner spin by default without introducing a
> new API.

I'll benchmark without it, but I think the cond_resched is going to have
a pretty big impact.  I'm digging up the related benchmarks I used
during the initial adaptive spin work.

> 
> Given that the difference between SIMPLE and SPIN is small, I think it
> would be best to simply use mutex_trylock() for now.  It's not gonna
> make much difference either way.

mutex_trylock is a good start.
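
Roughly, the fast path could look something like this (just a sketch,
assuming eb->lock ends up as a plain struct mutex after the conversion;
the name follows the existing wrapper, the body is illustrative):

	static inline int btrfs_try_tree_lock(struct extent_buffer *eb)
	{
		/*
		 * Trylock only, no owner spinning.  Callers that must
		 * have the lock fall back to mutex_lock() and sleep.
		 */
		return mutex_trylock(&eb->lock);
	}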

> 
> How do you want to proceed?  I can prep patches doing the following.
> 
> - Convert CONFIG_DEBUG_LOCK_ALLOC to CONFIG_LOCKDEP.
> 
> - Drop locking.c and make the lock function simple wrapper around
>   mutex operations.  This makes blocking/unblocking noops.
> 
> - Remove all blocking/unblocking calls along with the API.

I'd like to keep the blocking/unblocking calls for one release.  I'd
like to finally finish off my patches that do concurrent reads.
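
For that one release they can simply become empty shells, e.g. (sketch
only, assuming the tree lock turns into a plain mutex as described above;
the bodies are illustrative):

	/* kept as noops so callers don't have to change yet */
	void btrfs_set_lock_blocking(struct extent_buffer *eb)
	{
	}

	void btrfs_clear_lock_blocking(struct extent_buffer *eb)
	{
	}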

> 
> - Remove locking wrappers and use mutex API directly.

I'd also like to keep the wrappers until the concurrent reader locking
is done.
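
In other words the wrappers would stay as thin forwarding helpers for now,
along the lines of (again just a sketch under the same mutex assumption):

	void btrfs_tree_lock(struct extent_buffer *eb)
	{
		mutex_lock(&eb->lock);
	}

	void btrfs_tree_unlock(struct extent_buffer *eb)
	{
		mutex_unlock(&eb->lock);
	}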

> 
> What do you think?

Thanks for all the work.

-chris