Re: [PATCH [RT] 10/14] adjust pi_lock usage in wakeup
On Thu, 2008-02-21 at 11:48 -0500, Steven Rostedt wrote:
> On Thu, 21 Feb 2008, Gregory Haskins wrote:
> > From: Peter W. Morreale <[EMAIL PROTECTED]>
> >
> > In wakeup_next_waiter(), we take the pi_lock, and then find out whether
> > we have another waiter to add to the pending owner. We can reduce
> > contention on the pi_lock for the pending owner if we first obtain the
> > pointer to the next waiter outside of the pi_lock.
> >
> > This patch adds a measurable increase in throughput.
>
> I see how this may decrease contention (slightly less time in holding the
> pi_lock). But, please, when stating something like: "adds a measurable
> increase in throughput", show the benchmark numbers.
>
> -- Steve

Approximately 3% on the dbench benchmark we used. My "standard" sanity
check was to mount a ramfs filesystem and execute:

	dbench -t 10 30

five times and generate an average from the "Throughput" numbers
reported by the runs.

dbench was chosen merely because of the contention on dcache_lock.

Best,
-PWM
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
Re: [PATCH [RT] 10/14] adjust pi_lock usage in wakeup
On Thu, 21 Feb 2008, Gregory Haskins wrote:
> From: Peter W. Morreale <[EMAIL PROTECTED]>
>
> In wakeup_next_waiter(), we take the pi_lock, and then find out whether
> we have another waiter to add to the pending owner. We can reduce
> contention on the pi_lock for the pending owner if we first obtain the
> pointer to the next waiter outside of the pi_lock.
>
> This patch adds a measurable increase in throughput.

I see how this may decrease contention (slightly less time in holding the
pi_lock). But, please, when stating something like: "adds a measurable
increase in throughput", show the benchmark numbers.

-- Steve
[PATCH [RT] 10/14] adjust pi_lock usage in wakeup
From: Peter W. Morreale <[EMAIL PROTECTED]>

In wakeup_next_waiter(), we take the pi_lock, and then find out whether
we have another waiter to add to the pending owner. We can reduce
contention on the pi_lock for the pending owner if we first obtain the
pointer to the next waiter outside of the pi_lock.

This patch adds a measurable increase in throughput.

Signed-off-by: Peter W. Morreale <[EMAIL PROTECTED]>
---
 kernel/rtmutex.c |   14 +++++++++-----
 1 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index a7ed7b2..122f143 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -505,6 +505,7 @@ static void wakeup_next_waiter(struct rt_mutex *lock, int savestate)
 {
 	struct rt_mutex_waiter *waiter;
 	struct task_struct *pendowner;
+	struct rt_mutex_waiter *next;

 	spin_lock(&current->pi_lock);
@@ -549,6 +550,12 @@ static void wakeup_next_waiter(struct rt_mutex *lock, int savestate)
 	 * waiter with higher priority than pending-owner->normal_prio
 	 * is blocked on the unboosted (pending) owner.
 	 */
+
+	if (rt_mutex_has_waiters(lock))
+		next = rt_mutex_top_waiter(lock);
+	else
+		next = NULL;
+
 	spin_lock(&pendowner->pi_lock);

 	WARN_ON(!pendowner->pi_blocked_on);
@@ -557,12 +564,9 @@ static void wakeup_next_waiter(struct rt_mutex *lock, int savestate)

 	pendowner->pi_blocked_on = NULL;

-	if (rt_mutex_has_waiters(lock)) {
-		struct rt_mutex_waiter *next;
-
-		next = rt_mutex_top_waiter(lock);
+	if (next)
 		plist_add(&next->pi_list_entry, &pendowner->pi_waiters);
-	}
+
 	spin_unlock(&pendowner->pi_lock);
 }
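The idea in the patch above can be sketched in userspace: hoist the
top-waiter lookup out of the pending owner's pi_lock so the critical
section performs only the insertion. This is a rough illustration with a
pthread mutex and simplified stand-in structures; all the names below
(struct waiter, struct fake_lock, wakeup_next_sketch) are illustrative,
not the real rtmutex API.

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel structures. */
struct waiter {
	int prio;
	struct waiter *next;
};

struct fake_lock {
	pthread_mutex_t pi_lock; /* plays the role of pendowner->pi_lock */
	struct waiter *waiters;  /* plays the role of the lock's waiter list */
};

/*
 * Before the patch, the top-waiter lookup ran while pi_lock was held.
 * After the patch, 'next' is computed first, so pi_lock is held only
 * for the cheap list insertion.
 */
static struct waiter *wakeup_next_sketch(struct fake_lock *l,
					 struct waiter **pi_waiters)
{
	struct waiter *next = l->waiters; /* lookup outside the lock */

	pthread_mutex_lock(&l->pi_lock);
	if (next) { /* critical section now does only the insert */
		next->next = *pi_waiters;
		*pi_waiters = next;
	}
	pthread_mutex_unlock(&l->pi_lock);
	return next;
}
```

The lock hold time shrinks by the cost of the lookup; the lookup itself
stays safe here because it is done by the thread that owns the wakeup
path, matching the patch's reasoning that the top waiter cannot change
underneath it at that point.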