The patch titled
sched: less locking
has been added to the -mm tree. Its filename is
sched-less-locking.patch
Patches currently in -mm which might be from [EMAIL PROTECTED] are
ia64-cpuset-build_sched_domains-mangles-structures.patch
mm-comment-rmap.patch
mm-micro-optimise-rmap.patch
mm-cleanup-rmap.patch
mm-remap-zero_page-mappings.patch
mm-remove-atomic.patch
sched-idlest-cpus_allowed-aware.patch
sched-implement-nice-support-across-physical-cpus-on-smp.patch
sched-change_prio_bias_only_if_queued.patch
sched-account_rt_tasks_in_prio_bias.patch
sched-less-newidle-locking.patch
sched-less-locking.patch
sched-ht-optimisation.patch
sched-consider-migration-thread-with-smp-nice.patch
sched2-sched-domain-sysctl.patch
From: Nick Piggin <[EMAIL PROTECTED]>
During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues; the scan can take a non-trivial amount of time,
especially on very large systems.
Holding the runqueue lock only helps to stabilise ->nr_running, and even
that does not buy much: tasks being woken will simply get held up on the
runqueue lock, so ->nr_running would not give an accurate picture of
runqueue load in that case anyway.
What's more, ->nr_running (and possibly the cpu_load averages) of remote
runqueues won't be stable anyway, so load balancing is always an inexact
operation.
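
The locking pattern described above can be sketched outside the kernel.
This is a minimal, simplified model, not the kernel's actual code: pthread
mutexes stand in for runqueue spinlocks, and the runqueue/move_tasks/
load_balance definitions below are illustrative stand-ins for the real
ones. The point it shows is the shape of the change: the scan phase runs
with no locks held, and both runqueue locks are taken, in a fixed order,
only around the actual task migration.

```c
/* Sketch only -- hypothetical stand-ins for the kernel's runqueue
 * locking; pthread mutexes replace spinlocks. */
#include <pthread.h>

struct runqueue {
	pthread_mutex_t lock;
	int nr_running;
};

/* Lock two runqueues in a fixed (address) order so that two balancers
 * locking the same pair concurrently (A-B vs B-A) cannot deadlock. */
static void double_rq_lock(struct runqueue *a, struct runqueue *b)
{
	if (a < b) {
		pthread_mutex_lock(&a->lock);
		pthread_mutex_lock(&b->lock);
	} else {
		pthread_mutex_lock(&b->lock);
		pthread_mutex_lock(&a->lock);
	}
}

static void double_rq_unlock(struct runqueue *a, struct runqueue *b)
{
	pthread_mutex_unlock(&a->lock);
	pthread_mutex_unlock(&b->lock);
}

/* Pull up to 'imbalance' tasks from busiest; both locks must be held. */
static int move_tasks(struct runqueue *this_rq, struct runqueue *busiest,
		      int imbalance)
{
	int moved = 0;

	while (moved < imbalance && busiest->nr_running > 1) {
		busiest->nr_running--;
		this_rq->nr_running++;
		moved++;
	}
	return moved;
}

static int load_balance(struct runqueue *this_rq, struct runqueue *busiest)
{
	/* Scan phase, lockless: busiest->nr_running is only a snapshot,
	 * which is fine -- balancing is inexact either way. */
	int imbalance = (busiest->nr_running - this_rq->nr_running) / 2;
	int nr_moved = 0;

	if (imbalance > 0) {
		/* Only the migration itself runs under both locks. */
		double_rq_lock(this_rq, busiest);
		nr_moved = move_tasks(this_rq, busiest, imbalance);
		double_rq_unlock(this_rq, busiest);
	}
	return nr_moved;
}
```

A caller balancing an empty runqueue against one with six tasks would see
three tasks pulled across, after which a second call finds no imbalance
and moves nothing.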
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
Acked-by: Ingo Molnar <[EMAIL PROTECTED]>
Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
---
kernel/sched.c | 9 ++-------
1 files changed, 2 insertions(+), 7 deletions(-)
diff -puN kernel/sched.c~sched-less-locking kernel/sched.c
--- devel/kernel/sched.c~sched-less-locking	2005-08-29 23:36:09.000000000 -0700
+++ devel-akpm/kernel/sched.c	2005-08-29 23:36:09.000000000 -0700
@@ -2150,7 +2150,6 @@ static int load_balance(int this_cpu, ru
 	int nr_moved, all_pinned = 0;
 	int active_balance = 0;
 
-	spin_lock(&this_rq->lock);
 	schedstat_inc(sd, lb_cnt[idle]);
 	group = find_busiest_group(sd, this_cpu, &imbalance, idle);
@@ -2177,18 +2176,16 @@ static int load_balance(int this_cpu, ru
 		 * still unbalanced. nr_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.
 		 */
-		double_lock_balance(this_rq, busiest);
+		double_rq_lock(this_rq, busiest);
 		nr_moved = move_tasks(this_rq, this_cpu, busiest,
 					imbalance, sd, idle, &all_pinned);
-		spin_unlock(&busiest->lock);
+		double_rq_unlock(this_rq, busiest);
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(all_pinned))
 			goto out_balanced;
 	}
 
-	spin_unlock(&this_rq->lock);
-
 	if (!nr_moved) {
 		schedstat_inc(sd, lb_failed[idle]);
 		sd->nr_balance_failed++;
@@ -2231,8 +2228,6 @@ static int load_balance(int this_cpu, ru
 	return nr_moved;
 
 out_balanced:
-	spin_unlock(&this_rq->lock);
-
 	schedstat_inc(sd, lb_balanced[idle]);
 	sd->nr_balance_failed = 0;
_
-
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html