The patch titled

     sched: less newidle locking

has been added to the -mm tree.  Its filename is

     sched-less-newidle-locking.patch

Patches currently in -mm which might be from [EMAIL PROTECTED] are

ia64-cpuset-build_sched_domains-mangles-structures.patch
mm-comment-rmap.patch
mm-micro-optimise-rmap.patch
mm-cleanup-rmap.patch
mm-remap-zero_page-mappings.patch
mm-remove-atomic.patch
sched-idlest-cpus_allowed-aware.patch
sched-implement-nice-support-across-physical-cpus-on-smp.patch
sched-change_prio_bias_only_if_queued.patch
sched-account_rt_tasks_in_prio_bias.patch
sched-less-newidle-locking.patch
sched-less-locking.patch
sched-ht-optimisation.patch
sched-consider-migration-thread-with-smp-nice.patch
sched2-sched-domain-sysctl.patch



From: Nick Piggin <[EMAIL PROTECTED]>

As with the earlier change to load_balance, only lock the runqueue in
load_balance_newidle if the busiest queue found has nr_running > 1.  This
will reduce the frequency of expensive remote runqueue lock acquisitions in
the schedule() path on some workloads.

Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
Acked-by: Ingo Molnar <[EMAIL PROTECTED]>
Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
---

 kernel/sched.c |   17 ++++++++++-------
 1 files changed, 10 insertions(+), 7 deletions(-)

diff -puN kernel/sched.c~sched-less-newidle-locking kernel/sched.c
--- devel/kernel/sched.c~sched-less-newidle-locking	2005-08-29 23:36:05.000000000 -0700
+++ devel-akpm/kernel/sched.c	2005-08-29 23:36:05.000000000 -0700
@@ -2179,8 +2179,7 @@ static int load_balance(int this_cpu, ru
                 */
                double_lock_balance(this_rq, busiest);
                nr_moved = move_tasks(this_rq, this_cpu, busiest,
-                                               imbalance, sd, idle,
-                                               &all_pinned);
+                                       imbalance, sd, idle, &all_pinned);
                spin_unlock(&busiest->lock);
 
                /* All tasks on this runqueue were pinned by CPU affinity */
@@ -2275,18 +2274,22 @@ static int load_balance_newidle(int this
 
        BUG_ON(busiest == this_rq);
 
-       /* Attempt to move tasks */
-       double_lock_balance(this_rq, busiest);
-
        schedstat_add(sd, lb_imbalance[NEWLY_IDLE], imbalance);
-       nr_moved = move_tasks(this_rq, this_cpu, busiest,
+
+       nr_moved = 0;
+       if (busiest->nr_running > 1) {
+               /* Attempt to move tasks */
+               double_lock_balance(this_rq, busiest);
+               nr_moved = move_tasks(this_rq, this_cpu, busiest,
                                        imbalance, sd, NEWLY_IDLE, NULL);
+               spin_unlock(&busiest->lock);
+       }
+
        if (!nr_moved)
                schedstat_inc(sd, lb_failed[NEWLY_IDLE]);
        else
                sd->nr_balance_failed = 0;
 
-       spin_unlock(&busiest->lock);
        return nr_moved;
 
 out_balanced:
_