Commit:     9439aab8dbc33c2c03c3a19dba267360383ba38c
Parent:     c41917df8a1adde34864116ce2231a7fe308d2ff
Author:     Suresh Siddha <[EMAIL PROTECTED]>
AuthorDate: Thu Jul 19 21:28:35 2007 +0200
Committer:  Ingo Molnar <[EMAIL PROTECTED]>
CommitDate: Thu Jul 19 21:28:35 2007 +0200

    [PATCH] sched: fix newly idle load balance in case of SMT

    In the presence of SMT, newly idle balance was never happening for
    multi-core and SMP domains (even when both the logical siblings are
    idle).

    If thread 0 is already idle and thread 1 is about to go idle, the
    newly idle load balance always thinks that one of the threads is not
    idle and skips doing the newly idle load balance for the multi-core
    and SMP domains.

    This is because of the idle_cpu() macro, which checks whether the
    current process on a cpu is the idle process. That is not the case
    for the thread doing the load_balance_newidle().

    Fix this by using the runqueue's nr_running field instead of
    idle_cpu(). Also skip the 'only one idle cpu in the group will do
    the load balancing' logic in the newly idle case.
    Signed-off-by: Suresh Siddha <[EMAIL PROTECTED]>
    Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
 kernel/sched.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 645256b..e36d99d 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2235,7 +2235,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
                        rq = cpu_rq(i);
-                       if (*sd_idle && !idle_cpu(i))
+                       if (*sd_idle && rq->nr_running)
                                *sd_idle = 0;
                        /* Bias balancing toward cpus of our domain */
@@ -2257,9 +2257,11 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
                 * First idle cpu or the first cpu(busiest) in this sched group
                 * is eligible for doing load balancing at this and above
-                * domains.
+                * domains. In the newly idle case, we will allow all the cpu's
+                * to do the newly idle load balance.
                 */
-               if (local_group && balance_cpu != this_cpu && balance) {
+               if (idle != CPU_NEWLY_IDLE && local_group &&
+                   balance_cpu != this_cpu && balance) {
                        *balance = 0;
                        goto ret;