* Nick Piggin <[EMAIL PROTECTED]> wrote:

> Well, you can easily see suboptimal scheduling decisions on many
> programs with lots of interprocess communication. For example, tbench
> on a dual Xeon:
>
> processes        1              2              3              4
>
> 2.6.13-rc4:      187, 183, 179  260, 259, 256  340, 320, 349  504, 496, 500
> no wake-bal:     180, 180, 177  254, 254, 253  268, 270, 348  345, 290, 500
>
> Numbers are MB/s, higher is better.
i cannot see any difference with/without wake-balancing in this workload,
on a dual Xeon. Could you try the quick hack below and do:

	echo 1 > /proc/sys/kernel/panic   # turn on wake-balancing
	echo 0 > /proc/sys/kernel/panic   # turn off wake-balancing

Does the runtime switching show any effects on the throughput numbers
tbench is showing? I'm using dbench-3.03. (i only checked the status
numbers, didn't do full runs)

(did you have SCHED_SMT enabled?)

	Ingo

 kernel/sched.c |    2 ++
 1 files changed, 2 insertions(+)

Index: linux-prefetch-task/kernel/sched.c
===================================================================
--- linux-prefetch-task.orig/kernel/sched.c
+++ linux-prefetch-task/kernel/sched.c
@@ -1155,6 +1155,8 @@ static int try_to_wake_up(task_t * p, un
 		goto out_activate;
 
 	new_cpu = cpu;
 
+	if (!panic_timeout)
+		goto out_set_cpu;
 	schedstat_inc(rq, ttwu_cnt);
 	if (cpu == this_cpu) {
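[Editorial note: for context, the hack reuses panic_timeout, the integer already
exposed through /proc/sys/kernel/panic, as a runtime on/off switch, so no new
sysctl plumbing is needed for the experiment. Below is a rough, commented sketch
of the patched region of try_to_wake_up(); it is paraphrased from the 2.6.13-era
kernel/sched.c and the surrounding wake-balancing logic is abbreviated, not an
exact reproduction.]

	/*
	 * Illustrative sketch only, not the literal 2.6.13 source:
	 * try_to_wake_up() with the panic_timeout hack applied.
	 */
	new_cpu = cpu;			/* default: wake up on the task's previous CPU */

	if (!panic_timeout)		/* echo 0 > /proc/sys/kernel/panic */
		goto out_set_cpu;	/* skip the wake-balancing decision entirely */

	schedstat_inc(rq, ttwu_cnt);
	if (cpu == this_cpu) {
		schedstat_inc(rq, ttwu_local);
		goto out_set_cpu;	/* waker and wakee already share a CPU */
	}

	/*
	 * ... the wake-balancing logic proper lives here: based on load and
	 * affinity it may pick this_cpu (the waker's CPU) instead of leaving
	 * the woken task on its previous CPU ...
	 */

out_set_cpu:
	/* whatever ended up in new_cpu becomes the wakeup CPU */

Reusing an existing sysctl-backed variable keeps the hack to two lines and lets
the toggle be flipped in the middle of a tbench run, so any throughput effect of
wake-balancing should show up immediately in the per-second numbers.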