The patch titled

     sched: allow the load to grow upto its cpu_power

has been added to the -mm tree.  Its filename is

     sched-allow-the-load-to-grow-upto-its-cpu_power.patch

Patches currently in -mm which might be from [EMAIL PROTECTED] are

sched-dont-kick-alb-in-the-presence-of-pinned-task.patch
sched-allow-the-load-to-grow-upto-its-cpu_power.patch
From: "Siddha, Suresh B" <[EMAIL PROTECTED]>
Don't pull tasks from a group if that would cause the group's total load to
drop below its total cpu_power (i.e., cause the group to start going idle).
Signed-off-by: Suresh Siddha <[EMAIL PROTECTED]>
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
Cc: Ingo Molnar <[EMAIL PROTECTED]>
Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
---
kernel/sched.c | 9 +++++++--
1 files changed, 7 insertions(+), 2 deletions(-)
diff -puN kernel/sched.c~sched-allow-the-load-to-grow-upto-its-cpu_power kernel/sched.c
--- 25/kernel/sched.c~sched-allow-the-load-to-grow-upto-its-cpu_power	Wed Aug 17 15:38:19 2005
+++ 25-akpm/kernel/sched.c	Wed Aug 17 15:38:19 2005
@@ -2007,6 +2007,7 @@ find_busiest_group(struct sched_domain *
 {
 	struct sched_group *busiest = NULL, *this = NULL, *group = sd->groups;
 	unsigned long max_load, avg_load, total_load, this_load, total_pwr;
+	unsigned long max_pull;
 	int load_idx;
 
 	max_load = this_load = total_load = total_pwr = 0;
@@ -2053,7 +2054,7 @@ find_busiest_group(struct sched_domain *
 		group = group->next;
 	} while (group != sd->groups);
 
-	if (!busiest || this_load >= max_load)
+	if (!busiest || this_load >= max_load || max_load <= SCHED_LOAD_SCALE)
 		goto out_balanced;
 
 	avg_load = (SCHED_LOAD_SCALE * total_load) / total_pwr;
@@ -2073,8 +2074,12 @@ find_busiest_group(struct sched_domain *
 	 * by pulling tasks to us. Be careful of negative numbers as they'll
 	 * appear as very large values with unsigned longs.
 	 */
+
+	/* Don't want to pull so many tasks that a group would go idle */
+	max_pull = min(max_load - avg_load, max_load - SCHED_LOAD_SCALE);
+
 	/* How much load to actually move to equalise the imbalance */
-	*imbalance = min((max_load - avg_load) * busiest->cpu_power,
+	*imbalance = min(max_pull * busiest->cpu_power,
 			(avg_load - this_load) * this->cpu_power)
 			/ SCHED_LOAD_SCALE;
_