[RFC PATCH 01/13] sched:Prevent movement of short running tasks during load balancing

2012-10-25 Thread Preeti U Murthy
call should be taken if the tasks can afford to be throttled. This is why an additional metric has been included, which can determine how long we can tolerate tasks not being moved even if the load is low. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- kernel/sched/fair.c | 16
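
A minimal sketch of the idea, assuming a hypothetical helper and an assumed 1/16 threshold (this is only the shape of the check, not the posted patch):

    /* Hypothetical helper: a task is "short running" when its tracked
     * runnable time is a small fraction of its tracked period. */
    static int task_is_short_running(struct task_struct *p)
    {
    	struct sched_avg *sa = &p->se.avg;

    	return sa->runnable_avg_sum * 16 < sa->runnable_avg_period;
    }

    /* Extra check in the migration path: leave short running tasks alone
     * unless load balancing on this domain has already failed repeatedly. */
    static int short_task_can_migrate(struct task_struct *p, struct lb_env *env)
    {
    	if (task_is_short_running(p) && !env->sd->nr_balance_failed)
    		return 0;
    	return 1;
    }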

Re: [RFC PATCH 00/13] sched: Integrating Per-entity-load-tracking with the core scheduler

2012-10-25 Thread Preeti U Murthy
a more sensible movement of loads* This is how I build the picture. Regards Preeti

Re: [RFC PATCH 00/13] sched: Integrating Per-entity-load-tracking with the core scheduler

2012-10-25 Thread Preeti U Murthy
with the aid of the new metric. *End Result: Hopefully a more sensible movement of loads* This is how I build the picture. Regards Preeti U Murthy

Re: [RFC PATCH 00/13] sched: Integrating Per-entity-load-tracking with the core scheduler

2012-10-26 Thread Preeti U Murthy
On 10/26/2012 05:59 PM, Peter Zijlstra wrote: On Thu, 2012-10-25 at 23:42 +0530, Preeti U Murthy wrote: firstly, cfs_rq is the wrong place for a per-cpu load measure, secondly why add another load field instead of fixing the one we have? Hmm.., rq->load.weight is the place. So why didn't I

Re: [RFC PATCH 00/13] sched: Integrating Per-entity-load-tracking with the core scheduler

2012-10-26 Thread Preeti U Murthy
you Regards Preeti

Re: [RFC PATCH v3 5/6] sched: pack the idle load balance

2013-04-21 Thread Preeti U Murthy
this and that we must cash in on it. Thanks Regards Preeti U Murthy Vincent On 26 March 2013 15:42, Peter Zijlstra pet...@infradead.org wrote: On Tue, 2013-03-26 at 15:03 +0100, Vincent Guittot wrote: But ha! here's your NO_HZ link.. but does the above DTRT and ensure that the ILB

Re: [PATCH 4/5] sched: don't consider upper se in sched_slice()

2013-04-04 Thread Preeti U Murthy
Hi Joonsoo, On 04/04/2013 06:12 AM, Joonsoo Kim wrote: Hello, Preeti. So, how about extending a sched_period with rq->nr_running, instead of cfs_rq->nr_running? It is my quick thought and I think that we can ensure to run at least once in this extended sched_period. Yeah this seems

Re: [patch v7 20/21] sched: don't do power balance on share cpu power domain

2013-04-07 Thread Preeti U Murthy
-flags & SD_SHARE_CPUPOWER + || env->idle == CPU_NOT_IDLE) { env->flags &= ~LBF_POWER_BAL; env->flags |= LBF_PERF_BAL; return; Regards Preeti U Murthy

Re: [patch v7 20/21] sched: don't do power balance on share cpu power domain

2013-04-07 Thread Preeti U Murthy
version5 of this patchset, don't you think the below patch can be avoided? group->capacity being the threshold will automatically ensure that you don't pack onto domains that share cpu power. Regards Preeti U Murthy On 04/08/2013 08:47 AM, Preeti U Murthy wrote: Hi Alex, On 04/04/2013 07:31 AM, Alex

Re: [PATCH 4/5] sched: don't consider upper se in sched_slice()

2013-03-29 Thread Preeti U Murthy
cpu share must add up to the parent's share. Thank you Regards Preeti U Murthy

Re: [PATCH 5/5] sched: limit sched_slice if it is more than sysctl_sched_latency

2013-03-29 Thread Preeti U Murthy
, while your scheduling latency period was extended to 40ms, just so that each of these tasks don't have their sched_slices shrunk due to the large number of tasks. + return slice; } Regards Preeti U Murthy
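
For reference, the clamp being proposed looks roughly like this (a reconstructed sketch, not the exact posted hunk; the weight-proportional scaling step is elided):

    static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
    {
    	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);

    	/* ... weight-proportional scaling of the slice happens here ... */

    	/* Proposed limit: never hand out a slice longer than one full
    	 * latency period, however much the period has been stretched. */
    	if (slice > sysctl_sched_latency)
    		slice = sysctl_sched_latency;

    	return slice;
    }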

Re: [patch v5 14/15] sched: power aware load balance

2013-03-29 Thread Preeti U Murthy
Hi Alex, On 03/25/2013 10:22 AM, Alex Shi wrote: On 03/22/2013 01:14 PM, Preeti U Murthy wrote: the value got from decay_load(): sa->runnable_avg_sum = decay_load(sa->runnable_avg_sum, in decay_load it is possible to be set zero. Yes you are right, it is possible to be set to 0, but after

Re: [patch v5 14/15] sched: power aware load balance

2013-03-30 Thread Preeti U Murthy
On 03/29/2013 07:09 PM, Alex Shi wrote: On 03/29/2013 08:42 PM, Preeti U Murthy wrote: did you try the simplest benchmark: while true; do :; done Yeah I tried out this while true; do :; done benchmark on a vm which ran Thanks a lot for trying! What do you mean by 'vm'? Virtual machine

Re: [patch v5 14/15] sched: power aware load balance

2013-03-30 Thread Preeti U Murthy
Hi, On 03/30/2013 07:34 PM, Alex Shi wrote: On 03/30/2013 07:25 PM, Preeti U Murthy wrote: I still give the rq->util weight even when nr_running is 0, because some transitory tasks may be active on the cpu, but just missed at the balancing point. I am just wondering whether forgetting rq->util when

Re: [PATCH 5/5] sched: limit sched_slice if it is more than sysctl_sched_latency

2013-04-01 Thread Preeti U Murthy
Hi Joonsoo, On 04/01/2013 10:39 AM, Joonsoo Kim wrote: Hello Preeti. So we should limit this possible weird situation. Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e232421..6ceffbc 100644 --- a/kernel/sched/fair.c +++ b

Re: [PATCH 4/5] sched: don't consider upper se in sched_slice()

2013-04-01 Thread Preeti U Murthy
Hi Joonsoo, On 04/01/2013 09:38 AM, Joonsoo Kim wrote: Hello, Preeti. Ideally the children's cpu share must add up to the parent's share. I don't think so. We should schedule out the parent tg if 5ms is over. As we do so, we can fairly distribute time slice to every tg within short

Re: [patch v6 12/21] sched: add power aware scheduling in fork/exec/wake

2013-04-01 Thread Preeti U Murthy
scheduling in fork/wake/exec From: Preeti U Murthy pre...@linux.vnet.ibm.com Problem: select_task_rq_fair() returns a target CPU/waking CPU if no balancing is required. However with the current power aware scheduling in this path, an invalid CPU might be returned. If get_cpu_for_power_policy
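
The fallback being described would look something like this (shape and argument list assumed; get_cpu_for_power_policy() is the helper from the series under review):

    /* Whatever CPU the power policy suggests, validate it before using it,
     * and keep the normal waking/target CPU when it is not usable. */
    policy_cpu = get_cpu_for_power_policy(sd, cpu, p, &sds);

    if (policy_cpu >= 0 && cpumask_test_cpu(policy_cpu, tsk_cpus_allowed(p)))
    	new_cpu = policy_cpu;
    /* else: new_cpu stays as the target CPU chosen earlier */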

Re: [PATCH 4/5] sched: don't consider upper se in sched_slice()

2013-04-01 Thread Preeti U Murthy
Hi Joonsoo, On 04/02/2013 07:55 AM, Joonsoo Kim wrote: Hello, Preeti. On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote: Hi Joonsoo, On 04/01/2013 09:38 AM, Joonsoo Kim wrote: Hello, Preeti. Ideally the children's cpu share must add up to the parent's share. I don't

Re: [PATCH 4/5] sched: don't consider upper se in sched_slice()

2013-04-02 Thread Preeti U Murthy
a patch description. Ok, take the example of a runqueue with 2 task groups, each with 10 tasks. Same as your previous example. Can you explain how your patch ensures that all 20 tasks get to run at least once in a sched_period? Regards Preeti U Murthy

Re: [patch v5 14/15] sched: power aware load balance

2013-03-19 Thread Preeti U Murthy
as well. I think we would be better off without accounting the rq->utils of the cpus which do not have any processes running on them for sgs->utils. What do you think? Regards Preeti U Murthy
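
The accounting change being suggested, sketched (assumed shape; rq->util and sgs->group_util come from the power aware series under review):

    /* Only fold a CPU's utilisation into the group sum when something is
     * actually runnable there, so the slowly decaying utilisation of CPUs
     * that have gone idle does not block consolidation onto fewer cores. */
    for_each_cpu(i, sched_group_cpus(group)) {
    	struct rq *rq = cpu_rq(i);

    	if (rq->nr_running)
    		sgs->group_util += rq->util;
    }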

Re: [patch v5 14/15] sched: power aware load balance

2013-03-21 Thread Preeti U Murthy
Hi Alex, On 03/21/2013 01:13 PM, Alex Shi wrote: On 03/20/2013 12:57 PM, Preeti U Murthy wrote: Neither core will be able to pull the task from the other to consolidate the load, because the rq->util of t2 and t4, on which no process is running, continues to show some number even though

Re: [patch v5 14/15] sched: power aware load balance

2013-03-21 Thread Preeti U Murthy
On 03/21/2013 02:57 PM, Alex Shi wrote: On 03/21/2013 04:41 PM, Preeti U Murthy wrote: Yes, I did find this behaviour on a 2 socket, 8 core machine very consistently. rq->util cannot go to 0 after it has begun accumulating load, right? Say a load was running on a runqueue which had its rq

Re: [patch v5 14/15] sched: power aware load balance

2013-03-21 Thread Preeti U Murthy
Hi, On 03/22/2013 07:00 AM, Alex Shi wrote: On 03/21/2013 06:27 PM, Preeti U Murthy wrote: did you close all background system services? In theory the rq->avg.runnable_avg_sum should be zero if there has been no task for a while, otherwise there are some bugs in the kernel. Could you explain why rq

Re: [RFC PATCH v3 0/6] sched: packing small tasks

2013-03-23 Thread Preeti U Murthy
into one, since both of them have the common goal of packing small tasks. Thanks Regards Preeti U Murthy On 03/22/2013 05:55 PM, Vincent Guittot wrote: Hi, This patchset takes advantage of the new per-task load tracking that is available in the kernel for packing the small tasks in as few

Re: [RFC PATCH v3 3/6] sched: pack small tasks

2013-03-26 Thread Preeti U Murthy
by Alex. Thanks Regards Preeti U Murthy

Re: [RFC PATCH v3 3/6] sched: pack small tasks

2013-03-27 Thread Preeti U Murthy
as the equation goes. Regards Preeti U Murthy

Re: [RFC v2 PATCH 2.1] sched: Use Per-Entity-Load-Tracking metric for load balancing

2012-12-03 Thread Preeti U Murthy
sched: Use Per-Entity-Load-Tracking metric for load balancing From: Preeti U Murthy pre...@linux.vnet.ibm.com Currently the load balancer weighs a task based upon its priority, and this weight consequently gets added up to the weight of the run queue that it is on. It is this weight
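
The direction of the change, sketched as a diff (this shows the idea, not the exact posted hunk):

     static unsigned long weighted_cpuload(const int cpu)
     {
    -	return cpu_rq(cpu)->load.weight;
    +	/* runnable_load_avg is u64; see the later discussion on explicit
    +	 * 64-bit handling before relying on an implicit narrowing here. */
    +	return (unsigned long)cpu_rq(cpu)->cfs.runnable_load_avg;
     }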

Re: [RFC v2 PATCH 0/2] sched: Integrating Per-entity-load-tracking with the core scheduler

2012-12-04 Thread Preeti U Murthy
)); } int main() { start_threads(); return 0; } END WORKLOAD Regards Preeti U Murthy

Re: [patch v5 02/15] sched: set initial load avg of new forked task

2013-02-27 Thread Preeti U Murthy
()' during the second iteration. The se is changed. That is a different se. Correct Alex, sorry I overlooked this. Regards Preeti U Murthy

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-20 Thread Preeti U Murthy
. :( Oops, the performance is still worse than just counting runnable_load_avg. But the drop is not so big; it dropped 30%, not 70%. Thank you Regards Preeti U Murthy

Re: [PATCH] tty: Only wakeup the line discipline idle queue when queue is active

2013-01-20 Thread Preeti U Murthy
On 01/18/2013 09:15 PM, Oleg Nesterov wrote: On 01/17, Preeti U Murthy wrote: On 01/16/2013 05:32 PM, Ivo Sieben wrote: I don't have a problem that there is a context switch to the high priority process: it has a higher priority, so it probably is more important. My problem is that even

Re: [PATCH] tty: Only wakeup the line discipline idle queue when queue is active

2013-01-16 Thread Preeti U Murthy
. On 01/03/2013 03:19 PM, Ivo Sieben wrote: Oleg, Peter, Ingo, Andi, Preeti, 2013/1/2 Jiri Slaby jsl...@suse.cz: On 01/02/2013 04:21 PM, Ivo Sieben wrote: I don't understand your responses: do you suggest to implement this 'if active' behavior in: * A new wake_up function called

Re: [PATCH] tty: Only wakeup the line discipline idle queue when queue is active

2013-01-16 Thread Preeti U Murthy
Hi Ivo, On 01/16/2013 02:46 PM, Ivo Sieben wrote: Hi Preeti, 2013/1/16 Preeti U Murthy pre...@linux.vnet.ibm.com: Hi Ivo, Can you explain how this problem could create a scheduler overhead? I am a little confused, because as far as I know, the scheduler does not come into the picture of the wake

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-17 Thread Preeti U Murthy
Hi Alex, On 01/16/2013 07:38 PM, Alex Shi wrote: On 01/08/2013 04:41 PM, Preeti U Murthy wrote: Hi Mike, Thank you very much for such a clear and comprehensive explanation. So when I put together the problem and the proposed solution pieces in the current scheduler scalability

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-17 Thread Preeti U Murthy
that the code prefers running a task on an idle cpu which is a sibling thread in the same core rather than running it on an idle cpu in another idle core. I guess we didn't do that before. It should be of some help on burst wake up benchmarks like aim7. Original-patch-by: Preeti U Murthy pre

Re: [PATCH] tty: Only wakeup the line discipline idle queue when queue is active

2013-01-17 Thread Preeti U Murthy
On 01/16/2013 05:32 PM, Ivo Sieben wrote: 2013/1/16 Preeti U Murthy pre...@linux.vnet.ibm.com: Yes. Thank you very much for the explanation :) But I don't see how the context switching goes away with your patch. With your patch, when the higher priority thread comes in when the lower priority

Re: [patch v4 08/18] Revert sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking

2013-02-13 Thread Preeti U Murthy
for the task in the blocked_load, hence this move would not increase its load. Would you recommend going in this direction? Thank you Regards Preeti U Murthy

Re: [patch v4 05/18] sched: quicker balancing on fork/exec/wake

2013-02-14 Thread Preeti U Murthy
-min_util = sgs->group_util; sds->min_load_per_task = sgs->sum_weighted_load; Regards Preeti U Murthy

Re: [patch v4 07/18] sched: set initial load avg of new forked task

2013-02-19 Thread Preeti U Murthy
too heavier -*/ - if (flags & ENQUEUE_NEWTASK) - se->avg.load_avg_contrib = se->load.weight; cfs_rq->runnable_load_avg += se->avg.load_avg_contrib; /* we force update consideration on load-balancer moves */ Thanks Regards Preeti U Murthy

Re: [patch v5 06/15] sched: log the cpu utilization at rq

2013-02-20 Thread Preeti U Murthy
too? What I mean is, if the answer to the above question is yes, then can we safely assume that the further optimizations to the load balancer, like the power aware scheduler and the usage of per entity load tracking, can be done without considering the real time tasks? Regards Preeti U Murthy

Re: [patch v5 06/15] sched: log the cpu utilization at rq

2013-02-20 Thread Preeti U Murthy
to use here. Refer to this discussion: https://lkml.org/lkml/2012/10/29/448 Regards Preeti U Murthy

Re: [patch v5 09/15] sched: add power aware scheduling in fork/exec/wake

2013-02-24 Thread Preeti U Murthy
is the chance for the load to get incremented in steps? For sleeping tasks, since runnable_avg_sum progresses much more slowly than runnable_avg_period, these tasks take much time to accumulate the load when they wake up. This makes sense of course. But how does this happen for forked tasks? Regards Preeti U

Re: [patch v5 02/15] sched: set initial load avg of new forked task

2013-02-24 Thread Preeti U Murthy
= rq_of(cfs_rq)->clock_task; Regards Preeti U Murthy

Re: [patch v5 09/15] sched: add power aware scheduling in fork/exec/wake

2013-02-24 Thread Preeti U Murthy
on it only if burst wakeups are detected. By doing so you ensure that nr_running as a metric for load balancing is used when it is right to do so and the reason to use it also gets well documented. Regards Preeti U Murthy

Re: weakness of runnable load tracking?

2012-12-05 Thread Preeti U Murthy
in update_sd_lb_stats, but select_task_rq_fair is yet another place to do this, that's right. Good that this issue was brought up :) Regards! Alex Regards Preeti U Murthy

Re: [PATCH 01/18] sched: select_task_rq_fair clean up

2012-12-10 Thread Preeti U Murthy
= new_cpu; Regards Preeti U Murthy

Re: [PATCH 02/18] sched: fix find_idlest_group mess logical

2012-12-10 Thread Preeti U Murthy
but this_group(sd hierarchy moves towards the cpu it belongs to). Again here the idlest group search begins. + return idlest; } Regards Preeti U Murthy

Re: [PATCH 02/18] sched: fix find_idlest_group mess logical

2012-12-10 Thread Preeti U Murthy
Hi Alex, On 12/11/2012 10:59 AM, Alex Shi wrote: On 12/11/2012 01:08 PM, Preeti U Murthy wrote: Hi Alex, On 12/10/2012 01:52 PM, Alex Shi wrote: There are 4 situations in the function: 1, no task allowed group; so min_load = ULONG_MAX, this_load = 0, idlest = NULL 2, only local group

Re: [PATCH 01/18] sched: select_task_rq_fair clean up

2012-12-10 Thread Preeti U Murthy
On 12/11/2012 10:58 AM, Alex Shi wrote: On 12/11/2012 12:23 PM, Preeti U Murthy wrote: Hi Alex, On 12/10/2012 01:52 PM, Alex Shi wrote: It is impossible to miss a task allowed cpu in an eligible group. The one thing I am concerned with here is if there is a possibility of the task changing

[PATCH] sched: Explicit division calls on 64-bit integers

2012-11-19 Thread Preeti U Murthy
. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- kernel/sched/fair.c | 51 +++ 1 file changed, 31 insertions(+), 20 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index f8f3a29..7cd3096 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched
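
The kind of change such a patch makes, sketched (an assumed example, not one of the actual hunks): on 32-bit builds a plain '/' on a u64 operand can pull in libgcc division helpers, so the explicit helpers from linux/math64.h are used instead.

    #include <linux/math64.h>

    static inline unsigned long avg_load_per_task(u64 runnable_load_avg,
    					      unsigned int nr_running)
    {
    	if (!nr_running)
    		return 0;

    	/* div_u64() takes a u64 dividend and a 32-bit divisor. */
    	return div_u64(runnable_load_avg, nr_running);
    }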

Re: [RFC PATCH 0/5] enable runnable load avg in load balance

2012-11-26 Thread Preeti U Murthy
. Even though in the former case CPU2 is relieved of one task, it's of no use if Task3 is going to sleep most of the time. This might result in more load balancing on behalf of cpu3. What do you guys think? Thank you Regards Preeti U Murthy

Re: [RFC PATCH 0/5] enable runnable load avg in load balance

2012-11-26 Thread Preeti U Murthy
Hi, On 11/27/2012 11:44 AM, Alex Shi wrote: On 11/27/2012 11:08 AM, Preeti U Murthy wrote: Hi everyone, On 11/27/2012 12:33 AM, Benjamin Segall wrote: So, I've been trying out using the runnable averages for load balance in a few ways, but haven't actually gotten any improvement

sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-01 Thread Preeti U Murthy
the right steps here on, in achieving the correct integration. Thank you Regards Preeti U Murthy

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-03 Thread Preeti U Murthy
Hi Mike, Thank you very much for your feedback. Considering your suggestions, I have posted out a proposed solution to prevent select_idle_sibling() from becoming a disadvantage to normal load balancing, rather aiding it. **This patch is *without* the enablement of the per entity load tracking

Re: [PATCH 07/18] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task

2012-12-11 Thread Preeti U Murthy
) - return rq->load.weight / nr_running; + return rq->cfs.runnable_load_avg / nr_running; rq->cfs.runnable_load_avg is u64 type. You will need to typecast it here also, right? How does this division work, since the return type is unsigned long? return 0; } Regards Preeti

Re: [PATCH 08/18] sched: consider runnable load average in move_tasks

2012-12-11 Thread Preeti U Murthy
(task_group(p), env->src_cpu, env->dst_cpu)) goto next; - load = task_h_load(p); + load = task_h_load_avg(p); if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed) goto next; Regards Preeti U

Re: [PATCH 01/18] sched: select_task_rq_fair clean up

2012-12-11 Thread Preeti U Murthy
On 12/11/2012 05:23 PM, Alex Shi wrote: On 12/11/2012 02:30 PM, Preeti U Murthy wrote: On 12/11/2012 10:58 AM, Alex Shi wrote: On 12/11/2012 12:23 PM, Preeti U Murthy wrote: Hi Alex, On 12/10/2012 01:52 PM, Alex Shi wrote: It is impossible to miss a task allowed cpu in an eligible group

[RFC v2 PATCH 0/2] sched: Integrating Per-entity-load-tracking with the core scheduler

2012-11-15 Thread Preeti U Murthy
and Ingo Molnar for their valuable feedback on v1 of the RFC which was the foundation for this version. PATCH[1/2] Aims at enabling usage of Per-Entity-Load-Tracking for load balancing PATCH[2/2] The crux of the patchset lies here. --- Preeti U Murthy (2): sched: Revert temporary

[RFC v2 PATCH 1/2] sched: Revert temporary FAIR_GROUP_SCHED dependency for load-tracking

2012-11-15 Thread Preeti U Murthy
Now that we need the per-entity load tracking for load balancing, trivially revert the patch which introduced the FAIR_GROUP_SCHED dependence for load tracking. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- include/linux/sched.h |7 +-- kernel/sched/core.c |7

[RFC v2 PATCH 2/2] sched: Use Per-Entity-Load-Tracking metric for load balancing

2012-11-15 Thread Preeti U Murthy
patch does not consider CONFIG_FAIR_GROUP_SCHED AND CONFIG_SCHED_NUMA. This is done so as to evaluate this approach starting from the simplest scenario. Earlier discussions can be found in the link below. Link: https://lkml.org/lkml/2012/10/25/162 Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com

Re: [RFC v2 PATCH 2/2] sched: Use Per-Entity-Load-Tracking metric for load balancing

2012-11-16 Thread Preeti U Murthy
Hi Vincent, Thank you for your review. On 11/15/2012 11:43 PM, Vincent Guittot wrote: Hi Preeti, On 15 November 2012 17:54, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: Currently the load balancer weighs a task based upon its priority, and this weight consequently gets added up

Re: [RFC PATCH 4/5] sched: consider runnable load average in wake_affine and move_tasks

2012-11-17 Thread Preeti U Murthy
. if (sched_feat(LB_MIN) && load < 16 && !env->failed) goto next; Regards Preeti U Murthy

Re: [RFC PATCH 0/5] enable runnable load avg in load balance

2012-11-17 Thread Preeti U Murthy
] sched: using runnable load avg in cpu_load and [RFC PATCH 4/5] sched: consider runnable load average in wake_affine [RFC PATCH 5/5] sched: revert 'Introduce temporary FAIR_GROUP_SCHED Regards Preeti U Murthy

Re: [REPOST-v2] sched: Prevent wakeup to enter critical section needlessly

2012-11-19 Thread Preeti U Murthy
); + } } EXPORT_SYMBOL(__wake_up); Looks good to me. Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com Regards Preeti U Murthy
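
The idea under review, sketched (close in spirit to the posted patch, but reconstructed rather than quoted): skip taking the waitqueue lock entirely when nobody is waiting.

    void __wake_up(wait_queue_head_t *q, unsigned int mode,
    	       int nr_exclusive, void *key)
    {
    	unsigned long flags;

    	/* Nothing queued: no need to serialize on the waitqueue lock. */
    	if (!waitqueue_active(q))
    		return;

    	spin_lock_irqsave(&q->lock, flags);
    	__wake_up_common(q, mode, nr_exclusive, 0, key);
    	spin_unlock_irqrestore(&q->lock, flags);
    }
    EXPORT_SYMBOL(__wake_up);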

Re: [PATCH v3 05/22] sched: remove domain iterations in fork/exec/wake

2013-01-10 Thread Preeti U Murthy
sched domain in detail. Therefore even I feel that this patch should be implemented after thorough tests. Morten Regards Preeti U Murthy

Re: [PATCH v3 02/22] sched: select_task_rq_fair clean up

2013-01-10 Thread Preeti U Murthy
On 01/05/2013 02:07 PM, Alex Shi wrote: It is impossible to miss a task allowed cpu in an eligible group. And since find_idlest_group only returns a different group which excludes the old cpu, it's also impossible to find a new cpu that is the same as the old cpu. Signed-off-by: Alex Shi alex@intel.com

Re: [PATCH v3 03/22] sched: fix find_idlest_group mess logical

2013-01-10 Thread Preeti U Murthy
On 01/05/2013 02:07 PM, Alex Shi wrote: There are 4 situations in the function: 1, no task allowed group; so min_load = ULONG_MAX, this_load = 0, idlest = NULL 2, only local group task allowed; so min_load = ULONG_MAX, this_load assigned, idlest = NULL 3, only non-local task group

Re: [PATCH v3 04/22] sched: don't need go to smaller sched domain

2013-01-10 Thread Preeti U Murthy
On 01/05/2013 02:07 PM, Alex Shi wrote: If the parent sched domain has no task allowed cpu to find, none will be found in its child either. So, go out to save useless checking. Signed-off-by: Alex Shi alex@intel.com --- kernel/sched/fair.c | 6 ++ 1 file changed, 2 insertions(+), 4 deletions(-)

Re: [PATCH v3 07/22] sched: set initial load avg of new forked task

2013-01-10 Thread Preeti U Murthy
On 01/05/2013 02:07 PM, Alex Shi wrote: A new task has no runnable sum at its first runnable time, which makes burst forking select just a few idle cpus to put the tasks on. Set the initial load avg of a newly forked task to its load weight to resolve this issue. Signed-off-by: Alex Shi alex@intel.com ---
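
A sketch of the fix under review (reconstructed; the field names are from the per-entity load tracking code of that time, the exact placement is assumed):

    /* At fork time the entity has no runnable history, so seed its load
     * contribution with its full weight; otherwise a burst of forks all
     * look weightless and pile onto a few idle CPUs. */
    static void init_new_task_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
    {
    	se->avg.load_avg_contrib = se->load.weight;
    	se->avg.last_runnable_update = rq_of(cfs_rq)->clock_task;
    }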

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-06 Thread Preeti U Murthy
will also try to run tbench and a few other benchmarks to find out why the results are like below. Will update you very soon on this. Thank you Regards Preeti U Murthy On 01/06/2013 10:02 PM, Mike Galbraith wrote: On Sat, 2013-01-05 at 09:13 +0100, Mike Galbraith wrote: I still have a 2.6-rt

Re: [PATCH v3 09/22] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task

2013-01-06 Thread Preeti U Murthy
. This means we will need to use PJT's metrics but with an additional constraint. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- kernel/sched/fair.c | 25 ++--- 1 file changed, 22 insertions(+), 3 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-07 Thread Preeti U Murthy
On 01/07/2013 09:18 PM, Vincent Guittot wrote: On 2 January 2013 05:22, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: Hi everyone, I have been looking at how different workloads react when the per entity load tracking metric is integrated into the load balancer and what are the possible

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-08 Thread Preeti U Murthy
to tackle STEP3. STEP 3 will not prevent bouncing but a good STEP2 could tell us if it is worth the bounce. STEP3 Patch is given below: ***START PATCH** sched: Reduce the overhead of select_idle_sibling From: Preeti U Murthy pre

Re: sched: Consequences of integrating the Per Entity Load Tracking Metric into the Load Balancer

2013-01-08 Thread Preeti U Murthy
Here comes the point of making both load balancing and wake up balance (select_idle_sibling) cooperative. How about we always schedule the woken up task on the prev_cpu? This seems more sensible considering that load balancing considers blocked load as being a part of the load of cpu2. Hi Preeti

Re: [PATCH] sched: Get rid of unnecessary checks from select_idle_sibling

2013-01-09 Thread Preeti U Murthy
On 01/09/2013 12:20 PM, Namhyung Kim wrote: From: Namhyung Kim namhyung@lge.com AFAICS @target cpu of select_idle_sibling() is always either prev_cpu or this_cpu. So no need to check it again and the conditionals can be consolidated. Cc: Mike Galbraith efa...@gmx.de Cc: Preeti U

Re: [PATCH] sched: Get rid of unnecessary checks from select_idle_sibling

2013-01-10 Thread Preeti U Murthy
On 01/10/2013 11:19 AM, Namhyung Kim wrote: Hi Preeti, On Wed, 09 Jan 2013 13:51:00 +0530, Preeti U. Murthy wrote: On 01/09/2013 12:20 PM, Namhyung Kim wrote: From: Namhyung Kim namhyung@lge.com AFAICS @target cpu of select_idle_sibling() is always either prev_cpu or this_cpu. So

Re: [PATCH v2 2/3] sched: factor out code to should_we_balance()

2013-08-04 Thread Preeti U Murthy
On 08/02/2013 04:02 PM, Peter Zijlstra wrote: On Fri, Aug 02, 2013 at 02:56:14PM +0530, Preeti U Murthy wrote: You need to iterate over all the groups of the sched domain env->sd and not just the first group of env->sd like you are doing above. This is to I don't think so. IIRC, env->sd->groups

Re: [PATCH v2 3/3] sched: clean-up struct sd_lb_stat

2013-08-06 Thread Preeti U Murthy
to understand this. Anyway this is a minor issue, you can ignore it. Regards Preeti U Murthy

[RFC PATCH 5/5] cpuidle/ppc: Add longnap state to the idle states on powernv

2013-07-25 Thread Preeti U Murthy
-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/platforms/powernv/processor_idle.c | 48 +++ 1 file changed, 47 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/platforms/powernv/processor_idle.c b/arch/powerpc/platforms/powernv/processor_idle.c index

[RFC PATCH 0/5] cpuidle/ppc: Timer offload framework to support deep idle states

2013-07-25 Thread Preeti U Murthy
makes use of the timer offload framework that Patch[1/5] to Patch[4/5] build. --- Preeti U Murthy (3): cpuidle/ppc: Add timer offload framework to support deep idle states cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints cpuidle/ppc: Add longnap

[RFC PATCH 1/5] powerpc: Free up the IPI message slot of ipi call function (PPC_MSG_CALL_FUNC)

2013-07-25 Thread Preeti U Murthy
...@linux.vnet.ibm.com Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/include/asm/smp.h |2 +- arch/powerpc/kernel/smp.c | 12 +--- arch/powerpc/platforms/cell/interrupt.c |2 +- arch/powerpc/platforms/ps3/smp.c|2 +- 4 files changed, 8

[RFC PATCH 3/5] cpuidle/ppc: Add timer offload framework to support deep idle states

2013-07-25 Thread Preeti U Murthy
. On a broadcast ipi, the event handler for a timer interrupt is called on the cpu in deep idle state to handle the local events. The current design and implementation of the timer offload framework supports the ONESHOT tick mode but not the PERIODIC mode. Signed-off-by: Preeti U. Murthy pre
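
A sketch of what the receiving side of that broadcast IPI has to do (shape assumed, not the posted code): behave as if the local timer interrupt had fired, so pending events get serviced even though the real decrementer was lost in the deep idle state.

    /* Run the normal clock event handler for this CPU so its expired
     * hrtimers/tick work are handled on the broadcast wakeup. */
    static void timer_broadcast_ipi_action(void)
    {
    	struct clock_event_device *evt = this_cpu_ptr(&tick_cpu_device)->evtdev;

    	if (evt && evt->event_handler)
    		evt->event_handler(evt);
    }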

[RFC PATCH 4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

2013-07-25 Thread Preeti U Murthy
disables tickless idle, is a system wide setting. Hence resort to an arch specific call to check if a cpu can go into tickless idle. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/kernel/time.c |5 + kernel/time/tick-sched.c |7 +++ 2 files changed, 12

[RFC PATCH 2/5] powerpc: Implement broadcast timer interrupt as an IPI message

2013-07-25 Thread Preeti U Murthy
efficiently. Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/include/asm/smp.h |3 ++- arch/powerpc/kernel/smp.c | 19 +++ arch/powerpc/platforms/cell/interrupt.c |2

Re: [RFC PATCH 4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

2013-07-25 Thread Preeti U Murthy
Hi Frederic, On 07/25/2013 07:00 PM, Frederic Weisbecker wrote: On Thu, Jul 25, 2013 at 02:33:02PM +0530, Preeti U Murthy wrote: In the current design of timer offload framework, the broadcast cpu should *not* go into tickless idle so as to avoid missed wakeups on CPUs in deep idle states

Re: [RFC PATCH 4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

2013-07-25 Thread Preeti U Murthy
Hi Paul, On 07/26/2013 08:49 AM, Paul Mackerras wrote: On Fri, Jul 26, 2013 at 08:09:23AM +0530, Preeti U Murthy wrote: Hi Frederic, On 07/25/2013 07:00 PM, Frederic Weisbecker wrote: Hi Preeti, I'm not exactly sure why you can't enter the broadcast CPU in dynticks idle mode. I read

Re: [RFC PATCH 4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

2013-07-25 Thread Preeti U Murthy
Hi Frederic, I apologise for the confusion. As Paul pointed out maybe the usage of the term lapic is causing a large amount of confusion. So please see the clarification below. Maybe it will help answer your question. On 07/26/2013 08:09 AM, Preeti U Murthy wrote: Hi Frederic, On 07/25/2013

[Resend RFC PATCH 0/5] cpuidle/ppc: Timer offload framework to support deep idle states

2013-07-25 Thread Preeti U Murthy
makes use of the timer offload framework that Patch[1/5] to Patch[4/5] build. This patch series is being resent to clarify certain ambiguity in the patch descriptions from the previous post. Discussion around this: https://lkml.org/lkml/2013/7/25/754 --- Preeti U Murthy (3): cpuidle

[Resend RFC PATCH 1/5] powerpc: Free up the IPI message slot of ipi call function (PPC_MSG_CALL_FUNC)

2013-07-25 Thread Preeti U Murthy
...@linux.vnet.ibm.com Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/include/asm/smp.h |2 +- arch/powerpc/kernel/smp.c | 12 +--- arch/powerpc/platforms/cell/interrupt.c |2 +- arch/powerpc/platforms/ps3/smp.c|2 +- 4 files changed, 8

[Resend RFC PATCH 2/5] powerpc: Implement broadcast timer interrupt as an IPI message

2013-07-25 Thread Preeti U Murthy
efficiently. Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/include/asm/smp.h |3 ++- arch/powerpc/kernel/smp.c | 19 +++ arch/powerpc/platforms/cell/interrupt.c |2

[Resend RFC PATCH 3/5] cpuidle/ppc: Add timer offload framework to support deep idle states

2013-07-25 Thread Preeti U Murthy
is called on the cpu in deep idle state to handle the local events. The current design and implementation of the timer offload framework supports the ONESHOT tick mode but not the PERIODIC mode. Signed-off-by: Preeti U. Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/include/asm/time.h

[Resend RFC PATCH 5/5] cpuidle/ppc: Add longnap state to the idle states on powernv

2013-07-25 Thread Preeti U Murthy
-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/platforms/powernv/processor_idle.c | 48 +++ 1 file changed, 47 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/platforms/powernv/processor_idle.c b/arch/powerpc/platforms/powernv/processor_idle.c index

[Resend RFC PATCH 4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

2013-07-26 Thread Preeti U Murthy
disables tickless idle, is a system wide setting. Hence resort to an arch specific call to check if a cpu can go into tickless idle. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/kernel/time.c |5 + kernel/time/tick-sched.c |7 +++ 2 files changed, 12

Re: [RFC PATCH 4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

2013-07-27 Thread Preeti U Murthy
Hi Ben, On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote: On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote: *The lapic of a broadcast CPU is active always*. Say CPUX wants the broadcast CPU to wake it up at timeX. Since we cannot program the lapic of a remote CPU, CPUX will need

Re: [RFC PATCH 4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

2013-07-29 Thread Preeti U Murthy
Hi, On 07/29/2013 10:58 AM, Vaidyanathan Srinivasan wrote: * Preeti U Murthy pre...@linux.vnet.ibm.com [2013-07-27 13:20:37]: Hi Ben, On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote: On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote: *The lapic of a broadcast CPU is active

Re: [PATCH 06/10] sched, fair: Make group power more consitent

2013-08-22 Thread Preeti U Murthy
On 08/19/2013 09:31 PM, Peter Zijlstra wrote: Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com

Re: [PATCH 07/10] sched, fair: Optimize find_busiest_queue()

2013-08-23 Thread Preeti U Murthy
*/ if (idle == CPU_NEWLY_IDLE) env.dst_grpmask = NULL; cpumask_copy(cpus, cpu_active_mask); schedstat_inc(sd, lb_count[idle]); redo: group = find_busiest_group(&env, balance); Regards Preeti U Murthy

Re: [PATCH 07/10] sched, fair: Optimize find_busiest_queue()

2013-08-23 Thread Preeti U Murthy
On 08/23/2013 03:33 PM, Peter Zijlstra wrote: On Fri, Aug 23, 2013 at 01:41:55PM +0530, Preeti U Murthy wrote: Hi Peter, On 08/19/2013 09:31 PM, Peter Zijlstra wrote: In the load balancing code, it looks to me that cpumask_copy(cpus, cpu_active_mask) is not updating the env.cpus at all

Re: [PATCH 5/6] sched, fair: Make group power more consitent

2013-08-18 Thread Preeti U Murthy
(), should not the power of the sched_groups comprising that cpu also get updated? Why wait till the load balancing is done at the sched_domain level of that group to update its group power? Regards Preeti U Murthy
