call should be taken if the tasks can afford to be throttled.
This is why an additional metric has been included, which can determine how
long we can tolerate tasks not being moved even if the load is low.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 16
a more sensible movement of loads*
This is how I build the picture.
Regards
Preeti
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ
with the aid of the new metric.
*End Result: Hopefully a more sensible movement of loads*
This is how I build the picture.
Regards
Preeti U Murthy
On 10/26/2012 05:59 PM, Peter Zijlstra wrote:
On Thu, 2012-10-25 at 23:42 +0530, Preeti U Murthy wrote:
firstly, cfs_rq is the wrong place for a per-cpu load measure, secondly
why add another load field instead of fixing the one we have?
Hmm.., rq->load.weight is the place.
So why didn't I
you
Regards
Preeti
this and
that we must cash in on it.
Thanks
Regards
Preeti U Murthy
Vincent
On 26 March 2013 15:42, Peter Zijlstra pet...@infradead.org wrote:
On Tue, 2013-03-26 at 15:03 +0100, Vincent Guittot wrote:
But ha! here's your NO_HZ link.. but does the above DTRT and ensure
that the ILB
Hi Joonsoo,
On 04/04/2013 06:12 AM, Joonsoo Kim wrote:
Hello, Preeti.
So, how about extending the sched_period with rq->nr_running, instead of
cfs_rq->nr_running? It is my quick thought and I think that we can ensure
that every task runs at least once in this extended sched_period.
Yeah this seems
-flags & SD_SHARE_CPUPOWER
+ || env->idle == CPU_NOT_IDLE) {
env->flags &= ~LBF_POWER_BAL;
env->flags |= LBF_PERF_BAL;
return;
Regards
Preeti U Murthy
version 5 of this patchset, don't you think the below patch can be
avoided? The group capacity being the threshold will automatically ensure
that you don't pack onto domains that share cpu power.
Regards
Preeti U Murthy
On 04/08/2013 08:47 AM, Preeti U Murthy wrote:
Hi Alex,
On 04/04/2013 07:31 AM, Alex
cpu share must add up to the parent's share.
Thank you
Regards
Preeti U Murthy
, while your scheduling latency period was extended to 40ms, just
so that each of these tasks doesn't have its sched_slice shrunk due to the
large number of tasks.
+
return slice;
}
Regards
Preeti U Murthy
Hi Alex,
On 03/25/2013 10:22 AM, Alex Shi wrote:
On 03/22/2013 01:14 PM, Preeti U Murthy wrote:
the value got from decay_load():
sa->runnable_avg_sum = decay_load(sa->runnable_avg_sum,
in decay_load it is possible for it to be set to zero.
Yes you are right, it is possible for it to be set to 0, but after
On 03/29/2013 07:09 PM, Alex Shi wrote:
On 03/29/2013 08:42 PM, Preeti U Murthy wrote:
did you try the simplest benchmark: while true; do :; done
Yeah I tried out this while true; do :; done benchmark on a vm which ran
Thanks a lot for trying!
What do you mean by 'vm'? Virtual machine
Hi,
On 03/30/2013 07:34 PM, Alex Shi wrote:
On 03/30/2013 07:25 PM, Preeti U Murthy wrote:
I still give the rq->util weight even when nr_running is 0, because some
transitory tasks may have been active on the cpu, but just missed at the
balancing point.
I was just wondering whether forgetting rq->util when
Hi Joonsoo,
On 04/01/2013 10:39 AM, Joonsoo Kim wrote:
Hello Preeti.
So we should limit this possible weird situation.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e232421..6ceffbc 100644
--- a/kernel/sched/fair.c
+++ b
Hi Joonsoo,
On 04/01/2013 09:38 AM, Joonsoo Kim wrote:
Hello, Preeti.
Ideally the children's cpu share must add up to the parent's share.
I don't think so.
We should schedule out the parent tg if 5ms is over. As we do so, we can
fairly distribute time slice to every tg within short
scheduling in fork/wake/exec
From: Preeti U Murthy pre...@linux.vnet.ibm.com
Problem:
select_task_rq_fair() returns a target CPU/waking CPU if no balancing is
required. However, with the current power aware scheduling in this path, an
invalid CPU might be returned.
If get_cpu_for_power_policy
Hi Joonsoo,
On 04/02/2013 07:55 AM, Joonsoo Kim wrote:
Hello, Preeti.
On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 04/01/2013 09:38 AM, Joonsoo Kim wrote:
Hello, Preeti.
Ideally the children's cpu share must add up to the parent's share.
I don't
a patch description.
Ok, take the example of a runqueue with 2 task groups, each with 10
tasks. Same as your previous example. Can you explain how your patch
ensures that all 20 tasks get to run at least once in a sched_period?
Regards
Preeti U Murthy
as well.
I think we would be better off without accounting the rq->util of the
cpus which do not have any processes running on them for sgs->utils.
What do you think?
Regards
Preeti U Murthy
Hi Alex,
On 03/21/2013 01:13 PM, Alex Shi wrote:
On 03/20/2013 12:57 PM, Preeti U Murthy wrote:
Neither core will be able to pull the task from the other to consolidate
the load because the rq->util of t2 and t4, on which no process is
running, continues to show some number even though
On 03/21/2013 02:57 PM, Alex Shi wrote:
On 03/21/2013 04:41 PM, Preeti U Murthy wrote:
Yes, I did find this behaviour on a 2 socket, 8 core machine very
consistently.
rq->util cannot go to 0 after it has begun accumulating load, right?
Say a load was running on a runqueue which had its rq
Hi,
On 03/22/2013 07:00 AM, Alex Shi wrote:
On 03/21/2013 06:27 PM, Preeti U Murthy wrote:
did you close all of background system services?
In theory the rq->avg.runnable_avg_sum should be zero if there has been no
task for a bit long; otherwise there are some bugs in the kernel.
Could you explain why rq
into one, since both of
them have the common goal of packing small tasks.
Thanks
Regards
Preeti U Murthy
On 03/22/2013 05:55 PM, Vincent Guittot wrote:
Hi,
This patchset takes advantage of the new per-task load tracking that is
available in the kernel for packing the small tasks in as few
by Alex.
Thanks
Regards
Preeti U Murthy
as the equation goes.
Regards
Preeti U Murthy
sched: Use Per-Entity-Load-Tracking metric for load balancing
From: Preeti U Murthy pre...@linux.vnet.ibm.com
Currently the load balancer weighs a task based upon its priority, and this
weight consequently gets added up to the weight of the run queue that it is
on. It is this weight
));
}
int main()
{
start_threads();
return 0;
}
END WORKLOAD
Regards
Preeti U Murthy
()' during the
second iteration, the se is changed.
That is a different se.
Correct Alex, sorry I overlooked this.
Regards
Preeti U Murthy
. :(
Oops, the performance is still worse than just counting runnable_load_avg.
But the drop is not so big; it dropped 30%, not 70%.
Thank you
Regards
Preeti U Murthy
On 01/18/2013 09:15 PM, Oleg Nesterov wrote:
On 01/17, Preeti U Murthy wrote:
On 01/16/2013 05:32 PM, Ivo Sieben wrote:
I don't have a problem that there is a context switch to the high
priority process: it has a higher priority, so it probably is more
important.
My problem is that even
.
On 01/03/2013 03:19 PM, Ivo Sieben wrote:
Oleg, Peter, Ingo, Andi Preeti,
2013/1/2 Jiri Slaby jsl...@suse.cz:
On 01/02/2013 04:21 PM, Ivo Sieben wrote:
I don't understand your responses: do you suggest to implement this
if active behavior in:
* A new wake_up function called
Hi Ivo,
On 01/16/2013 02:46 PM, Ivo Sieben wrote:
Hi Preeti,
2013/1/16 Preeti U Murthy pre...@linux.vnet.ibm.com:
Hi Ivo,
Can you explain how this problem could create a scheduler overhead?
I am a little confused, because as far as I know, the scheduler does not come
in the picture of the wake
Hi Alex,
On 01/16/2013 07:38 PM, Alex Shi wrote:
On 01/08/2013 04:41 PM, Preeti U Murthy wrote:
Hi Mike,
Thank you very much for such a clear and comprehensive explanation.
So when I put together the problem and the proposed solution pieces in the
current
scheduler scalability
that the code prefers running a task on an idle cpu which
is a sibling thread in the same core rather than running it on an idle
cpu in another idle core. I guess we didn't do that before.
It should help on burst wake up benchmarks like aim7.
Original-patch-by: Preeti U Murthy pre
On 01/16/2013 05:32 PM, Ivo Sieben wrote:
2013/1/16 Preeti U Murthy pre...@linux.vnet.ibm.com:
Yes. Thank you very much for the explanation :) But I don't see how the
context switching goes away with your patch. With your patch, when the
higher priority thread comes in when the lower priority
for the task in the blocked_load, hence this move would not
increase its load. Would you recommend going in this direction?
Thank you
Regards
Preeti U Murthy
sds->min_util = sgs->group_util;
sds->min_load_per_task = sgs->sum_weighted_load;
Regards
Preeti U Murthy
too heavier
-*/
- if (flags & ENQUEUE_NEWTASK)
- se->avg.load_avg_contrib = se->load.weight;
cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
/* we force update consideration on load-balancer moves */
Thanks
Regards
Preeti U Murthy
too?
What I mean is, if the answer to the above question is yes, then can we
safely assume that the further optimizations to the load balancer like
the power aware scheduler and the usage of per entity load tracking can
be done without considering the real time tasks?
Regards
Preeti U Murthy
to use here.
Refer to this discussion: https://lkml.org/lkml/2012/10/29/448
Regards
Preeti U Murthy
is the chance for the load
to get incremented in steps?
In sleeping tasks, since runnable_avg_sum progresses much slower than
runnable_avg_period, these tasks take much time to accumulate the load
when they wake up. This makes sense of course. But how does this happen
for forked tasks?
Regards
Preeti U
= rq_of(cfs_rq)->clock_task;
Regards
Preeti U Murthy
on it
only if burst wakeups are detected. By doing so you ensure that
nr_running as a metric for load balancing is used when it is right to do
so and the reason to use it also gets well documented.
Regards
Preeti U Murthy
in update_sd_lb_stats, but select_task_rq_fair is yet another place
to do this, that's right. Good that this issue was brought up :)
Regards!
Alex
Regards
Preeti U Murthy
= new_cpu;
Regards
Preeti U Murthy
but this_group(sd hierarchy
moves towards the cpu it belongs to). Again here the idlest group search
begins.
+
return idlest;
}
Regards
Preeti U Murthy
Hi Alex,
On 12/11/2012 10:59 AM, Alex Shi wrote:
On 12/11/2012 01:08 PM, Preeti U Murthy wrote:
Hi Alex,
On 12/10/2012 01:52 PM, Alex Shi wrote:
There are 4 situations in the function:
1, no task allowed group;
so min_load = ULONG_MAX, this_load = 0, idlest = NULL
2, only local group
On 12/11/2012 10:58 AM, Alex Shi wrote:
On 12/11/2012 12:23 PM, Preeti U Murthy wrote:
Hi Alex,
On 12/10/2012 01:52 PM, Alex Shi wrote:
It is impossible to miss a task allowed cpu in an eligible group.
The one thing I am concerned with here is if there is a possibility of
the task changing
.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 51 +++
1 file changed, 31 insertions(+), 20 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f8f3a29..7cd3096 100644
--- a/kernel/sched
. Even though in the former case CPU2 is relieved of one task, it's of no
use if Task3 is going to sleep most of the time. This might result in
more load balancing on behalf of cpu3.
What do you guys think?
Thank you
Regards
Preeti U Murthy
Hi,
On 11/27/2012 11:44 AM, Alex Shi wrote:
On 11/27/2012 11:08 AM, Preeti U Murthy wrote:
Hi everyone,
On 11/27/2012 12:33 AM, Benjamin Segall wrote:
So, I've been trying out using the runnable averages for load balance in
a few ways, but haven't actually gotten any improvement
the
right steps here on, in achieving the correct integration.
Thank you
Regards
Preeti U Murthy
Hi Mike,
Thank you very much for your feedback. Considering your suggestions, I have
posted a proposed solution to prevent select_idle_sibling() from becoming a
disadvantage to normal load balancing, and rather aid it.
**This patch is *without* the enablement of the per entity load tracking
)
- return rq->load.weight / nr_running;
+ return rq->cfs.runnable_load_avg / nr_running;
rq->cfs.runnable_load_avg is u64 type. You will need to typecast it here
also, right? How does this division work? Because the return type is
unsigned long.
return 0;
}
Regards
Preeti
(task_group(p), env->src_cpu,
env->dst_cpu))
goto next;
- load = task_h_load(p);
+ load = task_h_load_avg(p);
if (sched_feat(LB_MIN) && load < 16 &&
!env->sd->nr_balance_failed)
goto next;
Regards
Preeti U
On 12/11/2012 05:23 PM, Alex Shi wrote:
On 12/11/2012 02:30 PM, Preeti U Murthy wrote:
On 12/11/2012 10:58 AM, Alex Shi wrote:
On 12/11/2012 12:23 PM, Preeti U Murthy wrote:
Hi Alex,
On 12/10/2012 01:52 PM, Alex Shi wrote:
It is impossible to miss a task allowed cpu in an eligible group
and Ingo Molnar for their valuable feedback on v1
of the RFC which was the foundation for this version.
PATCH[1/2] Aims at enabling usage of Per-Entity-Load-Tracking for load balancing
PATCH[2/2] The crux of the patchset lies here.
---
Preeti U Murthy (2):
sched: Revert temporary
Now that we need the per-entity load tracking for load balancing,
trivially revert the patch which introduced the FAIR_GROUP_SCHED
dependence for load tracking.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
include/linux/sched.h |7 +--
kernel/sched/core.c |7
patch does not consider CONFIG_FAIR_GROUP_SCHED and
CONFIG_SCHED_NUMA. This is done so as to evaluate this approach starting from the
simplest scenario. Earlier discussions can be found in the link below.
Link: https://lkml.org/lkml/2012/10/25/162
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
Hi Vincent,
Thank you for your review.
On 11/15/2012 11:43 PM, Vincent Guittot wrote:
Hi Preeti,
On 15 November 2012 17:54, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
Currently the load balancer weighs a task based upon its priority, and this
weight consequently gets added up
.
if (sched_feat(LB_MIN) && load < 16 && !env->failed)
goto next;
Regards
Preeti U Murthy
] sched: using runnable load avg in cpu_load and
[RFC PATCH 4/5] sched: consider runnable load average in wake_affine
[RFC PATCH 5/5] sched: revert 'Introduce temporary FAIR_GROUP_SCHED
Regards
Preeti U Murthy
);
+ }
}
EXPORT_SYMBOL(__wake_up);
Looks good to me.
Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com
Regards
Preeti U Murthy
sched domain in
detail.
Therefore even I feel that this patch should be implemented after
thorough tests.
Morten
Regards
Preeti U Murthy
On 01/05/2013 02:07 PM, Alex Shi wrote:
It is impossible to miss a task allowed cpu in an eligible group.
And since find_idlest_group only returns a different group which
excludes the old cpu, it's also impossible to find a new cpu same as the
old cpu.
Signed-off-by: Alex Shi alex@intel.com
On 01/05/2013 02:07 PM, Alex Shi wrote:
There are 4 situations in the function:
1, no task allowed group;
so min_load = ULONG_MAX, this_load = 0, idlest = NULL
2, only local group task allowed;
so min_load = ULONG_MAX, this_load assigned, idlest = NULL
3, only non-local task group
On 01/05/2013 02:07 PM, Alex Shi wrote:
If the parent sched domain has no task allowed cpu to be found, neither
will its child. So, go out to save useless checking.
Signed-off-by: Alex Shi alex@intel.com
---
kernel/sched/fair.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
On 01/05/2013 02:07 PM, Alex Shi wrote:
A new task has no runnable sum at its first runnable time, which makes
burst forking select just a few idle cpus to put tasks on.
Set the initial load avg of a newly forked task to its load weight to resolve
this issue.
Signed-off-by: Alex Shi alex@intel.com
---
will also try to run tbench and a few other benchmarks to
find out why the results are like below. Will update you very soon on this.
Thank you
Regards
Preeti U Murthy
On 01/06/2013 10:02 PM, Mike Galbraith wrote:
On Sat, 2013-01-05 at 09:13 +0100, Mike Galbraith wrote:
I still have a 2.6-rt
.
This means we will need to use PJT's metric but with an
additional constraint.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 25 ++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
On 01/07/2013 09:18 PM, Vincent Guittot wrote:
On 2 January 2013 05:22, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
Hi everyone,
I have been looking at how different workloads react when the per entity
load tracking metric is integrated into the load balancer and what are
the possible
to tackle STEP3. STEP3 will not prevent bouncing but a good STEP2
could tell
us if it is worth the bounce.
STEP3 Patch is given below:
***START PATCH**
sched: Reduce the overhead of select_idle_sibling
From: Preeti U Murthy pre
Here comes the point of making both load balancing and wake up
balance (select_idle_sibling) cooperative. How about we always schedule
the woken up task on the prev_cpu? This seems more sensible considering
that load balancing considers blocked load as part of the load of cpu2.
Hi Preeti
On 01/09/2013 12:20 PM, Namhyung Kim wrote:
From: Namhyung Kim namhyung@lge.com
AFAICS @target cpu of select_idle_sibling() is always either prev_cpu
or this_cpu. So no need to check it again and the conditionals can be
consolidated.
Cc: Mike Galbraith efa...@gmx.de
Cc: Preeti U
On 01/10/2013 11:19 AM, Namhyung Kim wrote:
Hi Preeti,
On Wed, 09 Jan 2013 13:51:00 +0530, Preeti U. Murthy wrote:
On 01/09/2013 12:20 PM, Namhyung Kim wrote:
From: Namhyung Kim namhyung@lge.com
AFAICS @target cpu of select_idle_sibling() is always either prev_cpu
or this_cpu. So
On 08/02/2013 04:02 PM, Peter Zijlstra wrote:
On Fri, Aug 02, 2013 at 02:56:14PM +0530, Preeti U Murthy wrote:
You need to iterate over all the groups of the sched domain env-sd and
not just the first group of env-sd like you are doing above. This is to
I don't think so.
IIRC, env->sd->groups
to understand this.
Anyway this is a minor issue, you can ignore it.
Regards
Preeti U Murthy
-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/platforms/powernv/processor_idle.c | 48 +++
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/processor_idle.c
b/arch/powerpc/platforms/powernv/processor_idle.c
index
makes use of the timer offload
framework that the patches Patch[1/5] to Patch[4/5] build.
---
Preeti U Murthy (3):
cpuidle/ppc: Add timer offload framework to support deep idle states
cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints
cpuidle/ppc: Add longnap
...@linux.vnet.ibm.com
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/smp.h |2 +-
arch/powerpc/kernel/smp.c | 12 +---
arch/powerpc/platforms/cell/interrupt.c |2 +-
arch/powerpc/platforms/ps3/smp.c|2 +-
4 files changed, 8
. On
a broadcast ipi the event handler for a timer interrupt is called on the cpu
in deep idle state to handle the local events.
The current design and implementation of the timer offload framework supports
the ONESHOT tick mode but not the PERIODIC mode.
Signed-off-by: Preeti U. Murthy pre
disables tickless idle,
is a system wide setting. Hence resort to an arch specific call to check if a
cpu
can go into tickless idle.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/kernel/time.c |5 +
kernel/time/tick-sched.c |7 +++
2 files changed, 12
efficiently.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/smp.h |3 ++-
arch/powerpc/kernel/smp.c | 19 +++
arch/powerpc/platforms/cell/interrupt.c |2
Hi Frederic,
On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
On Thu, Jul 25, 2013 at 02:33:02PM +0530, Preeti U Murthy wrote:
In the current design of timer offload framework, the broadcast cpu should
*not* go into tickless idle so as to avoid missed wakeups on CPUs in deep
idle states
Hi Paul,
On 07/26/2013 08:49 AM, Paul Mackerras wrote:
On Fri, Jul 26, 2013 at 08:09:23AM +0530, Preeti U Murthy wrote:
Hi Frederic,
On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
Hi Preeti,
I'm not exactly sure why you can't enter the broadcast CPU in dynticks idle
mode.
I read
Hi Frederic,
I apologise for the confusion. As Paul pointed out maybe the usage of
the term lapic is causing a large amount of confusion. So please see the
clarification below. Maybe it will help answer your question.
On 07/26/2013 08:09 AM, Preeti U Murthy wrote:
Hi Frederic,
On 07/25/2013
makes use of the timer offload
framework that the patches Patch[1/5] to Patch[4/5] build.
This patch series is being resent to clarify certain ambiguity in the patch
descriptions from the previous post. Discussion around this:
https://lkml.org/lkml/2013/7/25/754
---
Preeti U Murthy (3):
cpuidle
...@linux.vnet.ibm.com
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/smp.h |2 +-
arch/powerpc/kernel/smp.c | 12 +---
arch/powerpc/platforms/cell/interrupt.c |2 +-
arch/powerpc/platforms/ps3/smp.c|2 +-
4 files changed, 8
efficiently.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/smp.h |3 ++-
arch/powerpc/kernel/smp.c | 19 +++
arch/powerpc/platforms/cell/interrupt.c |2
is called on the cpu
in deep idle state to handle the local events.
The current design and implementation of the timer offload framework supports
the ONESHOT tick mode but not the PERIODIC mode.
Signed-off-by: Preeti U. Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/time.h
-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/platforms/powernv/processor_idle.c | 48 +++
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/processor_idle.c
b/arch/powerpc/platforms/powernv/processor_idle.c
index
disables tickless idle,
is a system wide setting. Hence resort to an arch specific call to check if a
cpu
can go into tickless idle.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
arch/powerpc/kernel/time.c |5 +
kernel/time/tick-sched.c |7 +++
2 files changed, 12
Hi Ben,
On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote:
On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
*The lapic of a broadcast CPU is active always*. Say CPUX, wants the
broadcast CPU to wake it up at timeX. Since we cannot program the lapic
of a remote CPU, CPUX will need
Hi,
On 07/29/2013 10:58 AM, Vaidyanathan Srinivasan wrote:
* Preeti U Murthy pre...@linux.vnet.ibm.com [2013-07-27 13:20:37]:
Hi Ben,
On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote:
On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
*The lapic of a broadcast CPU is active
On 08/19/2013 09:31 PM, Peter Zijlstra wrote:
Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com
*/
if (idle == CPU_NEWLY_IDLE)
env.dst_grpmask = NULL;
cpumask_copy(cpus, cpu_active_mask);
schedstat_inc(sd, lb_count[idle]);
redo:
group = find_busiest_group(env, balance);
Regards
Preeti U Murthy
On 08/23/2013 03:33 PM, Peter Zijlstra wrote:
On Fri, Aug 23, 2013 at 01:41:55PM +0530, Preeti U Murthy wrote:
Hi Peter,
On 08/19/2013 09:31 PM, Peter Zijlstra wrote:
In the load balancing code, it looks to me that
cpumask_copy(cpus, cpu_active_mask) is not updating env.cpus at all
(), should not
the power of the sched_groups comprising that cpu also get updated?
Why wait till the load balancing is done at the sched_domain level of
that group, to update its group power?
Regards
Preeti U Murthy