On 08/02/2013 02:35 PM, 김준수 wrote:
>
>
>> -----Original Message-----
>> From: Preeti U Murthy [mailto:pre...@linux.vnet.ibm.com]
>> Sent: Friday, August 02, 2013 1:23 PM
>> To: Joonsoo Kim
>> Cc: Ingo Molnar; Peter Zijlstra; linux-kernel@vger.kernel.org; Mike
_and_lock();
>> +cpuidle_enable_device(dev);
>> +cpuidle_resume_and_unlock();
>> +break;
>> +
>> +case CPU_DEAD:
>> +case CPU_DEAD_FROZEN:
>> +cpuidle_pa
On 08/02/2013 04:02 PM, Peter Zijlstra wrote:
> On Fri, Aug 02, 2013 at 02:56:14PM +0530, Preeti U Murthy wrote:
>>>> You need to iterate over all the groups of the sched domain env->sd and
>>>> not just the first group of env->sd like you are doing above. Th
e_capacity);
The deleted line says that there is an imbalance in the load per cpu in some
part of the sd (max_load), which is above the average load of the sd, which
we are trying to even out. Something like the below diagram for the sd load:
        max_load
          _/\
_________/   \_________ avg_load
I believe the l
env->idle == CPU_NOT_IDLE) {
> + if (sched_balance_policy == SCHED_POLICY_PERFORMANCE
> + || env->sd->flags & SD_SHARE_CPUPOWER
> + || env->idle == CPU_NOT_IDLE) {
> env->flags &= ~LBF_POWER_B
In your version5 of this patchset, don't you think the below patch can be
avoided? group->capacity being the threshold will automatically ensure
that you don't pack onto domains that share cpu power.
Regards
Preeti U Murthy
On 04/08/2013 08:47 AM, Preeti U Murthy wrote:
> Hi Alex,
>
> On 0
Hi Morten,
On 07/12/2013 07:18 PM, Morten Rasmussen wrote:
> On Thu, Jul 11, 2013 at 12:34:49PM +0100, Preeti U Murthy wrote:
>> Hi Morten,
>>
>> I have a few quick comments.
>>
>> I am concerned too about scheduler making its load balancing decisions
>> ba
cpu power gets updated in update_cpu_power(), should not
the power of the sched_groups comprising that cpu also get updated?
Why wait till the load balancing is done at the sched_domain level of
that group, to update its group power?
Regards
Preeti U Murthy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
Hi Peter,
Thank you for the clarification.
On 08/19/2013 04:00 PM, Peter Zijlstra wrote:
> On Mon, Aug 19, 2013 at 09:47:47AM +0530, Preeti U Murthy wrote:
>> Hi Peter,
>>
>> On 08/16/2013 03:42 PM, Peter Zijlstra wrote:
>>
>> I have a few comments and clarificat
interrupt is called on the cpu
in deep idle state to handle the local events.
The current design and implementation of the timer offload framework supports
the ONESHOT tick mode but not the PERIODIC mode.
Signed-off-by: Preeti U. Murthy
---
arch/powerpc/include/asm/time.h | 3 +
arch/powerpc
. Bhat
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/smp.h          |  3 ++-
arch/powerpc/kernel/smp.c               | 19 +++
arch/powerpc/platforms/cell/interrupt.c |  2 +-
arch/powerpc/platforms/ps3/smp.c        |  2 +-
4 files changed, 19 insertions
are available).
So, implement the functionality of PPC_MSG_CALL_FUNC using
PPC_MSG_CALL_FUNC_SINGLE itself and release its IPI message slot, so that it
can be used for something else in the future, if desired.
Signed-off-by: Srivatsa S. Bhat
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include
on powernv
Patch[5/6]: Dynamically pick a broadcast CPU
Patch[6/6]: Remove the constraint of having to disable tickless idle on the
broadcast cpu, by queueing a hrtimer exclusively to do broadcast handling.
---
Preeti U Murthy (4):
cpuidle/ppc: Add timer offload framework to support deep idle
just wakeup, the new broadcast CPU has to restart
the hrtimer on itself so as to continue broadcast handling.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/time.h |5 ++
arch/powerpc/kernel/time.c | 47 ---
arch/powerpc
-by: Preeti U Murthy
---
arch/powerpc/include/asm/time.h |1
arch/powerpc/kernel/time.c | 10 ++--
arch/powerpc/platforms/powernv/processor_idle.c | 56 +++
3 files changed, 53 insertions(+), 14 deletions(-)
diff --git a/arch
sleep on ppc.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/platforms/powernv/processor_idle.c | 48 +++
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/processor_idle.c
b/arch/powerpc/platforms/powernv/processor_idle.c
index
Hi,
On 07/29/2013 10:58 AM, Vaidyanathan Srinivasan wrote:
> * Preeti U Murthy [2013-07-27 13:20:37]:
>
>> Hi Ben,
>>
>> On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote:
>>> On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
>>>> *The la
essors idle support IMO should
hook onto the backend cpuidle driver that this patchset provides.
Regards
Preeti U Murthy
Hi Dongsheng,
On 07/31/2013 11:16 AM, Wang Dongsheng-B40534 wrote:
> Hi Preeti,
>
>> -----Original Message-----
>> From: Preeti U Murthy [mailto:pre...@linux.vnet.ibm.com]
>> Sent: Wednesday, July 31, 2013 12:00 PM
>> To: Wang Dongsheng-B40534
>> Cc: Deept
-by: Preeti U Murthy
---
arch/powerpc/platforms/powernv/processor_idle.c | 48 +++
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/processor_idle.c
b/arch/powerpc/platforms/powernv/processor_idle.c
index f43ad91a..9aca502 100644
makes use of the timer offload
framework that the patches Patch[1/5] to Patch[4/5] build.
---
Preeti U Murthy (3):
cpuidle/ppc: Add timer offload framework to support deep idle states
cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints
cpuidle/ppc: Add longnap
. On
a broadcast ipi the event handler for a timer interrupt is called on the cpu
in deep idle state to handle the local events.
The current design and implementation of the timer offload framework supports
the ONESHOT tick mode but not the PERIODIC mode.
Signed-off-by: Preeti U. Murthy
---
arch/powerpc
disables tickless idle,
is a system wide setting. Hence resort to an arch-specific call to check if a
cpu can go into tickless idle.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/kernel/time.c |5 +
kernel/time/tick-sched.c |7 +++
2 files changed, 12 insertions(+)
diff --git
Hi Frederic,
On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
> On Thu, Jul 25, 2013 at 02:33:02PM +0530, Preeti U Murthy wrote:
>> In the current design of timer offload framework, the broadcast cpu should
>> *not* go into tickless idle so as to avoid missed wakeups on CPUs i
Hi Paul,
On 07/26/2013 08:49 AM, Paul Mackerras wrote:
> On Fri, Jul 26, 2013 at 08:09:23AM +0530, Preeti U Murthy wrote:
>> Hi Frederic,
>>
>> On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
>>> Hi Preeti,
>>>
>>> I'm not exactly sure why yo
Hi Frederic,
I apologise for the confusion. As Paul pointed out, the usage of
the term lapic may be causing much of it. So please see the
clarification below; maybe it will help answer your question.
On 07/26/2013 08:09 AM, Preeti U Murthy wrote:
> Hi Frederic,
>
>
makes use of the timer offload
framework that the patches Patch[1/5] to Patch[4/5] build.
This patch series is being resent to clarify certain ambiguity in the patch
descriptions from the previous post. Discussion around this:
https://lkml.org/lkml/2013/7/25/754
---
Preeti U Murthy (3):
cpuidle
is called on the cpu
in deep idle state to handle the local events.
The current design and implementation of the timer offload framework supports
the ONESHOT tick mode but not the PERIODIC mode.
Signed-off-by: Preeti U. Murthy
---
arch/powerpc/include/asm/time.h | 3 +
arch/powerpc/kernel
Hi Ben,
On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote:
> On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
>> *The lapic of a broadcast CPU is active always*. Say CPUX wants the
>> broadcast CPU to wake it up at timeX. Since we cannot program the lapic
>>
slots are available).
So, implement the functionality of PPC_MSG_CALL_FUNC_SINGLE using
PPC_MSG_CALL_FUNC itself and release its IPI message slot, so that it can be
used for something else in the future, if desired.
Signed-off-by: Srivatsa S. Bhat
Signed-off-by: Preeti U. Murthy
Acked-by: Geoff
broadcast CPU, instead of having a dedicated one.
2. Remove the constraint of having to disable tickless idle on the broadcast
CPU by queueing a hrtimer dedicated to do broadcast.
V1 posting: https://lkml.org/lkml/2013/7/25/740.
1. Added the infrastructure to wakeup CPUs in deep idle states in which the
loc
[Functions renamed to tick_broadcast* and Changelog modified by
Preeti U. Murthy]
Signed-off-by: Preeti U. Murthy
Acked-by: Geoff Levand [For the PS3 part]
---
arch/powerpc/include/asm/smp.h |2 +-
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/smp.c
-by: Vaidyanathan Srinivasan
[Changelog modified by Preeti U. Murthy ]
Signed-off-by: Preeti U. Murthy
---
arch/powerpc/include/asm/processor.h |1 +
arch/powerpc/kernel/exceptions-64s.S | 10 -
arch/powerpc/kernel/idle_power7.S| 63 --
3 files
timers to
directly call into __timer_interrupt(). One of the use cases of this is the
tick broadcast IPI handling in which the sleeping CPUs need to handle the local
timers that have expired.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/kernel/time.c | 73
Signed-off-by: Vaidyanathan Srinivasan
Signed-off-by: Preeti U. Murthy
---
arch/powerpc/include/asm/opal.h|2 ++
arch/powerpc/kernel/exceptions-64s.S |2 +-
arch/powerpc/kernel/idle_power7.S | 27
arch/powerpc/platforms/po
repeats.
Protect the region of nomination, de-nomination and the check for existence of
the broadcast CPU with a lock, to ensure synchronization between them.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/time.h |1
arch/powerpc/kernel/time.c |2
drivers
Add deep idle states such as nap and fast sleep to the cpuidle state table
only if they are discovered from the device tree during cpuidle initialization.
Signed-off-by: Preeti U. Murthy
---
drivers/cpuidle/cpuidle-powerpc-book3s.c | 81 --
1 file changed, 64
Signed-off-by: Preeti U Murthy
---
arch/powerpc/Kconfig|2 +
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/time.c | 58 ++-
3 files changed, 60 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kco
so as to not
miss wakeups under such scenarios.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/time.c |1 +
drivers/cpuidle/cpuidle-powerpc-book3s.c | 22 ++
3 files changed, 24 insertions
Hi Thomas,
On 11/29/2013 08:09 PM, Thomas Gleixner wrote:
> On Fri, 29 Nov 2013, Preeti U Murthy wrote:
>> +static enum hrtimer_restart handle_broadcast(struct hrtimer *hrtimer)
>> +{
>> +struct clock_event_device *bc_evt = &bc_timer;
>> +ktime_t interval, next_bc_
Hi Thomas,
On 11/29/2013 05:28 PM, Thomas Gleixner wrote:
> On Fri, 29 Nov 2013, Preeti U Murthy wrote:
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index b44b52c..cafa788 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>&g
Hi,
On 11/13/2013 04:53 PM, Srikar Dronamraju wrote:
> * Preeti Murthy [2013-11-13 16:22:37]:
>
>> Hi Srikar,
>>
>> update_group_power() is called only during load balancing during
>> update_sg_lb_stats().
>> Load balancing begins at the base domain of the
Hi Peter,
On 11/14/2013 02:00 PM, Peter Zijlstra wrote:
> On Thu, Nov 14, 2013 at 11:36:27AM +0530, Preeti U Murthy wrote:
>> However I was thinking that a better fix would be to reorder the way we call
>> update_group_power() and cpu_attach_domain(). Why do we need to do
>&g
Hi Peter,
On 10/28/2013 07:20 PM, Peter Zijlstra wrote:
> On Thu, Oct 24, 2013 at 01:37:38PM +0530, Preeti U Murthy wrote:
>> kernel/sched/core.c |5 +
>> kernel/sched/fair.c | 38 --
>> kernel/sched/sched.h |1 +
&g
Hi Peter,
On 10/28/2013 09:23 PM, Peter Zijlstra wrote:
> On Mon, Oct 21, 2013 at 05:15:02PM +0530, Vaidyanathan Srinivasan wrote:
>> From: Preeti U Murthy
>>
>> The current logic in load balance is such that after picking the
>> busiest group, the load is attempted t
domain-sd_busy where it is relevant.
3. Introduce sd_asym to represent the sched domain where asymmetric load
balancing has to be done.
---
Preeti U Murthy (1):
sched: Remove un-necessary iteration over sched domains to update
nr_busy_cpus
Vaidyanathan Srinivasan (1):
sched: Fix
at the sd_busy domain level alone and not the base domain level of a
CPU.
This will unify the concept of busy cpus at just one level of sched domain
where it is currently used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/core.c |6 ++
kernel/sched/fair.c | 38
cpumask_and() will not yield any set bits if this domain
has no idle cpu.
Hence, the nr_busy check against group weight can be removed.
Reported-by: Michael Neuling
Signed-off-by: Vaidyanathan Srinivasan
Signed-off-by: Preeti U Murthy
Tested-by: Michael Neuling
---
kernel/sched/fair.c |2 +-
1 file chan
The changelog missed mentioning the introduction of the sd_asym per_cpu sched
domain.
Apologies for this. The patch, with a changelog that includes mention of
sd_asym, is pasted below.
Regards
Preeti U Murthy
---
sched: Remove un-necessary iteration over sched domains to update
Hi Kamalesh,
On 10/30/2013 02:53 PM, Kamalesh Babulal wrote:
> Hi Preeti,
>
>> nr_busy_cpus parameter is used by nohz_kick_needed() to find out the number
>> of busy cpus in a sched domain which has SD_SHARE_PKG_RESOURCES flag set.
>> Therefore instead of updating nr_
bc_cpu is woken up by
an IPI so as to queue the above mentioned hrtimer on itself.
This patch is compile tested only.
Signed-off-by: Preeti U Murthy
---
include/linux/clockchips.h |4 +
kernel/time/clockevents.c|8 +-
kernel/time/tick-broadcast.c | 157
Hi Ben,
On 12/13/2013 10:47 AM, Benjamin Herrenschmidt wrote:
> On Fri, 2013-12-13 at 09:49 +0530, Preeti U Murthy wrote:
>> On some architectures, in certain CPU deep idle states the local timers stop.
>> An external clock device is used to wakeup these CPUs. The
Hi,
The patch needed some compile-time fixes; it was accidentally mailed
out before they were made. Below is the right patch. Apologies for the same.
Thanks
Regards
Preeti U Murthy
-
time: Support in tick broadcast
even though they degrade with time
and sgs->utils accounts for them. Therefore,
for core1 and core2, the sgs->utils will be slightly above 100 and the
above condition will fail, thus failing them as candidates for
group_leader, since threshold_util will be 200.
This phenomenon is seen for bala
; it'll likely stack the whole thing on a CPU or two, if so, it'll hurt)
At this point, I would like to raise one issue.
*Is the goal of the power aware scheduler to improve the power efficiency of
the scheduler, or to accept a compromise on power efficiency in return for a
definite decrease in power consumption, since it
flexible enough to do this and
that we must cash in on it.
Thanks
Regards
Preeti U Murthy
>
> Vincent
>
> On 26 March 2013 15:42, Peter Zijlstra wrote:
>> On Tue, 2013-03-26 at 15:03 +0100, Vincent Guittot wrote:
>>>> But ha! here's your NO_HZ link.. but doe
the following points again.
Thanks
Regards
Preeti U Murthy
On 04/23/2013 01:27 AM, Vincent Guittot wrote:
> On Monday, 22 April 2013, Preeti U Murthy wrote:
>> Hi Vincent,
>>
>> On 04/05/2013 04:38 PM, Vincent Guittot wrote:
>>> Peter,
>>>
>>> Aft
Hi Alex,
I have one point below.
On 04/23/2013 07:53 AM, Alex Shi wrote:
> Thank you, Preeti and Vincent, for discussing the power aware scheduler in
> detail! I believe this open discussion is helpful in arriving at a more
> comprehensive solution. :)
>
>> Hi Preeti,
>>
>
ing.
> + *
> + * When enqueueing a newly forked task, se->avg.decay_count == 0, so
> + * we bypass update_entity_load_avg() and use the initial
> + * avg.load_avg_contrib value: se->load.weight.
>*/
> if (unlikely(se->avg.decay_count <= 0)) {
>
Hi Alex,
You can add my Reviewed-by for the below patch.
Thanks
Regards
Preeti U Murthy
On 04/04/2013 07:30 AM, Alex Shi wrote:
> The cpu's utilization is to measure how busy is the cpu.
> util = cpu_rq(cpu)->avg.runnable_avg_sum * SCHED_POWER_SCALE
>
Hi Alex,
You can add my Reviewed-by for the below patch.
Thanks
Regards
Preeti U Murthy
On 04/04/2013 07:30 AM, Alex Shi wrote:
> In power aware scheduling, we don't want to balance 'prefer_sibling'
> groups just because local group has capacity.
> If the local group has no tasks at
Hi Alex,
You might want to do the below for struct sched_entity also?
AFAIK, struct sched_entity has struct sched_avg under CONFIG_SMP.
Regards
Preeti U Murthy
On 05/06/2013 07:15 AM, Alex Shi wrote:
> The following variables were covered under CONFIG_SMP in struct cfs_rq.
> but similar ru
regression.
The below patch is a substitute for patch 7.
---
sched: Modify effective_load() to use runnable load average
From: Preeti U Murthy
The runqueue weight distribution should update the runnable load average
forth another question: should we modify wake_affine()
to pass the runnable load average of the waking task to effective_load()?
What do you think?
Thanks
Regards
Preeti U Murthy
cfs_rq
under CONFIG_SMP, how will tg->load_avg get updated? tg->load_avg is not
SMP dependent.
tg->load_avg, in turn, is used to decide the CPU shares of the sched
entities on each processor, right?
Thanks
Regards
Preeti U Murthy
ed tasks.
>> enqueue_task_fair->update_entity_load_avg() during the second
>> iteration.But __update_entity_load_avg() in update_entity_load_avg()
>>
>
> When 'enqueue_task_fair->update_entity_load_avg()' runs during the
> second iteration, the se is changed.
> That is dif
Hi Alex,
On 03/21/2013 01:13 PM, Alex Shi wrote:
> On 03/20/2013 12:57 PM, Preeti U Murthy wrote:
>> Neither core will be able to pull the task from the other to consolidate
>> the load because the rq->util of t2 and t4, on which no process is
>> running, continue to show
On 03/21/2013 02:57 PM, Alex Shi wrote:
> On 03/21/2013 04:41 PM, Preeti U Murthy wrote:
>>>>
>> Yes, I did find this behaviour on a 2 socket, 8 core machine very
>> consistently.
>>
>> rq->util cannot go to 0, after it has begun accumulating load right?
Hi,
On 03/22/2013 07:00 AM, Alex Shi wrote:
> On 03/21/2013 06:27 PM, Preeti U Murthy wrote:
>>>> did you close all of background system services?
>>>> In theory the rq->avg.runnable_avg_sum should be zero if there is no
>>>> task a bit long, otherwise t
merged into one, since both of
them have the common goal of packing small tasks.
Thanks
Regards
Preeti U Murthy
On 03/22/2013 05:55 PM, Vincent Guittot wrote:
> Hi,
>
> This patchset takes advantage of the new per-task load tracking that is
> available in the kernel for packi
On 02/19/2014 12:10 AM, Thomas Gleixner wrote:
> On Tue, 18 Feb 2014, Preeti Murthy wrote:
>
>> Hi Thomas,
>>
>> With regard to the patch: "tick: Clear broadcast pending bit when
>> switching to oneshot"
>> isn't BROADCAST_EXIT called at least after in
broadcast fails we should not be tracing either.
2. Moving the trace after the cpuidle_enter() call is wrong.
So I would suggest the patch at the end of this mail as the alternative
to this one so as to get around the patching conflict.
Thanks
Regards
Preeti U Murthy
>
> Thomas,
since you would have done BROADCAST_ENTRY, and if this call
to the broadcast framework succeeds, you will have to do a
BROADCAST_EXIT irrespective of whether the driver could put the CPU into
that idle state or not. So even if cpuidle_enter() fails, you will need to do
a clockevents_notify(CLOCK_EVT
e
> 3. reflect the idle state
>
> The cpuidle_idle_call calls these three functions to implement the main
> idle entry function.
>
> Signed-off-by: Daniel Lezcano
> Acked-by: Nicolas Pitre
> ---
>
> ChangeLog:
>
> V3:
> * moved broadcast timer outside of cpuidle_enter() a
r sharing) but it can become complex if we
> want to add more.
What if we want to add arch specific flags to the NUMA domain? Currently
with Peter's patch: https://lkml.org/lkml/2013/11/5/239 and this patch,
the arch can modify the sd flags of the topology levels till just before
the NUMA domain.
On 01/07/2014 03:20 PM, Peter Zijlstra wrote:
> On Tue, Jan 07, 2014 at 03:10:21PM +0530, Preeti U Murthy wrote:
>> What if we want to add arch specific flags to the NUMA domain? Currently
>> with Peter's patch: https://lkml.org/lkml/2013/11/5/239 and this patch,
>> the arch ca
On 01/07/2014 04:43 PM, Peter Zijlstra wrote:
> On Tue, Jan 07, 2014 at 04:09:39PM +0530, Preeti U Murthy wrote:
>> On 01/07/2014 03:20 PM, Peter Zijlstra wrote:
>>> On Tue, Jan 07, 2014 at 03:10:21PM +0530, Preeti U Murthy wrote:
>>>> What if we want to add arch spe
On 01/07/2014 06:01 PM, Vincent Guittot wrote:
> On 7 January 2014 11:39, Preeti U Murthy wrote:
>> On 01/07/2014 03:20 PM, Peter Zijlstra wrote:
>>> On Tue, Jan 07, 2014 at 03:10:21PM +0530, Preeti U Murthy wrote:
>>>> What if we want to add arch specific flags
endif
> + { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> + { NULL, },
> +};
> +
> +struct sched_domain_topology_level *sched_domain_topology = default_topology;
> +
> +#define for_each_sd_topology(tl) \
> + for (tl = sched_domain_topology; tl->mask; t
return 0*SD_ASYM_PACKING;
> -}
> -
> /*
> * Initializers for schedule domains
> * Non-inlined to reduce accumulated stack pressure in build_sched_domains()
> @@ -6018,7 +6013,6 @@ sd_init(struct sched_domain_topology_level *tl, int cpu)
> if (sd->fla
.
I don't see this flag being set either in sd_init() or in
default_topology[]. Should not the default_topology[] flag setting
routines set this flag at every level of sched domain along with other
topology flags, unless the arch wants to override it?
Regards
Preeti U Murthy
> This flag is part of
On 03/18/2014 05:14 PM, Kirill Tkhai wrote:
>
>
> 18.03.2014, 15:08, "Preeti Murthy" :
>> On Sat, Mar 15, 2014 at 3:44 AM, Kirill Tkhai wrote:
>>
>>> {inc,dec}_rt_tasks used to count entities which are directly queued
>>> on rt_rq. If an en
On 03/19/2014 03:22 PM, Vincent Guittot wrote:
> On 19 March 2014 07:21, Preeti U Murthy wrote:
>> Hi Vincent,
>>
>> On 03/18/2014 11:26 PM, Vincent Guittot wrote:
>>> A new flag SD_SHARE_POWERDOMAIN is created to reflect whether groups of CPUs
>>> i
pci_root_bus_resources(int bus, struct list_head *resources);
>
> -#ifdef CONFIG_SMP
> -#define mc_capable() ((boot_cpu_data.x86_max_cores > 1) && \
> - (cpumask_weight(cpu_core_mask(0)) != nr_cpu_ids))
> -#define smt_capable()(
Hi Daniel,
Thank you very much for the review.
On 02/11/2014 03:46 PM, Daniel Lezcano wrote:
> On 02/07/2014 09:06 AM, Preeti U Murthy wrote:
>> From: Thomas Gleixner
>>
>> On some architectures, in certain CPU deep idle states the local
>> timers stop.
>>
the patch which should fix this. This is based on top of tip-tree.
Thanks
Regards
Preeti U Murthy
-
cpuidle/pseries: Fix fallout caused due to cleanup in pseries cpuidle backend
driver
From: Preeti U Murthy
C
else
> - entered_state = cpuidle_enter_state(dev, drv, next_state);
> -
> - if (broadcast)
> - clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &dev->cpu);
> + entered_state = cpuidle_enter(drv, dev, next_state);
>
> trace_cpu_idle_rcuidle(PWR_
c.. so that we can expect the
governor and driver to make better decisions about entry into and exit from
idle states. Is this the advantage we hope for, to begin with?
Thanks
Regards
Preeti U Murthy
>
> Signed-off-by: Daniel Lezcano
> Acked-by: Nicolas Pitre
> ---
ling functions
into it would result in some confusion and add more code than it is
meant to handle. This will avoid having to add comments in the
cpuidle_idle_call() function as currently being done in Patch[5/5], to
clarify what each function is meant to do.
So IMO, Patches[1/5] and [2/5] by themselves a
Hi,
On 02/13/2014 01:15 PM, Alex Shi wrote:
> On 02/11/2014 07:11 PM, Daniel Lezcano wrote:
>> On 02/10/2014 10:24 AM, Preeti Murthy wrote:
>>> HI Daniel,
>>>
>>> Isn't the only scenario where another cpu can put an idle task on
>>> our runqueue,
>
Hi Daniel,
On 02/11/2014 05:37 PM, Daniel Lezcano wrote:
> On 02/10/2014 11:04 AM, Preeti Murthy wrote:
>> Hi Daniel,
>>
>> On Fri, Feb 7, 2014 at 4:40 AM, Daniel Lezcano
>> wrote:
>>> The idle_balance modifies the idle_stamp field of the rq, making this
Hi Nicolas,
You will have to include the below patch with yours. You
could squash the two, I guess; I have added the changelog
just for clarity. You also might want to change the subject to
cpuidle/powernv, which gives a better picture.
Thanks
Regards
Preeti U Murthy
cpuidle/powernv: Add
Hi Nicolas,
On 02/07/2014 06:47 AM, Nicolas Pitre wrote:
> On Thu, 6 Feb 2014, Preeti U Murthy wrote:
>
>> Hi Daniel,
>>
>> On 02/06/2014 09:55 PM, Daniel Lezcano wrote:
>>> Hi Nico,
>>>
>>>
>>> On 6 February 2014 14:16, Nico