for load-balancing. Latency
would be affected as mentioned earlier.
Exactly. idle_time == spare_cpu_cycles == less cpu_utilization. I hope I
am not wrong in drawing this equivalence. If that's the case, then the same
explanation as above holds good here too.
Morten
Thank you
Regards
Preeti
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
SCHED_GRP2:4096
Load calculator would probably qualify SCHED_GRP1 as the candidate
for sd-busiest due to the following loads that it calculates
SCHED_GRP1:3200
SCHED_GRP2:1156
Regards
Preeti
loaded
sched groups does not mean too few tasks.
Thank you
Regards
Preeti
within a small range try to
handle the existing load.
Regards
Preeti
taken care of the above build scenario.
hastily and failed miserably as you have noticed and then I build-tested a
wrong tree. Sorry.
It should be fixed now for real.
Thanks,
Rafael
Regards
Preeti
usiest due to the following loads
> that it calculates.
>
> SCHED_GRP1:2048
> SCHED_GRP2:4096
>
> Load calculator would probably qualify SCHED_GRP1 as the candidate
> for sd->busiest due to the following loads that it calculates
>
> SCHED_GRP1:3200
> SCHED_GRP2:1156
>
t review and build
>> testing this went through (the above should produce warnings since they
>> are non void returning functions with no return statements).
>
> Thanks for reporting this, I tried to fix a build issue in the original patch
I apologise for not having tak
ad-balancing. Latency
> would be affected as mentioned earlier.
>
Exactly. idle_time == spare_cpu_cycles == less cpu_utilization. I hope I
am not wrong in drawing this equivalence. If that's the case, then the same
explanation as above holds good here too.
>
> Morten
Thank you
Regard
last
> schedule period would be a good candidate for load-balancing. Latency
> would be affected as mentioned earlier.
>
Exactly. idle_time == spare_cpu_cycles == less cpu_utilization. I hope I
am not wrong in drawing this equivalence. If that's the case, then the same
explanation as above holds good
loaded
sched groups does not mean too few tasks.
Thank you
Regards
Preeti
pus within a small range try to
handle the existing load.
Regards
Preeti
idle towards nearly busy groups,but by using PJT's metric to
make the decision.
What do you think?
Regards
Preeti U Murthy
On Tue, Nov 6, 2012 at 6:39 PM, Alex Shi alex@intel.com wrote:
This patch enabled the power aware consideration in load balance.
As mentioned in the power aware scheduler
fprintf(stderr, "Error joining thread %d\n", i);
exit(1);
}
}
printf("%u records/s\n",
(unsigned int) (((double) records_read)/diff_time));
}
int main()
{
start_threads();
return 0;
}
Regards
Preeti U Murthy
if it is 1.
Thank you
Regards
Preeti U Murthy
On Thu, Aug 23, 2012 at 7:44 PM, p...@google.com wrote:
From: Ben Segall bseg...@google.com
Since runqueues do not have a corresponding sched_entity we instead embed a
sched_avg structure directly.
Signed-off-by: Ben Segall bseg...@google.com
Hi Alex
I apologise for the delay in replying.
On Wed, Nov 7, 2012 at 6:57 PM, Alex Shi alex@intel.com wrote:
On 11/07/2012 12:37 PM, Preeti Murthy wrote:
Hi Alex,
What I am concerned about in this patchset as Peter also
mentioned in the previous discussion of your approach
(https
opine about this issue if possible and needed.
Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com
Regards
Preeti U Murthy
.
Regards
Preeti U Murthy
There is no problem here, but now we want to count number of all tasks
which are actually queued under the rt_rq in all the hierarchy (except
throttled rt queues).
Empty queues are not able to be queued and all of the places, which
use rt_nr_running, just compare
the difference in the loads of the wake affine
CPU and the
prev_cpu can get messed up.
Thanks
Regards
Preeti U Murthy
task_numa_compare since commit fb13c7ee (sched/numa: Use a system-wide
search to find swap/migration candidates), this patch simply restores the
historical behaviour.
[mgor
mask is cleared
and hence should not trigger a WARN_ON().
Thanks
Regards
Preeti U Murthy
On Sun, Feb 16, 2014 at 12:51 AM, Thomas Gleixner t...@linutronix.de wrote:
Linus,
please pull the latest timers-urgent-for-linus git tree from:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
Hi Yijing,
For the powerpc part:
Acked-by: Preeti U Murthy pre...@linux.vnet.ibm.com
On Mon, Feb 10, 2014 at 7:28 AM, Yijing Wang wangyij...@huawei.com wrote:
Currently, clocksource_register() and __clocksource_register_scale()
functions always return 0, it's pointless, make functions void
right?
Any other case would trigger load balancing on the same cpu, but
we are preempt_disabled and interrupt disabled at this point.
Thanks
Regards
Preeti U Murthy
On Fri, Feb 7, 2014 at 4:40 AM, Daniel Lezcano
daniel.lezc...@linaro.org wrote:
The scheduler main function 'schedule()' checks
of idle_balance()
as idle time?
Thanks
Regards
Preeti U Murthy
On Sat, Mar 15, 2014 at 3:45 AM, Kirill Tkhai tk...@yandex.ru wrote:
This reverts commit 4c6c4e38c4e9 [sched/core: Fix endless loop in
pick_next_task()], which is not necessary after [sched/rt: Substract number
> of tasks of throttled queues from rq->nr_running]
Reviewed-by: Preeti U Murthy pre
Hi Nicolas,
You might want to change the subject.
s/sched: remove remaining power to the CPU/
sched: remove remaining usage of cpu *power* .
The subject has to explicitly specify in some way
that it is a change made to the terminology.
Regards
Preeti U Murthy
On Thu, May 15, 2014 at 2:27 AM
of the old cpu's clock_task right?
Will not setting exec_start to the clock_task of the destination rq
during migration be better? This would be the closest we could
come to estimating the amount of time the task has run on this new
cpu while deciding task_hot or not no?
Regards
Preeti U Murthy
.
Having said the above, the fix that Viresh has proposed along with the nohz_full
condition that Frederick added looks to solve this problem.
But just a thought on if there is scope to improve this part of the
cpufreq code.
What do you all think?
Thanks
Regards
Preeti U Murthy
I think below diff
be *non-deferrable*
timers in the list
s/non-deferrable/deferrable.
Thanks
Regards
Preeti U Murthy
On Thu, Jan 30, 2014 at 5:09 AM, Paul E. McKenney
paul...@linux.vnet.ibm.com wrote:
Hello, Ingo,
This pull request contains latency bandaids^Woptimizations to the
timer-wheel code that are useful
exit_latency or target_residency is present for the scheduler. The idle
state index alone will not be sufficient.
Thanks
Regards
Preeti U Murthy
Also, we should probably create a pretty function to get that state,
just like you did in patch 1.
Yes, right.
IIRC, Alex Shi sent a patchset
for my understanding.
Thanks!
Regards
Preeti U Murthy
On 3/24/14, Hidetoshi Seto seto.hideto...@jp.fujitsu.com wrote:
snip
+ * Known bug: Return value is not monotonic in case if @last_update_time
+ * is NULL and therefore update is not performed. Because it includes
+ * cputime which
a given duration will be of use.
Having said that, a tool that gives the running power efficiency
image of my system would be more useful in the long run.
Regards
Preeti U Murthy
On Tue, Mar 25, 2014 at 1:35 AM, Zoran Markovic
zoran.marko...@linaro.org wrote:
Conclusions from Energy Aware Scheduling
().
Regards
Preeti U Murthy
--
All rights reversed
it to reflect the right cpu load average?
Regards
Preeti U Murthy
it be:
if (time_after(jiffies, this_rq->next_balance) ||
time_after(this_rq->next_balance, next_balance))
this_rq->next_balance = next_balance;
Besides this:
Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com
Regards
Preeti U Murthy
On Sat, Apr 26, 2014 at 1:24 AM, Jason Low jason.l
Preeti U Murthy
When hres_active isn't set, we run hrtimer handlers from timer
handlers, which means that timers would be sufficient in finding
the next event and we don't need to check for hrtimers.
But when hres_active is set, hrtimers could be set to fire before
the next timer event
the tick_sched_timer
dies along with the hotplugged out CPU since there is no need for it any more.
Regards
Preeti U Murthy
Hi Kirill,
Which tree is this patch based on? __migrate_task() does a
double_rq_lock/unlock() today in mainline, doesn't it? I don't
however see that in your patch.
Regards
Preeti U Murthy
On Fri, Sep 12, 2014 at 4:33 PM, Kirill Tkhai ktk...@parallels.com wrote:
If a task is queued
.
If not, you can fall through to the regular path of calling into the
cpuidle driver.
The scheduler can query the cpuidle_driver structure anyway.
What do you think?
Regards
Preeti U Murthy
constraint when choosing an idle state
You might want to include this change in the previous patch itself.
+ * @next_timer_event: the duration until the timer expires
*
* Returns the index of the idle state.
*/
Regards
Preeti U Murthy
;
Why is the last_state_idx not getting updated ?
Regards
Preeti U Murthy
Hi Thomas,
On Tue, Dec 16, 2014 at 6:19 PM, Thomas Gleixner t...@linutronix.de wrote:
On Tue, 16 Dec 2014, Preeti U Murthy wrote:
As far as I can see, the primary purpose of tick_nohz_irq_enter()/exit()
paths was to take care of *tick stopped* cases.
Before handling interrupts we would want
),
+ TP_CONDITION(cpu_online(smp_processor_id())),
+
TP_STRUCT__entry(
__field(unsigned long, pfn )
__field(unsigned int, order )
--
Reviewed-by: Preeti U Murthy pr...@linux.vnet.ibm.com
Regards
Preeti U Murthy
1.9.3
(mm_page_free,
Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com
Regards
Preeti U Murthy
--
1.9.3
;
+ __entry->migratetype = migratetype;
+ ),
+
What was the need to do the above changes besides adding TP_CONDITION ?
Regards
Preeti U Murthy
On Wed, Apr 29, 2015 at 2:36 PM, Preeti Murthy preeti.l...@gmail.com wrote:
Ccing Paul,
On Tue, Apr 28, 2015 at 9:21 PM, Shreyas B. Prabhu
shre...@linux.vnet.ibm.com wrote:
Since tracepoints use RCU for protection, they must not be called on
offline cpus. trace_mm_page_free can be called
a DEFINE_EVENT_PRINT_CONDITION(), we can modify that code to use
it.
Okay, sure.
Looks good then.
Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com
Thanks,
Shreyas
CAST_ENTER again, the
pending mask is cleared
and hence should not trigger a WARN_ON().
Thanks
Regards
Preeti U Murthy
On Sun, Feb 16, 2014 at 12:51 AM, Thomas Gleixner wrote:
> Linus,
>
> please pull the latest timers-urgent-for-linus git tree from:
>
>git://git.kernel.org/pub/s
for every rq in the hierarchy. But you would
never dequeue a sched_entity if it has more than 1 task in it. The
granularity of enqueue and dequeue of sched_entities is one task
at a time. You can extend this to enqueue and dequeue of a sched_entity
only if it has just one task in its queue.
Regards
On Sat, Mar 15, 2014 at 3:45 AM, Kirill Tkhai wrote:
> This reverts commit 4c6c4e38c4e9 [sched/core: Fix endless loop in
> pick_next_task()], which is not necessary after [sched/rt: Substract number
> of tasks of throttled queues from rq->nr_running]
Reviewed-by: Preeti U Murthy
Hi Yijing,
For the powerpc part:
Acked-by: Preeti U Murthy
On Mon, Feb 10, 2014 at 7:28 AM, Yijing Wang wrote:
> Currently, clocksource_register() and __clocksource_register_scale()
> functions always return 0, it's pointless, make functions void.
> And remove the dead code t
right?
Any other case would trigger load balancing on the same cpu, but
we are preempt_disabled and interrupt disabled at this point.
Thanks
Regards
Preeti U Murthy
On Fri, Feb 7, 2014 at 4:40 AM, Daniel Lezcano
wrote:
> The scheduler main function 'schedule()' checks if there are no more ta
idle
> time.
Should not this be "such that we *do not* measure the duration of idle_balance()
as idle time?"
Thanks
Regards
Preeti U Murthy
below restored check will be relevant.
Without the below check the difference in the loads of the wake affine
CPU and the
prev_cpu can get messed up.
Thanks
Regards
Preeti U Murthy
> task_numa_compare since commit fb13c7ee (sched/numa: Use a system-wide
> search to find swap/migrati
hich does not seem like
the right thing to do.
Having said the above, the fix that Viresh has proposed along with the nohz_full
condition that Frederick added looks to solve this problem.
But just a thought on if there is scope to improve this part of the
cpufreq code.
What do you all think?
Thanks
ght well be *non-deferrable*
timers in the list"
s/non-deferrable/deferrable.
Thanks
Regards
Preeti U Murthy
On Thu, Jan 30, 2014 at 5:09 AM, Paul E. McKenney
wrote:
> Hello, Ingo,
>
> This pull request contains latency bandaids^Woptimizations to the
> timer-wheel code that
tes in the higher indexed
states although it should have halted if the idle states' were ordered according
to their target residency.. The same holds for exit_latency.
Hence I think this patch would make sense only with additional information
like exit_latency or target_residency is present f
ate it to reflect the right cpu load average?
Regards
Preeti U Murthy
ust for my understanding.
Thanks!
Regards
Preeti U Murthy
On 3/24/14, Hidetoshi Seto wrote:
> + * Known bug: Return value is not monotonic in case if @last_update_time
> + * is NULL and therefore update is not performed. Because it includes
> + * cputime which is not determined idle
a given duration will be of use.
Having said that, a tool that gives the running power efficiency
image of my system would be more useful in the long run.
Regards
Preeti U Murthy
On Tue, Mar 25, 2014 at 1:35 AM, Zoran Markovic
wrote:
> Conclusions from Energy Aware Scheduling sessions at the lat
Hi Nicolas,
You might want to change the subject.
s/sched: remove remaining power to the CPU/
sched: remove remaining usage of cpu *power* .
The subject has to explicitly specify in some way
that it is a change made to the terminology.
Regards
Preeti U Murthy
On Thu, May 15, 2014 at 2:27 AM
gration be better? This would be the closest we could
come to estimating the amount of time the task has run on this new
cpu while deciding task_hot or not no?
Regards
Preeti U Murthy
>
> Thanks!
i);
exit(1);
}
}
printf("%u records/s\n",
(unsigned int) (((double) records_read)/diff_time));
}
int main()
{
start_threads();
return 0;
}
Regards
Preeti U Murthy
any differently if it is >1.
Thank you
Regards
Preeti U Murthy
On Thu, Aug 23, 2012 at 7:44 PM, wrote:
> From: Ben Segall
>
> Since runqueues do not have a corresponding sched_entity we instead embed a
> sched_avg structure directly.
>
> Signed-off-by: Be
idle towards nearly busy groups,but by using PJT's metric to
make the decision.
What do you think?
Regards
Preeti U Murthy
On Tue, Nov 6, 2012 at 6:39 PM, Alex Shi wrote:
> This patch enabled the power aware consideration in load balance.
>
> As mentioned in the power aware scheduler propos
Hi Alex
I apologise for the delay in replying.
On Wed, Nov 7, 2012 at 6:57 PM, Alex Shi wrote:
> On 11/07/2012 12:37 PM, Preeti Murthy wrote:
>> Hi Alex,
>>
>> What I am concerned about in this patchset as Peter also
>> mentioned in the previous discussion of your ap
can opine about this issue if possible and needed.
Reviewed-by: Preeti U Murthy
Regards
Preeti U Murthy
n to your check on the
latency_req == 0.
If not, you can fall through to the regular path of calling into the
cpuidle driver.
The scheduler can query the cpuidle_driver structure anyway.
What do you think?
Regards
Preeti U Murthy
ou might want to include this change in the previous patch itself.
> + * @next_timer_event: the duration until the timer expires
> *
> * Returns the index of the idle state.
> */
Regards
Preeti U Murthy
data->last_state_idx = index;
> - if (index >= 0)
> - data->needs_update = 1;
> + data->needs_update = 1;
Why is the last_state_idx not getting updated ?
Regards
Preeti U Murthy
Hi Thomas,
On Tue, Dec 16, 2014 at 6:19 PM, Thomas Gleixner wrote:
> On Tue, 16 Dec 2014, Preeti U Murthy wrote:
>> As far as I can see, the primary purpose of tick_nohz_irq_enter()/exit()
>> paths was to take care of *tick stopped* cases.
>>
>> Before handling inter
Hi Kirill,
Which tree is this patch based on? __migrate_task() does a
double_rq_lock/unlock() today in mainline, doesn't it? I don't
however see that in your patch.
Regards
Preeti U Murthy
On Fri, Sep 12, 2014 at 4:33 PM, Kirill Tkhai wrote:
>
> If a task is queued but not running on it
_ARGS(call_site, ptr)
> + TP_ARGS(call_site, ptr),
> +
> + TP_CONDITION(cpu_online(smp_processor_id()))
> );
>
> TRACE_EVENT(mm_page_free,
Reviewed-by: Preeti U Murthy
Regards
Preeti U Murthy
> --
> 1.9.3
>
VENT(mm_page_free,
> +TRACE_EVENT_CONDITION(mm_page_free,
>
> TP_PROTO(struct page *page, unsigned int order),
>
> TP_ARGS(page, order),
>
> + TP_CONDITION(cpu_online(smp_processor_id())),
> +
> TP_STRUCT__entry(
> __field
On Wed, Apr 29, 2015 at 2:36 PM, Preeti Murthy wrote:
> Ccing Paul,
>
> On Tue, Apr 28, 2015 at 9:21 PM, Shreyas B. Prabhu
> wrote:
>> Since tracepoints use RCU for protection, they must not be called on
>> offline cpus. trace_mm_page_free can be called on an offline
int,migratetype )
> + ),
> +
> + TP_fast_assign(
> + __entry->pfn= page ? page_to_pfn(page) : -1UL;
> + __entry->order = order;
> + __entry->migratetype= migratetype;
> +
do this. Push the current changes as is, and when I get around to
>> adding a DEFINE_EVENT_PRINT_CONDITION(), we can modify that code to use
>> it.
>>
> Okay, sure.
Looks good then.
Reviewed-by: Preeti U Murthy
>
> Thanks,
> Shreyas
>
st since its a part and parcel of the timer wheel
events.
Regards
Preeti U Murthy
>
> When hres_active isn't set, we run hrtimer handlers from timer
> handlers, which means that timers would be sufficient in finding
> the next event and we don't need to check for hrtimers.
>
> But
So shouldn't it be:
if (time_after(jiffies, this_rq->next_balance) ||
time_after(this_rq->next_balance, next_balance))
this_rq->next_balance = next_balance;
Besides this:
Reviewed-by: Preeti U Murthy
Regards
Preeti U Murthy
On Sat, Apr 26, 2014 at 1:24 AM, Jason Low
p on a different NUMA node.
Looks to me that the problem lies here and not in the wake_affine()
and select_idle_siblings().
Regards
Preeti U Murthy
>
> --
> All rights reversed
the tick_sched_timer
dies along with the hotplugged out CPU since there is no need for it any more.
Regards
Preeti U Murthy
and comments on the idea and the changes
in the patch.
Signed-off-by: Preeti Nagar
---
include/asm-generic/vmlinux.lds.h | 10 ++
include/linux/init.h | 4
security/Kconfig | 10 ++
security/selinux/hooks.c | 4
4 files changed, 28
plan to
move more security-related kernel assets to this page to enhance
protection.
Signed-off-by: Preeti Nagar
---
The RFC patch reviewed available at:
https://lore.kernel.org/linux-security-module/1610099389-28329-1-git-send-email-pna...@codeaurora.org/
---
include/asm-generic/vmlinux.lds.h | 10
On 10/29/2012 11:08 PM, Benjamin Segall wrote:
Preeti Murthy preeti.l...@gmail.com writes:
Hi Paul, Ben,
A few queries regarding this patch:
1.What exactly is the significance of introducing sched_avg structure
for a runqueue? If I have
understood correctly, sched_avg keeps track
in find_busiest_group.
---
Preeti U Murthy (2):
sched:Prevent movement of short running tasks during load balancing
sched:Pick the apt busy sched group during load balancing
kernel/sched/fair.c | 38 +++---
1 file changed, 35 insertions(+), 3 deletions
call should be taken if the tasks can afford to be throttled.
This is why an additional metric has been included,which can determine how
long we can tolerate tasks not being moved even if the load is low.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 16
between the loads of the
group and the number of tasks running on the group to decide the
busiest group in the sched_domain.
This means we will need to use the PJT's metrics but with an
additional constraint.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 22
-
Average_number_of_migrations 0 46
Average_number_of_records/s 9,71,114 9,45,158
With more memory intensive workloads, a higher difference in the number of
migrations is seen without any performance compromise.
---
Preeti U Murthy (13):
sched:Prevent movement
Additional parameters for deciding a sched group's imbalance status
which are calculated using the per entity load tracking are used.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 22 --
1 file changed, 20 insertions(+), 2 deletions
Additional parameters which decide the busiest cpu in the chosen sched group
calculated using PJT's metric are used
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/kernel/sched
Modify certain decisions in load_balance to use the imbalance
amount as calculated by the PJT's metric.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c |5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched
Additional parameters introduced to perform this function which are
calculated using PJT's metrics and its helpers.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/sched
between the loads of the
group and the number of tasks running on the group to decide the
busiest group in the sched_domain.
This means we will need to use the PJT's metrics but with an
additional constraint.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 25
Additional parameters which aid in taking the decisions in
fix_small_imbalance which are calculated using PJT's metric are used.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 54 +++
1 file changed, 33
Make appropriate modifications in check_asym_packing to reflect PJT's
metric.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c |2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 68a6b1d..3b18f5f 100644
Additional parameters introduced to perform this function which are
calculated using PJT's metrics and its helpers.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c |8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c
Additional parameters introduced to perform this function which are
calculated using PJT's metrics and its helpers.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 34 +++---
1 file changed, 15 insertions(+), 19 deletions(-)
diff
Make decisions based on PJT's metrics and the dependent metrics
about which tasks to move to reduce the imbalance.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/sched
group is capable of pulling tasks upon
itself.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 33 +
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aafa3c1..67a916d
Additional parameters which decide the amount of imbalance in the sched domain
calculated using PJT's metric are used.
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 36 +++-
1 file changed, 23 insertions(+), 13 deletions