On Tue, Dec 11, 2018 at 2:02 AM Mathieu Poirier
wrote:
>
> Good day Adrian,
>
> On Sat, 8 Dec 2018 at 05:05, Lei Wen wrote:
> >
> > Hi Mathieu,
> >
> > I am enabling etmv4 coresight over one Cortex-A7 soc, using 32bit kernel.
> > And I am following [1]
Hi Mathieu,
I am enabling ETMv4 CoreSight on a Cortex-A7 SoC, using a 32-bit kernel,
and I am following [1] to experiment with the addr_range feature.
The default addr_range is set to _stext~_etext, and it works fine with
the ETB as sink
and the ETM as source. I could see there are valid kernel
On Fri, Apr 4, 2014 at 3:02 PM, noman pouigt wrote:
> Hello,
>
> Probably this question belongs on the kernelnewbies
> list, but I think I will get a more accurate answer here.
>
> I am doing some optimization in kernel video driver
> code to reduce the latency from the time buffer
> is given to the
This lets timekeeping code be called very early in boot without
causing a system panic, even though the clock is not initialized yet.
Since the system's default clock is always jiffies, it is safe
to do so.
Signed-off-by: Lei Wen
---
include/linux/time.h | 1 +
init
People may want to align the kernel log with another processor
running on the same machine but not the same copy of Linux. Keeping
their logs aligned avoids making the debug process hard and
confusing.
Signed-off-by: Lei Wen
---
kernel/printk/printk.c | 4 ++--
1 file
such assumption in the old days.
So this patch set is meant to restore that behavior.
BTW, I am not sure whether we could add an additional member to the
printk log structure, so that we could print two copies of the log,
one including suspend time and one without it?
Lei Wen (3):
time: create
in the old way, get_monotonic_boottime is a good
candidate, but it cannot be called after the suspend process has begun.
Thus, it prevents printk from being usable in every corner.
Export a warning-free __get_monotonic_boottime to solve this issue.
Signed-off-by: Lei Wen
---
include/linux/time.h | 1
Hi Stephen,
On Thu, Apr 3, 2014 at 2:09 AM, Stephen Boyd wrote:
> On 04/02/14 04:02, Lei Wen wrote:
>> Since the ARM arch_timer counter keeps accumulating even in
>> low power modes, including the suspend state, it is very suitable to be
>> the persistent clock instead of RTC
Signed-off-by: Lei Wen
---
I am not sure whether it is good to add something like
generic_persistent_clock_read in the newly added kernel/time/sched_clock.c?
From the arch timer's perspective, all it needs to do is pick up
the suspend period from the place where sched_clock is stopped/restarted.
Any
On Mon, Feb 24, 2014 at 3:07 PM, Peter Zijlstra wrote:
> On Mon, Feb 24, 2014 at 10:11:05AM +0800, Lei Wen wrote:
>> How about using the cpumask_test_and_clear_cpu API?
>> Then the one line below is enough.
>
> It's more expensive.
>
I see...
No problem for me then.
Acked-by: Lei Wen
Signed-off-by: Mike Galbraith
> Signed-off-by: Peter Zijlstra
> Cc: Lei Wen
> Link: http://lkml.kernel.org/n/tip-vmme4f49psirp966pklm5...@git.kernel.org
> Signed-off-by: Thomas Gleixner
> Signed-off-by: Ingo Molnar
> ---
> kernel/sched/fair.c | 25 ++---
>
such cpu set nohz.idle_cpus_mask in the
first place.
Signed-off-by: Lei Wen
Cc: Peter Zijlstra
Cc: Mike Galbraith
---
Many thanks to Mike for pointing out that the root span would be merged
when the last cpu becomes isolated, found by checking the crash result!
kernel/sched/fair.c | 8
1 file changed, 8
Mike,
On Fri, Feb 21, 2014 at 1:51 PM, Mike Galbraith wrote:
> On Fri, 2014-02-21 at 10:23 +0800, Lei Wen wrote:
>> A cpu which is put into quiescent mode removes itself
>> from the kernel's sched_domain and wants others not to disturb its
>> running tasks. But the current schedul
it by preventing such cpu set nohz.idle_cpus_mask in the
first place.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 235cfa7..66194fc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
it by preventing such cpu set nohz.idle_cpus_mask in the
first place.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 235cfa7..bc85022 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
On Thu, Feb 20, 2014 at 4:50 PM, Peter Zijlstra pet...@infradead.org wrote:
On Thu, Feb 20, 2014 at 10:42:51AM +0800, Lei Wen wrote:
- int ilb = cpumask_first(nohz.idle_cpus_mask);
+ int ilb;
+ int cpu = smp_processor_id();
+ struct sched_domain *tmp;
- if (ilb
On Wed, Feb 19, 2014 at 5:04 PM, Peter Zijlstra wrote:
> On Wed, Feb 19, 2014 at 01:20:30PM +0800, Lei Wen wrote:
>> Since cpu which is put into quiescent mode, would remove itself
>> from kernel's sched_domain. So we could use search sched_domain
>> method to check whethe
Since a cpu which is put into quiescent mode removes itself
from the kernel's sched_domain, we can use a sched_domain search
to check whether this cpu does not want to be disturbed, as
idle load balancing would otherwise send an IPI to it.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 14
On Wed, Jan 22, 2014 at 10:07 PM, Thomas Gleixner wrote:
> On Wed, 22 Jan 2014, Lei Wen wrote:
>> Recently I wanted to experiment with cpu isolation on the 3.10 kernel,
>> but I find the isolated cpu is periodically woken up by IPI interrupts.
>>
>> By checking
Hi Thomas,
Recently I wanted to experiment with cpu isolation on the 3.10 kernel,
but I find the isolated cpu is periodically woken up by IPI interrupts.
By checking the trace, I find those IPIs are generated by add_timer_on,
which calls wake_up_nohz_cpu and wakes up the already-idle cpu.
On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker
wrote:
> On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote:
>> On 20 January 2014 19:29, Lei Wen wrote:
>> > Hi Viresh,
>>
>> Hi Lei,
>>
>> > I have one question regarding unbounded workqueue migration
Hi Viresh,
On Wed, Jan 15, 2014 at 5:27 PM, Viresh Kumar wrote:
> Hi Again,
>
> I am now successful in isolating a CPU completely using CPUsets,
> NO_HZ_FULL and CPU hotplug..
>
> My setup and requirements for those who weren't following the
> earlier mails:
>
> For networking machines it is
Hi Mike,
On Mon, Dec 30, 2013 at 12:08 PM, Mike Galbraith wrote:
> On Mon, 2013-12-30 at 11:14 +0800, Lei Wen wrote:
>> Since we would update rq clock at task enqueue/dequeue, or schedule
>> tick. If we don't update the rq clock when our previous task get
>> preempted, our n
more precise accounting of task start and duration times,
we'd better ensure the rq clock is updated when the task begins to run.
Best regards,
Lei
Signed-off-by: Lei Wen
---
kernel/sched/core.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched
On Mon, Sep 9, 2013 at 7:15 PM, Peter Zijlstra wrote:
> On Mon, Sep 02, 2013 at 02:26:45PM +0800, Lei Wen wrote:
>> Hi Peter,
>>
>> I find one list API usage may not be correct in current fair.c code.
>> In move_one_task function, it may iterate through whole cfs_ta
Hi Peter,
I find one list API usage that may not be correct in the current fair.c code.
In the move_one_task function, it may iterate through the whole cfs_tasks
list to get one task to move.
But dequeue_task() deletes a task node from the list
without the lock protection. So that we could see from
On Mon, Aug 26, 2013 at 12:36 PM, Paul Turner wrote:
> On Sun, Aug 25, 2013 at 7:56 PM, Lei Wen wrote:
>> On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra
>> wrote:
>>> From: Joonsoo Kim
>>>
>>> There is no reason to maintain separate vari
On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra wrote:
> From: Joonsoo Kim
>
> There is no reason to maintain separate variables for this_group
> and busiest_group in sd_lb_stat, except saving some space.
> But this structure is always allocated in stack, so this saving
> isn't really
Paul,
On Tue, Aug 13, 2013 at 5:25 PM, Paul Turner wrote:
> On Tue, Aug 13, 2013 at 1:18 AM, Lei Wen wrote:
>> Hi Paul,
>>
>> On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote:
>>> On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra
>>> wrote:
>>>
Signed-off-by: Lei Wen
---
kernel/sched/sched.h |6 ++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ef0a7b2..b8f0924 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -248,6 +248,12 @@ struct cfs_bandwidth
Since nr_running and h_nr_running differ in what they represent,
we should take care with their usage in the scheduler.
Lei Wen (8):
sched: change load balance number to h_nr_running of run queue
sched: change cpu_avg_load_per_task using h_nr_running
sched: change
ove.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3656603..4c96124 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5349,7 +5349,7 @@ static
Since find_busiest_queue tries to avoid load balancing a runqueue
which has only one cfs task whose load is above the calculated
imbalance value, we should use the cfs h_nr_running instead of
the rq's nr_running.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |3 ++-
1 files changed, 2
Since update_sg_lb_stats is used to calculate the sched_group load
difference for cfs-type tasks, it should use h_nr_running instead of
the rq's nr_running.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/fair.c b
Since pick_next_task_fair only wants to ensure there is some task in the
run queue to be picked up, it should use h_nr_running instead of
nr_running, since nr_running cannot represent all tasks when groups exist.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1
Since cpu_avg_load_per_task is used only by the cfs scheduler, it
should represent the average cfs-type task load in the current run queue.
Thus we change it to use h_nr_running to better reflect its meaning.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1
control
mechanism. Thus its sleep time should not be taken into account in the
runnable avg load calculation.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6b99b4..9869d4d 100644
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |8 +---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f918635..d6153c8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5096,17 +5096,19 @@ redo:
schedstat_add
Hi Paul,
On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote:
> On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra wrote:
>> On Tue, Aug 13, 2013 at 12:45:12PM +0800, Lei Wen wrote:
>>> > Not quite right; I think you need busiest->cfs.h_nr_running.
>>> > cfs
Peter,
On Mon, Aug 12, 2013 at 10:43 PM, Peter Zijlstra wrote:
> On Tue, Aug 06, 2013 at 09:23:46PM +0800, Lei Wen wrote:
>> Hi Paul,
>>
>> I notice in load_balance function, it would check busiest->nr_running
>> to decide whether to perform the real task movement.
Hi Paul,
I notice that the load_balance function checks busiest->nr_running
to decide whether to perform the real task movement.
But in some cases I saw that nr_running does not match
the tasks in the queue, which seems to make the scheduler do much
redundant checking.
What I mean is something like
Hi list,
I recently found a strange issue on the 3.4 kernel.
The scenario is running a hotplug test on an ARM platform: when the
hotplugged-out cpu1 wants to get back in again, it seems stuck at
cpu_stop_cpu_callback.
The task backtrace is as below:
PID: 21749 TASK: d194b300 CPU: 0 COMMAND:
Hi Peter,
Do you have any further suggestions for this patch? :)
Thanks,
Lei
On Tue, Jul 2, 2013 at 8:15 PM, Lei Wen wrote:
> Since we could track task in the entity level now, we may want to
> investigate tasks' running status by recording the trace info, so that
> could make so
may get confused.
Signed-off-by: Lei Wen
Cc: Alex Shi
Cc: Paul Turner
---
kernel/sched/fair.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2290469..53224d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
Since we can now track tasks at the entity level, we may want to
investigate tasks' running status by recording trace info, so that
we can do some tuning if needed.
Signed-off-by: Lei Wen
Cc: Alex Shi
Cc: Peter Zijlstra
Cc: Kamalesh Babulal
---
include/trace/events/sched.h | 76
Make trace events' parameter passing simple, and only expand
the detail in the header file definition. Thanks Peter for pointing out
this.
V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using
sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribut
passing parameter being simple, and only extend
its detail in the header file definition. Thanks Peter for pointing out
this.
V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using
sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribution!
Lei Wen (2
Paul,
On Mon, Jul 1, 2013 at 10:07 PM, Paul Turner wrote:
> Could you please restate the below?
>
> On Mon, Jul 1, 2013 at 5:33 AM, Lei Wen wrote:
>> Since we are going to calculate cfs_rq's average ratio by
>> runnable_load_avg/load.weight
>
> I don
Hi Peter,
On Mon, Jul 1, 2013 at 8:44 PM, Peter Zijlstra wrote:
> On Mon, Jul 01, 2013 at 08:33:21PM +0800, Lei Wen wrote:
>> Since we could track task in the entity level now, we may want to
>> investigate tasks' running status by recording the trace info, so that
>> c
Since we are going to calculate a cfs_rq's average ratio as
runnable_load_avg/load.weight, not increasing load.weight prior to
enqueue_entity_load_avg may lead to a cfs_rq's average ratio higher
than 100%.
Adjust the sequence so that all ratios are kept below 100%.
Signed-off-by: Lei Wen
load distribution status in the whole system
V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using
sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribution!
Lei Wen (2):
sched: add trace events for task and rq usage tracking
sched: update cfs_r
Since we can now track tasks at the entity level, we may want to
investigate tasks' running status by recording trace info, so that
we can do some tuning if needed.
Signed-off-by: Lei Wen
---
include/trace/events/sched.h | 57 ++
kernel/sched
Hi Kamalesh,
On Mon, Jul 1, 2013 at 5:43 PM, Kamalesh Babulal
wrote:
> * Lei Wen [2013-07-01 15:10:32]:
>
>> Since we could track task in the entity level now, we may want to
>> investigate tasks' running status by recording the trace info, so that
>> could ma
Alex,
On Mon, Jul 1, 2013 at 4:06 PM, Alex Shi wrote:
> On 07/01/2013 03:10 PM, Lei Wen wrote:
>> Thanks for the per-entity tracking feature, we could know the details of
>> each task by its help.
>> This patch add its trace support, so that we could quickly know the system
->runnable_load_avg/cfs_rq->load.weight
Lei Wen (2):
sched: add trace events for task and rq usage tracking
sched: update cfs_rq weight earlier in enqueue_entity
include/trace/events/sched.h | 73 ++
kernel/sched/fair.c | 31 --
2