Re: Coresight etmv4 enable over 32bit kernel

2018-12-11 Thread Lei Wen
On Tue, Dec 11, 2018 at 2:02 AM Mathieu Poirier wrote: > > Good day Adrian, > > On Sat, 8 Dec 2018 at 05:05, Lei Wen wrote: > > > > Hi Mathieu, > > > > I am enabling ETMv4 CoreSight on a Cortex-A7 SoC running a 32-bit kernel, > > and I am following [1]

Coresight etmv4 enable over 32bit kernel

2018-12-08 Thread Lei Wen
Hi Mathieu, I am enabling ETMv4 CoreSight on a Cortex-A7 SoC running a 32-bit kernel, and I am following [1] to experiment with the addr_range feature. The default addr_range is set to _stext~_etext, and it works fine with the ETB as sink and the ETM as source. I could see there are valid kernel

Re: Query for time tracking between userspace and kernelspace

2014-04-04 Thread Lei Wen
On Fri, Apr 4, 2014 at 3:02 PM, noman pouigt wrote: > Hello, > > Probably this question belongs to the kernelnewbies > list, but I think I will get a more accurate answer from here. > > I am doing some optimization in kernel video driver > code to reduce the latency from the time a buffer > is given to the

Re: Query for time tracking between userspace and kernelspace

2014-04-04 Thread Lei Wen
On Fri, Apr 4, 2014 at 3:02 PM, noman pouigt varik...@gmail.com wrote: Hello, Probably this question belongs to the kernelnewbies list, but I think I will get a more accurate answer from here. I am doing some optimization in kernel video driver code to reduce the latency from the time a buffer is given

[PATCH 2/3] timekeeping: move clocksource init to the early place

2014-04-03 Thread Lei Wen
This lets timekeeping code be called very early in boot without causing a system panic before the clock is initialized. Since the system's default clock is always jiffies, it should be safe to do so. Signed-off-by: Lei Wen --- include/linux/time.h | 1 + init

[PATCH 3/3] printk: using booting time as the timestamp

2014-04-03 Thread Lei Wen
People may want to align the kernel log with another processor running on the same machine but not the same copy of Linux. Keeping the two logs aligned avoids making the debug process hard and confusing. Signed-off-by: Lei Wen --- kernel/printk/printk.c | 4 ++-- 1 file
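The preview cuts off before the diff. As a rough sketch of the idea only (not the posted 4-line change, and assuming the WARN-less accessor proposed in patch 1/3 plus ~3.14 printk.c naming), the timestamp taken in log_store() would switch from the local scheduler clock to boot time, which keeps counting across suspend:

	/* sketch only -- kernel/printk/printk.c, names assumed from ~3.14 sources */
	#include <linux/types.h>
	#include <linux/time.h>

	static u64 printk_boottime_ns(void)
	{
		struct timespec ts;

		__get_monotonic_boottime(&ts);	/* WARN-less variant from patch 1/3 */
		return timespec_to_ns(&ts);
	}

	/* in log_store():
	 *	-	msg->ts_nsec = local_clock();
	 *	+	msg->ts_nsec = printk_boottime_ns();
	 */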

[PATCH 0/3] switch printk timestamp to use booting time

2014-04-03 Thread Lei Wen
such assumption in the old days, so this patch set is supposed to restore that behavior. BTW, I am not sure whether we could add an additional member to the printk log structure, so that we could print out two versions of the log, one including suspend time and one not? Lei Wen (3): time: create

[PATCH 1/3] time: create __get_monotonic_boottime for WARNless calls

2014-04-03 Thread Lei Wen
in the old way, get_monotonic_boottime is a good candidate, but it cannot be called once the suspend process has begun, which prevents printk from being used in every corner. Export a WARN-less __get_monotonic_boottime to solve this issue. Signed-off-by: Lei Wen --- include/linux/time.h | 1
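A hedged sketch of what the split described here might look like (the placement of the WARN and the exact body are assumptions, not the posted hunk): keep the existing reader in a new WARN-less entry point and let the old one wrap it, so printk can take timestamps even while a suspend is in flight:

	/* include/linux/time.h -- sketch of the proposed declarations */
	extern void get_monotonic_boottime(struct timespec *ts);	/* existing, warns */
	extern void __get_monotonic_boottime(struct timespec *ts);	/* proposed, no warn */

	/* kernel/time/timekeeping.c -- the old entry point becomes a thin wrapper,
	 * with the current body moved verbatim into __get_monotonic_boottime() */
	void get_monotonic_boottime(struct timespec *ts)
	{
		WARN_ON(timekeeping_suspended);
		__get_monotonic_boottime(ts);
	}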

[PATCH 1/3] time: create __get_monotonic_boottime for WARNless calls

2014-04-03 Thread Lei Wen
in the old way, get_monotonic_boottime is a good candidate, but it cannot be called once the suspend process has begun, which prevents printk from being used in every corner. Export a WARN-less __get_monotonic_boottime to solve this issue. Signed-off-by: Lei Wen lei...@marvell.com --- include/linux

[PATCH 2/3] timekeeping: move clocksource init to the early place

2014-04-03 Thread Lei Wen
This lets timekeeping code be called very early in boot without causing a system panic before the clock is initialized. Since the system's default clock is always jiffies, it should be safe to do so. Signed-off-by: Lei Wen lei...@marvell.com --- include/linux/time.h

[PATCH 3/3] printk: using booting time as the timestamp

2014-04-03 Thread Lei Wen
People may want to align the kernel log with another processor running on the same machine but not the same copy of Linux. Keeping the two logs aligned avoids making the debug process hard and confusing. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/printk/printk.c

Re: [PATCH] clocksource: register persistent clock for arm arch_timer

2014-04-02 Thread Lei Wen
Hi Stephen, On Thu, Apr 3, 2014 at 2:09 AM, Stephen Boyd wrote: > On 04/02/14 04:02, Lei Wen wrote: >> Since the ARM arch_timer counter keeps accumulating even in >> low power modes, including the suspend state, it is very suitable to be >> the persistent clock instea

[PATCH] clocksource: register persistent clock for arm arch_timer

2014-04-02 Thread Lei Wen
Signed-off-by: Lei Wen --- I am not sure whether it would be good to add something like generic_persistent_clock_read in the newly added kernel/time/sched_clock.c, since from the arch timer's perspective, all it needs to do is pick up the suspend period from the place where sched_clock is stopped/restarted. Any
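For context, a sketch of one way such a registration could look on 32-bit ARM. This is illustrative only: it assumes the arch/arm register_persistent_clock() hook and the arch timer counter accessors, the pc_mult/pc_shift handling and the init hook are invented here, and whether a generic helper belongs in kernel/time/sched_clock.c is exactly the open question above:

	#include <linux/clocksource.h>
	#include <linux/time.h>
	#include <clocksource/arm_arch_timer.h>
	#include <asm/arch_timer.h>
	#include <asm/mach/time.h>

	static u32 pc_mult, pc_shift;	/* assumed precomputed from the timer rate */

	static void arch_timer_read_persistent(struct timespec *ts)
	{
		/* CNTVCT keeps counting through suspend, so it can supply the
		 * time that passed while the kernel clocksource was stopped. */
		u64 ns = clocksource_cyc2ns(arch_counter_get_cntvct(),
					    pc_mult, pc_shift);

		*ts = ns_to_timespec(ns);
	}

	static int __init arch_timer_persistent_init(void)
	{
		clocks_calc_mult_shift(&pc_mult, &pc_shift,
				       arch_timer_get_rate(), NSEC_PER_SEC, 3600);
		return register_persistent_clock(NULL, arch_timer_read_persistent);
	}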

[PATCH] clocksource: register persistent clock for arm arch_timer

2014-04-02 Thread Lei Wen
Signed-off-by: Lei Wen lei...@marvell.com --- I am not sure whether it would be good to add something like generic_persistent_clock_read in the newly added kernel/time/sched_clock.c, since from the arch timer's perspective, all it needs to do is pick up the suspend period from the place where sched_clock is

Re: [PATCH] clocksource: register persistent clock for arm arch_timer

2014-04-02 Thread Lei Wen
Hi Stephen, On Thu, Apr 3, 2014 at 2:09 AM, Stephen Boyd sb...@codeaurora.org wrote: On 04/02/14 04:02, Lei Wen wrote: Since the ARM arch_timer counter keeps accumulating even in low power modes, including the suspend state, it is very suitable to be the persistent clock instead of the RTC

Re: [tip:sched/core] sched, nohz: Exclude isolated cores from load balancing

2014-02-23 Thread Lei Wen
On Mon, Feb 24, 2014 at 3:07 PM, Peter Zijlstra wrote: > On Mon, Feb 24, 2014 at 10:11:05AM +0800, Lei Wen wrote: >> How about using the API cpumask_test_and_clear_cpu? >> Then the one line below is enough. > > It's more expensive. > I see... No problem for me then. Acked-b

Re: [tip:sched/core] sched, nohz: Exclude isolated cores from load balancing

2014-02-23 Thread Lei Wen
Signed-off-by: Mike Galbraith > Signed-off-by: Peter Zijlstra > Cc: Lei Wen > Link: http://lkml.kernel.org/n/tip-vmme4f49psirp966pklm5...@git.kernel.org > Signed-off-by: Thomas Gleixner > Signed-off-by: Ingo Molnar > --- > kernel/sched/fair.c | 25 ++--- >

Re: [tip:sched/core] sched, nohz: Exclude isolated cores from load balancing

2014-02-23 Thread Lei Wen
Signed-off-by: Mike Galbraith mgalbra...@suse.de Signed-off-by: Peter Zijlstra pet...@infradead.org Cc: Lei Wen lei...@marvell.com Link: http://lkml.kernel.org/n/tip-vmme4f49psirp966pklm5...@git.kernel.org Signed-off-by: Thomas Gleixner t...@linutronix.de Signed-off-by: Ingo Molnar mi...@kernel.org

Re: [tip:sched/core] sched, nohz: Exclude isolated cores from load balancing

2014-02-23 Thread Lei Wen
On Mon, Feb 24, 2014 at 3:07 PM, Peter Zijlstra pet...@infradead.org wrote: On Mon, Feb 24, 2014 at 10:11:05AM +0800, Lei Wen wrote: How about using the API cpumask_test_and_clear_cpu? Then the one line below is enough. It's more expensive. I see... No problem for me then. Acked-by: Lei Wen
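The cost difference referred to here is that cpumask_test_and_clear_cpu() is an atomic read-modify-write (test_and_clear_bit) on every call, while the open-coded form only pays for the atomic clear when the bit is actually set. A generic illustration of the two forms (not the tip commit itself):

	#include <linux/cpumask.h>

	/* open-coded: plain read first, atomic clear_bit() only when needed */
	static void clear_if_set(int cpu, struct cpumask *mask)
	{
		if (cpumask_test_cpu(cpu, mask))
			cpumask_clear_cpu(cpu, mask);
	}

	/* one-liner: always a locked test_and_clear_bit(), even when the bit
	 * is already clear -- this is the "more expensive" part */
	static void clear_one_liner(int cpu, struct cpumask *mask)
	{
		cpumask_test_and_clear_cpu(cpu, mask);
	}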

[PATCH v3] sched: keep quiescent cpu out of idle balance loop

2014-02-21 Thread Lei Wen
such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen Cc: Peter Zijlstra Cc: Mike Galbraith --- Many thanks to Mike for pointing out, from checking the crash result, that the root span would be merged when the last cpu becomes isolated! kernel/sched/fair.c | 8 1 file changed, 8
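The approach described (keep an isolated cpu from ever entering nohz.idle_cpus_mask, so the idle-load-balance kick IPI can never target it) would sit in nohz_balance_enter_idle(). The sketch below is illustrative only, using the fair.c helpers of that era rather than the posted 8-line diff; note that on_null_domain() takes a cpu number in some kernel versions and an rq in others:

	/* kernel/sched/fair.c -- sketch, not the posted patch */
	static void nohz_balance_enter_idle(int cpu)
	{
		if (!cpu_active(cpu))
			return;

		if (test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))
			return;

		/*
		 * A quiescent cpu has been detached from the scheduler domains
		 * (NULL rq->sd); keep it out of nohz.idle_cpus_mask so the ilb
		 * never selects it and never kicks it with an IPI.
		 */
		if (on_null_domain(cpu_rq(cpu)))
			return;

		cpumask_set_cpu(cpu, nohz.idle_cpus_mask);
		atomic_inc(&nohz.nr_cpus);
		set_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu));
	}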

[PATCH v3] sched: keep quiescent cpu out of idle balance loop

2014-02-21 Thread Lei Wen
such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen lei...@marvell.com Cc: Peter Zijlstra pet...@infradead.org Cc: Mike Galbraith bitbuc...@online.de --- Many thanks to Mike for pointing out, from checking the crash result, that the root span would be merged when the last cpu becomes isolated

Re: [PATCH v2] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
Mike, On Fri, Feb 21, 2014 at 1:51 PM, Mike Galbraith wrote: > On Fri, 2014-02-21 at 10:23 +0800, Lei Wen wrote: >> A cpu which is put into quiescent mode removes itself >> from the kernel's sched_domain and wants others not to disturb its >> running tasks. But current schedul

[PATCH v2] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
it by preventing such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 235cfa7..66194fc 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c

[PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
it by preventing such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 235cfa7..bc85022 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c

Re: [PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
On Thu, Feb 20, 2014 at 4:50 PM, Peter Zijlstra wrote: > On Thu, Feb 20, 2014 at 10:42:51AM +0800, Lei Wen wrote: >> >> - int ilb = cpumask_first(nohz.idle_cpus_mask); >> >> + int ilb; >> >> + int cpu = smp_processor_id(

Re: [PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
On Thu, Feb 20, 2014 at 4:50 PM, Peter Zijlstra pet...@infradead.org wrote: On Thu, Feb 20, 2014 at 10:42:51AM +0800, Lei Wen wrote: - int ilb = cpumask_first(nohz.idle_cpus_mask); + int ilb; + int cpu = smp_processor_id(); + struct sched_domain *tmp; - if (ilb

[PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
it by preventing such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 235cfa7..bc85022 100644 --- a/kernel/sched/fair.c +++ b/kernel

[PATCH v2] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
it by preventing such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 235cfa7..66194fc 100644 --- a/kernel/sched/fair.c +++ b/kernel

Re: [PATCH v2] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
Mike, On Fri, Feb 21, 2014 at 1:51 PM, Mike Galbraith bitbuc...@online.de wrote: On Fri, 2014-02-21 at 10:23 +0800, Lei Wen wrote: A cpu which is put into quiescent mode removes itself from the kernel's sched_domain and wants others not to disturb its running tasks. But the current scheduler would

Re: [PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-19 Thread Lei Wen
On Wed, Feb 19, 2014 at 5:04 PM, Peter Zijlstra wrote: > On Wed, Feb 19, 2014 at 01:20:30PM +0800, Lei Wen wrote: >> Since a cpu which is put into quiescent mode removes itself >> from the kernel's sched_domain, we could search the sched_domain >> to check whethe

Re: [PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-19 Thread Lei Wen
On Wed, Feb 19, 2014 at 5:04 PM, Peter Zijlstra pet...@infradead.org wrote: On Wed, Feb 19, 2014 at 01:20:30PM +0800, Lei Wen wrote: Since a cpu which is put into quiescent mode removes itself from the kernel's sched_domain, we could search the sched_domain to check whether this cpu

[PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-18 Thread Lei Wen
Since a cpu which is put into quiescent mode removes itself from the kernel's sched_domain, we can search the sched_domain to check whether this cpu does not want to be disturbed, as idle load balance would otherwise send an IPI to it. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 14

[PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-18 Thread Lei Wen
Since a cpu which is put into quiescent mode removes itself from the kernel's sched_domain, we can search the sched_domain to check whether this cpu does not want to be disturbed, as idle load balance would otherwise send an IPI to it. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c

Re: Is it ok for deferrable timer wakeup the idle cpu?

2014-01-22 Thread Lei Wen
On Wed, Jan 22, 2014 at 10:07 PM, Thomas Gleixner wrote: > On Wed, 22 Jan 2014, Lei Wen wrote: >> Recently I have been experimenting with cpu isolation on a 3.10 kernel, >> but I find the isolated cpu is periodically woken up by an IPI interrupt. >> >> By checking

Is it ok for deferrable timer wakeup the idle cpu?

2014-01-22 Thread Lei Wen
Hi Thomas, Recently I have been experimenting with cpu isolation on a 3.10 kernel, but I find the isolated cpu is periodically woken up by an IPI interrupt. By checking the trace, I find those IPIs are generated by add_timer_on, which calls wake_up_nohz_cpu and wakes up the already-idle cpu.
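The question boils down to whether add_timer_on() needs to kick a nohz-idle cpu when the timer being added is deferrable, since a deferrable timer is explicitly allowed to wait for the cpu's next natural wakeup. A sketch of the kind of check being asked about, against the 3.10-era timer code (illustrative only; helper names assumed from that era, not a posted patch):

	/* kernel/timer.c, tail of add_timer_on() -- sketch of the idea */
	timer_set_base(timer, base);
	debug_activate(timer, timer->expires);
	internal_add_timer(base, timer);
	/*
	 * Only send the wake_up_nohz_cpu() IPI for timers that must fire on
	 * time; a deferrable timer can wait until the target cpu wakes up
	 * for some other reason.
	 */
	if (!tbase_get_deferrable(timer->base))
		wake_up_nohz_cpu(cpu);
	spin_unlock_irqrestore(&base->lock, flags);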

Re: Is it ok for deferrable timer wakeup the idle cpu?

2014-01-22 Thread Lei Wen
On Wed, Jan 22, 2014 at 10:07 PM, Thomas Gleixner t...@linutronix.de wrote: On Wed, 22 Jan 2014, Lei Wen wrote: Recently I have been experimenting with cpu isolation on a 3.10 kernel, but I find the isolated cpu is periodically woken up by an IPI interrupt. By checking the trace, I find those

Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?

2014-01-20 Thread Lei Wen
On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker wrote: > On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote: >> On 20 January 2014 19:29, Lei Wen wrote: >> > Hi Viresh, >> >> Hi Lei, >> >> > I have one question regarding unbounded w

Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?

2014-01-20 Thread Lei Wen
Hi Viresh, On Wed, Jan 15, 2014 at 5:27 PM, Viresh Kumar wrote: > Hi Again, > > I am now successful in isolating a CPU completely using CPUsets, > NO_HZ_FULL and CPU hotplug.. > > My setup and requirements for those who weren't following the > earlier mails: > > For networking machines it is

Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?

2014-01-20 Thread Lei Wen
Hi Viresh, On Wed, Jan 15, 2014 at 5:27 PM, Viresh Kumar viresh.ku...@linaro.org wrote: Hi Again, I am now successful in isolating a CPU completely using CPUsets, NO_HZ_FULL and CPU hotplug.. My setup and requirements for those who weren't following the earlier mails: For networking

Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?

2014-01-20 Thread Lei Wen
On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker fweis...@gmail.com wrote: On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote: On 20 January 2014 19:29, Lei Wen adrian.w...@gmail.com wrote: Hi Viresh, Hi Lei, I have one question regarding unbounded workqueue migration

Re: [RFC] sched: update rq clock when only get preempt

2013-12-29 Thread Lei Wen
Hi Mike, On Mon, Dec 30, 2013 at 12:08 PM, Mike Galbraith wrote: > On Mon, 2013-12-30 at 11:14 +0800, Lei Wen wrote: >> Since we update the rq clock at task enqueue/dequeue or at the scheduler >> tick, if we don't update the rq clock when our previous task gets >> preempted, our n

[RFC] sched: update rq clock when only get preempt

2013-12-29 Thread Lei Wen
more precise accounting of task start and duration time, we'd better ensure the rq clock gets updated when the task begins to run. Best regards, Lei Signed-off-by: Lei Wen --- kernel/sched/core.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/kernel/sched
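A rough reading of the change (sketch only; the actual 5-line diff is not shown above): refresh the rq clock on the preemption path of __schedule(), where nothing has been dequeued and therefore nothing has called update_rq_clock() since the last tick:

	/* kernel/sched/core.c, inside __schedule() -- illustrative only */
	if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
		/* voluntary sleep path: deactivate_task() and friends already
		 * refresh the clock on dequeue */
	} else {
		/*
		 * Preemption path: refresh the clock here so the next task's
		 * exec start timestamp is not stale.
		 */
		update_rq_clock(rq);
	}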

[RFC] sched: update rq clock when only get preempt

2013-12-29 Thread Lei Wen
more precise accounting of task start and duration time, we'd better ensure the rq clock gets updated when the task begins to run. Best regards, Lei Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/core.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/kernel/sched/core.c

Re: [RFC] sched: update rq clock when only get preempt

2013-12-29 Thread Lei Wen
Hi Mike, On Mon, Dec 30, 2013 at 12:08 PM, Mike Galbraith bitbuc...@online.de wrote: On Mon, 2013-12-30 at 11:14 +0800, Lei Wen wrote: Since we update the rq clock at task enqueue/dequeue or at the scheduler tick, if we don't update the rq clock when our previous task gets preempted, our new

Re: Question regarding list_for_each_entry_safe usage in move_one_task

2013-09-09 Thread Lei Wen
On Mon, Sep 9, 2013 at 7:15 PM, Peter Zijlstra wrote: > On Mon, Sep 02, 2013 at 02:26:45PM +0800, Lei Wen wrote: >> Hi Peter, >> >> I find one list API usage that may not be correct in the current fair.c code. >> The move_one_task function may iterate through the whole cfs_ta

Re: Question regarding list_for_each_entry_safe usage in move_one_task

2013-09-09 Thread Lei Wen
On Mon, Sep 9, 2013 at 7:15 PM, Peter Zijlstra pet...@infradead.org wrote: On Mon, Sep 02, 2013 at 02:26:45PM +0800, Lei Wen wrote: Hi Peter, I find one list API usage that may not be correct in the current fair.c code. The move_one_task function may iterate through the whole cfs_tasks list to get one

Question regarding list_for_each_entry_safe usage in move_one_task

2013-09-02 Thread Lei Wen
Hi Peter, I find one list API usage that may not be correct in the current fair.c code. The move_one_task function may iterate through the whole cfs_tasks list to get one task to move, but dequeue_task() would delete a task node from the list without lock protection. So we could see from
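The hazard described generalizes: list_for_each_entry_safe() only protects the walker against deletions it performs itself (the next pointer is cached before the body runs); it does nothing for entries deleted concurrently by another path, such as dequeue_task() dropping a node while move_one_task() is walking cfs_tasks. A generic illustration (not the fair.c code):

	#include <linux/list.h>

	struct item {
		int id;
		struct list_head node;
	};

	static void prune(struct list_head *head)
	{
		struct item *it, *next;

		/* safe against *this loop* calling list_del(): "next" is saved
		 * before the current entry is touched */
		list_for_each_entry_safe(it, next, head, node) {
			if (it->id < 0)
				list_del(&it->node);
		}

		/* NOT safe if another context deletes entries at the same time:
		 * if the cached "next" is removed without the lock the walker
		 * relies on, the cursor follows stale memory.  Iteration and
		 * deletion must be serialized by the same lock. */
	}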

Re: [PATCH 03/10] sched: Clean-up struct sd_lb_stat

2013-08-26 Thread Lei Wen
On Mon, Aug 26, 2013 at 12:36 PM, Paul Turner wrote: > On Sun, Aug 25, 2013 at 7:56 PM, Lei Wen wrote: >> On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra >> wrote: >>> From: Joonsoo Kim >>> >>> There is no reason to maintain separate vari

Re: [PATCH 03/10] sched: Clean-up struct sd_lb_stat

2013-08-26 Thread Lei Wen
On Mon, Aug 26, 2013 at 12:36 PM, Paul Turner p...@google.com wrote: On Sun, Aug 25, 2013 at 7:56 PM, Lei Wen adrian.w...@gmail.com wrote: On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra pet...@infradead.org wrote: From: Joonsoo Kim iamjoonsoo@lge.com There is no reason to maintain

Re: [PATCH 03/10] sched: Clean-up struct sd_lb_stat

2013-08-25 Thread Lei Wen
On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra wrote: > From: Joonsoo Kim > > There is no reason to maintain separate variables for this_group > and busiest_group in sd_lb_stat, except saving some space. > But this structure is always allocated in stack, so this saving > isn't really

Re: [PATCH 03/10] sched: Clean-up struct sd_lb_stat

2013-08-25 Thread Lei Wen
On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra pet...@infradead.org wrote: From: Joonsoo Kim iamjoonsoo@lge.com There is no reason to maintain separate variables for this_group and busiest_group in sd_lb_stat, except saving some space. But this structure is always allocated in stack, so

Re: false nr_running check in load balance?

2013-08-18 Thread Lei Wen
Paul, On Tue, Aug 13, 2013 at 5:25 PM, Paul Turner wrote: > On Tue, Aug 13, 2013 at 1:18 AM, Lei Wen wrote: >> Hi Paul, >> >> On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote: >>> On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra >>> wrote: >>>

[PATCH 8/8] sched: document the difference between nr_running and h_nr_running

2013-08-18 Thread Lei Wen
Signed-off-by: Lei Wen --- kernel/sched/sched.h |6 ++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index ef0a7b2..b8f0924 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -248,6 +248,12 @@ struct cfs_bandwidth
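The hunk is cut off above; the distinction the 8/8 patch documents (the wording below is a paraphrase, not the posted comment) is:

	/* kernel/sched/sched.h -- the two counters on a cfs_rq */
	struct cfs_rq {
		struct load_weight load;
		unsigned int nr_running;	/* entities enqueued directly on
						 * this cfs_rq: tasks and group
						 * entities, one level only */
		unsigned int h_nr_running;	/* hierarchical count: tasks in
						 * this cfs_rq plus all of its
						 * descendant group cfs_rqs */
		/* ... remaining fields unchanged ... */
	};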

[PATCH 0/8] sched: fixes for the nr_running usage

2013-08-18 Thread Lei Wen
Since nr_running and h_nr_running differ in meaning, we should take care with their usage in the scheduler. Lei Wen (8): sched: change load balance number to h_nr_running of run queue sched: change cpu_avg_load_per_task using h_nr_running sched: change

[PATCH 7/8] sched: change active_load_balance_cpu_stop to use h_nr_running

2013-08-18 Thread Lei Wen
ove. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 3656603..4c96124 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5349,7 +5349,7 @@ static

[PATCH 6/8] sched: change find_busiest_queue to h_nr_running

2013-08-18 Thread Lei Wen
Since find_busiest_queue tries to avoid load balancing a runqueue which has only one cfs task whose load is above the calculated imbalance value, we should use the cfs h_nr_running instead of the rq's nr_running. Signed-off-by: Lei Wen --- kernel/sched/fair.c |3 ++- 1 files changed, 2

[PATCH 5/8] sched: change update_sg_lb_stats to h_nr_running

2013-08-18 Thread Lei Wen
Since update_sg_lb_stats is used to calculate the load statistics of cfs tasks in a sched_group, it should use h_nr_running instead of the rq's nr_running. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/kernel/sched/fair.c b

[PATCH 4/8] sched: change pick_next_task_fair to h_nr_running

2013-08-18 Thread Lei Wen
Since pick_next_task_fair only wants to ensure there is some task in the run queue to be picked up, it should use h_nr_running instead of nr_running, since nr_running cannot represent all tasks when task groups exist. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1
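As a concrete illustration of the kind of substitution this series makes (sketch, not the exact hunk): with group scheduling, an enqueued group entity keeps cfs_rq->nr_running non-zero at one level even though the runnable-task count lives in h_nr_running, so the check becomes:

	/* kernel/sched/fair.c, pick_next_task_fair() -- sketch of the change */
	struct cfs_rq *cfs_rq = &rq->cfs;

	if (!cfs_rq->h_nr_running)	/* was: !cfs_rq->nr_running */
		return NULL;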

[PATCH 2/8] sched: change cpu_avg_load_per_task using h_nr_running

2013-08-18 Thread Lei Wen
Since cpu_avg_load_per_task is used only by the cfs scheduler, it should represent the average load of cfs tasks in the current run queue. Thus we change it to use h_nr_running, which better matches its meaning. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1

[PATCH 3/8] sched: change update_rq_runnable_avg using h_nr_running

2013-08-18 Thread Lei Wen
control mechanism. Thus its sleep time should not be taken into the runnable avg load calculation. Signed-off-by: Lei Wen --- kernel/sched/fair.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e6b99b4..9869d4d 100644

[PATCH 1/8] sched: change load balance number to h_nr_running of run queue

2013-08-18 Thread Lei Wen
Signed-off-by: Lei Wen --- kernel/sched/fair.c |8 +--- 1 files changed, 5 insertions(+), 3 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index f918635..d6153c8 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5096,17 +5096,19 @@ redo: schedstat_add

[PATCH 1/8] sched: change load balance number to h_nr_running of run queue

2013-08-18 Thread Lei Wen
Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c |8 +--- 1 files changed, 5 insertions(+), 3 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index f918635..d6153c8 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5096,17 +5096,19 @@ redo

[PATCH 4/8] sched: change pick_next_task_fair to h_nr_running

2013-08-18 Thread Lei Wen
Since pick_next_task_fair only wants to ensure there is some task in the run queue to be picked up, it should use h_nr_running instead of nr_running, since nr_running cannot represent all tasks when task groups exist. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c |2 +- 1

[PATCH 2/8] sched: change cpu_avg_load_per_task using h_nr_running

2013-08-18 Thread Lei Wen
Since cpu_avg_load_per_task is used only by the cfs scheduler, it should represent the average load of cfs tasks in the current run queue. Thus we change it to use h_nr_running, which better matches its meaning. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c |2 +- 1 files

[PATCH 3/8] sched: change update_rq_runnable_avg using h_nr_running

2013-08-18 Thread Lei Wen
control mechanism. Thus its sleep time should not be taken into the runnable avg load calculation. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e6b99b4

[PATCH 5/8] sched: change update_sg_lb_stats to h_nr_running

2013-08-18 Thread Lei Wen
Since update_sg_lb_stats is used to calculate the load statistics of cfs tasks in a sched_group, it should use h_nr_running instead of the rq's nr_running. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/kernel

[PATCH 6/8] sched: change find_busiest_queue to h_nr_running

2013-08-18 Thread Lei Wen
Since find_busiest_queue tries to avoid load balancing a runqueue which has only one cfs task whose load is above the calculated imbalance value, we should use the cfs h_nr_running instead of the rq's nr_running. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c |3 ++- 1

[PATCH 7/8] sched: change active_load_balance_cpu_stop to use h_nr_running

2013-08-18 Thread Lei Wen
. Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/fair.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 3656603..4c96124 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5349,7 +5349,7 @@ static int

[PATCH 8/8] sched: document the difference between nr_running and h_nr_running

2013-08-18 Thread Lei Wen
Signed-off-by: Lei Wen lei...@marvell.com --- kernel/sched/sched.h |6 ++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index ef0a7b2..b8f0924 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -248,6 +248,12

Re: false nr_running check in load balance?

2013-08-18 Thread Lei Wen
Paul, On Tue, Aug 13, 2013 at 5:25 PM, Paul Turner p...@google.com wrote: On Tue, Aug 13, 2013 at 1:18 AM, Lei Wen adrian.w...@gmail.com wrote: Hi Paul, On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner p...@google.com wrote: On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra pet...@infradead.org

Re: false nr_running check in load balance?

2013-08-13 Thread Lei Wen
Hi Paul, On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote: > On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra wrote: >> On Tue, Aug 13, 2013 at 12:45:12PM +0800, Lei Wen wrote: >>> > Not quite right; I think you need busiest->cfs.h_nr_running. >>> > cfs

Re: false nr_running check in load balance?

2013-08-13 Thread Lei Wen
Hi Paul, On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner p...@google.com wrote: On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra pet...@infradead.org wrote: On Tue, Aug 13, 2013 at 12:45:12PM +0800, Lei Wen wrote: Not quite right; I think you need busiest->cfs.h_nr_running. cfs.nr_running

Re: false nr_running check in load balance?

2013-08-12 Thread Lei Wen
Peter, On Mon, Aug 12, 2013 at 10:43 PM, Peter Zijlstra wrote: > On Tue, Aug 06, 2013 at 09:23:46PM +0800, Lei Wen wrote: >> Hi Paul, >> >> I notice that the load_balance function checks busiest->nr_running >> to decide whether to perform the real task movement.

Re: false nr_running check in load balance?

2013-08-12 Thread Lei Wen
Peter, On Mon, Aug 12, 2013 at 10:43 PM, Peter Zijlstra pet...@infradead.org wrote: On Tue, Aug 06, 2013 at 09:23:46PM +0800, Lei Wen wrote: Hi Paul, I notice that the load_balance function checks busiest->nr_running to decide whether to perform the real task movement. But in some cases

false nr_running check in load balance?

2013-08-06 Thread Lei Wen
Hi Paul, I notice that the load_balance function checks busiest->nr_running to decide whether to perform the real task movement. But in some cases, I saw that nr_running does not match the tasks in the queue, which seems to make the scheduler do many redundant checks. What I mean is like

false nr_running check in load balance?

2013-08-06 Thread Lei Wen
Hi Paul, I notice that the load_balance function checks busiest->nr_running to decide whether to perform the real task movement. But in some cases, I saw that nr_running does not match the tasks in the queue, which seems to make the scheduler do many redundant checks. What I mean is like

task kworker/u:0 blocked for more than 120 seconds

2013-07-03 Thread Lei Wen
Hi list, I recently found a strange issue on a 3.4 kernel. The scenario is running a hotplug test on an ARM platform, and when the hotplugged-out cpu1 wants to come back online again, it seems to get stuck at cpu_stop_cpu_callback. The task backtrace is as below: PID: 21749 TASK: d194b300 CPU: 0 COMMAND:

Re: [V3 1/2] sched: add trace events for task and rq usage tracking

2013-07-03 Thread Lei Wen
Hi Peter, Do you have any further suggestions for this patch? :) Thanks, Lei On Tue, Jul 2, 2013 at 8:15 PM, Lei Wen wrote: > Since we can track tasks at the entity level now, we may want to > investigate tasks' running status by recording the trace info, so that > could make so

Re: [V3 1/2] sched: add trace events for task and rq usage tracking

2013-07-03 Thread Lei Wen
Hi Peter, Do you have any further suggestions for this patch? :) Thanks, Lei On Tue, Jul 2, 2013 at 8:15 PM, Lei Wen lei...@marvell.com wrote: Since we can track tasks at the entity level now, we may want to investigate tasks' running status by recording the trace info, so that could make

[V3 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-02 Thread Lei Wen
may get confused. Signed-off-by: Lei Wen Cc: Alex Shi Cc: Paul Turner --- kernel/sched/fair.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 2290469..53224d1 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c

[V3 1/2] sched: add trace events for task and rq usage tracking

2013-07-02 Thread Lei Wen
Since we can track tasks at the entity level now, we may want to investigate tasks' running status by recording trace info, so that we can do some tuning if needed. Signed-off-by: Lei Wen Cc: Alex Shi Cc: Peter Zijlstra Cc: Kamalesh Babulal --- include/trace/events/sched.h | 76
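The diffstat above ends before any of the event definitions; as a sketch of the general shape such a per-entity tracking event could take (the event name and fields are guesses for illustration, not the posted definitions):

	#include <linux/sched.h>
	#include <linux/tracepoint.h>

	TRACE_EVENT(sched_task_load_contrib,

		TP_PROTO(struct task_struct *p, unsigned long load_contrib),

		TP_ARGS(p, load_contrib),

		TP_STRUCT__entry(
			__array(char,		comm,	TASK_COMM_LEN)
			__field(pid_t,		pid)
			__field(unsigned long,	load_contrib)
		),

		TP_fast_assign(
			memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
			__entry->pid		= p->pid;
			__entry->load_contrib	= load_contrib;
		),

		TP_printk("comm=%s pid=%d load_contrib=%lu",
			  __entry->comm, __entry->pid, __entry->load_contrib)
	);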

[PATCH V3 0/2] sched: add trace event for per-entity tracking

2013-07-02 Thread Lei Wen
Make the trace events' parameter passing simple, and only extend the detail in the header file definition. Thanks to Peter for pointing this out. V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribut

[PATCH V3 0/2] sched: add trace event for per-entity tracking

2013-07-02 Thread Lei Wen
parameter passing simple, and only extend the detail in the header file definition. Thanks to Peter for pointing this out. V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribution! Lei Wen (2

[V3 1/2] sched: add trace events for task and rq usage tracking

2013-07-02 Thread Lei Wen
Since we can track tasks at the entity level now, we may want to investigate tasks' running status by recording trace info, so that we can do some tuning if needed. Signed-off-by: Lei Wen lei...@marvell.com Cc: Alex Shi alex@intel.com Cc: Peter Zijlstra pet...@infradead.org Cc: Kamalesh

[V3 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-02 Thread Lei Wen
may get confused. Signed-off-by: Lei Wen lei...@marvell.com Cc: Alex Shi alex@intel.com Cc: Paul Turner p...@google.com --- kernel/sched/fair.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 2290469..53224d1 100644

Re: [V2 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-01 Thread Lei Wen
Paul, On Mon, Jul 1, 2013 at 10:07 PM, Paul Turner wrote: > Could you please restate the below? > > On Mon, Jul 1, 2013 at 5:33 AM, Lei Wen wrote: >> Since we are going to calculate cfs_rq's average ratio by >> runnable_load_avg/load.weight > > I don

Re: [V2 1/2] sched: add trace events for task and rq usage tracking

2013-07-01 Thread Lei Wen
Hi Peter, On Mon, Jul 1, 2013 at 8:44 PM, Peter Zijlstra wrote: > On Mon, Jul 01, 2013 at 08:33:21PM +0800, Lei Wen wrote: >> Since we can track tasks at the entity level now, we may want to >> investigate tasks' running status by recording the trace info, so that >> c

[V2 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-01 Thread Lei Wen
Since we are going to calculate a cfs_rq's average ratio as runnable_load_avg/load.weight, not increasing load.weight prior to enqueue_entity_load_avg may lead to a cfs_rq's avg ratio higher than 100%. Adjust the sequence so that the ratio is always kept at or below 100%. Signed-off-by: Lei Wen
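The change described is purely an ordering one; roughly (a sketch with ~3.10 function names, not the posted one-line diff), account_entity_enqueue() moves ahead of the load-average update so load.weight already reflects the new entity when runnable_load_avg grows:

	/* kernel/sched/fair.c, enqueue_entity() -- sketch of the reordering */
	update_curr(cfs_rq);
	account_entity_enqueue(cfs_rq, se);		/* grows load.weight first   */
	enqueue_entity_load_avg(cfs_rq, se,		/* ...then runnable_load_avg */
				flags & ENQUEUE_WAKEUP);
	update_cfs_shares(cfs_rq);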

[PATCH V2 0/2] sched: add trace event for per-entity tracking

2013-07-01 Thread Lei Wen
load distribution status in the whole system. V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribution! Lei Wen (2): sched: add trace events for task and rq usage tracking sched: update cfs_r

[V2 1/2] sched: add trace events for task and rq usage tracking

2013-07-01 Thread Lei Wen
Since we can track tasks at the entity level now, we may want to investigate tasks' running status by recording trace info, so that we can do some tuning if needed. Signed-off-by: Lei Wen --- include/trace/events/sched.h | 57 ++ kernel/sched

Re: [PATCH 1/2] sched: add trace events for task and rq usage tracking

2013-07-01 Thread Lei Wen
Hi Kamalesh, On Mon, Jul 1, 2013 at 5:43 PM, Kamalesh Babulal wrote: > * Lei Wen [2013-07-01 15:10:32]: > >> Since we can track tasks at the entity level now, we may want to >> investigate tasks' running status by recording the trace info, so that >> could ma

Re: [PATCH 0/2] sched: add trace event for per-entity tracking

2013-07-01 Thread Lei Wen
Alex, On Mon, Jul 1, 2013 at 4:06 PM, Alex Shi wrote: > On 07/01/2013 03:10 PM, Lei Wen wrote: >> Thanks for the per-entity tracking feature; with its help we can know the details of >> each task. >> This patch adds trace support for it, so that we can quickly know the system

[PATCH 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-01 Thread Lei Wen
Since we are going to calculate a cfs_rq's average ratio as runnable_load_avg/load.weight, not increasing load.weight prior to enqueue_entity_load_avg may lead to a cfs_rq's avg ratio higher than 100%. Adjust the sequence so that the ratio is always kept at or below 100%. Signed-off-by: Lei Wen

[PATCH 0/2] sched: add trace event for per-entity tracking

2013-07-01 Thread Lei Wen
cfs_rq->runnable_load_avg/cfs_rq->load.weight Lei Wen (2): sched: add trace events for task and rq usage tracking sched: update cfs_rq weight earlier in enqueue_entity include/trace/events/sched.h | 73 ++ kernel/sched/fair.c | 31 -- 2
