missed ticks when updating global cpu load
kernel/sched/fair.c | 51 +++
1 file changed, 43 insertions(+), 8 deletions(-)
--
1.7.9.5
--
thank you,
> > > byungchul
> > >
> > > ->8-
> > > From 8ece9a0482e74a39cd2e9165bf8eec1d04665fa9 Mon Sep 17 00:00:00 2001
> > > From: Byungchul Park
> > > Date: Fri, 25 Sep 2015 17:10:10 +0900
> > > Subject: [RESEND PATCH] sched: co
On Wed, Sep 30, 2015 at 12:43:43PM +0200, Peter Zijlstra wrote:
> On Sat, Sep 26, 2015 at 03:14:45PM +0200, Frederic Weisbecker wrote:
>
> > > when the next tick occurs, update_process_times() -> scheduler_tick()
> > > -> update_cpu_load_active() is performed, assuming the distance between
> > > l
17 00:00:00 2001
> > From: Byungchul Park
> > Date: Fri, 25 Sep 2015 17:10:10 +0900
> > Subject: [RESEND PATCH] sched: consider missed ticks when updating global
> > cpu
> > load
> >
> > in hrtimer_interrupt(), the first tick_program_event() can be fai
On Sat, Sep 26, 2015 at 03:14:45PM +0200, Frederic Weisbecker wrote:
> > when the next tick occurs, update_process_times() -> scheduler_tick()
> > -> update_cpu_load_active() is performed, assuming the distance between
> > last tick and current tick is 1 tick! it's wrong in this case. thus,
> > th
tional commit
> message.
>
> thank you,
> byungchul
>
> ->8-
> From 8ece9a0482e74a39cd2e9165bf8eec1d04665fa9 Mon Sep 17 00:00:00 2001
> From: Byungchul Park
> Date: Fri, 25 Sep 2015 17:10:10 +0900
> Subject: [RESEND PATCH] sched: consider missed ticks when upd
Sep 17 00:00:00 2001
From: Byungchul Park
Date: Fri, 25 Sep 2015 17:10:10 +0900
Subject: [RESEND PATCH] sched: consider missed ticks when updating global cpu
load
in hrtimer_interrupt(), the first tick_program_event() can fail
because the next timer could already be expired due to,
(see the c
From: Peter Hurley
3.4.106-rc1 review patch. If anyone has any objections, please let me know.
--
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait queue). T
3.2.65-rc1 review patch. If anyone has any objections, please let me know.
--
From: Peter Hurley
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait queue). Thi
3.13.11-ckt12 -stable review patch. If anyone has any objections, please let
me know.
--
From: Peter Hurley
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait
3.16.7-ckt2 -stable review patch. If anyone has any objections, please let me
know.
--
From: Peter Hurley
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait q
From: Peter Hurley
3.12-stable review patch. If anyone has any objections, please let me know.
===
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait queue). This
2.6.32-longterm review patch. If anyone has any objections, please let me know.
--
From: Peter Hurley
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait queue). This will cause
tty_release() to endlessly loop without s
3.14-stable review patch. If anyone has any objections, please let me know.
--
From: Peter Hurley
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait queue). Th
3.10-stable review patch. If anyone has any objections, please let me know.
--
From: Peter Hurley
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait queue). Th
3.17-stable review patch. If anyone has any objections, please let me know.
--
From: Peter Hurley
commit 37b164578826406a173ca7c20d9ba7430134d23e upstream.
Kernel oops can cause the tty to be unreleaseable (for example, if
n_tty_read() crashes while on the read_wait queue). Th
On Tue, Sep 23, 2014 at 08:32:02AM +0100, Mike Lothian wrote:
> Hi
>
> I've raised https://bugzilla.kernel.org/show_bug.cgi?id=84131 and
> attempted to bisect
>
> The bug I'm stumbling on happens on both and Intel Sandybridge and AMD
> Kabini system - it manifests itself as a hard lockup when the
Hi
I've raised https://bugzilla.kernel.org/show_bug.cgi?id=84131 and
attempted to bisect
The bug I'm stumbling on happens on both an Intel Sandybridge and an AMD
Kabini system - it manifests itself as a hard lockup when the system
is under high load
If I switch off DynTicks the problem doesn't hap
ies to
> > > overo gumstix, beagleboard, probably others) we see a high CPU load in a
> > > kworker thread.
> > >
> > > Between 2.6.33 and 2.6.34 musb_core.c changed.
> > >
> > > IRQ handlers changed with the result that a worker in musb_core.
On Mon, Jul 21, 2014 at 05:28:58PM +0200, Laurent Pinchart wrote:
> Hi Adam,
>
> On Wednesday 29 January 2014 08:44:57 Adam Wozniak wrote:
> > With a USB 2.0 webcam attached to the OTG port on an OMAP3 (applies to
> > overo gumstix, beagleboard, probably others) we see
Hi Adam,
On Wednesday 29 January 2014 08:44:57 Adam Wozniak wrote:
> With a USB 2.0 webcam attached to the OTG port on an OMAP3 (applies to
> overo gumstix, beagleboard, probably others) we see a high CPU load in a
> kworker thread.
>
> Between 2.6.33 and 2.6.34 musb_core.c ch
The current scheduler’s load balancing is completely work-conserving. For some
workloads, generally with low CPU utilization but interspersed with bursts of
transient tasks, migrating tasks to engage all available CPUs for
work-conserving can lead to significant overhead: cache locality loss,
idle/active H
On Mon, Jun 09, 2014 at 05:48:48PM +0100, Morten Rasmussen wrote:
Thanks, Morten.
> > 2) CC vs. CPU utilization. CC is runqueue-length-weighted CPU utilization.
> > If
> > we change: "a = sum(concurrency * time) / period" to "a' = sum(1 * time) /
> > period". Then a' is just about the CPU utiliz
Resend... The first attempt didn't reach LKML for some reason.
On Fri, May 30, 2014 at 07:35:56AM +0100, Yuyang Du wrote:
> Thanks to CFS’s “model an ideal, precise multi-tasking CPU”, tasks can be seen
> as concurrently running (the tasks in the runqueue). So it is natural to use
> task concurren
Hi Ingo, PeterZ, Rafael, and others,
The current scheduler’s load balancing is completely work-conserving. For some
workloads, generally with low CPU utilization but interspersed with bursts of
transient tasks, migrating tasks to engage all available CPUs for
work-conserving can lead to significant overhe
> So I should have just deleted all patches, for none of them has a
> changelog.
>
It is my fault for not writing changelogs in the patches. The v2 has them, but
I should have included them from the beginning.
> So all this cc crap only hooks into and modifies fair.c behaviour. There
> is absolutely no reason it sh
On Wed, May 07, 2014 at 02:46:37AM +0800, Yuyang Du wrote:
> > The general code structure is an immediate no go. We're not going to
> > bolt on anything like this.
>
> Could you please detail a little bit about general code structure?
So I should have just deleted all patches, for none of them ha
On pon, 2014-05-12 at 02:16 +0800, Yuyang Du wrote:
> Hi Ingo, PeterZ, Rafael, and others,
>
> The current scheduler’s load balancing is completely work-conserving. In
> some
> workload, generally low CPU utilization but immersed with CPU bursts of
> transient tasks, migrating task to engage al
On Mon, May 12, 2014 at 09:28:57AM +0800, Yuyang Du wrote:
> On Mon, May 12, 2014 at 08:45:33AM +0200, Peter Zijlstra wrote:
> > On Mon, May 12, 2014 at 02:16:49AM +0800, Yuyang Du wrote:
> >
> > Yes, just what we need, more patches while we haven't had the time to
> > look at the old set yet :-(
On Mon, May 12, 2014 at 08:45:33AM +0200, Peter Zijlstra wrote:
> On Mon, May 12, 2014 at 02:16:49AM +0800, Yuyang Du wrote:
>
> Yes, just what we need, more patches while we haven't had the time to
> look at the old set yet :-(
No essential change, added commit messages (after smashing our heads
On Mon, May 12, 2014 at 02:16:49AM +0800, Yuyang Du wrote:
Yes, just what we need, more patches while we haven't had the time to
look at the old set yet :-(
Hi Ingo, PeterZ, Rafael, and others,
The current scheduler’s load balancing is completely work-conserving. For some
workloads, generally with low CPU utilization but interspersed with bursts of
transient tasks, migrating tasks to engage all available CPUs for
work-conserving can lead to significant overhe
> The general code structure is an immediate no go. We're not going to
> bolt on anything like this.
Could you please elaborate a little on the general code structure?
Thank you all the same,
Yuyang
On Mon, May 05, 2014 at 08:02:40AM +0800, Yuyang Du wrote:
> Hi Ingo, PeterZ, Rafael, and others,
The general code structure is an immediate no go. We're not going to
bolt on anything like this.
I've yet to look at the content.
Hi Ingo, PeterZ, Rafael, and others,
The current scheduler’s load balancing is completely work-conserving. For some
workloads, generally with low CPU utilization but interspersed with bursts of
transient tasks, migrating tasks to engage all available CPUs for
work-conserving can lead to significant overhe
> I'm a bit confused, do you have one global CC that tracks the number of
> tasks across all runqueues in the system or one for each cpu? There
> could be some contention when updating that value on larger systems if
> it one global CC. If they are separate, how do you then decide when to
> consoli
On Sun, Apr 27, 2014 at 09:07:25PM +0100, Yuyang Du wrote:
> On Fri, Apr 25, 2014 at 03:53:34PM +0100, Morten Rasmussen wrote:
> > I fully agree. My point was that there is more to task consolidation
> > than just observing the degree of task parallelism. The system topology
> > has a lot to say wh
On Fri, Apr 25, 2014 at 03:53:34PM +0100, Morten Rasmussen wrote:
> I fully agree. My point was that there is more to task consolidation
> than just observing the degree of task parallelism. The system topology
> has a lot to say when deciding whether or not to pack. That was the
> motivation for p
On Friday, April 25, 2014 03:53:34 PM Morten Rasmussen wrote:
> On Fri, Apr 25, 2014 at 01:19:46PM +0100, Rafael J. Wysocki wrote:
[...]
> >
> > So in my opinion we need to figure out how to measure workloads while they
> > are
> > running and then use that information to make load balancing de
On Fri, Apr 25, 2014 at 01:19:46PM +0100, Rafael J. Wysocki wrote:
> On Friday, April 25, 2014 11:23:07 AM Morten Rasmussen wrote:
> > Hi Yuyang,
> >
> > On Thu, Apr 24, 2014 at 08:30:05PM +0100, Yuyang Du wrote:
> > > 1)Divide continuous time into periods of time, and average task
> > >
On Friday, April 25, 2014 11:23:07 AM Morten Rasmussen wrote:
> Hi Yuyang,
>
> On Thu, Apr 24, 2014 at 08:30:05PM +0100, Yuyang Du wrote:
> > 1) Divide continuous time into periods of time, and average task
> > concurrency
> > in period, for tolerating the transient bursts:
> > a = sum(concurren
e quite platform dependent. Different idle state leakage
power and wake-up costs may change the picture.
I'm therefore quite interested in knowing what sort of test scenarios
you used and the parameters for CC (f and size of the periods). I'm not
convinced (yet) that a cpu load concurr
On Fri, Apr 25, 2014 at 10:00:02AM +0200, Vincent Guittot wrote:
> On 24 April 2014 21:30, Yuyang Du wrote:
> > Hi Ingo, PeterZ, and others,
> >
> > The current scheduler's load balancing is completely work-conserving. In
> > some
> > workload, generally low CPU utilization but immersed with CPU
On 24 April 2014 21:30, Yuyang Du wrote:
> Hi Ingo, PeterZ, and others,
>
> The current scheduler's load balancing is completely work-conserving. In some
> workload, generally low CPU utilization but immersed with CPU bursts of
> transient tasks, migrating task to engage all available CPUs for
> w
On Fri, 2014-04-25 at 03:30 +0800, Yuyang Du wrote:
> To track CC, we intercept the scheduler in 1) enqueue, 2) dequeue, 3)
> scheduler tick, and 4) enter/exit idle.
Boo hiss to 1, 2 and 4. Less fastpath math would be better.
-Mike
Hi Ingo, PeterZ, and others,
The current scheduler’s load balancing is completely work-conserving. For some
workloads, generally with low CPU utilization but interspersed with bursts of
transient tasks, migrating tasks to engage all available CPUs for
work-conserving can lead to significant overhead: cach
With a USB 2.0 webcam attached to the OTG port on an OMAP3 (applies to
overo gumstix, beagleboard, probably others) we see a high CPU load in a
kworker thread.
Between 2.6.33 and 2.6.34 musb_core.c changed.
IRQ handlers changed with the result that a worker in musb_core.c got
scheduled far
Like
in find_busiest_group:
Assume a domain of 2 groups, each group with 8 cpus.
The target group will bias 8 * (imbalance_pct - 100)
= 8 * (125 - 100) = 200.
Since each cpu biases .25 times its load, for 8 cpus the group in total
biases 2 times the average cpu load bet
On Tue, Jan 07, 2014 at 03:16:32PM +, Morten Rasmussen wrote:
> From a load perspective wouldn't it be better to pick the least loaded
> cpu in the group? It is not cheap to implement, but in theory it should
> give less balancing within the group later an less unfairness until it
> happens.
I
On Tue, Jan 07, 2014 at 01:15:23PM +, Peter Zijlstra wrote:
> On Tue, Jan 07, 2014 at 01:59:30PM +0100, Peter Zijlstra wrote:
> > On Tue, Jan 07, 2014 at 12:55:18PM +, Morten Rasmussen wrote:
> > > My understanding is that should_we_balance() decides which cpu is
> > > eligible for doing th
On Tue, Jan 07, 2014 at 02:32:07PM +0100, Vincent Guittot wrote:
> On 7 January 2014 14:15, Peter Zijlstra wrote:
> > On Tue, Jan 07, 2014 at 01:59:30PM +0100, Peter Zijlstra wrote:
> >> On Tue, Jan 07, 2014 at 12:55:18PM +, Morten Rasmussen wrote:
> >> > My understanding is that should_we_bal
On 7 January 2014 14:15, Peter Zijlstra wrote:
> On Tue, Jan 07, 2014 at 01:59:30PM +0100, Peter Zijlstra wrote:
>> On Tue, Jan 07, 2014 at 12:55:18PM +, Morten Rasmussen wrote:
>> > My understanding is that should_we_balance() decides which cpu is
>> > eligible for doing the load balancing fo
On Tue, Jan 07, 2014 at 01:59:30PM +0100, Peter Zijlstra wrote:
> On Tue, Jan 07, 2014 at 12:55:18PM +, Morten Rasmussen wrote:
> > My understanding is that should_we_balance() decides which cpu is
> > eligible for doing the load balancing for a given domain (and the
> > domains above). That is
On Tue, Jan 07, 2014 at 12:55:18PM +, Morten Rasmussen wrote:
> My understanding is that should_we_balance() decides which cpu is
> eligible for doing the load balancing for a given domain (and the
> domains above). That is, only one cpu in a group is allowed to load
> balance between the local
> >>>> From: Alex Shi
> >>>> Date: Sat, 23 Nov 2013 23:18:09 +0800
> >>>> Subject: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
> >>>>
> >>>> Task migration happens when target just a bit less then source cpu load.
>>>> Subject: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
>>>>
>>>> Task migration happens when target just a bit less then source cpu load.
>>>> To reduce such situation happens, aggravate the target cpu load with
>>>>
On Wed, Dec 25, 2013 at 02:58:26PM +, Alex Shi wrote:
>
> >> From 5cd67d975001edafe2ee820e0be5d86881a23bd6 Mon Sep 17 00:00:00 2001
> >> From: Alex Shi
> >> Date: Sat, 23 Nov 2013 23:18:09 +0800
> >> Subject: [PATCH 4/4] sched: bias to target cpu lo
>> From 5cd67d975001edafe2ee820e0be5d86881a23bd6 Mon Sep 17 00:00:00 2001
>> From: Alex Shi
>> Date: Sat, 23 Nov 2013 23:18:09 +0800
>> Subject: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
>>
>> Task migration happens when target just a
On 12/20/2013 07:19 PM, Morten Rasmussen wrote:
>> @@ -4132,10 +4137,10 @@ find_idlest_group(struct sched_domain *sd, struct
>> task_struct *p, int this_cpu)
>> >
>> >for_each_cpu(i, sched_group_cpus(group)) {
>> >/* Bias balancing toward cpus of our domain */
>>
Any testing is appreciated!
>
> BTW, Seems lots of changes in scheduler come from kinds of
> scenarios/benchmarks
> experience. But I still like to take any theoretical comments/suggestions.
>
> --
> Thanks
> Alex
>
> ===
>
> From 5cd67d975001edafe2ee82
comments/suggestions.
--
Thanks
Alex
===
From 5cd67d975001edafe2ee820e0be5d86881a23bd6 Mon Sep 17 00:00:00 2001
From: Alex Shi
Date: Sat, 23 Nov 2013 23:18:09 +0800
Subject: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
Task migration happens when target just
On Tue, Dec 17, 2013 at 02:10:12PM +, Morten Rasmussen wrote:
> > @@ -4135,7 +4141,7 @@ find_idlest_group(struct sched_domain *sd, struct
> > task_struct *p, int this_cpu)
> > if (local_group)
> > load = source_load(i);
> > el
On Tue, Dec 03, 2013 at 09:05:56AM +, Alex Shi wrote:
> Task migration happens when target just a bit less then source cpu load.
> To reduce such situation happens, aggravate the target cpu load with
> sd->imbalance_pct/100.
>
> This patch removes the hackbench thread regr
On 12/10/2013 10:02 PM, Frederic Weisbecker wrote:
>> > * We were idle, this means load 0, the current load might be
>> > * !0 due to remote wakeups and the sort.
>> > + * or we may has only one task and in NO_HZ_FULL, then still use
&g
On Tue, Dec 03, 2013 at 08:35:12PM +0800, Alex Shi wrote:
> We are not always 0 when update nohz cpu load, after nohz_full enabled.
> But current code still treat the cpu as idle. that is incorrect.
> Fix it to use correct cpu_load.
>
> Signed-off-by: Alex Shi
> ---
> ke
On 12/04/2013 02:17 PM, Alex Shi wrote:
> On 12/03/2013 08:35 PM, Alex Shi wrote:
>> > We are not always 0 when update nohz cpu load, after nohz_full enabled.
>> > But current code still treat the cpu as idle. that is incorrect.
>> > Fix it to use correct cpu_load.
> We observed a 150% performance gain with the vm-scalability/300s-mmap-pread-seq
> testcase with this patch applied. Here is a list of changes we got so far:
>
> testbox : brickland
I found some explanation of brickland on the wiki:
High-end server platform based on the Ivy Bridge-EX processor
> testcase: vm-sc
On Tue, Dec 03, 2013 at 05:05:56PM +0800, Alex Shi wrote:
> Task migration happens when target just a bit less then source cpu load.
> To reduce such situation happens, aggravate the target cpu load with
> sd->imbalance_pct/100.
>
> This patch removes the hackbench thread regr
On 12/03/2013 08:35 PM, Alex Shi wrote:
> We are not always 0 when update nohz cpu load, after nohz_full enabled.
> But current code still treat the cpu as idle. that is incorrect.
> Fix it to use correct cpu_load.
Frederic, would you like to give some comments?
>
> Signed-o
We are not always at load 0 when updating the nohz cpu load after nohz_full is
enabled, but the current code still treats the cpu as idle; that is incorrect.
Fix it to use the correct cpu_load.
Signed-off-by: Alex Shi
---
kernel/sched/proc.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a
Task migration happens when the target has just a bit less load than the
source cpu. To reduce how often that situation happens, aggravate the target
cpu load with sd->imbalance_pct/100.
This patch removes the hackbench thread regression on Daniel's
Intel Core2 server.
a5d6e63 +patch1~3
On 11/22/2013 02:37 PM, Alex Shi wrote:
> When a nohz_full cpu in tickless mode, it may update cpu_load in
> following chain:
> __tick_nohz_full_check
> tick_nohz_restart_sched_tick
> update_cpu_load_nohz
> then it will be set a incorrect cpu_load: 0.
> This patch try to fix it and give
When a nohz_full cpu is in tickless mode, it may update cpu_load via the
following chain:
__tick_nohz_full_check
tick_nohz_restart_sched_tick
update_cpu_load_nohz
It will then be set to an incorrect cpu_load: 0.
This patch tries to fix that and give the cpu the correct cpu_load value.
Signed-off-by: Alex S
Commit-ID: 83dfd5235ebd66c284b97befe6eabff7132333e6
Gitweb: http://git.kernel.org/tip/83dfd5235ebd66c284b97befe6eabff7132333e6
Author: Alex Shi
AuthorDate: Thu, 20 Jun 2013 10:18:49 +0800
Committer: Ingo Molnar
CommitDate: Thu, 27 Jun 2013 10:07:33 +0200
sched: Update cpu load after
On Thu, Jun 20, 2013 at 10:45:39PM +0200, Frederic Weisbecker wrote:
> Gather the common code that computes the pending idle cpu load
> to decay.
>
> Signed-off-by: Frederic Weisbecker
> Cc: Ingo Molnar
> Cc: Li Zhong
> Cc: Paul E. McKenney
> Cc: Peter Zijlstra
&g
Now that the decaying cpu load stat indexes used by LB_BIAS
are ignored in full dynticks mode, let's conditionally build
that code to optimize the off case.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: T
Gather the common code that computes the pending idle cpu load
to decay.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
Cc: Borislav Petkov
Cc: Alex Shi
Cc: Paul Turner
Cc: Mike Galbraith
Cc
To get the latest runnable info, we need to do this cpuload update after
task_tick.
Signed-off-by: Alex Shi
Reviewed-by: Paul Turner
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c78a9e2..ee0225e 100644
-
We cannot compare the loads of two cpus directly, since the cpu power
of the two cpus may vary largely.
Suppose we have two such cpus:
CPU A:
No real time work, and there are 3 tasks, with rq->load.weight
being 512.
CPU B:
Has real time work, and it takes 3/4 of the cpu power,
On Fri, Jun 7, 2013 at 12:20 AM, Alex Shi wrote:
> To get the latest runnable info, we need do this cpuload update after
> task_tick.
>
> Signed-off-by: Alex Shi
Reviewed-by: Paul Turner
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kerne
To get the latest runnable info, we need to do this cpuload update after
task_tick.
Signed-off-by: Alex Shi
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6f226c2..05176b8 100644
--- a/kernel/sched/core.c
+
To get the latest runnable info, we need to do this cpuload update after
task_tick.
Signed-off-by: Alex Shi
---
kernel/sched/core.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 866c05a..f1f9641 100644
--- a/kernel/sched/co
To get the latest runnable info, we need to do this cpuload update after
task_tick.
Signed-off-by: Alex Shi
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ecec7f1..33bcebf 100644
--- a/kernel/sched/core.c
+
To get the latest runnable info, we need to do this cpuload update after
task_tick.
Signed-off-by: Alex Shi
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e0c003a..0fedeed 100644
--- a/kernel/sched/core.c
+
On 04/09/2013 07:56 PM, Stratos Karafotis wrote:
On 04/05/2013 10:50 PM, Stratos Karafotis wrote:
Hi Viresh,
On 04/04/2013 07:54 AM, Viresh Kumar wrote:
Hi Stratos,
Yes, your results show some improvements. BUT if performance is the
only thing
we were looking for, then we will never use ondem
On 04/10/2013 06:22 AM, Viresh Kumar wrote:
On 9 April 2013 22:26, Stratos Karafotis wrote:
On 04/05/2013 10:50 PM, Stratos Karafotis wrote:
Hi Viresh,
On 04/04/2013 07:54 AM, Viresh Kumar wrote:
Hi Stratos,
Yes, your results show some improvements. BUT if performance is the only
thing
we
Currently, the CPU for the RX napi handler is scheduled when the backlog
packet count is greater than budget * cores_in_use.
If more cpus are used for the RX napi handler, there is little chance the
backlog packet count is greater than budget * cores_in_use.
This patch makes the CPU for the RX napi handler be scheduled w
On 9 April 2013 22:26, Stratos Karafotis wrote:
> On 04/05/2013 10:50 PM, Stratos Karafotis wrote:
>>
>> Hi Viresh,
>>
>> On 04/04/2013 07:54 AM, Viresh Kumar wrote:
>>>
>>> Hi Stratos,
>>>
>>> Yes, your results show some improvements. BUT if performance is the only
>>> thing
>>> we were looking f
On 04/05/2013 10:50 PM, Stratos Karafotis wrote:
Hi Viresh,
On 04/04/2013 07:54 AM, Viresh Kumar wrote:
Hi Stratos,
Yes, your results show some improvements. BUT if performance is the only thing
we were looking for, then we will never use ondemand governor but performance
governor.
I suspect
Hi Viresh,
On 04/04/2013 07:54 AM, Viresh Kumar wrote:
> Hi Stratos,
>
> Yes, your results show some improvements. BUT if performance is the only thing
> we were looking for, then we will never use ondemand governor but performance
> governor.
>
> I suspect this little increase in performance mu
Thanks,
Stratos
Viresh Kumar wrote:
>On 4 April 2013 12:17, stratosk wrote:
>> Why do you suspect significant increased power? With ondemand the CPU will
>> go down to lowest freq as soon as the load will decreased. And the
>> measurement shows that the CPU load will dec
On 4 April 2013 12:17, stratosk wrote:
> Why do you suspect significant increased power? With ondemand the CPU will
> go down to lowest freq as soon as the load will decreased. And the
> measurement shows that the CPU load will decrease faster (because of faster
> calculation).
Hi Viresh,
I never use performance governor, but I want improved performance with ondemand.
Why do you suspect significant increased power? With ondemand the CPU will go
down to lowest freq as soon as the load will decreased. And the measurement
shows that the CPU load will decrease faster
Hi Stratos,
On 4 April 2013 05:00, Stratos Karafotis wrote:
> I tried to do some measurements simulating a CPU load with a loop that simply
> counts
> an integer. The first test simulates a CPU load that lasts 2 x sampling_rate
> = ~ 2us.
> The second ~4us and the
robably misunderstood it...
>>
>>> The goal is to detect CPU load as soon as possible to increase frequency.
>>>
>>> Could you please clarify this?
>>
>> But he is looking for some numbers to prove your patch. Some numbers
>> that shows performa
On Wednesday, April 03, 2013 12:13:56 PM Viresh Kumar wrote:
> On 3 April 2013 12:01, stratosk wrote:
> > I'm sorry, I don't understand.
> > The goal of this patch is not energy saving.
>
> He probably misunderstood it...
>
> > The goal is to detect