to that epi.
One thing is that the user should not change the events info returned by
epoll_wait().
It's just a proposal, but if it works, there will be no limit on ONESHOT
any more ;-)
Regards,
Michael Wang
Signed-off-by: Paton J. Lewis pale...@adobe.com
---
fs/eventpoll.c
On 11/01/2012 02:57 AM, Paton J. Lewis wrote:
On 10/30/12 11:32 PM, Michael Wang wrote:
On 10/26/2012 08:08 AM, Paton J. Lewis wrote:
From: Paton J. Lewis pale...@adobe.com
It is not currently possible to reliably delete epoll items when
using the
same epoll set from multiple threads. After
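A minimal sketch of the race in question (this illustration is mine, not the patch's code; names are illustrative):

	/* thread A: waits, then handles the item's user data */
	n = epoll_wait(epfd, events, MAX_EVENTS, -1);
	for (i = 0; i < n; i++)
		handle(events[i].data.ptr);	/* may still be running here... */

	/* thread B: concurrently deletes the item and frees that data */
	epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
	free(item);				/* ...use-after-free for thread A */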
lock?
2. what's the conflict caches?
3. how are their lock operations nested?
And I think it would be better if we had the bug log in the patch comment,
so folks would easily know the reason we need this patch ;-)
Regards,
Michael Wang
Signed-off-by: Glauber Costa glom...@parallels.com
CC
On 11/02/2012 12:48 AM, Glauber Costa wrote:
On 11/01/2012 11:11 AM, Michael Wang wrote:
On 10/29/2012 06:49 PM, Glauber Costa wrote:
We currently provide lockdep annotation for kmalloc caches, and also
caches that have SLAB_DEBUG_OBJECTS enabled. The reason for this is that
we can quite
On 11/02/2012 02:47 AM, Paton J. Lewis wrote:
On 10/31/12 5:43 PM, Michael Wang wrote:
On 11/01/2012 02:57 AM, Paton J. Lewis wrote:
On 10/30/12 11:32 PM, Michael Wang wrote:
On 10/26/2012 08:08 AM, Paton J. Lewis wrote:
From: Paton J. Lewis pale...@adobe.com
It is not currently possible
On 09/18/2012 11:13 AM, Michael Wang wrote:
This patch tries to fix the BUG:
[0.043953] BUG: scheduling while atomic: swapper/0/1/0x1002
[0.044017] no locks held by swapper/0/1.
[0.044692] Pid: 1, comm: swapper/0 Not tainted 3.6.0-rc1-00420-gb7aebb9
#34
[0.045861] Call
On 09/18/2012 04:16 PM, Michael Wang wrote:
The annotation for select_task_rq_fair() has been wrong since commit c88d5910; it's
actually for a removed function.
This patch rewrites the incorrect annotation to make it correct.
Could I get some comments on this patch?
Regards,
Michael Wang
Signed
On 09/19/2012 01:42 PM, Michael Wang wrote:
Since 'cpu == -1' in cpumask_next() is legal, no need to handle '*pos == 0'
specially.
About the comments:
/* just in case, cpu 0 is not the first */
A test with a cpumask in which cpu 0 is not the first has been done, and it
works well
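A minimal sketch of the resulting ->start() (illustrative only, assuming a seq_file iterator over the online mask; not the actual patch):

static void *cpu_seq_start(struct seq_file *m, loff_t *pos)
{
	/* cpumask_next(-1, mask) legally returns the first set CPU,
	 * so *pos == 0 needs no special handling */
	unsigned int cpu = cpumask_next(*pos - 1, cpu_online_mask);

	if (cpu >= nr_cpu_ids)
		return NULL;
	*pos = cpu + 1;
	return (void *)(unsigned long)(cpu + 1);	/* non-NULL token */
}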
On 09/26/2012 09:02 PM, Borislav Petkov wrote:
On Wed, Sep 26, 2012 at 11:43:52AM +0800, Michael Wang wrote:
On 09/19/2012 01:42 PM, Michael Wang wrote:
Since 'cpu == -1' in cpumask_next() is legal, no need to handle '*pos == 0'
specially.
About the comments:
/* just in case, cpu 0
On 09/26/2012 05:35 PM, Srivatsa S. Bhat wrote:
On 09/13/2012 06:17 PM, Srivatsa S. Bhat wrote:
On 09/13/2012 12:00 PM, Michael Wang wrote:
On 09/12/2012 11:31 PM, Paul E. McKenney wrote:
On Wed, Sep 12, 2012 at 06:06:20PM +0530, Srivatsa S. Bhat wrote:
On 07/19/2012 10:45 PM, Paul E
On 09/26/2012 11:43 AM, Michael Wang wrote:
On 09/18/2012 04:16 PM, Michael Wang wrote:
The annotation for select_task_rq_fair() has been wrong since commit c88d5910; it's
actually for a removed function.
This patch rewrites the incorrect annotation to make it correct.
Could I get some comments
On 09/26/2012 11:41 AM, Michael Wang wrote:
On 09/18/2012 11:13 AM, Michael Wang wrote:
This patch tries to fix the BUG:
[0.043953] BUG: scheduling while atomic: swapper/0/1/0x1002
[0.044017] no locks held by swapper/0/1.
[0.044692] Pid: 1, comm: swapper/0 Not tainted 3.6.0-rc1
On 10/06/2012 05:04 PM, Michael Wang wrote:
On 09/26/2012 11:43 AM, Michael Wang wrote:
On 09/18/2012 04:16 PM, Michael Wang wrote:
The annotation for select_task_rq_fair() has been wrong since commit c88d5910;
it's actually for a removed function.
This patch rewrites the incorrect annotation to make
On 10/06/2012 05:06 PM, Michael Wang wrote:
On 09/26/2012 11:41 AM, Michael Wang wrote:
On 09/18/2012 11:13 AM, Michael Wang wrote:
This patch tries to fix the BUG:
[0.043953] BUG: scheduling while atomic: swapper/0/1/0x1002
[0.044017] no locks held by swapper/0/1.
[0.044692
is not 0, return BUSY
Please let me know if I missed something ;-)
Regards,
Michael Wang
Since it's RCU, review to ensure I've not made any serious mistakes could
be quite helpful:
#define _LGPL_SOURCE 1
#define _GNU_SOURCE 1
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
. But in code we
are checking (p->prio < oldprio), i.e. reschedule if we were currently
running and our priority increased.
It's the user nice value I suppose, so it should be reversed when we are
talking about weight.
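For reference, a sketch of the check under discussion (kernel ->prio is inverted: numerically smaller means higher priority; the helper names follow kernels of that era and are shown only for illustration):

	/* smaller ->prio == higher priority, so p->prio < oldprio
	 * means the task's priority was just raised */
	if (task_running(rq, p) && p->prio < oldprio)
		resched_task(p);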
Regards,
Michael Wang
Sorry if I am wrong :(
--
viresh
On 09/18/2012 11:13 AM, Michael Wang wrote:
This patch tries to fix the BUG:
[0.043953] BUG: scheduling while atomic: swapper/0/1/0x1002
[0.044017] no locks held by swapper/0/1.
[0.044692] Pid: 1, comm: swapper/0 Not tainted 3.6.0-rc1-00420-gb7aebb9
#34
[0.045861] Call
On 09/18/2012 04:16 PM, Michael Wang wrote:
The annotation for select_task_rq_fair() has been wrong since commit c88d5910; it's
actually for a removed function.
This patch rewrites the incorrect annotation to make it correct.
Could I get some comments on the patch?
Regards,
Michael Wang
Signed-off
interrupt comes in after the apic
was shut down; I'm not sure whether this could help Huacai, just
as a clue...
Regards,
Michael Wang
That simply doesn't make any sense.
Signed-off-by: Huacai Chen che...@lemote.com
---
kernel/sched/core.c |5 +++--
1 files changed, 3 insertions(+), 2
, whether you think it's good or junk, please let
me know :)
Regards,
Michael Wang
play.sh:
DURATION=10
NORMAL_THREADS=24
PERIOD=10
make clean
make
insmod ./schedtm.ko normalnr=$NORMAL_THREADS period=$PERIOD
sleep $DURATION
rmmod ./schedtm.ko
dmesg | grep schedtm
schedtm.c:
/*
* scheduler test
, but I need more
feedback and suggestions :)
Regards,
Michael Wang
Regards,
Charles
On 10/25/2012 01:40 PM, Michael Wang wrote:
Hi, Folks
Charles has raised a problem that we don't have any tool yet
for testing the scheduler without any disturbance from other
subsystems, and I also found
this is caused by an apic issue and has nothing to do with the rcu as
before, so I really want to figure out whether it is closely related to
commit b1420f1?
Regards,
Michael Wang
commit b1420f1c8bfc30ecf6380a31d0f686884834b599
Author: Paul E. McKenney paul.mcken...@linaro.org
Date: Thu Mar 1 13:18:08 2012
set itself to UNINTERRUPTIBLE without
any plan to switch itself back later (or the time is too long); are you
accidentally using some badly designed module?
BTW, it's better to paste the whole log in the mail as text, not a picture.
Regards,
Michael Wang
BR,
Paweł.
On 11/12/2012 03:16 PM, Paweł Sikora wrote:
On Monday 12 of November 2012 11:04:12 Michael Wang wrote:
On 11/09/2012 09:48 PM, Paweł Sikora wrote:
Hi,
while playing with a new UPS I've caught a nice oops on reboot:
http://imgbin.org/index.php?page=image&id=10253
probably the upstream
On 11/12/2012 08:33 PM, Paweł Sikora wrote:
On Monday 12 of November 2012 11:22:47 Paweł Sikora wrote:
On Monday 12 of November 2012 15:40:31 Michael Wang wrote:
On 11/12/2012 03:16 PM, Paweł Sikora wrote:
On Monday 12 of November 2012 11:04:12 Michael Wang wrote:
On 11/09/2012 09:48 PM
On 11/13/2012 05:40 PM, Paweł Sikora wrote:
On Monday 12 of November 2012 13:33:39 Paweł Sikora wrote:
On Monday 12 of November 2012 11:22:47 Paweł Sikora wrote:
On Monday 12 of November 2012 15:40:31 Michael Wang wrote:
On 11/12/2012 03:16 PM, Paweł Sikora wrote:
On Monday 12 of November
On 11/14/2012 10:49 AM, Robert Hancock wrote:
On 11/13/2012 08:32 PM, Michael Wang wrote:
On 11/13/2012 05:40 PM, Paweł Sikora wrote:
On Monday 12 of November 2012 13:33:39 Paweł Sikora wrote:
On Monday 12 of November 2012 11:22:47 Paweł Sikora wrote:
On Monday 12 of November 2012 15:40:31
On 07/12/2012 10:07 PM, Peter Zijlstra wrote:
On Tue, 2012-07-03 at 14:34 +0800, Michael Wang wrote:
From: Michael Wang wang...@linux.vnet.ibm.com
it's impossible to enter the else branch if we have set skip_clock_update
in task_yield_fair(), as yield_to_task_fair() will directly return
true
From: Michael Wang wang...@linux.vnet.ibm.com
This patch is trying to provide a way for users to dynamically change
the behaviour of load balancing by setting the flags of a sched domain.
Currently it relies on the cpu cgroup, and only SD_LOAD_BALANCE is
implemented; usage:
1. /sys/fs/cgroup/domain
Add the missing cc list.
On 07/16/2012 05:16 PM, Michael Wang wrote:
From: Michael Wang wang...@linux.vnet.ibm.com
This patch is trying to provide a way for users to dynamically change
the behaviour of load balancing by setting the flags of a sched domain.
Currently it relies on the cpu cgroup
From: Michael Wang wang...@linux.vnet.ibm.com
This patch set provides a way for users to dynamically configure the scheduler
domain flags, which are usually static.
We can do the configuration through cpuset cgroup, new file will be found
under each hierarchy:
sched_smt_domain_flag
From: Michael Wang wang...@linux.vnet.ibm.com
Add the variables we need for the implementation of dynamic domain
flags.
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
include/linux/sched.h | 22 ++
kernel/cpuset.c |7 +++
2 files changed, 29
From: Michael Wang wang...@linux.vnet.ibm.com
Add the functions and code which will do initialization for dynamic
domain flags.
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
include/linux/sched.h | 10 --
kernel/cpuset.c |8 ++--
kernel/sched/core.c
From: Michael Wang wang...@linux.vnet.ibm.com
We will record the domain flags for cpuset in update_domain_attr and
use it to replace the static domain flags in set_domain_attribute.
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
kernel/cpuset.c |7 +++
kernel/sched
From: Michael Wang wang...@linux.vnet.ibm.com
Add the facility for users to configure the dynamic domain flags and
enable/disable it.
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
kernel/cpuset.c | 85 +++
1 files changed, 85
to avoid the
warning info.
So is this the fix you mentioned? Or has someone found out the true
reason and fixed it?
Regards,
Michael Wang
regards,
dan carpenter
On 07/20/2012 03:00 PM, Mike Galbraith wrote:
On Fri, 2012-07-20 at 11:09 +0800, Michael Wang wrote:
Hi, Mike, Martin, Dan
I'm currently taking an eye on the rcu stall issue which was reported by
you in the mail:
rcu: endless stalls
From: Mike Galbraith
linux-3.4-rc7: rcu_sched self
On 07/20/2012 04:36 PM, Dan Carpenter wrote:
On Fri, Jul 20, 2012 at 04:24:25PM +0800, Michael Wang wrote:
On 07/20/2012 02:41 PM, Dan Carpenter wrote:
My bug was fixed in March. There was an email thread about it when
the merge window opened but I can't find it...
Hi, Dan
Thanks for your
On 04/10/2013 11:30 AM, Michael Wang wrote:
Log since RFC:
1. Throttle only when wake-affine failed. (thanks to PeterZ)
2. Do throttle inside wake_affine(). (thanks to PeterZ)
3. Other small fix.
Recent testing shows that the wake-affine stuff causes a regression on pgbench
Hi, Mike
Thanks for your reply :)
On 04/22/2013 01:27 PM, Mike Galbraith wrote:
On Mon, 2013-04-22 at 12:21 +0800, Michael Wang wrote:
On 04/10/2013 11:30 AM, Michael Wang wrote:
Log since RFC:
1. Throttle only when wake-affine failed. (thanks to PeterZ)
2. Do throttle inside
On 04/03/2013 04:46 PM, Alex Shi wrote:
On 04/02/2013 03:23 PM, Michael Wang wrote:
| 15 GB | 12 | 45393 | | 43986 |
| 15 GB | 16 | 45110 | | 45719 |
| 15 GB | 24 | 41415 | | 36813 |-11.11%
| 15 GB | 32 | 35988 | | 34025 |
The reason may be caused
,
Michael Wang
And this burst patch isn't needed on the 3.9 kernel. Patches 1,2,4,5,6,7 are
enough and valid.
On 03/25/2013 01:24 PM, Michael Wang wrote:
Recent testing shows that the wake-affine stuff causes a regression on pgbench; the
hiding rat was finally caught.
The wake-affine stuff always tries to pull the wakee close to the waker; in theory,
this will benefit us if the waker's cpu has cached hot data
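A minimal sketch of the throttle idea from the changelog above (field and helper names are mine, not the series' actual identifiers):

	if (affine_sd && !wake_affine_throttled(p)) {
		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
			prev_cpu = cpu;
		else
			/* wake_affine() failed: open a throttle window,
			 * during which wake_affine() won't be retried */
			p->wake_affine_stamp = jiffies;
	}

where wake_affine_throttled() would simply test jiffies against p->wake_affine_stamp plus the sysctl interval.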
On 04/08/2013 06:00 PM, Peter Zijlstra wrote:
On Mon, 2013-03-25 at 13:24 +0800, Michael Wang wrote:
if (affine_sd) {
- if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
+ if (cpu != prev_cpu && wake_affine(affine_sd, p, sync
| 32 | 35918 | | 37632 | +4.77% | 47923 | +33.42% | 52241 | +45.45%
Suggested-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
include/linux/sched.h |5 +
kernel/sched/fair.c | 31 +++
kernel/sysctl.c
On 04/10/2013 12:16 PM, Alex Shi wrote:
On 04/10/2013 11:30 AM, Michael Wang wrote:
Suggested-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
Reviewed-by: Alex Shi alex@intel.com
Thanks for your review :)
BTW, could you try the kbuild
() was invoked with the double rq lock held; for all the se on the src and
dst rq, no update should happen, should it?
Regards,
Michael Wang
Vincent
+}
+
/*
* move_tasks tries to move up to imbalance weighted load from busiest to
* this_rq, as part of a balancing operation within domain sd.
@@ -4001,7
Hi, Peter
Thanks for your reply :)
On 04/10/2013 04:51 PM, Peter Zijlstra wrote:
On Wed, 2013-04-10 at 11:30 +0800, Michael Wang wrote:
| 15 GB | 32 | 35918 | | 37632 | +4.77% | 47923 | +33.42% | 52241 | +45.45%
So I don't get this... is wake_affine() once every millisecond _that_
On 04/10/2013 05:22 PM, Michael Wang wrote:
Hi, Peter
Thanks for your reply :)
On 04/10/2013 04:51 PM, Peter Zijlstra wrote:
On Wed, 2013-04-10 at 11:30 +0800, Michael Wang wrote:
| 15 GB | 32 | 35918 | | 37632 | +4.77% | 47923 | +33.42% | 52241 | +45.45%
So I don't get
why the
benefit is so significant, since in such a case, the mother's slightly quicker
response will make all the kids happy :)
Regards,
Michael Wang
-Mike
On 04/11/2013 04:44 PM, Mike Galbraith wrote:
On Thu, 2013-04-11 at 16:26 +0800, Michael Wang wrote:
The 1:N is a good reason to explain why the chance that the wakee's hot data
is cached on curr_cpu is lower, and since it's just 'lower', not 'extinct',
after the throttle interval is large enough
On 04/10/2013 04:51 PM, Peter Zijlstra wrote:
On Wed, 2013-04-10 at 11:30 +0800, Michael Wang wrote:
| 15 GB | 32 | 35918 | | 37632 | +4.77% | 47923 | +33.42% | 52241 | +45.45%
So I don't get this... is wake_affine() once every millisecond _that_
expensive?
Seeing we get a 45
is there, I think, at least no regression.
But the powersaving one suffered some regression on the low end; is that the
sacrifice we are supposed to make for power saving?
Regards,
Michael Wang
results:
A, no clear performance change found on 'performance' policy.
B, specjbb2005 dropped 5~7% on both
|
The reason may be wake_affine()'s higher overhead, and pgbench is
really sensitive to this stuff...
Regards,
Michael Wang
() and pgbench appear in
the same sentence;)
I saw the patch touched wake_affine(), and was just interested in what would
happen ;-)
The patch changed the overhead of wake_affine() and also influenced its
result; I used to think the latter might help pgbench...
Regards,
Michael Wang
-Mike
On 04/02/2013 04:35 PM, Alex Shi wrote:
On 04/02/2013 03:23 PM, Michael Wang wrote:
[snip]
The reason may be wake_affine()'s higher overhead, and pgbench is
really sensitive to this stuff...
Thanks for testing. Would you like to remove the last patch and test it
again? I want
%
Very nice improvement. I'd like to test it with the wake-affine throttle
patch later; let's see what will happen ;-)
Any idea why the last one caused the regression?
Regards,
Michael Wang
On 04/03/2013 10:56 AM, Alex Shi wrote:
On 04/03/2013 10:46 AM, Michael Wang wrote:
| 15 GB | 16 | 45110 | | 48091 |
| 15 GB | 24 | 41415 | | 47415 |
| 15 GB | 32 | 35988 | | 45749 |+27.12%
Very nice improvement, I'd like to test it with the wake-affine
);
this_load = target_load(this_cpu, idx);
Regards,
Michael Wang
+ }
/*
* If sync wakeup then subtract the (maximum possible)
On 04/03/2013 01:38 PM, Michael Wang wrote:
On 04/03/2013 12:28 PM, Alex Shi wrote:
[snip]
but the patch may cause some unfairness if this/prev cpu do not burst at the
same time. So could you try the following patch?
I will try it later, some doubt below :)
[snip]
+
+if (cpu_rq
On 04/03/2013 12:28 PM, Alex Shi wrote:
On 04/03/2013 11:23 AM, Michael Wang wrote:
On 04/03/2013 10:56 AM, Alex Shi wrote:
On 04/03/2013 10:46 AM, Michael Wang wrote:
[snip]
From 4722a7567dccfb19aa5afbb49982ffb6d65e6ae5 Mon Sep 17 00:00:00 2001
From: Alex Shi alex@intel.com
Date
On 04/03/2013 02:53 PM, Alex Shi wrote:
On 04/03/2013 02:22 PM, Michael Wang wrote:
If many tasks sleep for a long time, their runnable load is zero. And if they
are woken up in a burst, the too-light runnable load causes a big imbalance
among CPUs. So such benchmarks, like aim9, drop 5~7%.
With this patch
On 04/03/2013 04:46 PM, Alex Shi wrote:
On 04/02/2013 03:23 PM, Michael Wang wrote:
| 15 GB | 12 | 45393 | | 43986 |
| 15 GB | 16 | 45110 | | 45719 |
| 15 GB | 24 | 41415 | | 36813 |-11.11%
| 15 GB | 32 | 35988 | | 34025 |
The reason may be caused
On 03/14/2013 06:58 PM, Peter Zijlstra wrote:
On Wed, 2013-03-13 at 11:07 +0800, Michael Wang wrote:
However, we have already figured out the logic that wakeup-related tasks
could benefit from running closely; this could promise us a somewhat
reliable benefit.
I'm not convinced that the 2 task
with the default value 1ms (usually the initial
value of balance_interval and the value of min_interval); that will be
based on the latest tip tree.
Regards,
Michael Wang
| | 51060 | +42.16%
Suggested-by: Peter Zijlstra pet...@infradead.org
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
include/linux/sched.h |5 +
kernel/sched/fair.c | 33 -
kernel/sysctl.c | 10 ++
3 files changed, 47
Hi, Mike
Thanks for your reply :)
On 03/25/2013 05:22 PM, Mike Galbraith wrote:
On Mon, 2013-03-25 at 13:24 +0800, Michael Wang wrote:
Recent testing shows that the wake-affine stuff causes a regression on pgbench; the
hiding rat was finally caught.
The wake-affine stuff is always trying
will be better?
This knob is nothing but a compromise; besides, it's a highlight reminding
us that we still have a feature waiting for improvement. If later we find a
way to build an accurate wake-affine, removing the knob should be easy.
Regards,
Michael Wang
On 03/08/2013 12:52 AM, Peter Zijlstra wrote:
On Thu, 2013-03-07 at 17:46 +0800, Michael Wang wrote:
On 03/07/2013 04:36 PM, Peter Zijlstra wrote:
On Wed, 2013-03-06 at 15:06 +0800, Michael Wang wrote:
The wake_affine() stuff tries to bind related tasks closely, but it doesn't
work well
On 03/08/2013 01:21 AM, Peter Zijlstra wrote:
On Wed, 2013-03-06 at 15:06 +0800, Michael Wang wrote:
+static inline int wakeup_related(struct task_struct *p)
+{
+ if (wakeup_buddy(p, current)) {
+ /*
+ * Now check whether current still focuses on his buddy
On 03/07/2013 05:43 PM, Mike Galbraith wrote:
On Thu, 2013-03-07 at 09:36 +0100, Peter Zijlstra wrote:
On Wed, 2013-03-06 at 15:06 +0800, Michael Wang wrote:
The wake_affine() stuff tries to bind related tasks closely, but it doesn't
work well according to the test on 'perf bench sched pipe
On 03/08/2013 01:27 AM, Peter Zijlstra wrote:
On Wed, 2013-03-06 at 15:06 +0800, Michael Wang wrote:
@@ -3351,7 +3420,13 @@ select_task_rq_fair(struct task_struct *p, int
sd_flag, int wake_flags)
}
if (affine_sd) {
- if (cpu != prev_cpu && wake_affine(affine_sd
On 03/08/2013 02:44 PM, Mike Galbraith wrote:
On Fri, 2013-03-08 at 10:37 +0800, Michael Wang wrote:
On 03/07/2013 05:43 PM, Mike Galbraith wrote:
On Thu, 2013-03-07 at 09:36 +0100, Peter Zijlstra wrote:
On Wed, 2013-03-06 at 15:06 +0800, Michael Wang wrote:
wake_affine() stuff is trying
On 03/08/2013 04:26 PM, Mike Galbraith wrote:
On Fri, 2013-03-08 at 15:30 +0800, Michael Wang wrote:
On 03/08/2013 02:44 PM, Mike Galbraith wrote:
In general, I think things would work better if we'd just rate limit how
frequently we can wakeup migrate each individual task.
Isn't
:)
Any objections?
Just one concern: maybe I have misunderstood you, but will it cause
trouble if prctl() is indiscriminately used by some applications?
Will we get fake data?
Regards,
Michael Wang
Thanks,
Ingo
On 03/11/2013 06:36 PM, Peter Zijlstra wrote:
On Fri, 2013-03-08 at 10:50 +0800, Michael Wang wrote:
OK, so there's two issues I have with all this are:
- it completely wrecks task placement for things like interrupts (sadly I
don't
have a good idea about a benchmark where
On 03/11/2013 05:40 PM, Ingo Molnar wrote:
* Michael Wang wang...@linux.vnet.ibm.com wrote:
Hi, Ingo
On 03/11/2013 04:21 PM, Ingo Molnar wrote:
[snip]
I have actually written the prctl() approach before, for instrumentation
purposes, and it does wonders to system analysis.
The idea
an extra syscall.
If it really brings benefit, I think they will consider it; anyway, that's
the developers'/users' decision, and what we need to do is just make the
stuff attractive.
Regards,
Michael Wang
Thanks,
Ingo
a general optimization which benefits all cases.
And I don't agree with removing the stuff, since we have many theories that
it could benefit us; but before it really shows the benefit in all cases,
providing a way to keep it quiet sounds necessary...
Regards,
Michael Wang
Something like
| 32 | 35983 | | 54946 | +52.70%
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
include/linux/sched.h |8 +
kernel/sched/fair.c | 80 -
kernel/sysctl.c | 10 ++
3 files changed, 97 insertions(+), 1 deletions
still exist?
Regards,
Michael Wang
Original Message
Subject: Re: BUG: scheduling while atomic:
ifup-bonding/3711/0x0002 -- V3.6.7
Date:Wed, 28 Nov 2012 13:17:31 -0800
From:Linda Walsh l...@tlinx.org
To: Cong Wang xiyou.wangc...@gmail.com
CC
->alloc_lock) should match (#2
lock); the warning doesn't make sense to me...
Regards,
Michael Wang
[ 450.193366] #3: (cpuset_buffer_lock){+.+...}, at: [811a06d0]
cpuset_print_task_mems_allowed+0x60/0x150
[ 450.195281]
[ 450.195281] stack backtrace:
[ 450.195987] Pid: 6, comm: kworker/u
On 03/07/2013 03:05 PM, Linda Walsh wrote:
Michael Wang wrote:
On 03/02/2013 01:21 PM, Linda Walsh wrote:
Update -- it *used* to stop the messages in 3.6.7.
It no longer stops the messages in 3.8.1 -- (and isn't present by
default -- tried
adding the unlock/lock -- no difference
,
bond->dev->name, slave->dev->name,
Regards,
Michael Wang
[ 24.826556] [8105dab1] process_one_work+0x1a1/0x5d0
[ 24.826558] [8105da4d] ? process_one_work+0x13d/0x5d0
[ 24.826560] [8145ac80] ? bond_loadbalance_arp_mon+0x300/0x300
[ 24.826563
Hi, Peter
Thanks for your reply.
On 03/07/2013 04:36 PM, Peter Zijlstra wrote:
On Wed, 2013-03-06 at 15:06 +0800, Michael Wang wrote:
The wake_affine() stuff tries to bind related tasks closely, but it doesn't
work well according to the test on 'perf bench sched pipe' (thanks to Peter
of the
bond_update_speed_duplex() IMO; I think we need the folks who work on
this driver to make the decision ;-)
Regards,
Michael Wang
Michael Wang wrote:
And both bond_enslave() and bond_mii_monitor() are using
bond_update_speed_duplex()
with preempt disabled.
Along with the changes
On 12/05/2012 06:10 AM, Andrew Morton wrote:
On Tue, 04 Dec 2012 14:23:41 +0530
Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com wrote:
From: Michael Wang wang...@linux.vnet.ibm.com
There are places where preempt_disable() is used to prevent any CPU from
going offline during the critical
On 12/05/2012 10:56 AM, Michael Wang wrote:
[...]
I wonder about the cpu-online case. A typical caller might want to do:
/*
* Set each online CPU's foo to bar
*/
int global_bar;
void set_cpu_foo(int bar)
{
get_online_cpus_stable_atomic();
global_bar = bar
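A sketch of how such a caller would presumably continue (the put_ counterpart and the per-cpu helper are assumed from the API's naming, not quoted from the mail):

	void set_cpu_foo(int bar)
	{
		int cpu;

		get_online_cpus_stable_atomic();
		global_bar = bar;
		for_each_online_cpu(cpu)	/* no CPU can go offline here */
			set_foo_on(cpu, bar);	/* hypothetical per-cpu helper */
		put_online_cpus_stable_atomic();
	}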
On 02/03/2013 01:50 AM, Sebastian Andrzej Siewior wrote:
On 01/31/2013 03:12 AM, Michael Wang wrote:
I'm not sure, but am just concerned about this case:
group 0 cpu 0 cpu 1
least idle 4 task
group 1 cpu 2
-by: Michael Wang wang...@linux.vnet.ibm.com
---
kernel/sched/fair.c | 135 ---
1 files changed, 74 insertions(+), 61 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5eea870..0935c7d 100644
--- a/kernel/sched/fair.c
+++ b
it?
Maybe checking that state in find_idlest_cpu() would be better?
Regards,
Michael Wang
if (!idlest || 100*this_load < imbalance*min_load)
return NULL;
return idlest;
On 01/31/2013 01:16 PM, Namhyung Kim wrote:
Hi Sebastian and Michael,
On Thu, 31 Jan 2013 10:12:35 +0800, Michael Wang wrote:
On 01/31/2013 05:19 AM, Sebastian Andrzej Siewior wrote:
If a new CPU has to be chosen for a task, then the scheduler first selects
the group with the least load
On 01/31/2013 02:58 PM, Namhyung Kim wrote:
On Thu, 31 Jan 2013 14:39:20 +0800, Michael Wang wrote:
On 01/31/2013 01:16 PM, Namhyung Kim wrote:
Anyway, I have an idea with this in mind. It's like adding a new idle
load to each idle cpu rather than special casing the idle cpus like
above
On 01/31/2013 03:40 PM, Namhyung Kim wrote:
On Thu, 31 Jan 2013 15:30:02 +0800, Michael Wang wrote:
On 01/31/2013 02:58 PM, Namhyung Kim wrote:
But AFAIK the number of states in cpuidle is usually less than 10 so maybe
we can change the weight then, but there's no promise...
And I just got
On 01/31/2013 04:24 PM, Michael Wang wrote:
On 01/31/2013 03:40 PM, Namhyung Kim wrote:
On Thu, 31 Jan 2013 15:30:02 +0800, Michael Wang wrote:
On 01/31/2013 02:58 PM, Namhyung Kim wrote:
But AFAIK the number of states in cpuidle is usually less than 10 so maybe
we can change the weight
On 01/31/2013 04:45 PM, Michael Wang wrote:
On 01/31/2013 04:24 PM, Michael Wang wrote:
On 01/31/2013 03:40 PM, Namhyung Kim wrote:
On Thu, 31 Jan 2013 15:30:02 +0800, Michael Wang wrote:
On 01/31/2013 02:58 PM, Namhyung Kim wrote:
But AFAIK the number of states in cpuidle is usually less
| | 55358 | +53.84%
Signed-off-by: Michael Wang wang...@linux.vnet.ibm.com
---
include/linux/sched.h |8
kernel/sched/fair.c | 97 -
kernel/sysctl.c | 10 +
3 files changed, 113 insertions(+), 2 deletions(-)
diff --git
Hi, Mike
Thanks for your reply.
On 02/28/2013 03:18 PM, Mike Galbraith wrote:
On Thu, 2013-02-28 at 14:38 +0800, Michael Wang wrote:
+/*
+ * current is the only task on rq and it is
+ * going to sleep
On 02/28/2013 03:40 PM, Michael Wang wrote:
Hi, Mike
Thanks for your reply.
On 02/28/2013 03:18 PM, Mike Galbraith wrote:
On Thu, 2013-02-28 at 14:38 +0800, Michael Wang wrote:
+ /*
+ * current is the only task on rq
On 02/28/2013 04:04 PM, Mike Galbraith wrote:
On Thu, 2013-02-28 at 15:40 +0800, Michael Wang wrote:
Hi, Mike
Thanks for your reply.
On 02/28/2013 03:18 PM, Mike Galbraith wrote:
On Thu, 2013-02-28 at 14:38 +0800, Michael Wang wrote