N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/oom_kill.c |2 +-
1
a N_MEMORY. We just introduce it as an alias to
N_HIGH_MEMORY and fix all improper usages of N_HIGH_MEMORY in later patches.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/nodemask.h |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/include/linux/nodemask.h b
online_movable is here:
https://lkml.org/lkml/2012/7/4/145
The new V2 discards the MIGRATE_HOTREMOVE approach and uses a more straightforward
implementation (only 1 patch).
Lai Jiangshan (21):
page_alloc.c: don't subtract unrelated memmap from zone's present
pages
memory_hotplug: fix missing nodemask
a N_MEMORY. We just introduce it as an alias to
N_HIGH_MEMORY and fix all improper usages of N_HIGH_MEMORY in later patches.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Christoph Lameter c...@linux.com
Acked-by: Hillf Danton dhi...@gmail.com
---
include/linux/nodemask.h |1 +
1 files
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
update nodemasks management for N_MEMORY
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/memory-hotplug.txt |5 +++-
include/linux/memory.h |1 +
mm/memory_hotplug.c | 49 +
3 files changed, 48 insertions
From: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
memblock.current_limit is set directly even though memblock_set_current_limit()
is prepared for that purpose. So fix it.
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/kernel/setup.c
.
The patch adds the check to memblock_find_in_range_node()
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/memblock.c |5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index
.
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/memblock.h |1 +
mm/memblock.c|5 -
mm/page_alloc.c |6 +-
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/include
for THP.
Current constraints: only a memory block which is adjacent to ZONE_MOVABLE
can be onlined from ZONE_NORMAL to ZONE_MOVABLE.
For the opposite onlining behavior, we also introduce online_kernel to change
a memory block of ZONE_MOVABLE to ZONE_KERNEL when onlined.
Signed-off-by: Lai
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
init/main.c |2 +-
1
-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/mm/init_64.c |4 +++-
mm/page_alloc.c | 40 ++--
2 files changed, 25 insertions(+), 19 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 2b6b4a3..005f00c 100644
).
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/kernel-parameters.txt |9 +
mm/page_alloc.c | 29 -
2 files changed, 37 insertions(+), 1 deletions(-)
diff --git a/Documentation/kernel-parameters.txt
b/Documentation
All are prepared, so we can actually introduce N_MEMORY.
Add CONFIG_MOVABLE_NODE so that we can use it for a movable-dedicated node.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/base/node.c |6 ++
include/linux/nodemask.h |4
mm/Kconfig |8
().
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/mm/numa.c |8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2d125be..a86e315 100644
--- a/arch
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Christoph Lameter c
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Christoph Lameter c
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/memcontrol.c | 18
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/kthread.c |2
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/mempolicy.c | 12
SLUB only focuses on the nodes which have normal memory, so ignore the other
nodes' hot-adding and hot-removing.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/slub.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8c691fa..f8b137a
is needed to do it similarly,
and the new approach should also handle other long-living unreclaimable memory.
The current approach of blindly subtracting the present-pages size is wrong; remove it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/page_alloc.c | 20 +---
1 files changed, 1
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
Make it more readable and easier to add new states.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/base/node.c | 20 ++--
1 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index af1a177..5d7731e 100644
Currently memory_hotplug only manages node_states[N_HIGH_MEMORY];
it forgot to manage node_states[N_NORMAL_MEMORY]. Fix it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/memory-hotplug.txt |5 ++-
include/linux/memory.h |1 +
mm/memory_hotplug.c
srcu_reschedule(struct srcu_struct *sp)
}
if (pending)
- queue_delayed_work(system_nrt_wq, &sp->work, SRCU_INTERVAL);
+ schedule_delayed_work(&sp->work, SRCU_INTERVAL);
}
/*
Acked-By: Lai Jiangshan la...@cn.fujitsu.com
--
To unsubscribe from this list: send
this_cpu_dec() can do the same thing, and sometimes it is better
(it avoids preempt_disable() and uses smaller instructions).
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/srcu.c |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/kernel/srcu.c b/kernel/srcu.c
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/srcu.c | 11 +--
1 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/kernel/srcu.c b/kernel/srcu.c
index 55524f6..38a762f 100644
--- a/kernel/srcu.c
+++ b/kernel/srcu.c
@@ -471,12 +471,11 @@ EXPORT_SYMBOL_GPL(synchronize_srcu
?
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/srcu.c |3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/kernel/srcu.c b/kernel/srcu.c
index 38a762f..224400a 100644
--- a/kernel/srcu.c
+++ b/kernel/srcu.c
@@ -294,9 +294,8 @@ int __srcu_read_lock(struct srcu_struct
The core of srcu is changed, but the comments of synchronize_srcu()
describe the old algorithm. Update it to match the new algorithm.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/srcu.c | 10 ++
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel
you need read-side critical sections that are respected
even though they are in the middle of the idle loop, during
user-mode execution, or on an offlined CPU? If so, SRCU is the
only choice that will work for you.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Hi, Paul
These are tiny cleanups for srcu.
PATCH 4~8 make the code or the comments match the new SRCU.
Thanks,
Lai
Lai Jiangshan (8):
srcu: simplify __srcu_read_unlock() via this_cpu_dec()
srcu: add might_sleep() annotation to synchronize_srcu()
srcu: simple cleanup
synchronize_srcu() can sleep, but it will not sleep if the fast path
succeeds. This annotation will help us catch the problem early
if it is called in a wrong context which can't sleep.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/srcu.c |1 +
1 files changed, 1 insertions
you need read-side critical sections that are respected
even though they are in the middle of the idle loop, during
user-mode execution, or on an offlined CPU? If so, SRCU is the
only choice that will work for you.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Pack 6 lines of code into 2 lines.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/srcu.c |8 ++--
1 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/kernel/srcu.c b/kernel/srcu.c
index 48d0edb..ac08970 100644
--- a/kernel/srcu.c
+++ b/kernel/srcu.c
@@ -278,12
as movable.
+
+ Say Y here if you want to hotplug a whole node.
+ Say N here if you want kernel to use memory on all nodes evenly.
Thank you for adding the help text which should have been done by me.
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
--
To unsubscribe from this list: send
Hi, Tejun
On 09/27/2012 02:38 AM, Tejun Heo wrote:
On Thu, Sep 27, 2012 at 01:20:42AM +0800, Lai Jiangshan wrote:
works in system_long_wq may run for a long time.
Add WQ_CPU_INTENSIVE to system_long_wq to avoid these kinds of works occupying
the running workers, which delays the normal works.
On 09/27/2012 02:28 AM, Tejun Heo wrote:
On Thu, Sep 27, 2012 at 01:20:35AM +0800, Lai Jiangshan wrote:
is_chained_work() is too complicated. We can simply find out
whether the current task is a worker via PF_WQ_WORKER or wq->rescuer.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel
On 09/27/2012 02:36 AM, Tejun Heo wrote:
On Thu, Sep 27, 2012 at 01:20:38AM +0800, Lai Jiangshan wrote:
All newly created workers will enter idle soon;
WORKER_STARTED is not used any more, so remove it.
Please merge this with the previous patch.
OK, I will do it.
Thanks,
Lai
On 09/27/2012 02:24 AM, Tejun Heo wrote:
On Thu, Sep 27, 2012 at 01:20:34AM +0800, Lai Jiangshan wrote:
There is no reason to use WORKER_PREP, so remove it from the rescuer.
And there is no reason to set it so early in alloc_worker();
move worker->flags = WORKER_PREP to start_worker().
Merge
On 09/27/2012 02:07 AM, Tejun Heo wrote:
On Thu, Sep 27, 2012 at 01:20:32AM +0800, Lai Jiangshan wrote:
the rescuer thread must be a worker which is WORKER_NOT_RUNNING:
if it is *not* WORKER_NOT_RUNNING, it will increase nr_running
and wrongly disable the normal workers.
So
On 09/27/2012 02:34 AM, Tejun Heo wrote:
(cc'ing Ray Jui)
On Thu, Sep 27, 2012 at 01:20:36AM +0800, Lai Jiangshan wrote:
the rescuer is NOT_RUNNING, so it makes no sense for it to wake up other workers;
if there are available normal workers, they are already woken up when needed.
Signed-off
It is safe to acquire the scheduler lock while holding rnp->lock since the rcu read lock
always has deadlock immunity (rnp->lock can never be nested inside the scheduler lock);
this partially reverts patch 016a8d5b.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/rcutree.c | 17 ++---
kernel
,
the synchronize_rcu_expedited() will be slowed down, and it can't get help
from rcu_boost.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/rcutree_plugin.h | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 54f7e45..6b23b6f
which defers rcu_read_unlock_special()
if irqs are disabled when __rcu_read_unlock() is called.
So __rcu_read_unlock() can't work here (irqs are disabled here)
if the next patch is applied.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/rcutree_plugin.h | 11 +++
1 files changed, 7
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/rcutree.h|1 +
kernel/rcutree_plugin.h |1 +
kernel/rcutree_trace.c |1 +
3 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index 4a39d36..a5e9643 100644
--- a/kernel
After patch 10f39bb1, special RCU_READ_UNLOCK_BLOCKED can't be true
in irq nor softirq (since RCU_READ_UNLOCK_BLOCKED can only be set
during preemption).
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/rcutree_plugin.h |6 --
1 files changed, 0 insertions(+), 6 deletions
described in 016a8d5b
still exists (rcu_read_unlock_special() calls wake_up).
The problem is fixed in patch 5.
Lai Jiangshan (8):
rcu: add a warn to rcu_preempt_note_context_switch()
rcu: rcu_read_unlock_special() can be nested in irq/softirq 10f39bb1
rcu: keep irqs disabled
If rcu_read_unlock_special() is deferred, we can invoke it earlier
in the scheduler tick.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/rcutree_plugin.h |5 -
1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
().
The algorithm enlarges the probability of deferring, but the probability
is still very low.
Deferring does add a small overhead, but it offers us:
1) true deadlock immunity for the rcu read site
2) removal of the overhead of the irq-work (250 times per second on average)
Signed-off-by: Lai
It is expected that _nesting == INT_MIN if _nesting < 0.
Add a warning to it if something unexpected happens.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/rcutree_plugin.h |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/rcutree_plugin.h b/kernel
Just a clean-up, but it gives us better readability.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 7d85429..a44f501 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1827,13 +1827,13
,
and the pool will not serve works; any work which is queued
on that pool will be rejected except chained works.
2) when all the pending works are finished and all workers are idle, the worker
thread will schedule offline_pool() to clear the workers.
Signed-off-by: Lai Jiangshan la
On 07/25/2013 11:31 PM, Tejun Heo wrote:
Hello, Lai.
On Thu, Jul 25, 2013 at 06:52:02PM +0800, Lai Jiangshan wrote:
The unbound pools and their workers can be destroyed/cleared
when their refcnt becomes zero. But the cpu pools can't be destroyed
because they are always referenced; their refcnt
On 07/26/2013 11:07 AM, Tejun Heo wrote:
Hello,
On Fri, Jul 26, 2013 at 10:13:25AM +0800, Lai Jiangshan wrote:
Hmmm... if I'm not confused, now the cpu pools just behave like a
normal unbound pool when the cpu goes down,
cpu pools are always referenced, they don't behave like unbound pool
On Fri, Jul 26, 2013 at 6:22 PM, Tejun Heo t...@kernel.org wrote:
On Fri, Jul 26, 2013 at 11:47:04AM +0800, Lai Jiangshan wrote:
no worker can kill itself.
Managers always try to leave 2 workers,
so the workers of the offline cpu pool can't be totally destroyed.
But we *do* want
On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
At least one CPU must keep the scheduling-clock tick running for
timekeeping purposes whenever there is a non-idle CPU. However, with
the new nohz_full adaptive-idle machinery, it is difficult
On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
Because RCU's quiescent-state-forcing mechanism is used to drive the
full-system-idle state machine, and because this mechanism is executed
by RCU's grace-period kthreads, this commit forces
On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit adds the state machine that takes the per-CPU idle data
as input and produces a full-system-idle indication as output. This
state machine is driven out of RCU's
On 07/30/2013 12:52 AM, Paul E. McKenney wrote:
On Mon, Jul 29, 2013 at 11:36:05AM +0800, Lai Jiangshan wrote:
On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
Because RCU's quiescent-state-forcing mechanism is used to drive the
full-system
[PATCH] rcu/rt_mutex: eliminate a kind of deadlock for rcu read site
Current rtmutex's lock->wait_lock disables neither softirqs nor irqs; it will
cause an rcu read-site deadlock when rcu overlaps with any
softirq-context/irq-context lock.
@L is a spinlock of softirq or irq context.
CPU1
On 08/26/2013 01:43 AM, Paul E. McKenney wrote:
On Sun, Aug 25, 2013 at 11:19:37PM +0800, Lai Jiangshan wrote:
Hi, Steven
Any comments about this patch?
For whatever it is worth, it ran without incident for two hours worth
of rcutorture on my P5 test (boosting but no CPU hotplug).
Lai
, which defaults to 8. Note that this is a build-time
definition.
Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Lai Jiangshan la...@cn.fujitsu.com
[ paulmck: Use true and false for boolean
On 08/27/2013 12:24 AM, Paul E. McKenney wrote:
On Mon, Aug 26, 2013 at 01:45:32PM +0800, Lai Jiangshan wrote:
On 08/20/2013 10:47 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit adds the state machine that takes the per-CPU idle data
as input
On 08/20/2013 10:42 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit drops an unneeded ACCESS_ONCE() and simplifies an our work
is done check in _rcu_barrier(). This applies feedback from Linus
(https://lkml.org/lkml/2013/7/26/777) that he gave to
On 08/20/2013 10:42 AM, Paul E. McKenney wrote:
From: Borislav Petkov b...@alien8.de
CONFIG_RCU_FAST_NO_HZ can increase grace-period durations by up to
a factor of four, which can result in long suspend and resume times.
Thus, this commit temporarily switches to expedited grace periods when
On 08/20/2013 10:51 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit adds a object_debug option to rcutorture to allow the
debug-object-based checks for duplicate call_rcu() invocations to
be deterministically tested.
Signed-off-by: Paul E.
On 08/21/2013 02:38 AM, Paul E. McKenney wrote:
On Tue, Aug 20, 2013 at 06:02:39PM +0800, Lai Jiangshan wrote:
On 08/20/2013 10:51 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit adds a object_debug option to rcutorture to allow the
debug-object
On 08/21/2013 11:17 AM, Paul E. McKenney wrote:
On Sat, Aug 10, 2013 at 08:07:15AM -0700, Paul E. McKenney wrote:
On Sat, Aug 10, 2013 at 11:43:59AM +0800, Lai Jiangshan wrote:
[ . . . ]
So I have to narrow the range of suspect locks. Two choices:
A) don't call rt_mutex_unlock() from
On 07/16/2013 10:41 PM, Srivatsa S. Bhat wrote:
Hi,
I have been seeing this warning every time during boot. I haven't
spent time digging through it though... Please let me know if
any machine-specific info is needed.
Regards,
Srivatsa S. Bhat
On 07/19/2013 04:23 AM, Srivatsa S. Bhat wrote:
On 07/17/2013 03:37 PM, Lai Jiangshan wrote:
On 07/16/2013 10:41 PM, Srivatsa S. Bhat wrote:
Hi,
I have been seeing this warning every time during boot. I haven't
spent time digging through it though... Please let me know if
any machine
-by: Tejun Heo t...@kernel.org
Acked-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/workqueue.txt | 18 ++
include/linux/workqueue.h | 7 ++-
2 files changed, 12 insertions(+), 13 deletions(-)
diff --git a/Documentation/workqueue.txt b/Documentation
sure we
can change it as needed without breaking all users.
Acked-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
Reviewed-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Acked-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux
On 10/08/2013 06:25 PM, Peter Zijlstra wrote:
From: Oleg Nesterov o...@redhat.com
Add the new struct rcu_sync_ops which holds sync/call methods, and
turn the function pointers in rcu_sync_struct into an array of struct
rcu_sync_ops.
Hi, Paul
I think this work should be done in
rcu_sync_exit(); eveybody will now
s/eveybody/everybody/
Please add
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
Thanks,
Lai
+ * have observed the write side critical section. Let 'em rip!.
+ */
+ rss->cb_state = CB_IDLE;
+ rss->gp_state
On 10/17/2013 11:42 PM, Paul E. McKenney wrote:
On Thu, Oct 17, 2013 at 10:07:15AM +0800, Lai Jiangshan wrote:
On 10/08/2013 06:25 PM, Peter Zijlstra wrote:
From: Oleg Nesterov o...@redhat.com
Add the new struct rcu_sync_ops which holds sync/call methods, and
turn the function pointers
CC scheduler people.
I can't figure out what we get with this patch.
On 02/17/2014 07:27 PM, Tetsuo Handa wrote:
Tetsuo Handa wrote:
This is a draft patch which changes task_struct-comm to use RCU.
Changes from previous draft version:
Changed struct rcu_comm to use copy-on-write
not elaborate
any use case like this patch, but it is a valid way to use
kthread_stop().
CC: sta...@vger.kernel.org
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/workqueue.c b/kernel
from the manager and kick the worker to die
directly in the idle timeout handler. And we remove %POOL_MANAGE_WORKERS, which
helps us remove a branch in worker_thread().
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 85
1 files
When @worker is set %WORKER_DIE, it is moved out from the
idle_list/idr; no one can access it except kthread_data().
And in worker_thread(), %PF_WQ_WORKER is cleared from its task;
no one can access the @worker via kthread_data(), so
we can safely free it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
the destructions to ensure every
worker accesses a valid pool when performing self-destruction,
so this patch adds special sync code to put_unbound_pool().
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 47 ++-
1 files
Sorry, the cover letter was not sent to LKML.
On 02/18/2014 12:24 AM, Lai Jiangshan wrote:
This patchset moves the worker destruction (partially) to worker_thread(),
and a worker to be destroyed will perform self-destruction.
This async worker destruction helps us to reduce the manager's
On 02/19/2014 05:37 AM, Tejun Heo wrote:
Hello, Lai.
I massaged the patch a bit and applied it to wq/for-3.14-fixes.
Thanks.
-- 8< --
From 5bdfff96c69a4d5ab9c49e60abf9e070ecd2acbb Mon Sep 17 00:00:00 2001
From: Lai Jiangshan la...@cn.fujitsu.com
Date: Sat, 15 Feb 2014 22:02:28
If a worker is woken up unexpectedly, it will start to work incorrectly.
Although it hardly ever happens, we should catch it and wait to be started
if it does happen.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |6 ++
1 files changed, 6 insertions(+), 0 deletions
On 02/20/2014 08:11 AM, Tejun Heo wrote:
Hello, Lai.
On Wed, Feb 19, 2014 at 11:47:58AM +0800, Lai Jiangshan wrote:
If a worker is woken up unexpectedly, it will start to work incorrectly.
Although it hardly ever happens, we should catch it and wait to be started
if it does happen.
Can
On 02/20/2014 09:50 AM, Lai Jiangshan wrote:
On 02/20/2014 08:11 AM, Tejun Heo wrote:
Hello, Lai.
On Wed, Feb 19, 2014 at 11:47:58AM +0800, Lai Jiangshan wrote:
If a worker is woken up unexpectedly, it will start to work incorrectly.
Although it hardly ever happens, we should catch it and wait
Acked-by: Lai Jiangshan la...@cn.fujitsu.com
On 02/01/2014 03:53 AM, Zoran Markovic wrote:
From: Shaibal Dutta shaibal.du...@broadcom.com
For better use of CPU idle time, allow the scheduler to select the CPU
on which the SRCU grace period work would be scheduled. This improves
idle
application cannot tolerate:
a. Build your kernel with CONFIG_SLUB=y rather than
CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
--
To unsubscribe from this list: send the line unsubscribe linux-kernel
On 02/12/2014 11:18 PM, Jason J. Herne wrote:
On 02/10/2014 06:17 PM, Tejun Heo wrote:
Hello,
On Mon, Feb 10, 2014 at 10:32:11AM -0500, Jason J. Herne wrote:
[ 950.778485] XXX: worker->flags=0x1 pool->flags=0x0 cpu=6
pool->cpu=2 rescue_wq= (null)
[ 950.778488] XXX: last_unbind=-7
On 06/11/2013 08:51 AM, Linus Torvalds wrote:
On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt rost...@goodmis.org wrote:
OK, I haven't found a issue here yet, but youss are beiing trickssy! We
don't like trickssy, and we must find preiouss!!!
.. and I personally have my usual
On Mon, Jun 10, 2013 at 3:36 AM, Paul E. McKenney
paul...@linux.vnet.ibm.com wrote:
Breaking up locks is better than implementing high-contention locks, but
if we must have high-contention locks, why not make them automatically
switch between light-weight ticket locks at low contention and
On Tue, Jun 11, 2013 at 10:48 PM, Lai Jiangshan eag0...@gmail.com wrote:
On Mon, Jun 10, 2013 at 3:36 AM, Paul E. McKenney
paul...@linux.vnet.ibm.com wrote:
Breaking up locks is better than implementing high-contention locks, but
if we must have high-contention locks, why not make them
code and update comments (Steven Rostedt). ]
[ paulmck: Address Eric Dumazet review feedback. ]
[ paulmck: Use Lai Jiangshan idea to eliminate smp_mb(). ]
[ paulmck: Expand ->head_tkt from s32 to s64 (Waiman Long). ]
[ paulmck: Move cpu_relax() to main spin loop (Steven Rostedt). ]
[ paulmck
On Wed, Jun 12, 2013 at 9:58 AM, Steven Rostedt rost...@goodmis.org wrote:
On Wed, 2013-06-12 at 09:19 +0800, Lai Jiangshan wrote:
+
+/*
+ * Hand the lock off to the first CPU on the queue.
+ */
+void tkt_q_do_wake(arch_spinlock_t *lock)
+{
+ struct tkt_q_head *tqhp
in ticket mode.
Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
[ paulmck: Eliminate duplicate code and update comments (Steven Rostedt). ]
[ paulmck: Address Eric Dumazet review feedback. ]
[ paulmck: Use Lai Jiangshan idea to eliminate smp_mb(). ]
[ paulmck: Expand ->head_tkt from s32
Rostedt). ]
[ paulmck: Address Eric Dumazet review feedback. ]
[ paulmck: Use Lai Jiangshan idea to eliminate smp_mb(). ]
[ paulmck: Expand ->head_tkt from s32 to s64 (Waiman Long). ]
[ paulmck: Move cpu_relax() to main spin loop (Steven Rostedt). ]
[ paulmck: Reduce queue-switch contention (Waiman