we use list_del_init(&worker->entry) when we notice idle workers that need
to be rebound or destroyed.
So we can use list_empty(&worker->entry) to tell whether the worker
needs to be rebound or has been killed.
WORKER_REBIND is no longer needed; remove it to reduce the number of
worker states.
Signed-off-by: Lai
when it is doing rebind in rebind_workers(), so we don't need two flags;
one is enough. Remove WORKER_REBIND from busy rebinding.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 19 ++-
1 files changed, 2 insertions(+), 17 deletions(-)
diff
It makes less sense to use __devinit (the memory will be discarded
after boot when !HOTPLUG).
It is more accurate to use __cpuinit (the memory will be discarded
after boot when !HOTPLUG_CPU).
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |4 ++--
1 files changed
idle_list to ensure the local-wake-up is correct
instead.
Patch3-7 do simple cleanups
Patch2-7 are ready for for-next. I have other development and cleanup for
workqueue;
should I wait until this patchset is merged, or send them at the same time?
Lai Jiangshan (7):
workqueue: clear WORKER_REBIND
We can't know what is being protected from the name manager_mutex,
and may even be misled by it.
Actually, it protects the CPU-association of the gcwq,
so renaming it to assoc_mutex is better.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 40
/workqueue.o.hotcpu_notifier
   text    data    bss     dec     hex filename
  18513    2387   1221   22121    5669 kernel/workqueue.o.cpu_notifier
  18082    2355   1221   21658    549a kernel/workqueue.o.hotcpu_notifier
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |2 +-
1
The argument @delayed is always false at all call sites,
so simply remove it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 21 -
1 files changed, 8 insertions(+), 13 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
->pool before deleting it
in try_to_grab_pending(); thus the tagalong is left in
cwq->pool, just as when grabbing a non-delayed work.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 26 +++---
1 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/kernel
, the patch needs to go to -stable.
If it is the user's responsibility, it is a nice cleanup and can go to for-next.
I prefer that it be workqueue's responsibility.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git
Using a helper instead of open code makes thaw_workqueues() clearer.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 26 +-
1 files changed, 21 insertions(+), 5 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
, not just for SRCU,
but also for RCU-bh. Also document the fact that SRCU readers are
respected on CPUs executing in user mode, idle CPUs, and even on
offline CPUs.
Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Good. (Sorry, I'm late.)
Reviewed-by: Lai Jiangshan la
On 09/17/2012 11:46 PM, Lai Jiangshan wrote:
Patch1 fix new found possible bug.
Patch2 use async algorithm to replace the synchronous algorithm to rebind
idle workers.
The synchronous algorithm requires 3 handshakes, which introduces much
complexity.
The new async algorithm does not do
On 09/19/2012 01:05 AM, Tejun Heo wrote:
On Tue, Sep 18, 2012 at 04:36:53PM +0800, Lai Jiangshan wrote:
The whole workqueue.c keeps the activation order equal to the queue_work() order
in any given cwq, except in workqueue_set_max_active().
If this order is not kept, something may go wrong
On 09/19/2012 01:08 AM, Tejun Heo wrote:
On Tue, Sep 18, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
On Tue, Sep 18, 2012 at 04:36:53PM +0800, Lai Jiangshan wrote:
The whole workqueue.c keeps the activation order equal to the queue_work() order
in any given cwq, except in workqueue_set_max_active
On 09/11/2012 06:18 PM, Yasuaki Ishimatsu wrote:
Hi Lai,
2012/09/11 18:44, Lai Jiangshan wrote:
On 09/11/2012 08:40 AM, Yasuaki Ishimatsu wrote:
Hi Lai,
Using memory_online to online a hot-added node's memory, the following kernel
messages
were shown. Is this a known issue?
Fixed.
Subject
-up is correct
instead.
Patch2-6 do simple cleanup
Lai Jiangshan (6):
workqueue: async idle rebinding
workqueue: new day don't need WORKER_REBIND for busy rebinding
workqueue: remove WORKER_REBIND
workqueue: rename manager_mutex to assoc_mutex
workqueue: use __cpuinit instead of __devinit
/workqueue.o.hotcpu_notifier
   text    data    bss     dec     hex filename
  18513    2387   1221   22121    5669 kernel/workqueue.o.cpu_notifier
  18082    2355   1221   21658    549a kernel/workqueue.o.hotcpu_notifier
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |2
or @idle_list, and make them aware of the exile operation.
(only too_many_workers() changes as a result)
rebind_workers() becomes single-pass and doesn't release gcwq->lock in between.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 126
we use list_del_init(&worker->entry) when we rebind (exile)
or destroy idle workers.
So we can use list_empty(&worker->entry) to tell whether the worker
has been exiled or killed.
WORKER_REBIND is no longer needed; remove it to reduce the number of
worker states.
Signed-off-by: Lai Jiangshan la
We can't know what is being protected from the name manager_mutex,
and may even be misled by it.
Actually, it protects the CPU-association of the gcwq,
so renaming it to assoc_mutex is better.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 40
It makes less sense to use __devinit (the memory will be discarded
after boot when !HOTPLUG).
It is more accurate to use __cpuinit (the memory will be discarded
after boot when !HOTPLUG_CPU).
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |4 ++--
1 files changed
when it is doing rebind in rebind_workers(), so we don't need two flags;
one is enough. Remove WORKER_REBIND from busy rebinding.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |8 +---
1 files changed, 1 insertions(+), 7 deletions(-)
diff --git
, but the works of the cwq in cwq->pool are all
NO_COLOR,
so even when these works are finished, cwq->nr_active will not be
decreased,
and no work will be moved from cwq->delayed_works. The cwq is frozen.
Fix it by moving the tagalong to cwq->pool in try_to_grab_pending().
Signed-off-by: Lai
On 08/29/2012 04:17 AM, Tejun Heo wrote:
Hello, Lai.
On Tue, Aug 28, 2012 at 07:34:37PM +0800, Lai Jiangshan wrote:
So this implementation adds an all_done, thus rebind_workers() can't leave until
idle_worker_rebind() successfully waits until all other idle workers are also
done,
so this wait
On 08/28/2012 03:04 AM, Tejun Heo wrote:
Hello, Lai.
On Tue, Aug 28, 2012 at 01:58:24AM +0800, Lai Jiangshan wrote:
busy_worker_rebind_fn() can't return until all idle workers are rebound;
the code of busy_worker_rebind_fn() ensures this.
So we can change the order of the code
When hotplug happens, the hotplug code will also grab the manager_mutex;
this breaks too_many_workers()'s assumption and makes too_many_workers()
ugly (it kicks the timer wrongly; no actual bug has been found).
To avoid corrupting the assumption, we add the original POOL_MANAGING_WORKERS back.
Signed-off-by: Lai Jiangshan
.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 61 ---
1 files changed, 48 insertions(+), 13 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4f252d0..1363b39 100644
--- a/kernel/workqueue.c
+++ b
wake up.
So we use one explicit write/modify instruction instead.
This bug will not occur on idle workers, because they have another flag,
WORKER_NOT_RUNNING.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 +--
1 files changed, 5 insertions(+), 2 deletions
Patch 1~4 fix possible bugs.
Patch 1 fixes a possible double-write bug
Patch 2,5,7 make the waiting logic clearer
Patch 3,4 fix bugs from manage VS hotplug
Patch 7,8,9 make the wait logic in busy-worker-rebind explicit and make
rebind_workers()
single-pass.
Lai Jiangshan (9
.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 18 +++---
1 files changed, 3 insertions(+), 15 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 719d6ec..7e6145b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1437,16
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 20
1 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e40898a..eeb5752 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -185,8 +185,6
.
By the way, if manager_mutex is grabbed by a real manager,
POOL_MANAGING_WORKERS will be set, so the last idle worker can go on to process work.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 42 ++
1 files changed, 34 insertions(+), 8
WORKER_REBIND is not used for any other purpose,
so idle_worker_rebind() can directly clear it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 13 ++---
1 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
().
The sleeping mutex_lock(&worker->pool->manager_mutex) must be put at the top of
busy_worker_rebind_fn(), because this busy worker thread can sleep
before WORKER_REBIND is cleared, but can't sleep after
WORKER_REBIND is cleared.
It adds a small overhead to the unlikely path.
Signed-off-by: Lai
It is currently single-pass, so we can wait on idle_done instead of on rebind_hold.
Then we can remove rebind_hold and make the code simpler.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 25 +
1 files changed, 9 insertions(+), 16 deletions
On 08/30/2012 02:21 AM, Tejun Heo wrote:
Hello, Lai.
On Thu, Aug 30, 2012 at 12:51:54AM +0800, Lai Jiangshan wrote:
When hotplug happens, the hotplug code will also grab the manager_mutex;
this breaks too_many_workers()'s assumption and makes too_many_workers()
ugly (it kicks the timer wrongly
On 08/30/2012 02:25 AM, Tejun Heo wrote:
On Thu, Aug 30, 2012 at 12:51:55AM +0800, Lai Jiangshan wrote:
If hotplug code grabbed the manager_mutex and worker_thread try to create
a worker, the manage_worker() will return false and worker_thread go to
process work items. Now, on the CPU, all
On 08/30/2012 05:17 PM, Tejun Heo wrote:
Hello, Lai.
On Thu, Aug 30, 2012 at 05:16:01PM +0800, Lai Jiangshan wrote:
gcwq_unbind_fn() is unsafe even if it is called from a work item,
so we need non_manager_role_manager_mutex_unlock().
If rebind_workers() is called from a work item, it is safe
on synchronize_all_idles_rebound())
Lai Jiangshan (10):
workqueue: ensure the wq_worker_sleeping() see the right flags
workqueue: fix deadlock in rebind_workers()
workqueue: add POOL_MANAGING_WORKERS
workqueue: add manage_workers_slowpath()
workqueue: move rebind_hold to idle_rebind
workqueue: simple
wake up.
So we use one explicit write/modify instruction instead.
This bug will not occur on idle workers, because they have another flag,
WORKER_NOT_RUNNING.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 +--
1 files changed, 5 insertions(+), 2 deletions
synchronize_all_idles_rebound() must be called before
WORKER_REBIND is cleared.
It adds a small overhead to the unlikely path.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 16 +++-
1 files changed, 15 insertions(+), 1 deletions(-)
diff --git
When hotplug happens, the hotplug code will also grab the manager_mutex;
this breaks too_many_workers()'s assumption and makes too_many_workers()
ugly (it kicks the timer wrongly; no actual bug has been found).
To avoid corrupting the assumption, we add the original POOL_MANAGING_WORKERS back.
Signed-off-by: Lai Jiangshan
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 20
1 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d40e8d7..55864d1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -185,8 +185,6
.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 61 ---
1 files changed, 48 insertions(+), 13 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4f252d0..1bfe407 100644
--- a/kernel/workqueue.c
+++ b
manage_workers() failed to grab the manager_mutex.
This slowpath is hard to trigger, so I change
if (unlikely(!mutex_trylock(&pool->manager_mutex)))
to if (1 || unlikely(!mutex_trylock(&pool->manager_mutex)))
when testing, so it always uses manage_workers_slowpath().
Signed-off-by: Lai Jiangshan la
need later patch
to improve the readability)
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 23 ---
1 files changed, 8 insertions(+), 15 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 16bcd84..6d68571 100644
--- a/kernel
It is currently single-pass, so we can wait on idle_done instead of on rebind_hold.
Then we can remove rebind_hold and make the code simpler.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 35 +++
1 files changed, 11 insertions(+), 24
WORKER_REBIND is not used for any other purpose,
so idle_worker_rebind() can directly clear it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 13 ++---
1 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
And this pointer helps other workers know the progress of idle-rebinding:
when gcwq->idle_rebind is not NULL, the idle-rebinding is still
in progress.
And idle_worker_rebind() is split.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 67
On 08/31/2012 02:18 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul.mcken...@linaro.org
In kernels built with CONFIG_RCU_FAST_NO_HZ=y, CPUs can accumulate a
large number of lazy callbacks, which as the name implies will be slow
to be invoked. This can be a problem on small-memory
On 09/05/2012 07:39 AM, Tejun Heo wrote:
From d2ae38fc5e37b4bca3c4bec04a10dcf861a77b2b Mon Sep 17 00:00:00 2001
From: Lai Jiangshan la...@cn.fujitsu.com
Date: Sun, 2 Sep 2012 00:28:19 +0800
The compiler may compile the following code into TWO write/modify
instructions.
worker->flags
On 09/05/2012 08:54 AM, Tejun Heo wrote:
How about something like the following? This is more consistent with
the existing code and as the fixes need to go separately through
for-3.6-fixes, it's best to stay consistent regardless of the end
result after all the restructuring. It's not tested
On 09/05/2012 09:15 AM, Tejun Heo wrote:
On Sun, Sep 02, 2012 at 12:28:28AM +0800, Lai Jiangshan wrote:
Currently is single pass, we can wait on idle_done instead wait on
rebind_hold.
So we can remove rebind_hold and make the code simpler.
As I wrote before, in general, I do like
Ensure gcwq->flags is only accessed with gcwq->lock held,
and make the code easier to understand.
At all current call sites of create_worker(), DISASSOCIATED can't
be flipped during create_worker().
So the whole behavior is unchanged by this patch.
Signed-off-by: Lai Jiangshan la
two flags; one is enough. Remove WORKER_REBIND from busy rebinding.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |9 +
1 files changed, 1 insertions(+), 8 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 3dd7ce2..ba0ba33 100644
release gcwq->lock.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 100 +---
1 files changed, 25 insertions(+), 75 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 050b2a5..3dd7ce2 100644
--- a/kernel
gcwq_unbind_fn() unbinds the manager via the ->manager pointer.
Rebinding the manager and unbinding/rebinding newly created workers are done
elsewhere, so we don't need manager_mutex any more.
Also change the comment of @bind accordingly.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel
No one except manage_workers() uses it; remove it.
manage_workers() will use ->manager instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
Because the %UNBOUND bit of the manager can't be cleared while it
is managing workers, maybe_rebind_manager() will notice and
will do the rebind when needed.
This is just preparation; the code is unused until
we unbind/rebind without manager_mutex
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
when the newly created worker needs to be rebound, exile it!
It will rebind itself in worker_thread().
This is just preparation; the code is unused until
we unbind/rebind without manager_mutex
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |5 +
1 files changed, 5 insertions
Unbind a newly created worker when the manager is unbound.
This is just preparation; the code is unused until
we unbind/rebind without manager_mutex
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 23 ++-
1 files changed, 22 insertions(+), 1 deletions
wake up.
So we use one explicit write/modify instruction instead.
This bug will not occur on idle workers, because they have another flag,
WORKER_NOT_RUNNING.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 +--
1 files changed, 5 insertions(+), 2 deletions
Add ->manager to make gcwq_unbind_fn() know the manager and unbind it.
This is just preparation; the code is unused until
we unbind/rebind without manager_mutex
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 10 +-
1 files changed, 9 insertions(+), 1 deletions
The exile operation = list_del_init(&worker->entry),
and the destroy operation also does list_del_init(&worker->entry),
so we can use list_empty(&worker->entry) to tell whether the worker
has been exiled or killed.
WORKER_REBIND is no longer needed; remove it to reduce the number of
worker states.
Signed-off-by: Lai
by GCWQ_DISASSOCIATED bit and WORKER_UNBOUND bit.
The second core algorithm is exile-operation. Patch2,9
Patch 11 cleans up the manager.
Patch 1 was accepted; just resent.
Changes from V4:
Give up on making manager_mutex safer; remove it instead.
Lai Jiangshan (11):
workqueue: ensure the wq_worker_sleeping
On Wed, Sep 5, 2012 at 6:37 PM, Lai Jiangshan la...@cn.fujitsu.com wrote:
when the newly created worker needs to be rebound, exile it!
It will rebind itself in worker_thread().
This is just preparation; the code is unused until
we unbind/rebind without manager_mutex
Signed-off-by: Lai Jiangshan la
instruction to avoid other CPUs seeing the wrong flags.
Patch6,7 are small fixes.
Lai Jiangshan (7):
wait on manager_mutex instead of rebind_hold
simple clear WORKER_REBIND
explicit way to wait for idle workers to finish
single pass rebind
ensure the wq_worker_sleeping() see the right flags
init 0
static
-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 19 +++
1 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f6e4394..96485c0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1394,16 +1394,12
wake up.
So we use one explicit write/modify instruction instead.
This bug will not occur on idle workers, because they have another flag,
WORKER_NOT_RUNNING.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 +--
1 files changed, 5 insertions(+), 2 deletions
().
The sleeping mutex_lock(&worker->pool->manager_mutex) must be put at the top of
busy_worker_rebind_fn(), because this busy worker thread can sleep
before WORKER_REBIND is cleared, but can't sleep after
WORKER_REBIND is cleared.
It adds a small overhead to the unlikely path.
Signed-off-by: Lai
up, the idle_worker_rebind() can
return.
This fix has an advantage: WORKER_REBIND is not used for wait_event(),
so we can clear it in idle_worker_rebind(). (next patch)
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 13 +++--
1 files changed, 3 insertions
rebind_workers() is protected by the cpu_hotplug lock,
so struct idle_rebind is also protected by it.
And we can use a compile-time-allocated idle_rebind instead
of allocating it on the stack. It makes the code cleaner.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 28
Access to idle_rebind.cnt is always protected by gcwq->lock;
we don't need to init it to 1.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ed23c9a..9f38a65
WORKER_REBIND is not used for any other purpose,
so idle_worker_rebind() can directly clear it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 13 ++---
1 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
On Tue, Aug 28, 2012 at 4:05 AM, Tejun Heo t...@kernel.org wrote:
On Tue, Aug 28, 2012 at 01:58:26AM +0800, Lai Jiangshan wrote:
Access to idle_rebind.cnt is always protected by gcwq->lock;
we don't need to init it to 1.
But then the completion could be triggered prematurely
() to
finish rebinding and clearing, the CPU can't go offline, so
busy_worker_rebind_fn() will not wrongly clear WORKER_UNBOUND.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |6 +-
1 files changed, 1 insertions(+), 5 deletions(-)
diff --git a/kernel/workqueue.c b
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |4
1 files changed, 0 insertions(+), 4 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 3e0bd20..eec11c3 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1420,10 +1420,6 @@ static void
notify on all_done)
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 108 ++-
1 files changed, 55 insertions(+), 53 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692d976..5f63883 100644
kernelcore_max_addr
Patch23 Add online_movable
Lai Jiangshan (19):
node_states: introduce N_MEMORY
cpuset: use N_MEMORY instead N_HIGH_MEMORY
procfs: use N_MEMORY instead N_HIGH_MEMORY
oom: use N_MEMORY instead N_HIGH_MEMORY
mm,migrate: use N_MEMORY instead N_HIGH_MEMORY
mempolicy: use
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
fs/proc/kcore.c|2
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/mempolicy.c | 12
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/vmstat.c |4 ++--
1
SLUB only focuses on nodes which have normal memory, so ignore
hot-adding and hot-removing of other nodes.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/slub.c |6 ++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8c691fa..4c5bdc0
All preparations are done; we can actually introduce N_MEMORY.
Add CONFIG_MOVABLE_NODE so we can use it for a movable-dedicated node
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/base/node.c |6 ++
include/linux/nodemask.h |4
mm/Kconfig |8
.
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/memblock.h |1 +
mm/memblock.c|5 -
mm/page_alloc.c |6 +-
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/include
From: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
memblock.current_limit is set directly even though memblock_set_current_limit()
is provided. So fix it.
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/kernel/setup.c
for THP.
Current constraint: only a memory block which is adjacent to ZONE_MOVABLE
can be onlined from ZONE_NORMAL to ZONE_MOVABLE.
For the opposite onlining behavior, we also introduce online_kernel to change
a memory block of ZONE_MOVABLE to ZONE_KERNEL when onlining.
Signed-off-by: Lai
.
The patch adds the check to memblock_find_in_range_node()
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/memblock.c |5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/cgroups
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/vmscan.c |4 ++--
1
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
init/main.c |2 +-
1
Currently memory_hotplug only manages node_states[N_HIGH_MEMORY];
it forgets to manage node_states[N_NORMAL_MEMORY]. Fix it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/memory-hotplug.txt |2 +-
mm/memory_hotplug.c | 23 +--
2
is needed to do it similarly,
and the new approach should also handle other long-lived unreclaimable memory.
The current approach of blindly subtracting the present-pages size is wrong; remove it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/page_alloc.c | 20 +---
1 files changed, 1
().
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/mm/numa.c |8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2d125be..a86e315 100644
--- a/arch
).
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/kernel-parameters.txt |9 +
mm/page_alloc.c | 29 -
2 files changed, 37 insertions(+), 1 deletions(-)
diff --git a/Documentation/kernel-parameters.txt
b/Documentation
-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/mm/init_64.c |4 +++-
mm/page_alloc.c | 40 ++--
2 files changed, 25 insertions(+), 19 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 2b6b4a3..005f00c 100644
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/kthread.c |2
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/base/node.c |2
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/memcontrol.c | 18
N_HIGH_MEMORY stands for nodes that have normal or high memory.
N_MEMORY stands for nodes that have any memory.
The code here needs to handle nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/migrate.c |2 +-
1