On Tue, Feb 26, 2013 at 5:02 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
On 02/26/2013 05:47 AM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa,
The target of the whole patchset is nice for me.
Cool! Thanks :-)
A question: how did you find such usages
On Mon, Feb 18, 2013 at 8:38 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Using global rwlocks as the backend for per-CPU rwlocks helps us avoid many
lock-ordering related problems (unlike per-cpu locks). However, global
rwlocks lead to unnecessary cache-line bouncing even when
On Tue, Feb 26, 2013 at 10:22 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
I'm really not convinced that piggy-backing on lglocks would help
us in any way. But still, let me try to address some of the points
you raised...
On 02/26/2013 06:29 PM, Lai Jiangshan wrote:
On Wed, Feb 27, 2013 at 3:30 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
On 02/26/2013 09:55 PM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 10:22 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
I'm really not convinced that piggy-backing on lglocks
On 02/27/2013 01:11 PM, Yinghai Lu wrote:
On Tue, Feb 26, 2013 at 8:43 PM, Yasuaki Ishimatsu
isimatu.yasu...@jp.fujitsu.com wrote:
2013/02/27 13:04, Yinghai Lu wrote:
On Tue, Feb 26, 2013 at 7:38 PM, Yasuaki Ishimatsu
isimatu.yasu...@jp.fujitsu.com wrote:
2013/02/27 11:30, Yinghai Lu
in_serving_irq() is the same as in_interrupt(),
so please remove in_serving_irq() and use in_interrupt() instead.
And add:
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
In the long-term, the best solution is using percpu lockdep for
local_irq_disable()
and smp_call_function_many():
CPU A                CPU B
From c63f2be9a4cf7106a521dda169a0e14f8e4f7e3b Mon Sep 17 00:00:00 2001
From: Lai Jiangshan la...@cn.fujitsu.com
Date: Mon, 25 Feb 2013 23:14:27 +0800
Subject: [PATCH] lglock: add read-preference local-global rwlock
The current lglock is not read-preference, so it can't be used in some cases
which
On 28/02/13 05:19, Srivatsa S. Bhat wrote:
On 02/27/2013 06:03 AM, Lai Jiangshan wrote:
On Wed, Feb 27, 2013 at 3:30 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
On 02/26/2013 09:55 PM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 10:22 PM, Srivatsa S. Bhat
srivatsa.b
From 345a7a75c314ff567be48983e0892bc69c4452e7 Mon Sep 17 00:00:00 2001
From: Lai Jiangshan la...@cn.fujitsu.com
Date: Sat, 2 Mar 2013 20:33:14 +0800
Subject: [PATCH] lglock: add read-preference local-global rwlock
The current lglock is not read-preference, so it can't be used in some cases
which read
On 02/03/13 02:28, Oleg Nesterov wrote:
Lai, I didn't read this discussion except the code posted by Michel.
I'll try to read this patch carefully later, but I'd like to ask
a couple of questions.
This version looks more complex than Michel's, why? Just curious, I
am trying to understand
For the whole patchset
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
The only concern: get_work_pool() may slow down __queue_work().
I think we can save the pool ID at work_struct->entry.next; it will
simplify the code a little. More aggressively, we can save the work_pool
pointer at work_struct
Hi, tj
Thank you for adding this one.
Would you defer "workqueue: rename cpu_workqueue to pool_workqueue" a
little? I don't want to rebase my almost-ready work again (not a good
reason... but please...)
I will answer your other emails soon and send the patches.
Thanks,
Lai
On 14/02/13
Modifications to worker->pool are done with the pool lock held.
Patch 14: remove the hashtable totally.
The other patches are preparation or cleanup.
Lai Jiangshan (15):
workqueue: add lock_work_pool()
workqueue: allow more work_pool id space
workqueue: rename worker->id to worker->id_in_pool
workqueue: add worker's
The color bits are not used when a work item is off-queue, so we reuse
them for pool IDs; thus we will have more pool IDs.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/workqueue.h |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux
,
It will still look up the worker,
but this lookup is needed in later patches.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 170 ---
1 files changed, 93 insertions(+), 77 deletions(-)
diff --git a/kernel
We will use worker->id as the global worker ID.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 20 +++-
kernel/workqueue_internal.h |2 +-
2 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/kernel/workqueue.c b/kernel
worker_maybe_bind_and_lock() uses both @task and @current at the same time,
but they are the same (worker_maybe_bind_and_lock() can only be called by the
current worker task).
Make it use @current only.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 +++
1 files
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |8 +---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index cdd5523..ab5c61a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -544,9 +544,11
() requires that the caller hold the pool lock and
that the work be *owned* by the pool.
We could provide an even looser semantic,
but we don't need a looser semantic in any case currently; KISS.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/workqueue.h
pool->busy_list is touched every time the worker processes a work item.
If this code is moved out, we avoid that touch.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |6 --
1 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel
Since we don't use the hashtable, we can use a list to implement
for_each_busy_worker().
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 27 ++-
kernel/workqueue_internal.h |9 +++--
2 files changed, 13 insertions
Use the already-known cwq instead of get_work_cwq(work) in try_to_grab_pending()
and cwq_activate_first_delayed().
This avoids unneeded calls to get_work_cwq(), which becomes not so lightweight
in later patches.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 11
-associated with the pool.
It is done with pool->lock held in either set of the above.
Thus we have this semantic:
if pool->lock is held and worker->pool == pool, we can determine that
the worker is associated with the pool now.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel
Allow us to use delayed_flags only in different paths in later patches.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7ac6824..cdd5523 100644
choose the latter one.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 18 +-
1 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b47d1af..b987195 100644
--- a/kernel/workqueue.c
+++ b/kernel
Add a new worker->id which is allocated from worker_idr. This
will be used to record the last running worker in work->data.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 28
kernel/workqueue_internal.h |1 +
2 files changed
solves it.
This patch slows down the very-slow-path destroy_worker(); if required,
we will move the synchronize_sched() out.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/workqueue.h | 20 +++---
kernel/workqueue.c| 140
When a work item is dequeued via try_to_grab_pending(), its pool ID is recorded
in work->data, but this recording is useless when the work is not running.
In this patch, we only record the pool ID when the work is running.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 11
On 19/02/13 00:12, Lai Jiangshan wrote:
Core patches are patch 1, patch 9 and patch 13.
Patch 1: enhance locking
Patch 9: record the worker ID in work->data instead of the pool ID;
look up the worker via worker ID if off-queue
Patch 13: also look up the worker via worker ID if running && queued;
remove lookup
implementation.
On Tue, Feb 19, 2013 at 3:50 AM, Tejun Heo t...@kernel.org wrote:
Hello, Lai.
On Tue, Feb 19, 2013 at 12:12:14AM +0800, Lai Jiangshan wrote:
+/**
+ * get_work_cwq - get cwq of the work
+ * @work: the work item of interest
+ *
+ * CONTEXT:
+ * spin_lock_irq(pool->lock
offline_pool() to clear workers.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 109 ++-
1 files changed, 98 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0b1e6f2..ffdc1db
.
Thanks,
Lai
PS: Some HA tools (I'm writing one) take checkpoints of
virtual machines frequently; I guess this patchset can speed up
those tools.
From 01db542693a1b7fc6f9ece45d57cb529d9be5b66 Mon Sep 17 00:00:00 2001
From: Lai Jiangshan la...@cn.fujitsu.com
Date: Mon, 25 Feb 2013 23:14:27
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa,
The target of the whole patchset is nice for me.
Cool! Thanks :-)
A question: how did you find such usages
On Tue, Feb 26, 2013 at 8:17 AM, Lai Jiangshan eag0...@gmail.com wrote:
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa,
The target of the whole patchset is nice for me.
Cool
On 09/06/2012 03:49 AM, Tejun Heo wrote:
Hello,
On Wed, Sep 05, 2012 at 06:37:42PM +0800, Lai Jiangshan wrote:
Ensure that gcwq->flags is only accessed with gcwq->lock held,
and make the code easier to understand.
In all current callsites of create_worker(), DISASSOCIATED can't
.
On Wed, Sep 05, 2012 at 06:37:39PM +0800, Lai Jiangshan wrote:
static void idle_worker_rebind(struct worker *worker)
{
struct global_cwq *gcwq = worker->pool->gcwq;
-/* CPU must be online at this point */
-WARN_ON(!worker_maybe_bind_and_lock(worker));
-if (!--worker->idle_rebind
On 09/06/2012 07:11 AM, Tejun Heo wrote:
On Tue, Sep 04, 2012 at 11:16:32PM -0700, Tejun Heo wrote:
Currently, rebind_workers() and idle_worker_rebind() are two-way
interlocked. rebind_workers() waits for idle workers to finish
rebinding and rebound idle workers wait for rebind_workers() to
On 09/05/2012 09:12 AM, Tejun Heo wrote:
Hello, Lai.
On Sun, Sep 02, 2012 at 12:28:22AM +0800, Lai Jiangshan wrote:
If the hotplug code has grabbed the manager_mutex and worker_thread tries to
create a worker, manage_worker() will return false and worker_thread goes on
to process work items. Now
On 09/06/2012 02:31 AM, Tejun Heo wrote:
On Wed, Sep 05, 2012 at 06:37:40PM +0800, Lai Jiangshan wrote:
because the old busy_worker_rebind_fn() has to wait until all idle workers
finish,
so we have to use two flags, WORKER_UNBOUND and WORKER_REBIND, to avoid
prematurely clearing all NOT_RUNNING bits
On 09/06/2012 10:53 AM, Minchan Kim wrote:
Normally, MIGRATE_ISOLATE type is used for memory-hotplug.
But it's an ironic type, because the isolated pages exist
as free pages in free_area->free_list[MIGRATE_ISOLATE], so people
can think of them as allocatable pages, but they are *never* allocatable.
On 09/06/2012 04:18 PM, Minchan Kim wrote:
Hello Lai,
On Thu, Sep 06, 2012 at 04:14:51PM +0800, Lai Jiangshan wrote:
On 09/06/2012 10:53 AM, Minchan Kim wrote:
Normally, MIGRATE_ISOLATE type is used for memory-hotplug.
But it's an ironic type, because the isolated pages exist
as free pages
On 09/06/2012 04:18 PM, Minchan Kim wrote:
Hello Lai,
On Thu, Sep 06, 2012 at 04:14:51PM +0800, Lai Jiangshan wrote:
On 09/06/2012 10:53 AM, Minchan Kim wrote:
Normally, MIGRATE_ISOLATE type is used for memory-hotplug.
But it's an ironic type, because the isolated pages exist
as free pages
On 09/06/2012 04:04 AM, Tejun Heo wrote:
Hello, Lai.
On Wed, Sep 05, 2012 at 06:37:47PM +0800, Lai Jiangshan wrote:
gcwq_unbind_fn() unbinds the manager via the ->manager pointer.
Rebinding the manager and unbinding/rebinding newly created workers are done
elsewhere, so we don't need manager_mutex any
worker_pool *pool;
for_each_worker_pool(pool, gcwq)
mutex_unlock(pool->manager_mutex);
spin_unlock_irq(gcwq->lock);
}
Signed-off-by: Tejun Heo t...@kernel.org
Reported-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 20
On 09/07/2012 12:51 AM, Tejun Heo wrote:
Hello, Lai.
On Thu, Sep 06, 2012 at 09:04:06AM +0800, Lai Jiangshan wrote:
This doesn't change anything. You're just moving the test to the
caller with comments there explaining how it won't change even if
gcwq->lock is released. It seems more
On 09/07/2012 04:08 AM, Tejun Heo wrote:
From 985aafbf530834a9ab16348300adc7cbf35aab76 Mon Sep 17 00:00:00 2001
From: Tejun Heo t...@kernel.org
Date: Thu, 6 Sep 2012 12:50:41 -0700
To simplify both normal and CPU hotplug paths, while CPU hotplug is in
progress, manager_mutex is held to
the rebase if I don't need to respin the patchset
as V7?)
Patches 3 and 4 fix the depletion problem; they are simple enough and go to 3.6.
Patches 5, 6 and 7 are cleanup -> 3.7.
Lai Jiangshan (7):
workqueue: ensure the wq_worker_sleeping() see the right flags
workqueue: async idle rebinding
workqueue: add
on worker_pool and let the hotplug code (gcwq_unbind_fn()) handle it.
Also fix too_many_workers() to use this pointer.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 12 ++--
1 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel
wake up,
so we use one write/modify instruction explicitly instead.
This bug will not occur for idle workers, because they have another
WORKER_NOT_RUNNING flag.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 +--
1 files changed, 5 insertions(+), 2 deletions
two flags when just one is enough. Remove WORKER_REBIND from busy rebinding.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |9 +
1 files changed, 1 insertions(+), 8 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d9765c4..4863162 100644
to the bottom of manage_workers().
Another result of the narrowed critical section: manage_workers() becomes
simpler.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 63
1 files changed, 29 insertions(+), 34 deletions(-)
diff --git
itself when it is noticed.
The manager worker will be notified via the GCWQ_DISASSOCIATED and
WORKER_UNBOUND bits, because the %UNBOUND bit of the manager can't be cleared
while it is managing workers. maybe_rebind_manager() will be notified
when rebind_workers() happens.
Signed-off-by: Lai Jiangshan la
The exile operation is list_del_init(worker->entry),
and the destroy operation is also list_del_init(worker->entry),
so we can use list_empty(worker->entry) to know whether the worker
has been exiled or killed.
WORKER_REBIND is not needed any more; remove it to reduce the states
of workers.
Signed-off-by: Lai
release gcwq->lock.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 100 +---
1 files changed, 25 insertions(+), 75 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 050b2a5..3dd7ce2 100644
--- a/kernel
On Sat, Sep 8, 2012 at 7:41 AM, Tejun Heo t...@kernel.org wrote:
I think this should do it. Can you spot any hole with the following
patch?
Thanks.
Index: work/kernel/workqueue.c
===
--- work.orig/kernel/workqueue.c
+++
On Sun, Sep 9, 2012 at 1:27 AM, Lai Jiangshan eag0...@gmail.com wrote:
On Sun, Sep 9, 2012 at 1:12 AM, Lai Jiangshan la...@cn.fujitsu.com wrote:
The patch set is based on 3b07e9ca26866697616097044f25fbe53dbab693 of wq.git.
Patches 1 and 2 are accepted. Patch 1 goes to 3.6. tj has a replacement that goes
On Sun, Sep 9, 2012 at 1:12 AM, Lai Jiangshan la...@cn.fujitsu.com wrote:
The patch set is based on 3b07e9ca26866697616097044f25fbe53dbab693 of wq.git.
Patches 1 and 2 are accepted. Patch 1 goes to 3.6. tj has a replacement that goes
to 3.6 instead of Patch 2, so Patch 2 will go to 3.7. Patch 2 will need
On Sun, Sep 9, 2012 at 1:32 AM, Tejun Heo t...@kernel.org wrote:
On Sat, Sep 08, 2012 at 10:29:50AM -0700, Tejun Heo wrote:
The hotplug code can't iterate over the manager: neither rebind_work() nor
UNBOUND works for the manager.
Ah, right. It isn't on either the idle or busy list. Maybe have a
pool->manager pointer?
On Sun, Sep 9, 2012 at 1:37 AM, Tejun Heo t...@kernel.org wrote:
Hello, Lai.
On Sun, Sep 09, 2012 at 01:27:37AM +0800, Lai Jiangshan wrote:
On Sun, Sep 9, 2012 at 1:12 AM, Lai Jiangshan la...@cn.fujitsu.com wrote:
The patch set is based on 3b07e9ca26866697616097044f25fbe53dbab693
On Sun, Sep 9, 2012 at 1:40 AM, Tejun Heo t...@kernel.org wrote:
Hello, Lai.
On Sun, Sep 09, 2012 at 01:12:53AM +0800, Lai Jiangshan wrote:
+/* does the manager need to be rebound after we just release gcwq->lock? */
+static void maybe_rebind_manager(struct worker *manager)
+{
+	struct
On Sun, Sep 9, 2012 at 1:50 AM, Tejun Heo t...@kernel.org wrote:
Hello, Lai.
On Sun, Sep 09, 2012 at 01:46:59AM +0800, Lai Jiangshan wrote:
* Instead of MANAGING, add pool->manager.
* Fix the idle depletion bug by using pool->manager for exclusion and
always grabbing pool->manager_mutex
On Sun, Sep 9, 2012 at 1:53 AM, Tejun Heo t...@kernel.org wrote:
Hello,
On Sun, Sep 09, 2012 at 01:50:41AM +0800, Lai Jiangshan wrote:
+ if (worker_maybe_bind_and_lock(manager))
+ worker_clr_flags(manager, WORKER_UNBOUND);
+ }
+}
We can reuse
Hmmm... so, I'm having some difficulty communicating with you. We
need two separate patch series. One for for-3.6-fixes and the other
for restructuring on top of for-3.7 after the fixes are merged into
it.
As you currently posted, the patches are based on for-3.7 and fixes
and
On Sun, Sep 9, 2012 at 2:11 AM, Tejun Heo t...@kernel.org wrote:
On Sun, Sep 09, 2012 at 02:07:50AM +0800, Lai Jiangshan wrote:
when we release gcwq->lock and then grab it again, we leave a hole in which
things can change.
I don't want to open a hole. If the hole has a bug, we have to fix
On Sun, Sep 9, 2012 at 3:02 AM, Tejun Heo t...@kernel.org wrote:
Hello, Lai.
On Sun, Sep 09, 2012 at 02:34:02AM +0800, Lai Jiangshan wrote:
In 3.6, busy_worker_rebind() handles the WORKER_REBIND bit,
not the WORKER_UNBOUND bit.
busy_worker_rebind() takes a struct work_struct *work argument; we have
When hotplug happens, the hotplug code will also grab the manager_mutex;
this breaks too_many_workers()'s assumption and makes too_many_workers()
ugly (it kicks the timer wrongly; no bug found).
To avoid corrupting the assumption, we add the original POOL_MANAGING_WORKERS
back.
Signed-off-by: Lai Jiangshan
, if it fails, unbind itself.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 31 ++-
1 files changed, 30 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 383548e..74434c8 100644
--- a/kernel/workqueue.c
+++ b
the MIGRATE_HOTREMOVE approach, and use a more straightforward
implementation (only 1 patch).
Lai Jiangshan (22):
page_alloc.c: don't subtract unrelated memmap from zone's present
pages
memory_hotplug: fix missing nodemask management
slub, hotplug: ignore unrelated node's hot-adding and hot-removing
node
is needed to do it similarly,
and the new approach should also handle other long-lived unreclaimable memory.
The current approach of blindly subtracting the present-pages size is wrong;
remove it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/page_alloc.c | 20 +---
1 files changed, 1
Use [index] = init_value,
and use N_x instead of hardcoded values.
This makes it more readable and easy to add new states.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/base/node.c | 20 ++--
1 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/base
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/mempolicy.c | 12
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Christoph Lameter c
.
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/memblock.h |1 +
mm/memblock.c|5 -
mm/page_alloc.c |6 +-
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/include
).
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/kernel-parameters.txt |9 +
mm/page_alloc.c | 29 -
2 files changed, 37 insertions(+), 1 deletions(-)
diff --git a/Documentation/kernel-parameters.txt
b/Documentation
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/memcontrol.c | 18
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/mm/init_64.c |4 +++-
mm/page_alloc.c | 40 ++--
2 files changed, 25 insertions(+), 19 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 2b6b4a3..005f00c 100644
From: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
memblock.current_limit is set directly even though memblock_set_current_limit()
is provided for that purpose. Fix it.
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/kernel/setup.c
All the preparations are done, so we can actually introduce N_MEMORY.
Also add CONFIG_MOVABLE_NODE so that we can use it for movable-dedicated nodes.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/base/node.c |6 ++
include/linux/nodemask.h |4
mm/Kconfig |8
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/kthread.c |2
for THP.
Current constraint: only a memory block which is adjacent to ZONE_MOVABLE
can be onlined from ZONE_NORMAL to ZONE_MOVABLE.
For the opposite onlining behavior, we also introduce online_kernel to change
a memory block from ZONE_MOVABLE to ZONE_KERNEL when onlining.
Signed-off-by: Lai
Update nodemask management for N_MEMORY.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/memory-hotplug.txt |5 +++-
include/linux/memory.h |1 +
mm/memory_hotplug.c | 49 +
3 files changed, 48 insertions
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
Make online_movable/online_kernel able to empty a zone
or to move memory into an empty zone.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/memory_hotplug.c | 51 +--
1 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/mm
.
The patch adds the check to memblock_find_in_range_node()
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/memblock.c |5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index
().
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
arch/x86/mm/numa.c |8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2d125be..a86e315 100644
--- a/arch
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Christoph Lameter c
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
init/main.c |2 +-
1
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Hillf Danton dhi
SLUB only focuses on the nodes which have normal memory, so ignore other
nodes' hot-adding and hot-removing.
So we only do something when marg->status_change_nid_normal >= 0.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/slub.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions
an N_MEMORY. We just introduce it as an alias of
N_HIGH_MEMORY and fix all improper usages of N_HIGH_MEMORY in later patches.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Christoph Lameter c...@linux.com
Acked-by: Hillf Danton dhi...@gmail.com
---
include/linux/nodemask.h |1 +
1 files
hotplug.
Also add @status_change_nid_normal to struct memory_notify, so that
the memory hotplug callbacks know whether node_states[N_NORMAL_MEMORY]
has changed.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
Documentation/memory-hotplug.txt |5 ++-
include/linux/memory.h |1
] [81679bd9] system_call_fastpath+0x16/0x1b
...
Thanks,
Yasuaki Ishimatsu
2012/09/10 17:58, Lai Jiangshan wrote:
A) Introduction:
This patchset adds a MOVABLE-dedicated node and online_movable for
memory management.
It is used for anti-fragmentation (hugepages, big-order allocation
On 09/11/2012 09:37 AM, Yasuaki Ishimatsu wrote:
Hi Lai,
2012/09/11 10:22, Lai Jiangshan wrote:
On 09/11/2012 08:40 AM, Yasuaki Ishimatsu wrote:
Hi Lai,
When using memory_online on a hot-added node's memory, the following kernel
messages
were shown. Is this a known issue?
Thank you for your
zone_start_pfn is not modified by init_currently_empty_zone(), so
grow_zone_span() needs to be updated to be aware of empty zones.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Reported-by: Yasuaki ISIMATU isimatu.yasu...@jp.fujitsu.com
Tested-by: Wen Congyang we...@cn.fujitsu.com
---
diff --git a/mm
We must clear this WORKER_REBIND before busy_worker_rebind_fn() returns,
otherwise the worker may wrongly go on to call idle_worker_rebind(), which
may access the invalid ->idle_rebind and sleep forever on ->rebind_hold.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c
a lot of comments.
4) clear WORKER_REBIND unconditionally in idle_worker_rebind()
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 146 +---
1 files changed, 47 insertions(+), 99 deletions(-)
diff --git a/kernel