This logic is not simple to understand, so factor it out into a separate
function to help readability. Additionally, we can use this change in the
following patch, which implements a differently sized freelist index
according to the number of objects.
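As a minimal sketch, the factored-out helper could look like the following;
it assumes the freelist is an array of idx_size-byte entries kept inside the
slab page, and the real cache_estimate() handles more cases (e.g. off-slab
management) that are elided here.

static int calculate_nr_objs(size_t slab_size, size_t buffer_size,
			     size_t idx_size, size_t align)
{
	int nr_objs;
	size_t remained_size;

	/* First guess: each object costs its buffer plus one index entry. */
	nr_objs = slab_size / (buffer_size + idx_size);

	/*
	 * The guess can be one too high once the index array is aligned,
	 * so verify the remaining space still fits the freelist and back
	 * off by one object if it doesn't.
	 */
	remained_size = slab_size - nr_objs * buffer_size;
	if (remained_size < ALIGN(nr_objs * idx_size, align))
		nr_objs--;

	return nr_objs;
}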
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff
likely branch to functions used for setting/getting
objects to/from the freelist, but we may get more benefits from
this change.
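In rough sketch form (names follow the changelog; the actual patch may
differ), the helpers simply index into the freelist array, so a later patch
can change the index type in one place:

typedef unsigned int freelist_idx_t;	/* a later patch can shrink this type */

static inline freelist_idx_t get_free_obj(freelist_idx_t *freelist,
					  unsigned int idx)
{
	return freelist[idx];
}

static inline void set_free_obj(freelist_idx_t *freelist,
				unsigned int idx, freelist_idx_t val)
{
	freelist[idx] = val;
}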
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index a0e49bb..bd366e5 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -565,8 +565,16
bytes
so that 97 bytes, that is, more than 75% of the object size, are wasted.
In the 64 byte sized slab case, no space is wasted if we use on-slab.
So set the off-slab determining constraint to 128 bytes.
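The resulting rule can be summarized by a sketch like this; the helper name
and standalone form are illustrative, since the real check in mm/slab.c
carries extra conditions (early init, debug flags):

#define OFF_SLAB_MIN_SIZE	128	/* threshold taken from the changelog */

static bool prefer_off_slab(size_t obj_size)
{
	/*
	 * Below 128 bytes, on-slab freelist management wastes no space
	 * worth saving, so keep the management structure on-slab.
	 */
	return obj_size >= OFF_SLAB_MIN_SIZE;
}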
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index bd366e5
.
This patchset comes from Christoph's idea.
https://lkml.org/lkml/2013/8/23/315
Patches are on top of my previous posting.
https://lkml.org/lkml/2013/8/22/137
Joonsoo Kim (4):
slab: factor out calculate nr objects in cache_estimate
slab: introduce helper functions to get/set free object
slab
On Tue, Sep 03, 2013 at 03:01:46PM +0800, Wanpeng Li wrote:
There is a race window between vmap_area free and show vmap_area information.
A				B
remove_vm_area
  spin_lock(&vmap_area_lock);
  va->flags &= ~VM_VM_AREA;
On Tue, Sep 03, 2013 at 03:51:39PM +0800, Wanpeng Li wrote:
On Tue, Sep 03, 2013 at 04:42:21PM +0900, Joonsoo Kim wrote:
On Tue, Sep 03, 2013 at 03:01:46PM +0800, Wanpeng Li wrote:
There is a race window between vmap_area free and show vmap_area
information
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/readahead.c b/mm/readahead.c
index daed28d..3932f28 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -166,6 +166,8 @@ __do_page_cache_readahead(struct address_space *mapping,
struct file *filp,
goto out
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index ffc444c..045b325 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -230,6 +230,10 @@ unsigned long radix_tree_next_hole(struct radix_tree_root
. I don't have any trouble with
the current allocator; however, I think that we will need this feature soon,
because device I/O is getting faster rapidly and the allocator should
catch up with this speed.
Thanks.
Joonsoo Kim (5):
mm, page_alloc: support multiple pages allocation
mm, page_alloc: introduce
to allocate multiple pages
in the first attempt (fast path). I think that multiple page allocation
is not valid for the slow path, so the current implementation considers
just the fast path.
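A loose sketch of the fast-path idea, with hypothetical names and the pcp
refill and per-page checks omitted: pay the fixed cost (irq disable, per-cpu
list access) once and pull several order-0 pages in one go, letting the
caller fall back to the normal path for whatever is left.

static unsigned int alloc_pages_fastpath_bulk(struct zone *zone,
					      unsigned int nr_pages,
					      struct page **pages)
{
	unsigned int allocated = 0;
	unsigned long flags;
	struct per_cpu_pages *pcp;
	struct page *page, *next;

	local_irq_save(flags);		/* fixed fast-path cost, paid once */
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
	list_for_each_entry_safe(page, next, &pcp->lists[MIGRATE_MOVABLE], lru) {
		if (allocated == nr_pages)
			break;
		list_del(&page->lru);
		pcp->count--;
		pages[allocated++] = page;
	}
	local_irq_restore(flags);

	return allocated;	/* caller allocates the rest one by one */
}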
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0f615eb..8bfa87b
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e3dea75..eb1472c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -217,28 +217,33 @@ static inline void page_unfreeze_refs(struct page *page,
int count
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 8bfa87b..f8cde28 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -327,6 +327,16 @@ static inline struct page *alloc_pages_exact_node(int nid,
gfp_t gfp_mask
On Thu, Jul 04, 2013 at 12:01:43AM +0800, Zhang Yanfei wrote:
On 07/03/2013 11:51 PM, Zhang Yanfei wrote:
On 07/03/2013 11:28 PM, Michal Hocko wrote:
On Wed 03-07-13 17:34:15, Joonsoo Kim wrote:
[...]
For one-page-at-a-time allocation, this patchset makes the allocator slower
than before (-5
On Wed, Jul 03, 2013 at 03:57:45PM +, Christoph Lameter wrote:
On Wed, 3 Jul 2013, Joonsoo Kim wrote:
@@ -298,13 +298,15 @@ static inline void arch_alloc_page(struct page *page,
int order) { }
struct page *
__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order
On Mon, Sep 16, 2013 at 10:09:09PM +1000, David Gibson wrote:
+ *do_dequeue = false;
spin_unlock(&hugetlb_lock);
page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
if (!page) {
I think the counter also needs to be
We should clear the page's private flag when returning the page to
the page allocator or the hugepage pool. This patch fixes it.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
Hello, Andrew.
I sent the new version of commit ('07443a8') before you sent the pull
request, but it isn't included
On Mon, Sep 30, 2013 at 02:35:14PM -0700, Andrew Morton wrote:
On Mon, 30 Sep 2013 16:59:44 +0900 Joonsoo Kim iamjoonsoo@lge.com wrote:
We should clear the page's private flag when returning the page to
the page allocator or the hugepage pool. This patch fixes it.
Signed-off
On Fri, Jul 19, 2013 at 02:24:15PM -0700, Davidlohr Bueso wrote:
On Fri, 2013-07-19 at 17:14 +1000, David Gibson wrote:
On Thu, Jul 18, 2013 at 05:42:35PM +0900, Joonsoo Kim wrote:
On Wed, Jul 17, 2013 at 12:50:25PM -0700, Davidlohr Bueso wrote:
From: David Gibson da
At this point we are holding the hugetlb_lock, so hstate values can't
be changed. If we don't have any usable free huge page at this point,
we don't need to proceed with the processing. So move this code up.
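In sketch form (field names as in struct hstate; the surrounding dequeue
logic is elided), the check moves to the top of the locked section:

	spin_lock(&hugetlb_lock);
	/* Holding hugetlb_lock, these counters can't change under us. */
	if (h->free_huge_pages - h->resv_huge_pages == 0) {
		spin_unlock(&hugetlb_lock);
		return NULL;	/* no usable free huge page; stop early */
	}
	/* ... continue with the per-node dequeue loop ... */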
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index
. This patch implements it.
Reviewed-by: Wanpeng Li liw...@linux.vnet.ibm.com
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 87e73bd..2ea6afd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8a61638..87e73bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -464,6 +464,8 @@ void reset_vma_resv_huge_pages(struct vm_area_struct *vma
== MAP_FAILED) {
fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
}
q[0] = 'c';
This patch solves this problem.
Reviewed-by: Wanpeng Li liw...@linux.vnet.ibm.com
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo
If the list is empty, list_for_each_entry_safe() doesn't do anything.
So, this check is redundant. Remove it.
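For illustration (the surrounding code here is hypothetical), the guard
below can simply be deleted because the iterator's loop condition already
handles the empty case:

	if (list_empty(&page_list))	/* redundant: the loop is a no-op
					 * on an empty list anyway */
		return;
	list_for_each_entry_safe(page, next, &page_list, lru) {
		list_del(&page->lru);
		update_and_free_page(h, page);
	}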
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3ac0a6f..7ca8733 100644
-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7ca8733..8a61638 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2508,7 +2508,6 @@ static int hugetlb_cow(struct mm_struct *mm, struct
vm_area_struct *vma,
{
struct hstate *h = hstate_vma(vma
[alloc|free] and
fix and clean up the node iteration code for alloc and free.
This makes the code more understandable.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 83edd17..3ac0a6f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -752,33 +752,6 @@ static int
We can unify some code on the successful allocation paths.
This makes the code more readable.
There is no functional difference.
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d21a33a..83edd17 100644
The name of the mutex written in comment is wrong.
Fix it.
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Acked-by: Hillf Danton dhi...@gmail.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d87f70b..d21a33a 100644
--- a/mm
This label is not needed now, because there is no error handling
except returning NULL.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fc4988c..d87f70b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -546,11 +546,11 @@ static struct page
.
Changes from v1.
Split patch 1 into two patches to clarify their purpose.
Remove useless indentation changes in 'clean-up alloc_huge_page()'.
Fix a bug in the new iteration code.
Add Reviewed-by and Acked-by tags.
Joonsoo Kim (10):
mm, hugetlb: move up the code which check availability of free huge
page
mm
On Mon, Jul 22, 2013 at 06:11:11PM +0200, Michal Hocko wrote:
On Mon 22-07-13 17:36:23, Joonsoo Kim wrote:
This label is not needed now, because there is no error handling
except returning NULL.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/hugetlb.c b/mm
On Mon, Jul 22, 2013 at 04:51:50PM +0200, Michal Hocko wrote:
On Mon 15-07-13 18:52:41, Joonsoo Kim wrote:
We can unify some code on the successful allocation paths.
This makes the code more readable.
There is no functional difference.
This patch unifies successful allocation paths to make the code
On Mon, Jul 22, 2013 at 09:21:38PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
First 6 patches are almost trivial clean-up patches.
The others are for fixing three bugs.
Perhaps these problems are minor, because this code has been used
for a long time
On Tue, Jul 23, 2013 at 01:45:50PM +0200, Michal Hocko wrote:
On Mon 22-07-13 17:36:28, Joonsoo Kim wrote:
Currently, we use a page with a map count of 1 in the page cache for CoW
optimization. If we find this condition, we don't allocate a new
page and copy contents. Instead, we map this page
On Wed, Jul 24, 2013 at 09:00:41AM +0800, Wanpeng Li wrote:
On Mon, Jul 22, 2013 at 05:36:24PM +0900, Joonsoo Kim wrote:
The name of the mutex written in comment is wrong.
Fix it.
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Acked-by: Hillf Danton dhi...@gmail.com
Signed
On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
On Tue, Jun 03 2014, Joonsoo Kim wrote:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even
On Tue, Jun 03, 2014 at 09:00:48AM +0200, Michal Nazarewicz wrote:
On Tue, Jun 03 2014, Joonsoo Kim wrote:
Now, we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
On Thu, Jun 05, 2014 at 11:09:05PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even
On Mon, Jun 09, 2014 at 04:09:38PM -0700, David Rientjes wrote:
On Mon, 9 Jun 2014, Dave Jones wrote:
Kernel based on v3.15-7257-g963649d735c8
Dave
Oops: [#1] PREEMPT SMP
Modules linked in: dlci 8021q garp snd_seq_dummy bnep llc2 af_key bridge
stp fuse tun
On Fri, Jun 06, 2014 at 05:22:45PM +0400, Vladimir Davydov wrote:
Since a dead memcg cache is destroyed only after the last slab allocated
to it is freed, we must disable caching of empty slabs for such caches,
otherwise they will be hanging around forever.
This patch makes SLAB discard dead
On Fri, Jun 06, 2014 at 05:22:40PM +0400, Vladimir Davydov wrote:
This will be used by the next patches.
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Acked-by: Christoph Lameter c...@linux.com
---
include/linux/slab.h |2 ++
mm/memcontrol.c |1 +
mm/slab.h
(see
memcg_unregister_all_caches).
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Thanks-to: Joonsoo Kim iamjoonsoo@lge.com
---
mm/slub.c | 20
1 file changed, 20 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index e46d6abe8a68..1dad7e2c586a 100644
-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 889957b..3bb5e11 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -997,9 +997,9 @@ static void free_alien_cache(struct alien_cache **alc_ptr)
}
static void __drain_alien_cache(struct kmem_cache *cachep
BAD_ALIEN_MAGIC value isn't used anymore. So remove it.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 4030a89..8476ffc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -437,8 +437,6 @@ static struct kmem_cache kmem_cache_boot = {
.name = kmem_cache
.
directly return the return value of clear_obj_pfmemalloc().
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 1fede40..e2c80df 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -215,9 +215,9 @@ static inline void set_obj_pfmemalloc(void **objp)
return;
}
-static
node isn't changed, so we don't need to retrieve this structure
every time we move the object. Maybe the compiler does this optimization,
but making it explicit is better.
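A sketch of the hoisting (the per-object helper is illustrative): look up
the kmem_cache_node once, since the node doesn't change across the loop.

	struct kmem_cache_node *n = get_node(cachep, node);	/* once, not per object */
	int i;

	for (i = 0; i < nr_objects; i++)
		move_one_object(n, objp[i]);	/* hypothetical per-object work */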
Acked-by: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 92d08e3..7647728 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -242,7 +242,8 @@ static struct kmem_cache_node __initdata
init_kmem_cache_node[NUM_INIT_LISTS];
static int drain_freelist(struct kmem_cache *cache
...@google.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 25317fd..1fede40 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2993,7 +2993,7 @@ static void *cache_alloc_debugcheck_after(struct
kmem_cache *cachep,
static bool slab_should_failslab(struct
Factor out the initialization of the array cache so it can be used in the
following patch.
Acked-by: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 7647728..755fb57 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -741,13 +741,8 @@ static void
removing it would be better. This patch prepares for that by
introducing alien_cache and using it. In the following patch,
we remove the spinlock in array_cache.
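The new structure is essentially a wrapper; a sketch close to the patch,
simplified here:

struct alien_cache {
	spinlock_t lock;	/* takes over from the lock in array_cache */
	struct array_cache ac;
};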
Acked-by: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 755fb57..41b7651
Now, we have a separate alien_cache structure, so it'd be better to hold
the lock on the alien_cache while manipulating it. After that,
we don't need the lock on the array_cache, so remove it.
Acked-by: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git
Now, there is no code that holds two locks simultaneously, since
we don't call slab_destroy() while holding any lock. So, the lockdep
annotation is useless now. Remove it.
v2: don't remove BAD_ALIEN_MAGIC in this patch. It will be removed
in the following patch.
Signed-off-by: Joonsoo Kim iamjoonsoo
. As the short stat notes, this makes the SLAB code much simpler.
Many patches in this series got an Ack from Christoph Lameter on the
previous iteration, but 1, 2, 9 and 10 still need an Ack. There is no big
change from the previous iteration. It is just rebased on current linux-next.
Thanks.
Joonsoo Kim (10):
slab: add
The most popular use of zram is as in-memory swap for small embedded systems,
so I don't want to increase the memory footprint without good reason, even if
it helps a synthetic benchmark. Although it's 1M for 1G, it isn't small if we
consider the compression ratio and real free memory after boot
We can use
optimization which removes useless retrying, and patch 3
is for removing a useless alloc flag, so these are not important.
See patch 2 for a more detailed description.
This patchset is based on v3.15-rc4.
Thanks.
Joonsoo Kim (3):
CMA: remove redundant retrying code in __alloc_contig_migrate_range
CMA
can say that
this patch has advantages and disadvantages in terms of latency.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fac5509..3ff24d4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -389,6 +389,12
. Now, the previous patch changes the behaviour of the allocator so that
movable allocation uses pages in the CMA reserved region aggressively,
so this watermark hack isn't needed anymore. Therefore remove it.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/compaction.c b/mm/compaction.c
index
, however, the current __alloc_contig_migrate_range() does. But
I think that this isn't a problem, because in this case we would fail again
for the same reason.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5dba293..674ade7 100644
--- a/mm/page_alloc.c
On Tue, May 06, 2014 at 07:22:52PM -0700, David Rientjes wrote:
Async compaction terminates prematurely when need_resched(), see
compact_checklock_irqsave(). This can never trigger, however, if the
cond_resched() in isolate_migratepages_range() always takes care of the
scheduling.
If the
On Wed, May 07, 2014 at 02:09:10PM +0200, Vlastimil Babka wrote:
The compaction free scanner in isolate_freepages() currently remembers PFN of
the highest pageblock where it successfully isolates, to be used as the
starting pageblock for the next invocation. The rationale behind this is that
...@samsung.com
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Mel Gorman mgor...@suse.de
Cc: Minchan Kim minc...@kernel.org
Cc: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Cc: Marek Szyprowski m.szyprow...@samsung.com
Cc: Hugh Dickins hu...@google.com
Cc: Rik van Riel r...@redhat.com
;
}
}
rcu_read_unlock();
v2: add more commit description from Eric
[eduma...@google.com: add more commit description]
Reported-by: Richard Yao r...@gentoo.org
Acked-by: Eric Dumazet eduma...@google.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm
...@hurleysoftware.com
To: Joonsoo Kim iamjoonsoo@lge.com, Andrew Morton
a...@linux-foundation.org
CC: Zhang Yanfei zhangyanfei@gmail.com, Johannes Weiner
han...@cmpxchg.org,
Andi Kleen
On Tue, Jun 10, 2014 at 07:18:34PM +0400, Vladimir Davydov wrote:
On Tue, Jun 10, 2014 at 09:26:19AM -0500, Christoph Lameter wrote:
On Tue, 10 Jun 2014, Vladimir Davydov wrote:
Frankly, I incline to shrinking dead SLAB caches periodically from
cache_reap too, because it looks neater
: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
Cc: David Rientjes rient...@google.com
---
mm/compaction.c | 33
We should free the memory for the bitmap when we find a zone mismatch,
otherwise this memory will leak.
Additionally, I copied the code comment from ppc kvm's CMA code to explain
why we need to check for a zone mismatch.
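A sketch of the fix with the activation code abbreviated (names
illustrative): the zone-mismatch error path has to free the bitmap it
allocated.

	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
	if (!cma->bitmap)
		return -ENOMEM;

	/* ... walk the area's pages ... */
	if (page_zone(page) != zone) {
		/* CMA can't handle an area spanning multiple zones */
		kfree(cma->bitmap);	/* was leaked before this patch */
		return -EINVAL;
	}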
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b
().
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..bd0bb81 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -144,7 +144,7 @@ void __init dma_contiguous_reserve(phys_addr_t limit
APIs while extending
core functions.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index fb0cdce..8a44c82 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -231,9 +231,9 @@ core_initcall
ppc kvm's CMA area management needs an alignment constraint on the
CMA region. So support it to prepare for generalization of the CMA area
management functionality.
Additionally, add some comments explaining why the alignment
constraint is needed on the CMA region.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Now, we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Acked-by: Michal Nazarewicz min...@mina86.com
Acked-by: Paolo Bonzini pbonz...@redhat.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/powerpc
-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index bc4c171..9bc9340 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -38,6 +38,7 @@ struct cma {
unsigned long base_pfn;
unsigned
and now it's time to do it. This patch
moves the core functions to mm/cma.c and changes the DMA APIs to use
these functions.
There is no functional change in the DMA APIs.
v2: There is no big change from v1 in mm/cma.c. Mostly renaming.
Acked-by: Michal Nazarewicz min...@mina86.com
Signed-off-by: Joonsoo Kim
Currently, we must take a mutex to manipulate the bitmap. This job
may be really simple and short, so we don't need to sleep if contended.
So I change it to a spinlock.
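In sketch form (a global lock here for brevity; the real code may keep it
per-area), the conversion is mechanical because the critical section only
touches the bitmap:

static DEFINE_SPINLOCK(cma_bitmap_lock);	/* was a mutex */

static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
			     unsigned long count)
{
	spin_lock(&cma_bitmap_lock);	/* short, non-sleeping section */
	bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
	spin_unlock(&cma_bitmap_lock);
}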
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/cma.c b/mm/cma.c
index 22a5b23..3085e8c 100644
--- a/mm/cma.c
is the same as v1's, so I carry the Acks over to patches 6-7.
Patches 1-5 prepare some features to cover ppc kvm's requirements.
Patches 6-7 generalize the CMA reserved area management code and change users
to use it.
Patches 8-10 clean up minor things.
Joonsoo Kim (10):
DMA, CMA: clean-up log message
DMA, CMA: fix
Conventionally, we put output parameters at the end of the parameter list.
cma_declare_contiguous() doesn't look like that, so change it.
Additionally, move the cma_areas reference code down to the position
where it is really needed.
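For reference, a sketch of the reordered prototype (the parameter set is as
described in this series; details may differ):

int cma_declare_contiguous(phys_addr_t base, phys_addr_t size,
			   phys_addr_t limit, phys_addr_t alignment,
			   unsigned int order_per_bit, bool fixed,
			   struct cma **res_cma);	/* output param now last */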
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/powerpc
We can remove one call site for clear_cma_bitmap() if we call it first,
before checking the error number.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/cma.c b/mm/cma.c
index 1e1b017..01a0713 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -282,11 +282,12 @@ struct page *cma_alloc(struct
On Thu, Jun 12, 2014 at 10:11:19AM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
We don't need an explicit 'CMA:' prefix, since we already define the prefix
'cma:' in pr_fmt. So remove it.
And, some logs print the function name and others don't. This looks
bad to me
On Thu, Jun 12, 2014 at 02:18:53PM +0900, Minchan Kim wrote:
Hi Joonsoo,
On Thu, Jun 12, 2014 at 12:21:38PM +0900, Joonsoo Kim wrote:
We don't need an explicit 'CMA:' prefix, since we already define the prefix
'cma:' in pr_fmt. So remove it.
And, some logs print the function name and others
On Thu, Jun 12, 2014 at 02:25:43PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:39PM +0900, Joonsoo Kim wrote:
We should free the memory for the bitmap when we find a zone mismatch,
otherwise this memory will leak.
Then, -stable stuff?
I don't think so. This is just a possible leak
On Thu, Jun 12, 2014 at 02:52:20PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:41PM +0900, Joonsoo Kim wrote:
ppc kvm's CMA area management needs an alignment constraint on the
CMA region. So support it to prepare for generalization of the CMA area
management functionality.
Additionally
On Thu, Jun 12, 2014 at 03:06:10PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:42PM +0900, Joonsoo Kim wrote:
ppc kvm's CMA region management requires arbitrary bitmap granularity,
since they want to reserve very large memory and manage this region
with a bitmap where one bit
On Thu, Jun 12, 2014 at 01:24:34AM +0400, Vladimir Davydov wrote:
On Tue, Jun 10, 2014 at 07:18:34PM +0400, Vladimir Davydov wrote:
On Tue, Jun 10, 2014 at 09:26:19AM -0500, Christoph Lameter wrote:
On Tue, 10 Jun 2014, Vladimir Davydov wrote:
Frankly, I incline to shrinking dead
On Fri, Jun 06, 2014 at 05:22:42PM +0400, Vladimir Davydov wrote:
Since per memcg cache destruction is scheduled when the last slab is
freed, to avoid use-after-free in kmem_cache_free we should either
rearrange code in kmem_cache_free so that it won't dereference the cache
ptr after freeing
On Thu, Jun 12, 2014 at 04:08:11PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:42PM +0900, Joonsoo Kim wrote:
ppc kvm's CMA region management requires arbitrary bitmap granularity,
since they want to reserve very large memory and manage this region
with a bitmap where one bit
On Thu, Jun 12, 2014 at 04:13:11PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:43PM +0900, Joonsoo Kim wrote:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area
On Thu, Jun 12, 2014 at 04:19:31PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:46PM +0900, Joonsoo Kim wrote:
Conventionally, we put output parameters at the end of the parameter list.
cma_declare_contiguous() doesn't look like that, so change it.
If you say 'Conventionally', I'd like
On Thu, Jun 12, 2014 at 04:40:29PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:47PM +0900, Joonsoo Kim wrote:
Currently, we must take a mutex to manipulate the bitmap.
This job may be really simple and short, so we don't need to sleep
if contended. So I change it to a spinlock
On Tue, May 20, 2014 at 08:18:59AM +0900, Minchan Kim wrote:
On Mon, May 19, 2014 at 01:50:01PM +0900, Joonsoo Kim wrote:
On Mon, May 19, 2014 at 11:53:05AM +0900, Minchan Kim wrote:
On Mon, May 19, 2014 at 11:11:21AM +0900, Joonsoo Kim wrote:
On Thu, May 15, 2014 at 11:43:53AM +0900
On Tue, May 20, 2014 at 02:57:47PM +0900, Gioh Kim wrote:
Thanks for your advise, Michal Nazarewicz.
Having discuss with Joonsoo, I'm adding fallback allocation after
__alloc_from_contiguous().
The fallback allocation works if CMA kernel options is turned on but CMA size
is zero.
On Tue, May 20, 2014 at 04:05:52PM +0900, Gioh Kim wrote:
That case, device-specific coherent memory allocation, is handled at
dma_alloc_coherent in arm_dma_alloc.
__dma_alloc handles only general coherent memory allocation.
I'm sorry for not mentioning it.
Hello,
AFAIK, *coherent*
On Wed, May 07, 2014 at 03:06:10PM +0900, Joonsoo Kim wrote:
This patchset does some clean-up and tries to remove lockdep annotation.
Patches 1~3 are just for really really minor improvement.
Patches 4~10 are for clean-up and removing lockdep annotation.
There are two cases that lockdep
On Tue, May 13, 2014 at 12:00:57PM +0900, Minchan Kim wrote:
Hey Joonsoo,
On Thu, May 08, 2014 at 09:32:23AM +0900, Joonsoo Kim wrote:
CMA is introduced to provide physically contiguous pages at runtime.
For this purpose, it reserves memory at boot time. Although it reserves
memory
On Wed, May 14, 2014 at 02:12:19PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
CMA is introduced to provide physically contiguous pages at runtime.
For this purpose, it reserves memory at boot time. Although it reserves
memory, this reserved memory can be used
On Wed, May 14, 2014 at 03:14:30PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
On Fri, May 09, 2014 at 02:39:20PM +0200, Marek Szyprowski wrote:
Hello,
On 2014-05-08 02:32, Joonsoo Kim wrote:
This series tries to improve CMA.
CMA is introduced
On Tue, May 13, 2014 at 10:54:58AM +0200, Vlastimil Babka wrote:
On 05/13/2014 02:44 AM, Joonsoo Kim wrote:
On Mon, May 12, 2014 at 04:15:11PM +0200, Vlastimil Babka wrote:
Compaction uses compact_checklock_irqsave() function to periodically check
for
lock contention and need_resched
On Thu, May 15, 2014 at 11:43:53AM +0900, Minchan Kim wrote:
On Thu, May 15, 2014 at 10:53:01AM +0900, Joonsoo Kim wrote:
On Tue, May 13, 2014 at 12:00:57PM +0900, Minchan Kim wrote:
Hey Joonsoo,
On Thu, May 08, 2014 at 09:32:23AM +0900, Joonsoo Kim wrote:
CMA is introduced
On Thu, May 15, 2014 at 10:47:18AM +0100, Mel Gorman wrote:
On Thu, May 15, 2014 at 11:10:55AM +0900, Joonsoo Kim wrote:
That doesn't always prefer CMA region. It would be nice to
understand why grouping in pageblock_nr_pages is beneficial. Also in
your patch you decrement nr_try_cma
On Sun, May 18, 2014 at 11:06:08PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
On Wed, May 14, 2014 at 02:12:19PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
Another issue i am facing with the current code is the atomic
On Mon, May 19, 2014 at 11:53:05AM +0900, Minchan Kim wrote:
On Mon, May 19, 2014 at 11:11:21AM +0900, Joonsoo Kim wrote:
On Thu, May 15, 2014 at 11:43:53AM +0900, Minchan Kim wrote:
On Thu, May 15, 2014 at 10:53:01AM +0900, Joonsoo Kim wrote:
On Tue, May 13, 2014 at 12:00:57PM +0900
On Mon, May 19, 2014 at 10:47:12AM +0900, Gioh Kim wrote:
Thank you for your advice. I didn't notice it.
I'm adding the following according to your advice:
- range restrict for CMA_SIZE_MBYTES and *CMA_SIZE_PERCENTAGE*
I think this can prevent the wrong kernel option.
- change size_cmdline
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f64632b..fdbb116 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2690,14 +2690,14 @@ void get_vmalloc_info(struct vmalloc_info *vmi)
prev_end = VMALLOC_START;
- spin_lock(&vmap_area_lock