On Fri, Mar 13, 2020 at 8:38 PM, Vlastimil Babka wrote:
>
> On 3/13/20 12:04 PM, Srikar Dronamraju wrote:
> >> I lost all the memory about it. :)
> >> Anyway, how about this?
> >>
> >> 1. make node_present_pages() safer
> >> static inline node_present_pages(nid)
> >> {
> >> if (!node_online(nid)) return
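The truncated sketch above can be modeled in userspace. The types, the node table, and the completed function body below are assumptions based on the surrounding discussion, not the kernel's real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace model of the idea sketched above: have
 * node_present_pages() bail out for offline nodes instead of
 * touching a possibly-uninitialized pg_data_t. Everything here is
 * mocked for illustration. */

#define MAX_NUMNODES 4

struct pg_data { unsigned long node_present_pages; };

static struct pg_data node0 = { .node_present_pages = 100 };
/* NULL entry = node is offline */
static struct pg_data *node_data[MAX_NUMNODES] = { &node0 };

static bool node_online(int nid)
{
    return nid >= 0 && nid < MAX_NUMNODES && node_data[nid] != NULL;
}

/* 1. make node_present_pages() safer */
static unsigned long node_present_pages(int nid)
{
    if (!node_online(nid))
        return 0;   /* offline node: report no pages, don't deref */
    return node_data[nid]->node_present_pages;
}
```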
On Fri, Mar 13, 2020 at 1:42 AM, Vlastimil Babka wrote:
>
> On 3/12/20 5:13 PM, Srikar Dronamraju wrote:
> > * Vlastimil Babka [2020-03-12 14:51:38]:
> >
> >> > * Vlastimil Babka [2020-03-12 10:30:50]:
> >> >
> >> >> On 3/12/20 9:23 AM, Sachin Sant wrote:
> >> >> >> On 12-Mar-2020, at 10:57 AM, Srikar
2018-07-12 16:15 GMT+09:00 Christoph Hellwig :
> On Thu, Jul 12, 2018 at 11:48:47AM +0900, Joonsoo Kim wrote:
>> One of existing user is general DMA layer and it takes gfp flags that is
>> provided by user. I don't check all the DMA allocation sites but how do
>> you convince
2018-07-11 17:54 GMT+09:00 Michal Hocko :
> On Wed 11-07-18 16:35:28, Joonsoo Kim wrote:
>> 2018-07-10 18:50 GMT+09:00 Michal Hocko :
>> > On Tue 10-07-18 16:19:32, Joonsoo Kim wrote:
>> >> Hello, Marek.
>> >>
>> >> 2018-07-09 21:19 GMT+09:00 M
2018-07-10 18:50 GMT+09:00 Michal Hocko :
> On Tue 10-07-18 16:19:32, Joonsoo Kim wrote:
>> Hello, Marek.
>>
>> 2018-07-09 21:19 GMT+09:00 Marek Szyprowski :
>> > cma_alloc() function doesn't really support gfp flags other than
>> > __GFP_NOWARN, so conver
Hello, Marek.
2018-07-09 21:19 GMT+09:00 Marek Szyprowski :
> The cma_alloc() function doesn't really support gfp flags other than
> __GFP_NOWARN, so convert the gfp_mask parameter to a boolean no_warn
> parameter.
Although gfp_mask isn't used in cma_alloc() except no_warn, it can be used
in
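The conversion quoted above can be sketched in userspace. The mask value and helper names below are mocked for illustration; each variant just reports whether a failure warning would be printed:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the interface change: cma_alloc() only ever consulted
 * the __GFP_NOWARN bit of its gfp mask, so the parameter is narrowed
 * to a boolean. Mask value and helpers are mocked, not the kernel's. */

#define __GFP_NOWARN 0x200u

/* Before: takes a whole gfp mask but tests a single bit. */
static bool would_warn_old(unsigned int gfp_mask)
{
    return !(gfp_mask & __GFP_NOWARN);
}

/* After: the caller states the one thing that matters directly. */
static bool would_warn_new(bool no_warn)
{
    return !no_warn;
}
```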
On Fri, Jul 08, 2016 at 04:48:38PM -0400, Kees Cook wrote:
> On Fri, Jul 8, 2016 at 1:41 PM, Kees Cook wrote:
> > On Fri, Jul 8, 2016 at 12:20 PM, Christoph Lameter wrote:
> >> On Fri, 8 Jul 2016, Kees Cook wrote:
> >>
> >>> Is check_valid_pointer() making
non-STD_MMU_64 builds to use the generic __kernel_map_pages().
I'd be happy to take this through the powerpc tree for 3.20, but for this:
depends on:
From: Joonsoo Kim iamjoonsoo@lge.com
Date: Thu, 22 Jan 2015 10:28:58 +0900
Subject: [PATCH] mm/debug_pagealloc: fix build failure on ppc
On Thu, Jan 22, 2015 at 10:45:51AM +0900, Joonsoo Kim wrote:
On Wed, Jan 21, 2015 at 09:57:59PM +0900, Akinobu Mita wrote:
2015-01-21 9:07 GMT+09:00 Andrew Morton a...@linux-foundation.org:
On Tue, 20 Jan 2015 15:01:50 -0800 j...@joshtriplett.org wrote:
On Tue, Jan 20, 2015 at 02:02
From 7cb9d1ed8a785df152cb8934e187031c8ebd1bb2 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim iamjoonsoo@lge.com
Date: Thu, 22 Jan 2015 10:28:58 +0900
Subject: [PATCH] mm/debug_pagealloc: fix build failure on ppc and some other
archs
Kim Phillips reported the following build failure.
LD init/built-in.o
mm
().
This replaces get_order() with order_base_2() (round-up version of ilog2).
Suggested-by: Paul Mackerras pau...@samba.org
Cc: Alexander Graf ag...@suse.de
Cc: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Benjamin Herrenschmidt b
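The distinction matters for sizes that are not powers of two. The following userspace re-implementations of ilog2()/order_base_2() are illustrative models of the kernel helpers, not the kernel macros themselves:

```c
#include <assert.h>

/* ilog2() is floor(log2(n)); order_base_2() is the round-up version,
 * so for a size that is not a power of two it yields one more than
 * ilog2(). Illustrative re-implementations only. */

static int ilog2_u(unsigned long n)
{
    int l = -1;

    while (n) {
        n >>= 1;
        l++;
    }
    return l;
}

static int order_base_2(unsigned long n)
{
    if (n <= 1)
        return 0;
    return ilog2_u(n - 1) + 1;   /* round up instead of down */
}
```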
On Tue, Jun 24, 2014 at 04:36:47PM +1000, Michael Ellerman wrote:
Commit e58e263 ("PPC, KVM, CMA: use general CMA reserved area
management framework") in next-20140624 removed
arch/powerpc/kvm/book3s_hv_cma.c but neglected to update the
Makefile, thus breaking the build.
Signed-off-by: Michael
On Wed, Jun 18, 2014 at 01:51:44PM -0700, Andrew Morton wrote:
On Tue, 17 Jun 2014 10:25:07 +0900 Joonsoo Kim iamjoonsoo@lge.com wrote:
v2:
- Although this patchset looks very different from v1, the end result,
that is, mm/cma.c, is the same as v1's. So I carry Ack to patch
On Wed, Jun 18, 2014 at 01:48:15PM -0700, Andrew Morton wrote:
On Mon, 16 Jun 2014 14:40:46 +0900 Joonsoo Kim iamjoonsoo@lge.com wrote:
PPC KVM's CMA area management requires arbitrary bitmap granularity,
since they want to reserve very large memory and manage this region
with bitmap
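The granularity idea repeated in these changelogs comes down to a size calculation: one bitmap bit covers 2^order_per_bit pages. The helper below only loosely mirrors the cma code and is a sketch for illustration:

```c
#include <assert.h>

/* One bit of the cma bitmap covers 2^order_per_bit pages, so a very
 * large reserved region can be tracked with a small bitmap. This is
 * an illustrative sketch, not the kernel source. */

static unsigned long cma_bitmap_bits(unsigned long count_pages,
                                     unsigned int order_per_bit)
{
    /* round the page count up to a multiple of 2^order_per_bit */
    return (count_pages + (1UL << order_per_bit) - 1) >> order_per_bit;
}
```

For example, a 4 GiB region (2^20 4K pages) needs 2^20 bits at one bit per page, but only 2^16 bits when each bit covers 16 pages.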
On Mon, Jun 16, 2014 at 11:11:35AM +0200, Marek Szyprowski wrote:
Hello,
On 2014-06-16 07:40, Joonsoo Kim wrote:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the KVM on powerpc. They have their own code
to manage CMA reserved area even
On Mon, Jun 16, 2014 at 03:27:19PM +0900, Minchan Kim wrote:
Hi, Joonsoo
On Mon, Jun 16, 2014 at 02:40:43PM +0900, Joonsoo Kim wrote:
We should free the bitmap memory when we find a zone mismatch,
otherwise this memory will leak.
Additionally, I copy code comment from PPC KVM's CMA
On Thu, Jun 12, 2014 at 11:53:16AM +0200, Michal Nazarewicz wrote:
On Thu, Jun 12 2014, Michal Nazarewicz min...@mina86.com wrote:
I used “function(arg1, arg2, …)” at the *beginning* of functions when
the arguments passed to the function were included in the message. In
all other cases I
On Thu, Jun 12, 2014 at 12:02:38PM +0200, Michal Nazarewicz wrote:
On Thu, Jun 12 2014, Joonsoo Kim iamjoonsoo@lge.com wrote:
ppc kvm's cma area management needs alignment constraint on
I've noticed it earlier and cannot seem to get to terms with this. It
should IMO be PPC, KVM and CMA
On Thu, Jun 12, 2014 at 12:19:54PM +0200, Michal Nazarewicz wrote:
On Thu, Jun 12 2014, Joonsoo Kim iamjoonsoo@lge.com wrote:
ppc kvm's cma region management requires arbitrary bitmap granularity,
since they want to reserve very large memory and manage this region
with bitmap that one
On Thu, Jun 12, 2014 at 02:37:43PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:40PM +0900, Joonsoo Kim wrote:
To prepare for future generalization work on the cma area management
code, we need to separate the core cma management code from the DMA
APIs. We will extend these core functions
On Sat, Jun 14, 2014 at 03:46:44PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even
On Sat, Jun 14, 2014 at 03:35:33PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
Now we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Acked-by: Michal Nazarewicz min...@mina86.com
On Sat, Jun 14, 2014 at 12:55:39PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even
On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
Now we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Acked-by: Michal Nazarewicz min...@mina86.com
APIs while extending
core functions.
v3: move descriptions to exported APIs (Minchan)
pass aligned base and size to dma_contiguous_early_fixup() (Minchan)
Acked-by: Michal Nazarewicz min...@mina86.com
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim
-alignment (Minchan)
clear code documentation by Minchan's comment (Minchan)
Acked-by: Michal Nazarewicz min...@mina86.com
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma
)
Acked-by: Michal Nazarewicz min...@mina86.com
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c
b/arch/powerpc/kvm/book3s_hv_builtin.c
index 3960e0b..6cf498a 100644
--- a/arch/powerpc
on linux-next 20140610.
Patch 1-4 prepare some features to cover PPC KVM's requirements.
Patch 5-6 generalize CMA reserved area management code and change users
to use it.
Patch 7-9 clean-up minor things.
Joonsoo Kim (9):
DMA, CMA: fix possible memory leak
DMA, CMA: separate core CMA
Nazarewicz min...@mina86.com
Acked-by: Paolo Bonzini pbonz...@redhat.com
Tested-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c
b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8056107..a41e625 100644
this possibility during code-review and, IMO,
this patch isn't suitable for stable tree.
Acked-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
Reviewed-by: Michal Nazarewicz min...@mina86.com
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff
...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/cma.c b/mm/cma.c
index 0cf50da..b442a13 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -285,11 +285,12 @@ struct page *cma_alloc(struct cma *cma, int count,
unsigned int align)
if (ret == 0
We don't need an explicit 'CMA:' prefix, since we already define the
prefix 'cma:' in pr_fmt. So remove it.
Acked-by: Michal Nazarewicz min...@mina86.com
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/cma.c b/mm/cma.c
index 9961120
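The pr_fmt convention behind that cleanup: defining pr_fmt() once per file gives every pr_* message a shared prefix, so writing "CMA:" in the format string again would print it twice. A compilable sketch, with printf standing in for printk:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* pr_fmt() is expanded into every pr_* call's format string, so the
 * prefix is written exactly once per file. printf is a userspace
 * stand-in for printk here. */

#define pr_fmt(fmt) "cma: " fmt
#define pr_err(fmt, ...) printf(pr_fmt(fmt), ##__VA_ARGS__)
```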
...@cn.fujitsu.com
Acked-by: Minchan Kim minc...@kernel.org
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 5f62c28..c6eeb2c 100644
--- a/drivers/base/dma
...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 4c88935..3116880 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -26,6 +26,7 @@
#include linux/io.h
#include linux/vmalloc.h
#include
On Thu, Jun 12, 2014 at 02:52:20PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:41PM +0900, Joonsoo Kim wrote:
ppc kvm's cma area management needs an alignment constraint on the
cma region, so support it to prepare for generalization of the cma
area management functionality.
Additionally
On Thu, Jun 12, 2014 at 03:06:10PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:42PM +0900, Joonsoo Kim wrote:
ppc kvm's cma region management requires arbitrary bitmap granularity,
since they want to reserve very large memory and manage this region
with bitmap that one bit
On Thu, Jun 12, 2014 at 04:08:11PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:42PM +0900, Joonsoo Kim wrote:
ppc kvm's cma region management requires arbitrary bitmap granularity,
since they want to reserve very large memory and manage this region
with bitmap that one bit
On Thu, Jun 12, 2014 at 04:13:11PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:43PM +0900, Joonsoo Kim wrote:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area
On Thu, Jun 12, 2014 at 04:19:31PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:46PM +0900, Joonsoo Kim wrote:
Conventionally, we put the output parameter at the end of the
parameter list. cma_declare_contiguous() doesn't follow that, so
change it.
If you say "Conventionally", I'd like
On Thu, Jun 12, 2014 at 04:40:29PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:47PM +0900, Joonsoo Kim wrote:
Currently, we must take the mutex to manipulate the bitmap.
This job is really simple and short, so we don't need to sleep
if contended. So I change it to a spinlock
We can remove one call site for clear_cma_bitmap() if we call it
first, before checking the error number.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/cma.c b/mm/cma.c
index 1e1b017..01a0713 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -282,11 +282,12 @@ struct page *cma_alloc(struct
ppc kvm's cma area management needs an alignment constraint on the
cma region, so support it to prepare for generalization of the cma
area management functionality.
Additionally, add some comments explaining why the alignment
constraint is needed on the cma region.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Currently, we must take the mutex to manipulate the bitmap.
This job is really simple and short, so we don't need to sleep
if contended. So I change it to a spinlock.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/cma.c b/mm/cma.c
index 22a5b23..3085e8c 100644
--- a/mm/cma.c
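The locking change above can be modeled in userspace: bitmap updates are a few instructions, so a busy-wait lock beats a mutex that may sleep when contended. C11 atomic_flag stands in for the kernel's spinlock_t; all names are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the mutex-to-spinlock change: the critical section is
 * short and never sleeps, so spinning on contention is cheaper than
 * a sleep/wakeup cycle. Mocked for illustration. */

static atomic_flag cma_lock = ATOMIC_FLAG_INIT;
static unsigned long cma_bitmap[4];

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static void cma_set_bit(unsigned long bit)
{
    while (atomic_flag_test_and_set(&cma_lock))
        ;  /* spin: the update below takes only a few instructions */
    cma_bitmap[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
    atomic_flag_clear(&cma_lock);
}
```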
We should free the bitmap memory when we find a zone mismatch,
otherwise this memory will leak.
Additionally, I copied the code comment from ppc kvm's cma code to
explain why we need to check for a zone mismatch.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b
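The leak fix above, in miniature: the activation path allocates the bitmap first, so the zone-mismatch error path must free it before bailing out. Structure and names are mocked for illustration, not taken from the kernel:

```c
#include <assert.h>
#include <stdlib.h>

struct cma_mock { unsigned long *bitmap; };

/* zones_match simulates the zone check in the real activation path. */
static int cma_activate(struct cma_mock *cma, int zones_match)
{
    cma->bitmap = calloc(16, sizeof(*cma->bitmap));
    if (!cma->bitmap)
        return -1;
    if (!zones_match) {
        free(cma->bitmap);   /* the fix: don't leak on this path */
        cma->bitmap = NULL;
        return -1;
    }
    return 0;
}
```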
().
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..bd0bb81 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -144,7 +144,7 @@ void __init dma_contiguous_reserve(phys_addr_t limit
Conventionally, we put the output parameter at the end of the
parameter list. cma_declare_contiguous() doesn't follow that, so
change it.
Additionally, move the cma_areas reference code down to the position
where it is really needed.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/powerpc
-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index bc4c171..9bc9340 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -38,6 +38,7 @@ struct cma {
unsigned long base_pfn;
unsigned
is the same as v1's. So I carry Acks to patches 6-7.
Patch 1-5 prepare some features to cover ppc kvm's requirements.
Patch 6-7 generalize CMA reserved area management code and change users
to use it.
Patch 8-10 clean-up minor things.
Joonsoo Kim (10):
DMA, CMA: clean-up log message
DMA, CMA: fix
APIs while extending
core functions.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index fb0cdce..8a44c82 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -231,9 +231,9 @@ core_initcall
Now we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Acked-by: Michal Nazarewicz min...@mina86.com
Acked-by: Paolo Bonzini pbonz...@redhat.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/powerpc
On Thu, Jun 12, 2014 at 10:11:19AM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
We don't need an explicit 'CMA:' prefix, since we already define the
prefix 'cma:' in pr_fmt. So remove it.
And some logs print the function name and others don't. This looks
bad to me
On Thu, Jun 12, 2014 at 02:18:53PM +0900, Minchan Kim wrote:
Hi Joonsoo,
On Thu, Jun 12, 2014 at 12:21:38PM +0900, Joonsoo Kim wrote:
We don't need an explicit 'CMA:' prefix, since we already define the
prefix 'cma:' in pr_fmt. So remove it.
And some logs print the function name and others
On Thu, Jun 12, 2014 at 02:25:43PM +0900, Minchan Kim wrote:
On Thu, Jun 12, 2014 at 12:21:39PM +0900, Joonsoo Kim wrote:
We should free the bitmap memory when we find a zone mismatch,
otherwise this memory will leak.
Then, -stable stuff?
I don't think so. This is just a possible leak
On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
On Tue, Jun 03 2014, Joonsoo Kim wrote:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even
On Tue, Jun 03, 2014 at 09:00:48AM +0200, Michal Nazarewicz wrote:
On Tue, Jun 03 2014, Joonsoo Kim wrote:
Now we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
On Thu, Jun 05, 2014 at 11:09:05PM +0530, Aneesh Kumar K.V wrote:
Joonsoo Kim iamjoonsoo@lge.com writes:
Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even
who related to this stuff before actually
trying to merge this patchset. If all agree with this change, I will
resend it after rc1.
Thanks.
Joonsoo Kim (3):
CMA: generalize CMA reserved area management functionality
DMA, CMA: use general CMA reserved area management framework
PPC, KVM, CMA
through
this patch.
This change could also help developers who want to use CMA in their
new feature development, since they can use CMA easily without
copy-and-pasting this reserved area management code.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/Kconfig b/drivers
Now we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b3fe1cc..4eac559 100644
--- a/drivers/base/Kconfig
Now we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c
b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8cd0dae..43c3f81
On Fri, May 16, 2014 at 04:37:35PM -0700, Nishanth Aravamudan wrote:
On 06.02.2014 [17:07:04 +0900], Joonsoo Kim wrote:
Currently, if the allocation constraint node is NUMA_NO_NODE, we
search for a partial slab on the numa_node_id() node. This doesn't
work properly on a system having memoryless
On Tue, Feb 18, 2014 at 10:38:01AM -0600, Christoph Lameter wrote:
On Mon, 17 Feb 2014, Joonsoo Kim wrote:
On Wed, Feb 12, 2014 at 04:16:11PM -0600, Christoph Lameter wrote:
Here is another patch with some fixes. The additional logic is only
compiled in if CONFIG_HAVE_MEMORYLESS_NODES
On Wed, Feb 12, 2014 at 04:16:11PM -0600, Christoph Lameter wrote:
Here is another patch with some fixes. The additional logic is only
compiled in if CONFIG_HAVE_MEMORYLESS_NODES is set.
Subject: slub: Memoryless node support
Support memoryless nodes by tracking which allocations are
On Wed, Feb 12, 2014 at 10:51:37PM -0800, Nishanth Aravamudan wrote:
Hi Joonsoo,
Also, given that only ia64 and (hopefuly soon) ppc64 can set
CONFIG_HAVE_MEMORYLESS_NODES, does that mean x86_64 can't have
memoryless nodes present? Even with fakenuma? Just curious.
I don't know, because I'm
On Mon, Feb 10, 2014 at 11:13:21AM -0800, Nishanth Aravamudan wrote:
Hi Christoph,
On 07.02.2014 [12:51:07 -0600], Christoph Lameter wrote:
Here is a draft of a patch to make this work with memoryless nodes.
The first thing is that we modify node_match to also match if we hit an
empty
On Sat, Feb 08, 2014 at 01:57:39AM -0800, David Rientjes wrote:
On Fri, 7 Feb 2014, Joonsoo Kim wrote:
It seems like a better approach would be to do this when a node is
brought
online and determine the fallback node based not on the zonelists as you
do here but rather
On Fri, Feb 07, 2014 at 01:38:55PM -0800, Nishanth Aravamudan wrote:
On 07.02.2014 [12:51:07 -0600], Christoph Lameter wrote:
Here is a draft of a patch to make this work with memoryless nodes.
Hi Christoph, this should be tested instead of Joonsoo's patch 2 (and 3)?
Hello,
I guess that
On Fri, Feb 07, 2014 at 11:49:57AM -0600, Christoph Lameter wrote:
On Fri, 7 Feb 2014, Joonsoo Kim wrote:
This check would need to be something that checks for other contingencies
in the page allocator as well. A simple solution would be to actually run
a GFP_THIS_NODE alloc to see
On Fri, Feb 07, 2014 at 12:51:07PM -0600, Christoph Lameter wrote:
Here is a draft of a patch to make this work with memoryless nodes.
The first thing is that we modify node_match to also match if we hit an
empty node. In that case we simply take the current slab if its there.
Why not
On Thu, Feb 06, 2014 at 11:28:12AM -0800, Nishanth Aravamudan wrote:
On 06.02.2014 [10:59:55 -0800], Nishanth Aravamudan wrote:
On 06.02.2014 [17:04:18 +0900], Joonsoo Kim wrote:
On Wed, Feb 05, 2014 at 06:07:57PM -0800, Nishanth Aravamudan wrote:
On 24.01.2014 [16:25:58 -0800], David
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slub.c b/mm/slub.c
index cc1f995..c851f82 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1700,6 +1700,14 @@ static void *get_partial(struct kmem_cache *s, gfp_t
flags, int node,
void *object;
int searchnode = (node
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 12ae6ce..a6d5438 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -233,11 +233,20 @@ static inline int numa_node_id(void)
* Use the accessor functions
always fall back to numa_mem_id() first. So
searching for a partial slab on numa_node_id() in that case is the
proper solution for the memoryless node case.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slub.c b/mm/slub.c
index 545a170..cc1f995 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1698,7
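The fallback above can be modeled in userspace: when the local node is memoryless, numa_node_id() names a node with no pages, so the partial-slab search should use numa_mem_id(), the nearest node that does have memory. The topology below is mocked for illustration:

```c
#include <assert.h>

#define NODES 4

/* Nodes 0 and 3 are memoryless; each node's nearest memory node is
 * precomputed, as numa_mem_id() effectively does in the kernel. */
static unsigned long present_pages[NODES] = { 0, 4096, 4096, 0 };
static int nearest_mem_node[NODES] = { 1, 1, 2, 2 };

static int current_node;              /* node of the executing CPU */

static int numa_node_id(void) { return current_node; }
static int numa_mem_id(void)  { return nearest_mem_node[current_node]; }

/* Which node get_partial() should search for NUMA_NO_NODE requests. */
static int searchnode(void)
{
    int nid = numa_node_id();

    return present_pages[nid] ? nid : numa_mem_id();
}
```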
2014-02-06 David Rientjes rient...@google.com:
On Thu, 6 Feb 2014, Joonsoo Kim wrote:
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
I may be misunderstanding this patch and there's no help because there's
no changelog.
Sorry about that.
I made this patch just for testing. :)
Thanks
On Thu, Feb 06, 2014 at 11:11:31AM -0800, Nishanth Aravamudan wrote:
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 12ae6ce..66b19b8 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -233,11 +233,20 @@ static inline int numa_node_id(void)
On Thu, Feb 06, 2014 at 11:30:20AM -0600, Christoph Lameter wrote:
On Thu, 6 Feb 2014, Joonsoo Kim wrote:
diff --git a/mm/slub.c b/mm/slub.c
index cc1f995..c851f82 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1700,6 +1700,14 @@ static void *get_partial(struct kmem_cache *s, gfp_t
On Thu, Feb 06, 2014 at 12:52:11PM -0800, David Rientjes wrote:
On Thu, 6 Feb 2014, Joonsoo Kim wrote:
From bf691e7eb07f966e3aed251eaeb18f229ee32d1f Mon Sep 17 00:00:00 2001
From: Joonsoo Kim iamjoonsoo@lge.com
Date: Thu, 6 Feb 2014 17:07:05 +0900
Subject: [RFC PATCH 2/3 v2
On Fri, Jan 24, 2014 at 05:10:42PM -0800, Nishanth Aravamudan wrote:
On 24.01.2014 [16:25:58 -0800], David Rientjes wrote:
On Fri, 24 Jan 2014, Nishanth Aravamudan wrote:
Thank you for clarifying and providing a test patch. I ran with this on
the system showing the original problem,
On Tue, Jan 07, 2014 at 05:52:31PM +0800, Wanpeng Li wrote:
On Tue, Jan 07, 2014 at 04:41:36PM +0900, Joonsoo Kim wrote:
On Tue, Jan 07, 2014 at 01:21:00PM +1100, Anton Blanchard wrote:
Index: b/mm/slub.c
===
--- a/mm/slub.c
On Tue, Jan 07, 2014 at 01:21:00PM +1100, Anton Blanchard wrote:
We noticed a huge amount of slab memory consumed on a large ppc64 box:
Slab: 2094336 kB
Almost 2GB. This box is not balanced and some nodes do not have local
memory, causing slub to be very inefficient in its
On Tue, Jan 07, 2014 at 04:48:40PM +0800, Wanpeng Li wrote:
Hi Joonsoo,
On Tue, Jan 07, 2014 at 04:41:36PM +0900, Joonsoo Kim wrote:
On Tue, Jan 07, 2014 at 01:21:00PM +1100, Anton Blanchard wrote:
[...]
Hello,
I think that we need more effort to solve the unbalanced node problem
On Tue, Jan 07, 2014 at 05:21:45PM +0800, Wanpeng Li wrote:
On Tue, Jan 07, 2014 at 06:10:16PM +0900, Joonsoo Kim wrote:
On Tue, Jan 07, 2014 at 04:48:40PM +0800, Wanpeng Li wrote:
Hi Joonsoo,
On Tue, Jan 07, 2014 at 04:41:36PM +0900, Joonsoo Kim wrote:
On Tue, Jan 07, 2014 at 01:21:00PM
commit ea96025a ('Don't use alloc_bootmem() in init_IRQ() path')
changed alloc_bootmem() to kzalloc(), but missed changing
free_bootmem() to kfree(). So correct it.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/arch/powerpc/platforms/82xx/pq2ads-pci-pic.c
b/arch/powerpc/platforms