Re: [PATCH v3 -next 1/9] DMA, CMA: fix possible memory leak

2014-06-16 Thread Minchan Kim
Hi, Joonsoo

On Mon, Jun 16, 2014 at 02:40:43PM +0900, Joonsoo Kim wrote:
 We should free the memory for the bitmap when we find a zone mismatch,
 otherwise this memory will leak.
 
 Additionally, I copied the code comment from PPC KVM's CMA code to explain
 why we need to check for a zone mismatch.
 
 * Note
 Minchan suggested adding a tag for stable, but I don't do it,
 because I found this possibility during code review and, IMO,
 this patch isn't suitable for the stable tree.

Nice idea to put the comment in here. Thanks Joonsoo.

It seems you obey "It must fix a real bug that bothers people"
from Documentation/stable_kernel_rules.txt, but it's a really obvious
bug, and it's hard to get a report from people because there are few
users and such a small memory leak is hard to detect.

In my experience, Andrew preferred stable marking for such an obvious
problem with a simple fix like this, but I'm not sure, so let's pass the
decision to him; we will learn a lesson from him and follow his decision
from now on.

Thanks.

Acked-by: Minchan Kim <minc...@kernel.org>

 
 Acked-by: Zhang Yanfei <zhangyan...@cn.fujitsu.com>
 Reviewed-by: Michal Nazarewicz <min...@mina86.com>
 Reviewed-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
 Signed-off-by: Joonsoo Kim <iamjoonsoo@lge.com>
 
 diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
 index 83969f8..6467c91 100644
 --- a/drivers/base/dma-contiguous.c
 +++ b/drivers/base/dma-contiguous.c
 @@ -176,14 +176,24 @@ static int __init cma_activate_area(struct cma *cma)
   base_pfn = pfn;
   for (j = pageblock_nr_pages; j; --j, pfn++) {
   WARN_ON_ONCE(!pfn_valid(pfn));
 + /*
 +  * alloc_contig_range requires the pfn range
 +  * specified to be in the same zone. Make this
 +  * simple by forcing the entire CMA resv range
 +  * to be in the same zone.
 +  */
   if (page_zone(pfn_to_page(pfn)) != zone)
 - return -EINVAL;
 + goto err;
   }
   init_cma_reserved_pageblock(pfn_to_page(base_pfn));
   } while (--i);
  
   mutex_init(&cma->lock);
   return 0;
 +
 +err:
 + kfree(cma->bitmap);
 + return -EINVAL;
  }
  
  static struct cma cma_areas[MAX_CMA_AREAS];
 -- 
 1.7.9.5
 
 --
 To unsubscribe, send a message with 'unsubscribe linux-mm' in
 the body to majord...@kvack.org.  For more info on Linux MM,
 see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:d...@kvack.org">em...@kvack.org</a>

-- 
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v2 07/10] PPC, KVM, CMA: use general CMA reserved area management framework

2014-06-16 Thread Aneesh Kumar K.V
Joonsoo Kim <iamjoonsoo@lge.com> writes:

 On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
 Joonsoo Kim <iamjoonsoo@lge.com> writes:
 
  Now, we have a general CMA reserved area management framework,
  so use it for future maintainability. There is no functional change.
 
  Acked-by: Michal Nazarewicz <min...@mina86.com>
  Acked-by: Paolo Bonzini <pbonz...@redhat.com>
  Signed-off-by: Joonsoo Kim <iamjoonsoo@lge.com>
 
 Need this. We may want to keep the VM_BUG_ON by moving
 KVM_CMA_CHUNK_ORDER around.
 
 diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
 b/arch/powerpc/kvm/book3s_64_mmu_hv.c
 index 8056107..1932e0e 100644
 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
 +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
 @@ -37,8 +37,6 @@
  #include <asm/ppc-opcode.h>
  #include <asm/cputable.h>
  
 -#include "book3s_hv_cma.h"
 -
  /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
  #define MAX_LPID_970   63
  
 @@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
 }
  
 kvm->arch.hpt_cma_alloc = 0;
 -   VM_BUG_ON(order > KVM_CMA_CHUNK_ORDER);
 page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
 if (page) {
 hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
 
 
 
 -aneesh

 Okay.
 So do you also want this?

 @@ -131,16 +135,18 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
  {
 unsigned long align_pages = HPT_ALIGN_PAGES;

 +   VM_BUG_ON(get_order(nr_pages) > KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
 +
 /* Old CPUs require HPT aligned on a multiple of its size */
 if (!cpu_has_feature(CPU_FTR_ARCH_206))
 align_pages = nr_pages;
 -   return kvm_alloc_cma(nr_pages, align_pages);
 +   return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
  }
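
The VM_BUG_ON above guards the request's allocation order against KVM_CMA_CHUNK_ORDER - PAGE_SHIFT. As a rough illustration of the order computation (a hedged sketch only; the kernel's real get_order() operates on a byte size, not a page count):

```c
#include <assert.h>

/* Illustrative helper (not the kernel's get_order()): the smallest
 * order o such that (1UL << o) covers n units. */
unsigned int order_of(unsigned long n)
{
        unsigned int order = 0;

        while ((1UL << order) < n)
                order++;
        return order;
}
```

For example, order_of(64) == 6, so a 64-page request needs order 6 and would trip the VM_BUG_ON only if 6 exceeded KVM_CMA_CHUNK_ORDER - PAGE_SHIFT.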

That would also work.

Thanks
-aneesh

--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v2 00/10] CMA: generalize CMA reserved area management code

2014-06-16 Thread Aneesh Kumar K.V
Joonsoo Kim <iamjoonsoo@lge.com> writes:

 On Sat, Jun 14, 2014 at 12:55:39PM +0530, Aneesh Kumar K.V wrote:
 Joonsoo Kim <iamjoonsoo@lge.com> writes:
 
  Currently, there are two users of the CMA functionality: one is the DMA
  subsystem and the other is KVM on powerpc. They have their own code
  to manage the CMA reserved area even though they look really similar.
  My guess is that this comes from differing needs in bitmap management: the KVM
  side wants to maintain a bitmap where one bit covers not just 1 page,
  but a larger size; eventually it uses a bitmap where one bit represents 64 pages.
 
  When I implement CMA related patches, I have to change both places
  to apply my change, and that is painful. I want to change
  this situation and reduce future code management overhead through
  this patch.
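
The 64-pages-per-bit scheme described above can be sketched as follows (a hypothetical illustration with made-up names, not the actual mm/cma.c structures):

```c
#include <assert.h>

/* Hypothetical sketch of a CMA-style bitmap where one bit covers
 * 2^ORDER_PER_BIT pages (6 -> 64 pages per bit), as the KVM side uses. */
#define ORDER_PER_BIT 6UL

/* Bitmap bit that covers a given page index within the reserved area. */
unsigned long page_to_bit(unsigned long page_idx)
{
        return page_idx >> ORDER_PER_BIT;        /* page_idx / 64 */
}

/* Number of bitmap bits needed to cover `count` pages (rounded up). */
unsigned long bitmap_bits(unsigned long count)
{
        return (count + (1UL << ORDER_PER_BIT) - 1) >> ORDER_PER_BIT;
}
```

With ORDER_PER_BIT = 0 this degenerates to the DMA subsystem's one-bit-per-page case, which is why a single parameterized implementation can serve both users.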
 
  This change could also help developers who want to use CMA in their
  new feature development, since they can use CMA easily without
  copying & pasting this reserved area management code.
 
  v2:
   Although this patchset looks very different from v1, the end result,
   that is, mm/cma.c, is the same as v1's. So I carry the Acks over to patches 6-7.
 
  Patch 1-5 prepare some features to cover ppc kvm's requirements.
  Patch 6-7 generalize CMA reserved area management code and change users
  to use it.
  Patch 8-10 clean-up minor things.
 
 
 I wanted to test the ppc changes and found that the patch series doesn't
 apply against v3.15. Do you have a kernel tree which I can clone to test
 this series?

 This is based on linux-next -next-20140610.
 And my tree is on following link.

 https://github.com/JoonsooKim/linux/tree/cma-general-v2.0-next-20140610

 But, I think I'm late, because you have already added a Tested-by tag.

linux-next kexec is broken on ppc64, hence I hand-picked a few of the
dependent patches for DMA CMA on top of 3.15 and used that for testing.

-aneesh



Re: [PATCH 5/6] KVM: PPC: Book3S HV: Access XICS in BE

2014-06-16 Thread Paul Mackerras
On Wed, Jun 11, 2014 at 12:33:50PM +0200, Alexander Graf wrote:
 On the exit path from the guest we check what type of interrupt we received,
 if we received one. This means we're doing hardware access to the XICS
 interrupt controller.
 
 However, when running on a little endian system, this access is byte reversed.
 
 So let's make sure to swizzle the bytes back again and virtually make XICS
 accesses big endian.
...

 @@ -2241,7 +2253,8 @@ kvmppc_read_intr:
  42:  /* It's not an IPI and it's for the host, stash it in the PACA
* before exit, it will be picked up by the host ICP driver
*/
 - stw r0, HSTATE_SAVED_XIRR(r13)
 + li  r4, HSTATE_SAVED_XIRR
 + STWX_BE r0, r13, r4

This is a paca field, not something mandated by PAPR or shared with
the guest, so why do we need to keep it BE?  If you do make it BE,
don't you also need to fix kvmppc_get_xics_latch()?

Paul.
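
The byte swizzling discussed above boils down to the following (a plain C sketch of what the lwbrx/stwbrx-based helpers achieve; XICS register layout details are out of scope here):

```c
#include <assert.h>
#include <stdint.h>

/* Load a 32-bit big-endian value from memory regardless of host
 * endianness, i.e. the C equivalent of what LWZX_BE provides on LE. */
uint32_t load32_be(const uint8_t *p)
{
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Store a 32-bit value in big-endian byte order, as STWX_BE does. */
void store32_be(uint8_t *p, uint32_t v)
{
        p[0] = v >> 24;
        p[1] = v >> 16;
        p[2] = v >> 8;
        p[3] = v;
}
```

On a big-endian host these match plain loads and stores; on little-endian they perform the byte reversal, which is exactly the split the LWZX_BE/STWX_BE macro pair encodes via lwzx/stwx vs. lwbrx/stwbrx.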


Re: [PATCH 1/6] PPC: Add asm helpers for BE 32bit load/store

2014-06-16 Thread Paul Mackerras
On Wed, Jun 11, 2014 at 12:33:46PM +0200, Alexander Graf wrote:
 From assembly code we might not only have to explicitly BE-access 64-bit
 values, but sometimes also 32-bit ones. Add helpers that allow for easy use
 of lwzx/stwx in their respective byte-reversed or native forms.
 
 Signed-off-by: Alexander Graf <ag...@suse.de>
 CC: Benjamin Herrenschmidt <b...@kernel.crashing.org>
 ---
  arch/powerpc/include/asm/asm-compat.h | 4 ++++
  1 file changed, 4 insertions(+)
 
 diff --git a/arch/powerpc/include/asm/asm-compat.h 
 b/arch/powerpc/include/asm/asm-compat.h
 index 4b237aa..21164ff 100644
 --- a/arch/powerpc/include/asm/asm-compat.h
 +++ b/arch/powerpc/include/asm/asm-compat.h
 @@ -34,10 +34,14 @@
  #define PPC_MIN_STKFRM   112
  
  #ifdef __BIG_ENDIAN__
 +#define LWZX_BE  stringify_in_c(lwzx)
  #define LDX_BE   stringify_in_c(ldx)
 +#define STDX_BE  stringify_in_c(stwx)

This should be STWX_BE, shouldn't it?

  #define STDX_BE  stringify_in_c(stdx)
  #else
 +#define LWZX_BE  stringify_in_c(lwbrx)
  #define LDX_BE   stringify_in_c(ldbrx)
 +#define STWX_BE  stringify_in_c(stwbrx)
  #define STDX_BE  stringify_in_c(stdbrx)
  #endif

Paul.


Re: [PATCH v3 -next 0/9] CMA: generalize CMA reserved area management code

2014-06-16 Thread Joonsoo Kim
On Mon, Jun 16, 2014 at 11:11:35AM +0200, Marek Szyprowski wrote:
 Hello,
 
 On 2014-06-16 07:40, Joonsoo Kim wrote:
 Currently, there are two users of the CMA functionality: one is the DMA
 subsystem and the other is KVM on powerpc. They have their own code
 to manage the CMA reserved area even though they look really similar.
 My guess is that this comes from differing needs in bitmap management: the KVM
 side wants to maintain a bitmap where one bit covers not just 1 page,
 but a larger size; eventually it uses a bitmap where one bit represents 64 pages.
 
 When I implement CMA related patches, I have to change both places
 to apply my change, and that is painful. I want to change
 this situation and reduce future code management overhead through
 this patch.
 
 This change could also help developers who want to use CMA in their
 new feature development, since they can use CMA easily without
 copying & pasting this reserved area management code.
 
 v3:
- Simplify old patch 1(log format fix) and move it to the end of patchset.
- Patch 2: Pass aligned base and size to dma_contiguous_early_fixup()
- Patch 5: Add some accessor functions to pass aligned base and size to
dma_contiguous_early_fixup() function
- Patch 5: Move MAX_CMA_AREAS definition to cma.h
- Patch 6: Add CMA region zeroing to PPC KVM's CMA alloc function
- Patch 8: put 'base' ahead of 'size' in cma_declare_contiguous()
- Remaining minor fixes are noted in commit description of each one
 
 v2:
- Although this patchset looks very different from v1, the end result,
that is, mm/cma.c, is the same as v1's. So I carry the Acks over to patches 6-7.
 
 This patchset is based on linux-next 20140610.
 
 Thanks for taking care of this. I will test it with my setup and if
 everything goes well, I will take it to my -next tree. If any branch
 is required for anyone to continue their work on top of those patches,
 let me know, I will also prepare it.

Hello,

I'm glad to hear that. :)
But, there is one concern. As you already know, I am preparing further
patches ("Aggressively allocate the pages on CMA reserved memory"). They
may be highly related to the MM branch and also slightly depend on these
CMA changes. In this case, what is the best strategy to merge this
patchset? IMHO, Andrew's tree is the more appropriate branch. If there is
no issue with that, I am willing to develop further patches based
on your tree.

Thanks.


Re: [PATCH v3 -next 1/9] DMA, CMA: fix possible memory leak

2014-06-16 Thread Joonsoo Kim
On Mon, Jun 16, 2014 at 03:27:19PM +0900, Minchan Kim wrote:
 Hi, Joonsoo
 
 On Mon, Jun 16, 2014 at 02:40:43PM +0900, Joonsoo Kim wrote:
  We should free the memory for the bitmap when we find a zone mismatch,
  otherwise this memory will leak.
  
  Additionally, I copied the code comment from PPC KVM's CMA code to explain
  why we need to check for a zone mismatch.
  
  * Note
  Minchan suggested adding a tag for stable, but I don't do it,
  because I found this possibility during code review and, IMO,
  this patch isn't suitable for the stable tree.
 
 Nice idea to put the comment in here. Thanks Joonsoo.
 
 It seems you obey "It must fix a real bug that bothers people"
 from Documentation/stable_kernel_rules.txt, but it's a really obvious
 bug, and it's hard to get a report from people because there are few
 users and such a small memory leak is hard to detect.
 
 In my experience, Andrew preferred stable marking for such an obvious
 problem with a simple fix like this, but I'm not sure, so let's pass the
 decision to him; we will learn a lesson from him and follow his decision
 from now on.

Yes, I intended to pass the decision to others. :)

 
 Thanks.
 
 Acked-by: Minchan Kim <minc...@kernel.org>

Thanks.