> -----Original Message-----
> From: Christoph Hellwig [mailto:h...@lst.de]
> Sent: Friday, July 24, 2020 12:01 AM
> To: Song Bao Hua (Barry Song) <song.bao....@hisilicon.com>
> Cc: Christoph Hellwig <h...@lst.de>; m.szyprow...@samsung.com;
> robin.mur...@arm.com; w...@kernel.org; ganapatrao.kulka...@cavium.com;
> catalin.mari...@arm.com; io...@lists.linux-foundation.org; Linuxarm
> <linux...@huawei.com>; linux-arm-ker...@lists.infradead.org;
> linux-kernel@vger.kernel.org; Jonathan Cameron
> <jonathan.came...@huawei.com>; Nicolas Saenz Julienne
> <nsaenzjulie...@suse.de>; Steve Capper <steve.cap...@arm.com>; Andrew
> Morton <a...@linux-foundation.org>; Mike Rapoport <r...@linux.ibm.com>;
> Zengtao (B) <prime.z...@hisilicon.com>; huangdaode
> <huangda...@huawei.com>
> Subject: Re: [PATCH v3 1/2] dma-direct: provide the ability to reserve
> per-numa CMA
> 
> On Wed, Jul 22, 2020 at 09:41:50PM +0000, Song Bao Hua (Barry Song) wrote:
> > I got a kernel robot warning which said dev should be checked before
> > being accessed when I did a similar change in v1. It was probably a
> > false positive if dev can never be NULL.
> 
> That usually shows up if a function is inconsistent about sometimes
> checking it and sometimes not.
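
Right. For reference, here is a minimal sketch (a hypothetical function,
not taken from this patch) of the pattern that tends to trigger that
robot warning: "dev" is tested on one path, so the checker assumes it
may be NULL and then flags the unconditional dereference further down:

	static struct cma *pick_cma(struct device *dev, size_t count)
	{
		struct cma *cma = NULL;

		if (dev && dev->cma_area)	/* dev checked here ... */
			cma = dev->cma_area;
		else if (count > 1)
			cma = dma_contiguous_default_area;

		/* ... but dereferenced without a check here */
		if (cma && dev->coherent_dma_mask < DMA_BIT_MASK(32))
			cma = NULL;

		return cma;
	}

Dropping the NULL checks entirely, as your cleanup does, removes that
inconsistency.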
> 
> > Yes, it looks much better.
> 
> Below is a prep patch to rebase on top of:

Thanks for letting me know.

Will rebase on top of your patch.

> 
> ---
> From b81a5e1da65fce9750f0a8b66dbb6f842cbfdd4d Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <h...@lst.de>
> Date: Wed, 22 Jul 2020 16:33:43 +0200
> Subject: dma-contiguous: cleanup dma_alloc_contiguous
> 
> Split out a cma_alloc_aligned helper to deal with the "interesting"
> calling conventions for cma_alloc, which then allows the main function
> to be written in a straightforward way.  This also takes advantage of
> the fact that NULL dev arguments have been gone from the DMA API for a
> while.
> 
> Signed-off-by: Christoph Hellwig <h...@lst.de>
> ---
>  kernel/dma/contiguous.c | 31 ++++++++++++++-----------------
>  1 file changed, 14 insertions(+), 17 deletions(-)
> 
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index 15bc5026c485f2..cff7e60968b9e1 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -215,6 +215,13 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>       return cma_release(dev_get_cma_area(dev), pages, count);
>  }
> 
> +static struct page *cma_alloc_aligned(struct cma *cma, size_t size, gfp_t gfp)
> +{
> +     unsigned int align = min(get_order(size), CONFIG_CMA_ALIGNMENT);
> +
> +     return cma_alloc(cma, size >> PAGE_SHIFT, align, gfp & __GFP_NOWARN);
> +}
> +
>  /**
>   * dma_alloc_contiguous() - allocate contiguous pages
>   * @dev:   Pointer to device for which the allocation is performed.
> @@ -231,24 +238,14 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>   */
>  struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
> {
> -     size_t count = size >> PAGE_SHIFT;
> -     struct page *page = NULL;
> -     struct cma *cma = NULL;
> -
> -     if (dev && dev->cma_area)
> -             cma = dev->cma_area;
> -     else if (count > 1)
> -             cma = dma_contiguous_default_area;
> -
>       /* CMA can be used only in the context which permits sleeping */
> -     if (cma && gfpflags_allow_blocking(gfp)) {
> -             size_t align = get_order(size);
> -             size_t cma_align = min_t(size_t, align, CONFIG_CMA_ALIGNMENT);
> -
> -             page = cma_alloc(cma, count, cma_align, gfp & __GFP_NOWARN);
> -     }
> -
> -     return page;
> +     if (!gfpflags_allow_blocking(gfp))
> +             return NULL;
> +     if (dev->cma_area)
> +             return cma_alloc_aligned(dev->cma_area, size, gfp);
> +     if (size <= PAGE_SIZE || !dma_contiguous_default_area)
> +             return NULL;
> +     return cma_alloc_aligned(dma_contiguous_default_area, size, gfp);
>  }
> 
>  /**
> --
> 2.27.0
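
The cleaned-up version looks good to me. For my rebase I read the
calling convention like this (just a sketch of a typical caller, not
code from your patch, and assuming "dev" is always non-NULL as the
cleanup relies on):

	/* try CMA first, fall back to the page allocator */
	size = PAGE_ALIGN(size);
	page = dma_alloc_contiguous(dev, size, gfp);
	if (!page)
		page = alloc_pages(gfp, get_order(size));
	...
	/* on the free path */
	dma_free_contiguous(dev, page, size);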

Thanks
Barry
