Re: [PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when do preloading

2019-10-19 Thread Uladzislau Rezki
> > > 
> > > This is explaining what but it doesn't say why. I would go with
> > > "
> > > Allocation functions should comply with the given gfp_mask as much as
> > > possible. The preallocation code in alloc_vmap_area doesn't follow that
> > > pattern and it is using a hardcoded GFP_KERNEL. Although this doesn't
> > > really make much difference because vmalloc is not GFP_NOWAIT compliant
> > > in general (e.g. page table allocations are GFP_KERNEL) there is no
> > > reason to spread that bad habit and it is good to fix the antipattern.
> > > "
> > I can go with that, agree. I am not sure if I need to update the patch
> > and send a v4, or maybe Andrew can directly update it in his tree.
> > 
> > Andrew, should I send it, or can you update it?
> 
> I updated the changelog with Michal's words prior to committing.  You
> were cc'ed :)
> 
Ah, I saw the email stating that the patch has been added to the "mm"
tree, but I did not check the commit message. Now I see everything is
sorted out :)

Thank you!

--
Vlad Rezki


Re: [PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when do preloading

2019-10-18 Thread Andrew Morton
On Fri, 18 Oct 2019 11:40:49 +0200 Uladzislau Rezki  wrote:

> > > alloc_vmap_area() is given a gfp_mask for the page allocator.
> > > Let's respect that mask and consider it even in the case when
> > > doing regular CPU preloading, i.e. where a context can sleep.
> > 
> > This is explaining what but it doesn't say why. I would go with
> > "
> > Allocation functions should comply with the given gfp_mask as much as
> > possible. The preallocation code in alloc_vmap_area doesn't follow that
> > pattern and it is using a hardcoded GFP_KERNEL. Although this doesn't
> > really make much difference because vmalloc is not GFP_NOWAIT compliant
> > in general (e.g. page table allocations are GFP_KERNEL) there is no
> > reason to spread that bad habit and it is good to fix the antipattern.
> > "
> I can go with that, agree. I am not sure if I need to update the patch
> and send a v4, or maybe Andrew can directly update it in his tree.
> 
> Andrew, should I send it, or can you update it?

I updated the changelog with Michal's words prior to committing.  You
were cc'ed :)


From: "Uladzislau Rezki (Sony)" 
Subject: mm/vmalloc: respect passed gfp_mask when doing preloading

Allocation functions should comply with the given gfp_mask as much as
possible.  The preallocation code in alloc_vmap_area doesn't follow that
pattern and it is using a hardcoded GFP_KERNEL.  Although this doesn't
really make much difference because vmalloc is not GFP_NOWAIT compliant in
general (e.g.  page table allocations are GFP_KERNEL) there is no reason
to spread that bad habit and it is good to fix the antipattern.

[mho...@suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20191016095438.12391-2-ure...@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) 
Acked-by: Michal Hocko 
Cc: Daniel Wagner 
Cc: Hillf Danton 
Cc: Matthew Wilcox 
Cc: Oleksiy Avramchenko 
Cc: Peter Zijlstra 
Cc: Sebastian Andrzej Siewior 
Cc: Steven Rostedt 
Cc: Thomas Gleixner 
Signed-off-by: Andrew Morton 
---

 mm/vmalloc.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/mm/vmalloc.c~mm-vmalloc-respect-passed-gfp_mask-when-do-preloading
+++ a/mm/vmalloc.c
@@ -1063,9 +1063,9 @@ static struct vmap_area *alloc_vmap_area
return ERR_PTR(-EBUSY);
 
might_sleep();
+   gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
 
-   va = kmem_cache_alloc_node(vmap_area_cachep,
-   gfp_mask & GFP_RECLAIM_MASK, node);
+   va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
if (unlikely(!va))
return ERR_PTR(-ENOMEM);
 
@@ -1073,7 +1073,7 @@ static struct vmap_area *alloc_vmap_area
 * Only scan the relevant parts containing pointers to other objects
 * to avoid false negatives.
 */
-   kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
+   kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
 
 retry:
/*
@@ -1099,7 +1099,7 @@ retry:
 * Just proceed as it is. If needed "overflow" path
 * will refill the cache we allocate from.
 */
-   pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
+   pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 
spin_lock(&vmap_area_lock);
 
_
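
To make the effect of the masking concrete, here is a minimal userspace
sketch (explicitly not kernel code: the gfp flag values below are
simplified stand-ins for the real bits). It shows why honouring the
caller's gfp_mask matters: after the "gfp_mask = gfp_mask &
GFP_RECLAIM_MASK" step above, a GFP_NOWAIT-style caller carries no
__GFP_DIRECT_RECLAIM bit, so the preload allocation may not sleep,
while the previously hardcoded GFP_KERNEL always allowed sleeping.

#include <stdio.h>

#define __GFP_KSWAPD_RECLAIM	0x1u	/* may wake kswapd */
#define __GFP_DIRECT_RECLAIM	0x2u	/* may sleep to reclaim memory */
#define __GFP_HIGHMEM		0x4u	/* zone modifier, meaningless to slab */

#define GFP_KERNEL	(__GFP_KSWAPD_RECLAIM | __GFP_DIRECT_RECLAIM)
#define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)
#define GFP_RECLAIM_MASK (__GFP_KSWAPD_RECLAIM | __GFP_DIRECT_RECLAIM)

int main(void)
{
	unsigned int callers[] = { GFP_KERNEL, GFP_NOWAIT | __GFP_HIGHMEM };

	for (int i = 0; i < 2; i++) {
		/* The same masking step the patch applies up front. */
		unsigned int gfp_mask = callers[i] & GFP_RECLAIM_MASK;

		printf("caller %#x -> preload uses %#x, may sleep: %s\n",
		       callers[i], gfp_mask,
		       (gfp_mask & __GFP_DIRECT_RECLAIM) ? "yes" : "no");
	}
	return 0;
}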



Re: [PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when do preloading

2019-10-18 Thread Uladzislau Rezki
> > alloc_vmap_area() is given a gfp_mask for the page allocator.
> > Let's respect that mask and consider it even in the case when
> > doing regular CPU preloading, i.e. where a context can sleep.
> 
> This is explaining what but it doesn't say why. I would go with
> "
> Allocation functions should comply with the given gfp_mask as much as
> possible. The preallocation code in alloc_vmap_area doesn't follow that
> pattern and it is using a hardcoded GFP_KERNEL. Although this doesn't
> really make much difference because vmalloc is not GFP_NOWAIT compliant
> in general (e.g. page table allocations are GFP_KERNEL) there is no
> reason to spread that bad habit and it is good to fix the antipattern.
> "
I can go with that, agree. I am not sure if I need to update the patch
and send a v4, or maybe Andrew can directly update it in his tree.

Andrew, should I send it, or can you update it?

Thank you in advance!

--
Vlad Rezki


Re: [PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when do preloading

2019-10-16 Thread Michal Hocko
On Wed 16-10-19 11:54:37, Uladzislau Rezki (Sony) wrote:
> alloc_vmap_area() is given a gfp_mask for the page allocator.
> Let's respect that mask and consider it even in the case when
> doing regular CPU preloading, i.e. where a context can sleep.

This is explaining what but it doesn't say why. I would go with
"
Allocation functions should comply with the given gfp_mask as much as
possible. The preallocation code in alloc_vmap_area doesn't follow that
pattern and it is using a hardcoded GFP_KERNEL. Although this doesn't
really make much difference because vmalloc is not GFP_NOWAIT compliant
in general (e.g. page table allocations are GFP_KERNEL) there is no
reason to spread that bad habit and it is good to fix the antipattern.
"
> 
> Signed-off-by: Uladzislau Rezki (Sony) 

Acked-by: Michal Hocko 

> ---
>  mm/vmalloc.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b7b443bfdd92..593bf554518d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1064,9 +1064,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>   return ERR_PTR(-EBUSY);
>  
>   might_sleep();
> + gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
>  
> - va = kmem_cache_alloc_node(vmap_area_cachep,
> - gfp_mask & GFP_RECLAIM_MASK, node);
> + va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
>   if (unlikely(!va))
>   return ERR_PTR(-ENOMEM);
>  
> @@ -1074,7 +1074,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>* Only scan the relevant parts containing pointers to other objects
>* to avoid false negatives.
>*/
> - kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
> + kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
>  
>  retry:
>   /*
> @@ -1100,7 +1100,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>* Just proceed as it is. If needed "overflow" path
>* will refill the cache we allocate from.
>*/
> - pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> + pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
>  
>   spin_lock(&vmap_area_lock);
>  
> -- 
> 2.20.1
> 

-- 
Michal Hocko
SUSE Labs


[PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when do preloading

2019-10-16 Thread Uladzislau Rezki (Sony)
alloc_vmap_area() is given a gfp_mask for the page allocator.
Let's respect that mask and consider it even in the case when
doing regular CPU preloading, i.e. where a context can sleep.

Signed-off-by: Uladzislau Rezki (Sony) 
---
 mm/vmalloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b7b443bfdd92..593bf554518d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1064,9 +1064,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
return ERR_PTR(-EBUSY);
 
might_sleep();
+   gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
 
-   va = kmem_cache_alloc_node(vmap_area_cachep,
-   gfp_mask & GFP_RECLAIM_MASK, node);
+   va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
if (unlikely(!va))
return ERR_PTR(-ENOMEM);
 
@@ -1074,7 +1074,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 * Only scan the relevant parts containing pointers to other objects
 * to avoid false negatives.
 */
-   kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
+   kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
 
 retry:
/*
@@ -1100,7 +1100,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 * Just proceed as it is. If needed "overflow" path
 * will refill the cache we allocate from.
 */
-   pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
+   pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 
spin_lock(&vmap_area_lock);
 
-- 
2.20.1
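
For readers unfamiliar with the preload in alloc_vmap_area(): it follows
the common "allocate before taking the lock" pattern, and the patch's
point is that this pre-lock allocation should honour the caller's
gfp_mask rather than a hardcoded GFP_KERNEL. Below is a hedged userspace
analogue of the pattern with illustrative names only (a pthread spinlock
stands in for vmap_area_lock, malloc() for kmem_cache_alloc_node(), and
a single global slot for the per-CPU preload).

#include <pthread.h>
#include <stdlib.h>

static pthread_spinlock_t vmap_lock;	/* stands in for vmap_area_lock */
static void *preload_slot;		/* stands in for the per-CPU preload */

/*
 * Sketch of the preload pattern (illustrative, not the kernel's code):
 * allocate the spare object while sleeping is still allowed, then take
 * the spinlock and stash the spare if the slot is empty.
 */
static void insert_with_preload(void)
{
	/* May block; that is fine, no spinlock is held yet. */
	void *pva = malloc(64);

	pthread_spin_lock(&vmap_lock);
	if (!preload_slot) {
		preload_slot = pva;	/* install the spare */
		pva = NULL;
	}
	/* ... insertion that may consume preload_slot on failure ... */
	pthread_spin_unlock(&vmap_lock);

	free(pva);			/* spare not installed; drop it */
}

int main(void)
{
	pthread_spin_init(&vmap_lock, PTHREAD_PROCESS_PRIVATE);
	insert_with_preload();
	pthread_spin_destroy(&vmap_lock);
	return 0;
}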