Re: mm/page_alloc: add a bulk page allocator

2021-03-30 Thread Mel Gorman
On Mon, Mar 29, 2021 at 04:18:09PM +0100, Colin Ian King wrote:
> Hi,
> 
> Static analysis on linux-next with Coverity has found a potential
> uninitialized variable issue in function __alloc_pages_bulk with the
> following commit:
> 
> commit b0e0a469733fa571ddd8fe147247c9561b51b2da
> Author: Mel Gorman 
> Date:   Mon Mar 29 11:12:24 2021 +1100
> 
> mm/page_alloc: add a bulk page allocator
> 
> The analysis is as follows:
> 
> > 
>
> 5050        if (nr_pages - nr_populated == 1)
> 5051                goto failed;
> 5052
> 5053        /* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
> 5054        gfp &= gfp_allowed_mask;
> 5055        alloc_gfp = gfp;
> 
> Uninitialized scalar variable (UNINIT)
> 15. uninit_use_in_call: Using uninitialized value alloc_flags when
> calling prepare_alloc_pages.
> 
> 5056        if (!prepare_alloc_pages(gfp, 0, preferred_nid, nodemask,
>                                      &ac, &alloc_gfp, &alloc_flags))

Ok, so Coverity thinks that alloc_flags is potentially uninitialised and,
without digging into every part of the report, Coverity is right.

> 
>
> So alloc_flags in gfp_to_alloc_flags_cma is being updated with the |=
> operator and we managed to get to this path with uninitialized
> alloc_flags.  Should alloc_flags be initialized to zero in
> __alloc_pages_bulk()?
> 

You are correct that the |= updates an initial value, but I think the
initial value should be ALLOC_WMARK_LOW rather than zero. A value of 0 would
be the same as ALLOC_WMARK_MIN, and that would allow the bulk allocator to
potentially consume too many pages without waking kswapd.  I'll put together
a patch shortly. Thanks Colin!
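
For reference, the shape of the fix is simply to give alloc_flags an initial
value at its declaration in __alloc_pages_bulk(), along these lines (a
hand-written sketch, not the final patch):

 	struct alloc_context ac;
 	gfp_t alloc_gfp;
-	unsigned int alloc_flags;
+	unsigned int alloc_flags = ALLOC_WMARK_LOW;
 	int nr_populated = 0;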

-- 
Mel Gorman
SUSE Labs


re: mm/page_alloc: add a bulk page allocator

2021-03-29 Thread Colin Ian King
Hi,

Static analysis on linux-next with Coverity has found a potential
uninitialized variable issue in function __alloc_pages_bulk with the
following commit:

commit b0e0a469733fa571ddd8fe147247c9561b51b2da
Author: Mel Gorman 
Date:   Mon Mar 29 11:12:24 2021 +1100

mm/page_alloc: add a bulk page allocator

The analysis is as follows:

5023 unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
5024                        nodemask_t *nodemask, int nr_pages,
5025                        struct list_head *page_list,
5026                        struct page **page_array)
5027 {
5028        struct page *page;
5029        unsigned long flags;
5030        struct zone *zone;
5031        struct zoneref *z;
5032        struct per_cpu_pages *pcp;
5033        struct list_head *pcp_list;
5034        struct alloc_context ac;
5035        gfp_t alloc_gfp;
1. var_decl: Declaring variable alloc_flags without initializer.
5036        unsigned int alloc_flags;
5037        int nr_populated = 0;
5038
2. Condition !!(nr_pages <= 0), taking false branch.
5039        if (unlikely(nr_pages <= 0))
5040                return 0;
5041
5042        /*
5043         * Skip populated array elements to determine if any pages need
5044         * to be allocated before disabling IRQs.
5045         */
3. Condition page_array, taking true branch.
4. Condition page_array[nr_populated], taking true branch.
5. Condition nr_populated < nr_pages, taking true branch.
7. Condition page_array, taking true branch.
8. Condition page_array[nr_populated], taking true branch.
9. Condition nr_populated < nr_pages, taking true branch.
11. Condition page_array, taking true branch.
12. Condition page_array[nr_populated], taking true branch.
13. Condition nr_populated < nr_pages, taking false branch.
5046        while (page_array && page_array[nr_populated] &&
                        nr_populated < nr_pages)
6. Jumping back to the beginning of the loop.
10. Jumping back to the beginning of the loop.
5047                nr_populated++;
5048
5049        /* Use the single page allocator for one page. */
14. Condition nr_pages - nr_populated == 1, taking false branch.
5050        if (nr_pages - nr_populated == 1)
5051                goto failed;
5052
5053        /* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
5054        gfp &= gfp_allowed_mask;
5055        alloc_gfp = gfp;

Uninitialized scalar variable (UNINIT)
15. uninit_use_in_call: Using uninitialized value alloc_flags when
calling prepare_alloc_pages.

5056        if (!prepare_alloc_pages(gfp, 0, preferred_nid, nodemask,
                                     &ac, &alloc_gfp, &alloc_flags))
5057                return 0;

And in prepare_alloc_pages():

4957 static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
4958                int preferred_nid, nodemask_t *nodemask,
4959                struct alloc_context *ac, gfp_t *alloc_gfp,
4960                unsigned int *alloc_flags)
4961 {
4962        ac->highest_zoneidx = gfp_zone(gfp_mask);
4963        ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
4964        ac->nodemask = nodemask;
4965        ac->migratetype = gfp_migratetype(gfp_mask);
4966

1. Condition cpusets_enabled(), taking false branch.

4967        if (cpusets_enabled()) {
4968                *alloc_gfp |= __GFP_HARDWALL;
4969                /*
4970                 * When we are in the interrupt context, it is irrelevant
4971                 * to the current task context. It means that any node ok.
4972                 */
4973                if (!in_interrupt() && !ac->nodemask)
4974                        ac->nodemask = &cpuset_current_mems_allowed;
4975                else
4976                        *alloc_flags |= ALLOC_CPUSET;
4977        }
4978
4979        fs_reclaim_acquire(gfp_mask);
4980        fs_reclaim_release(gfp_mask);
4981
2. Condition gfp_mask & 1024U /* (gfp_t)1024U */, taking true branch.
4982        might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
4983
3. Condition should_fail_alloc_page(gfp_mask, order), taking false
branch.
4984        if (should_fail_alloc_page(gfp_mask, order))
4985                return false;
4986
4. read_value: Reading value *alloc_flags when calling
gfp_to_alloc_flags_cma.
4987        *alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);

And in the call to gfp_to_alloc_flags_cma():

in /mm/page_alloc.c

3853 static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
3854                                                   unsigned int alloc_flags)
3855 {
3856 #ifdef CONFIG_CMA
1. Condition gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE, taking true branch.
3857        if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
2. read_value: Reading value alloc_flags.
3858                alloc_flags |= ALLOC_CMA;
3859 #endif
3860        return alloc_flags;
3861 }

So alloc_flags in gfp_to_alloc_flags_cma is being updated with the |=
operator and we managed to get to this path with uninitialized
alloc_flags.  Should alloc_flags be initialized to zero in
__alloc_pages_bulk()?
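
Stripped of the kernel context, the pattern Coverity is flagging boils down
to OR-ing a bit into a local variable that was never given a starting value,
for example (a standalone illustration only; the ALLOC_CMA value here is made
up, not the kernel's definition):

#include <stdio.h>

#define ALLOC_CMA 0x80U   /* illustrative bit value, not the kernel's */

/* Same shape as gfp_to_alloc_flags_cma(): OR one bit into the input. */
static unsigned int or_in_flag(unsigned int alloc_flags)
{
	return alloc_flags | ALLOC_CMA;
}

int main(void)
{
	unsigned int alloc_flags;	/* declared without an initializer */

	/*
	 * Reading alloc_flags here is undefined behaviour in C: the OR
	 * preserves whatever happened to be in that storage, so the result
	 * can contain flag bits the caller never asked for.
	 */
	printf("flags = 0x%x\n", or_in_flag(alloc_flags));
	return 0;
}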