Re: [PATCH 1/9] mm/page_alloc: Rename alloced to allocated

2021-04-12 Thread Vlastimil Babka
On 3/25/21 12:42 PM, Mel Gorman wrote:
> Review feedback of the bulk allocator twice found problems with "alloced"
> being a counter for pages allocated. The naming was based on the API name
> "alloc" and on the idea that verbal communication about malloc tends to
> use the fake word "malloced" instead of the fake word "mallocated".
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
> 
> Signed-off-by: Mel Gorman 

Acked-by: Vlastimil Babka 

> ---
>  mm/page_alloc.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index dfa9af064f74..8a3e13277e22 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2908,7 +2908,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>   unsigned long count, struct list_head *list,
>   int migratetype, unsigned int alloc_flags)
>  {
> - int i, alloced = 0;
> + int i, allocated = 0;
>  
>   spin_lock(&zone->lock);
>   for (i = 0; i < count; ++i) {
> @@ -2931,7 +2931,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>* pages are ordered properly.
>*/
>   list_add_tail(&page->lru, list);
> - alloced++;
> + allocated++;
>   if (is_migrate_cma(get_pcppage_migratetype(page)))
>   __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
> -(1 << order));
> @@ -2940,12 +2940,12 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>   /*
>* i pages were removed from the buddy list even if some leak due
>* to check_pcp_refill failing so adjust NR_FREE_PAGES based
> -  * on i. Do not confuse with 'alloced' which is the number of
> +  * on i. Do not confuse with 'allocated' which is the number of
>* pages added to the pcp list.
>*/
>   __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
>   spin_unlock(&zone->lock);
> - return alloced;
> + return allocated;
>  }
>  
>  #ifdef CONFIG_NUMA
> 



Re: [PATCH 1/9] mm/page_alloc: Rename alloced to allocated

2021-03-25 Thread Matthew Wilcox
On Thu, Mar 25, 2021 at 11:42:20AM +, Mel Gorman wrote:
> Review feedback of the bulk allocator twice found problems with "alloced"
> being a counter for pages allocated. The naming was based on the API name
> "alloc" and on the idea that verbal communication about malloc tends to
> use the fake word "malloced" instead of the fake word "mallocated".
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
> 
> Signed-off-by: Mel Gorman 

Reviewed-by: Matthew Wilcox (Oracle) 


[PATCH 1/9] mm/page_alloc: Rename alloced to allocated

2021-03-25 Thread Mel Gorman
Review feedback of the bulk allocator twice found problems with "alloced"
being a counter for pages allocated. The naming was based on the API name
"alloc" and on the idea that verbal communication about malloc tends to
use the fake word "malloced" instead of the fake word "mallocated".
To be consistent, this preparation patch renames alloced to allocated
in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
names when the bulk allocator is introduced.

Signed-off-by: Mel Gorman 
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dfa9af064f74..8a3e13277e22 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2908,7 +2908,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
unsigned long count, struct list_head *list,
int migratetype, unsigned int alloc_flags)
 {
-   int i, alloced = 0;
+   int i, allocated = 0;
 
	spin_lock(&zone->lock);
for (i = 0; i < count; ++i) {
@@ -2931,7 +2931,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 * pages are ordered properly.
 */
		list_add_tail(&page->lru, list);
-   alloced++;
+   allocated++;
if (is_migrate_cma(get_pcppage_migratetype(page)))
__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
  -(1 << order));
@@ -2940,12 +2940,12 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
/*
 * i pages were removed from the buddy list even if some leak due
 * to check_pcp_refill failing so adjust NR_FREE_PAGES based
-* on i. Do not confuse with 'alloced' which is the number of
+* on i. Do not confuse with 'allocated' which is the number of
 * pages added to the pcp list.
 */
__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
	spin_unlock(&zone->lock);
-   return alloced;
+   return allocated;
 }
 
 #ifdef CONFIG_NUMA
-- 
2.26.2