Re: [Cluster-devel] [PATCH v6 04/19] mm: Rearrange readahead loop

2020-02-18 Thread Dave Chinner
On Tue, Feb 18, 2020 at 05:57:36AM -0800, Matthew Wilcox wrote:
> On Tue, Feb 18, 2020 at 04:08:24PM +1100, Dave Chinner wrote:
> > On Mon, Feb 17, 2020 at 10:45:45AM -0800, Matthew Wilcox wrote:
> > > From: "Matthew Wilcox (Oracle)" 
> > > 
> > > Move the declaration of 'page' to inside the loop and move the 'kick
> > > off a fresh batch' code to the end of the function for easier use in
> > > subsequent patches.
> > 
> > Stale? the "kick off" code is moved to the tail of the loop, not the
> > end of the function.
> 
> Braino; I meant to write end of the loop.
> 
> > > @@ -183,14 +183,14 @@ void __do_page_cache_readahead(struct address_space *mapping,
> > >   page = xa_load(&mapping->i_pages, page_offset);
> > >   if (page && !xa_is_value(page)) {
> > >   /*
> > > -  * Page already present?  Kick off the current batch of
> > > -  * contiguous pages before continuing with the next
> > > -  * batch.
> > > +  * Page already present?  Kick off the current batch
> > > +  * of contiguous pages before continuing with the
> > > +  * next batch.  This page may be the one we would
> > > +  * have intended to mark as Readahead, but we don't
> > > +  * have a stable reference to this page, and it's
> > > +  * not worth getting one just for that.
> > >*/
> > > - if (readahead_count(&rac))
> > > - read_pages(&rac, &page_pool, gfp_mask);
> > > - rac._nr_pages = 0;
> > > - continue;
> > > + goto read;
> > >   }
> > >  
> > >   page = __page_cache_alloc(gfp_mask);
> > > @@ -201,6 +201,11 @@ void __do_page_cache_readahead(struct address_space *mapping,
> > >   if (page_idx == nr_to_read - lookahead_size)
> > >   SetPageReadahead(page);
> > >   rac._nr_pages++;
> > > + continue;
> > > +read:
> > > + if (readahead_count(&rac))
> > > + read_pages(&rac, &page_pool, gfp_mask);
> > > + rac._nr_pages = 0;
> > >   }
> > 
> > Also, why? This adds a goto from branched code that continues, then
> > adds a continue so the unbranched code doesn't execute the code the
> > goto jumps to. In absence of any explanation, this isn't an
> > improvement and doesn't make any sense...
> 
> I thought I was explaining it ... "for easier use in subsequent patches".

Sorry, my braino there. :) I commented on the problem with the first
part of the sentence, then the rest of the sentence completely
failed to sink in.

-Dave.
-- 
Dave Chinner
da...@fromorbit.com




Re: [Cluster-devel] [PATCH v6 04/19] mm: Rearrange readahead loop

2020-02-18 Thread John Hubbard
On 2/17/20 10:45 AM, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" 
> 
> Move the declaration of 'page' to inside the loop and move the 'kick
> off a fresh batch' code to the end of the function for easier use in
> subsequent patches.
> 
> Signed-off-by: Matthew Wilcox (Oracle) 
> ---
>  mm/readahead.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 15329309231f..3eca59c43a45 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -154,7 +154,6 @@ void __do_page_cache_readahead(struct address_space *mapping,
>   unsigned long lookahead_size)
>  {
>   struct inode *inode = mapping->host;
> - struct page *page;
>   unsigned long end_index;/* The last page we want to read */
>   LIST_HEAD(page_pool);
>   int page_idx;
> @@ -175,6 +174,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
>* Preallocate as many pages as we will need.
>*/
>   for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
> + struct page *page;
>   pgoff_t page_offset = offset + page_idx;
>  
>   if (page_offset > end_index)
> @@ -183,14 +183,14 @@ void __do_page_cache_readahead(struct address_space *mapping,
>   page = xa_load(&mapping->i_pages, page_offset);
>   if (page && !xa_is_value(page)) {
>   /*
> -  * Page already present?  Kick off the current batch of
> -  * contiguous pages before continuing with the next
> -  * batch.
> +  * Page already present?  Kick off the current batch
> +  * of contiguous pages before continuing with the
> +  * next batch.  This page may be the one we would
> +  * have intended to mark as Readahead, but we don't
> +  * have a stable reference to this page, and it's
> +  * not worth getting one just for that.
>*/
> - if (readahead_count(&rac))
> - read_pages(&rac, &page_pool, gfp_mask);
> - rac._nr_pages = 0;
> - continue;


A fine point: you'll get better readability and a less complex function by
factoring that into a static subroutine, instead of jumping around with
goto's. (This clearly wants to be a subroutine, and in fact you've effectively
created one inside this function, at the "read:" label.) Either way, though...


> + goto read;
>   }
>  
>   page = __page_cache_alloc(gfp_mask);
> @@ -201,6 +201,11 @@ void __do_page_cache_readahead(struct address_space *mapping,
>   if (page_idx == nr_to_read - lookahead_size)
>   SetPageReadahead(page);
>   rac._nr_pages++;
> + continue;
> +read:
> + if (readahead_count(&rac))
> + read_pages(&rac, &page_pool, gfp_mask);
> + rac._nr_pages = 0;
>   }
>  
>   /*
> 

...no errors spotted; I'm confident that this patch is correct,

Reviewed-by: John Hubbard 


thanks,
-- 
John Hubbard
NVIDIA



Re: [Cluster-devel] [PATCH v6 04/19] mm: Rearrange readahead loop

2020-02-18 Thread Matthew Wilcox
On Tue, Feb 18, 2020 at 04:08:24PM +1100, Dave Chinner wrote:
> On Mon, Feb 17, 2020 at 10:45:45AM -0800, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" 
> > 
> > Move the declaration of 'page' to inside the loop and move the 'kick
> > off a fresh batch' code to the end of the function for easier use in
> > subsequent patches.
> 
> Stale? the "kick off" code is moved to the tail of the loop, not the
> end of the function.

Braino; I meant to write end of the loop.

> > @@ -183,14 +183,14 @@ void __do_page_cache_readahead(struct address_space *mapping,
> > page = xa_load(&mapping->i_pages, page_offset);
> > if (page && !xa_is_value(page)) {
> > /*
> > -* Page already present?  Kick off the current batch of
> > -* contiguous pages before continuing with the next
> > -* batch.
> > +* Page already present?  Kick off the current batch
> > +* of contiguous pages before continuing with the
> > +* next batch.  This page may be the one we would
> > +* have intended to mark as Readahead, but we don't
> > +* have a stable reference to this page, and it's
> > +* not worth getting one just for that.
> >  */
> > -   if (readahead_count(&rac))
> > -   read_pages(&rac, &page_pool, gfp_mask);
> > -   rac._nr_pages = 0;
> > -   continue;
> > +   goto read;
> > }
> >  
> > page = __page_cache_alloc(gfp_mask);
> > @@ -201,6 +201,11 @@ void __do_page_cache_readahead(struct address_space *mapping,
> > if (page_idx == nr_to_read - lookahead_size)
> > SetPageReadahead(page);
> > rac._nr_pages++;
> > +   continue;
> > +read:
> > +   if (readahead_count(&rac))
> > +   read_pages(&rac, &page_pool, gfp_mask);
> > +   rac._nr_pages = 0;
> > }
> 
> Also, why? This adds a goto from branched code that continues, then
> adds a continue so the unbranched code doesn't execute the code the
> goto jumps to. In absence of any explanation, this isn't an
> improvement and doesn't make any sense...

I thought I was explaining it ... "for easier use in subsequent patches".




Re: [Cluster-devel] [PATCH v6 04/19] mm: Rearrange readahead loop

2020-02-17 Thread Dave Chinner
On Mon, Feb 17, 2020 at 10:45:45AM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" 
> 
> Move the declaration of 'page' to inside the loop and move the 'kick
> off a fresh batch' code to the end of the function for easier use in
> subsequent patches.

Stale? the "kick off" code is moved to the tail of the loop, not the
end of the function.

> @@ -183,14 +183,14 @@ void __do_page_cache_readahead(struct address_space *mapping,
>   page = xa_load(&mapping->i_pages, page_offset);
>   if (page && !xa_is_value(page)) {
>   /*
> -  * Page already present?  Kick off the current batch of
> -  * contiguous pages before continuing with the next
> -  * batch.
> +  * Page already present?  Kick off the current batch
> +  * of contiguous pages before continuing with the
> +  * next batch.  This page may be the one we would
> +  * have intended to mark as Readahead, but we don't
> +  * have a stable reference to this page, and it's
> +  * not worth getting one just for that.
>*/
> - if (readahead_count(&rac))
> - read_pages(&rac, &page_pool, gfp_mask);
> - rac._nr_pages = 0;
> - continue;
> + goto read;
>   }
>  
>   page = __page_cache_alloc(gfp_mask);
> @@ -201,6 +201,11 @@ void __do_page_cache_readahead(struct address_space *mapping,
>   if (page_idx == nr_to_read - lookahead_size)
>   SetPageReadahead(page);
>   rac._nr_pages++;
> + continue;
> +read:
> + if (readahead_count(&rac))
> + read_pages(&rac, &page_pool, gfp_mask);
> + rac._nr_pages = 0;
>   }

Also, why? This adds a goto from branched code that continues, then
adds a continue so the unbranched code doesn't execute the code the
goto jumps to. In the absence of any explanation, this isn't an
improvement and doesn't make any sense...

-- 
Dave Chinner
da...@fromorbit.com




[Cluster-devel] [PATCH v6 04/19] mm: Rearrange readahead loop

2020-02-17 Thread Matthew Wilcox
From: "Matthew Wilcox (Oracle)" 

Move the declaration of 'page' to inside the loop and move the 'kick
off a fresh batch' code to the end of the function for easier use in
subsequent patches.

Signed-off-by: Matthew Wilcox (Oracle) 
---
 mm/readahead.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 15329309231f..3eca59c43a45 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -154,7 +154,6 @@ void __do_page_cache_readahead(struct address_space *mapping,
unsigned long lookahead_size)
 {
struct inode *inode = mapping->host;
-   struct page *page;
unsigned long end_index;/* The last page we want to read */
LIST_HEAD(page_pool);
int page_idx;
@@ -175,6 +174,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
 * Preallocate as many pages as we will need.
 */
for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
+   struct page *page;
pgoff_t page_offset = offset + page_idx;
 
if (page_offset > end_index)
@@ -183,14 +183,14 @@ void __do_page_cache_readahead(struct address_space *mapping,
page = xa_load(&mapping->i_pages, page_offset);
if (page && !xa_is_value(page)) {
/*
-* Page already present?  Kick off the current batch of
-* contiguous pages before continuing with the next
-* batch.
+* Page already present?  Kick off the current batch
+* of contiguous pages before continuing with the
+* next batch.  This page may be the one we would
+* have intended to mark as Readahead, but we don't
+* have a stable reference to this page, and it's
+* not worth getting one just for that.
 */
-   if (readahead_count(&rac))
-   read_pages(&rac, &page_pool, gfp_mask);
-   rac._nr_pages = 0;
-   continue;
+   goto read;
}
 
page = __page_cache_alloc(gfp_mask);
@@ -201,6 +201,11 @@ void __do_page_cache_readahead(struct address_space *mapping,
if (page_idx == nr_to_read - lookahead_size)
SetPageReadahead(page);
rac._nr_pages++;
+   continue;
+read:
+   if (readahead_count(&rac))
+   read_pages(&rac, &page_pool, gfp_mask);
+   rac._nr_pages = 0;
}
 
/*
-- 
2.25.0