On Thu 15-09-16 14:54:59, Kirill A. Shutemov wrote:
> We write back a whole huge page at a time.

This is one of the things I don't understand. Firstly, I didn't see where
changes to writeback like this would happen (maybe they come later).
Secondly, I'm not sure why e.g. writeback should behave atomically with
respect to huge pages. Is this because the radix-tree multiorder entry
tracks dirtiness for us at that granularity? BTW, can you also explain why
we need multiorder entries? What do they solve for us?

I'm sorry for these basic questions but I'd just like to understand how
this is supposed to work...
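(To make sure I'm reading this right, here is how I currently picture a
multiorder entry; a hypothetical Python sketch, not the kernel's actual
radix-tree API: one entry of order N covers 2**N consecutive indices, so a
PMD-sized THP occupies a single slot and a tag set on that entry applies to
the whole range at once.)

```python
# Hypothetical model of a multiorder radix-tree entry (not kernel code).
# One entry of order N covers 2**N consecutive page indices, so a PMD-sized
# THP (order 9 with 4 KiB base pages) takes one slot instead of 512, and a
# tag (e.g. dirty) set on that entry covers the whole huge page.
PMD_ORDER = 9

def slot_for(index, order=PMD_ORDER):
    """Round an index down to the start of the multiorder entry covering it."""
    return index & ~((1 << order) - 1)

# One entry stands in for indices 0..511; a lookup anywhere in that range
# resolves to the same slot, hence sees the same dirty/writeback state.
tree = {slot_for(0): {"page": "thp-head", "order": PMD_ORDER, "dirty": True}}
assert slot_for(0) == slot_for(511) == 0
assert tree[slot_for(300)]["dirty"]  # any subpage index sees the same tag
```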

                                                                Honza


> 
> Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
> ---
>  mm/filemap.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 05b42d3e5ed8..53da93156e60 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -372,9 +372,14 @@ static int __filemap_fdatawait_range(struct address_space *mapping,
>                       if (page->index > end)
>                               continue;
>  
> +                     page = compound_head(page);
>                       wait_on_page_writeback(page);
>                       if (TestClearPageError(page))
>                               ret = -EIO;
> +                     if (PageTransHuge(page)) {
> +                             index = page->index + HPAGE_PMD_NR;
> +                             i += index - pvec.pages[i]->index - 1;
> +                     }
>               }
>               pagevec_release(&pvec);
>               cond_resched();
> -- 
> 2.9.3
> 
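FWIW, the skip arithmetic in the hunk reads correctly to me; a small sketch
of what it computes (hypothetical Python, with HPAGE_PMD_NR assumed to be
512, i.e. a 2 MiB THP on 4 KiB base pages):

```python
# Hypothetical sketch (not the kernel code): how the hunk's arithmetic
# advances a pagevec walk past the tail pages of a transparent huge page
# after the head page's writeback has been waited on once.
HPAGE_PMD_NR = 512  # assumption: 2 MiB huge page / 4 KiB base pages

def pages_to_skip(head_index, slot_index):
    """Extra pagevec slots to skip after handling the compound head.

    Mirrors:  index = page->index + HPAGE_PMD_NR;
              i += index - pvec.pages[i]->index - 1;
    """
    next_index = head_index + HPAGE_PMD_NR   # first index past the THP
    return next_index - slot_index - 1       # slots covered by tail pages

# The slot holds the head page itself, so all 511 tail pages are skipped.
print(pages_to_skip(head_index=0, slot_index=0))  # 511
```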
> 
-- 
Jan Kara <j...@suse.com>
SUSE Labs, CR