Most of the work happens on the head page. Only when we need to copy
data to userspace do we find the relevant subpage.

We are still limited to copying at most PAGE_SIZE per iteration.
Lifting this limitation would require some more work.
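
For illustration, a minimal sketch (not part of the patch) of the
head-page/subpage arithmetic the hunks below rely on. The helper name
is made up; the point is that page flags (PG_readahead, PG_error, ...)
live on the head page of a compound page, while the copy must target
the subpage backing the file offset being read:

	static struct page *subpage_for_index(struct page *page, pgoff_t index)
	{
		/* Flags are kept on the head page of a compound page. */
		page = compound_head(page);

		/*
		 * Subpages of a compound page are physically contiguous,
		 * so the subpage covering 'index' is a fixed offset from
		 * the head (page->index is the head page's index).
		 */
		return page + (index - page->index);
	}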

Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
---
 mm/filemap.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 50afe17230e7..b77bcf6843ee 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1860,6 +1860,7 @@ find_page:
                        if (unlikely(page == NULL))
                                goto no_cached_page;
                }
+               page = compound_head(page);
                if (PageReadahead(page)) {
                        page_cache_async_readahead(mapping,
                                        ra, filp, page,
@@ -1936,7 +1937,8 @@ page_ok:
                 * now we can copy it to user space...
                 */
 
-               ret = copy_page_to_iter(page, offset, nr, iter);
+               ret = copy_page_to_iter(page + index - page->index, offset,
+                               nr, iter);
                offset += ret;
                index += offset >> PAGE_SHIFT;
                offset &= ~PAGE_MASK;
@@ -2356,6 +2358,7 @@ page_not_uptodate:
         * because there really aren't any performance issues here
         * and we need to check for errors.
         */
+       page = compound_head(page);
        ClearPageError(page);
        error = mapping->a_ops->readpage(file, page);
        if (!error) {
-- 
2.9.3
