On Mon, Nov 02, 2020 at 06:39:20AM -0800, Chris Goldsworthy wrote:
> The current approach to increasing CMA utilization introduced in
> commit 16867664936e ("mm,page_alloc,cma: conditionally prefer cma
> pageblocks for movable allocations") increases CMA utilization by
> redirecting MIGRATE_MOVABLE
On Mon, Nov 02, 2020 at 03:32:59PM +0100, Greg Kroah-Hartman wrote:
> On Mon, Nov 02, 2020 at 02:08:36PM +0000, Matthew Wilcox wrote:
> > On Mon, Nov 02, 2020 at 02:33:43PM +0100, Greg Kroah-Hartman wrote:
> > > > Oh, ugh, sysfs_emit() should be able to work on a buffer th
On Mon, Nov 02, 2020 at 01:21:39PM +0800, Rong Chen wrote:
> On 10/30/20 10:58 PM, Matthew Wilcox wrote:
> > Can you reproduce this? Here's my results:
[snipped]
>
> Hi Matthew,
>
> IIUC, yes, we can reproduce it, here is the result from the server:
>
> $
On Mon, Nov 02, 2020 at 02:33:43PM +0100, Greg Kroah-Hartman wrote:
> > Oh, ugh, sysfs_emit() should be able to work on a buffer that isn't
> > page aligned. Greg, how about this?
>
> How can sysfs_emit() be called on a non-page-aligned buffer? It's being
> used on the buffer that was passed to
On Sun, Nov 01, 2020 at 01:43:13PM -0800, Joe Perches wrote:
> > Why did you change this?
>
> Are you asking about the function argument alignment or the commit message?
The indentation. Don't change the fucking indentation, Joe.
> > Look, this isn't performance sensitive code. Just do somethi
On Sun, Nov 01, 2020 at 10:27:38PM +0100, Paweł Jasiak wrote:
> I am trying to run examples from man fanotify.7 but fanotify_mark always
> fail with errno = EFAULT.
>
> fanotify_mark declaration is
>
> SYSCALL_DEFINE5(fanotify_mark, int, fanotify_fd, unsigned int, flags,
>
On Sun, Nov 01, 2020 at 01:04:35PM -0800, Joe Perches wrote:
> On Sun, 2020-11-01 at 20:48 +0000, Matthew Wilcox wrote:
> > On Sun, Nov 01, 2020 at 12:12:51PM -0800, Joe Perches wrote:
> > > @@ -4024,7 +4024,7 @@ int __init shmem_init(void)
> > >
On Sun, Nov 01, 2020 at 12:12:51PM -0800, Joe Perches wrote:
> @@ -4024,7 +4024,7 @@ int __init shmem_init(void)
>
> #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
> static ssize_t shmem_enabled_show(struct kobject *kobj,
> - struct kobj_attribute *attr, char *buf
On Fri, Oct 30, 2020 at 11:57:15AM -0400, Zi Yan wrote:
> In isolate_migratepages_block, when cc->alloc_contig is true, we are
> able to isolate compound pages, but nr_migratepages and nr_isolated did
> not count compound pages correctly, causing us to isolate more pages
> than we thought. Use thp_nr_p
On Fri, Oct 30, 2020 at 10:02:45PM +0800, Chen, Rong A wrote:
> On 10/30/2020 9:17 PM, Matthew Wilcox wrote:
> > On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
> > > De
On Fri, Oct 30, 2020 at 08:14:40AM -0600, Jonathan Corbet wrote:
> On Fri, 30 Oct 2020 15:10:26 +0100
> Mauro Carvalho Chehab wrote:
>
> > I see a few alternatives:
> >
> > 1) fix automarkup.py for it to work again with python 2.7;
> >
> > 2) conf.py could gain some logic to disable automarkup
On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
> Details are as below:
> -->
>
>
> To reproduce:
>
> git clone https://github.com/intel/lkp-tests.git
> cd lkp-tests
On Thu, Oct 29, 2020 at 11:18:06PM +0100, Thomas Gleixner wrote:
> This series provides kmap_local.* iomap_local variants which only disable
> migration to keep the virtual mapping address stable across preemption,
> but do neither disable pagefaults nor preemption. The new functions can be
> used
On Wed, Oct 28, 2020 at 02:11:06PM +, David Howells wrote:
> +static inline unsigned int afs_page_dirty_resolution(void)
I've been using size_t for offsets within a struct page. I don't know
that we'll ever support pages larger than 2GB (they're completely
impractical with today's bus speeds)
On Wed, Oct 28, 2020 at 05:05:08PM +, David Howells wrote:
> Matthew Wilcox wrote:
>
> > > +{
> > > + if (PAGE_SIZE - 1 <= __AFS_PAGE_PRIV_MASK)
> > > + return 1;
> > > + else
> > > + return PAGE_SIZE / (__AFS_PAGE_PRIV_MA
On Wed, Oct 28, 2020 at 02:10:24PM +, David Howells wrote:
> +++ b/fs/afs/dir.c
> @@ -283,6 +283,7 @@ static struct afs_read *afs_read_dir(struct afs_vnode
> *dvnode, struct key *key)
>
> set_page_private(req->pages[i], 1);
> SetPagePrivate(req->pa
On Tue, Oct 27, 2020 at 06:58:09PM +, Christoph Hellwig wrote:
> > +/**
> > + * mapping_seek_hole_data - Seek for SEEK_DATA / SEEK_HOLE in the page
> > cache.
> > + * @mapping: Address space to search.
> > + * @start: First byte to consider.
> > + * @end: Limit of search (exclusive).
> > + * @
On Mon, Oct 26, 2020 at 05:35:46PM +0100, David Sterba wrote:
> On Sun, Oct 04, 2020 at 07:04:27PM +0100, Matthew Wilcox (Oracle) wrote:
> > On 32-bit systems, this shift will overflow for files larger than 4GB.
> >
> > Cc: sta...@vger.kernel.org
> > Fixes: 53b381b3ab
On Mon, Oct 26, 2020 at 10:51:02PM +0800, Muchun Song wrote:
> +static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
> +{
> + struct mm_struct *mm = &init_mm;
> + struct page *page;
> + pmd_t old_pmd, _pmd;
> + int i;
> +
> + old_pmd = READ_ONCE(*pmd);
> +
On Mon, Oct 26, 2020 at 10:50:55PM +0800, Muchun Song wrote:
> For tail pages, the value of compound_dtor is the same. So we can reuse
compound_dtor is only set on the first tail page. compound_head is
what you mean here, I think.
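The distinction matters because tail pages locate their head page through the compound_head encoding rather than through compound_dtor. A user-space model of that encoding is sketched below; the struct and function names are illustrative, not the kernel's actual definitions (the real code lives in include/linux/page-flags.h):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of how tail pages point at their head page:
 * the low bit of the compound_head word marks a tail page, and the
 * remaining bits hold the head page's address. */
struct page_model {
	uintptr_t compound_head; /* head address | 1 on tail pages, else 0 */
};

static void set_compound_head(struct page_model *tail,
			      struct page_model *head)
{
	tail->compound_head = (uintptr_t)head + 1;
}

static bool page_is_tail(const struct page_model *page)
{
	return page->compound_head & 1;
}

static struct page_model *model_compound_head(struct page_model *page)
{
	if (page_is_tail(page))
		return (struct page_model *)(page->compound_head - 1);
	return page;
}
```

Following this model, every tail page resolves to the same head page, which is why a per-compound-page value belongs on the head rather than being replicated into each tail.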
On Sun, Oct 25, 2020 at 09:44:07PM -0700, John Hubbard wrote:
> On 10/25/20 9:21 PM, Matthew Wilcox wrote:
> > I don't think the page pinning approach is ever valid. For file
>
> Could you qualify that? Surely you don't mean that the entire pin_user_pages
> story is a
On Mon, Oct 26, 2020 at 10:49:48AM +0100, Jan Kara wrote:
> On Thu 22-10-20 01:49:06, Matthew Wilcox wrote:
> > On Wed, Oct 21, 2020 at 08:30:18PM -0400, Qian Cai wrote:
> > > Today's linux-next starts to trigger this wondering if anyone has any
> > > clue.
>
On Mon, Oct 26, 2020 at 11:48:06AM +0100, Jan Kara wrote:
> > +static inline loff_t page_seek_hole_data(struct page *page,
> > + loff_t start, loff_t end, bool seek_data)
> > +{
> > + if (xa_is_value(page) || PageUptodate(page))
>
> Please add a comment here that this is currently tmpf
On Thu, Oct 22, 2020 at 12:58:14PM -0700, John Hubbard wrote:
> On 10/22/20 4:49 AM, Matthew Wilcox wrote:
> > On Tue, Oct 20, 2020 at 01:25:59AM -0700, John Hubbard wrote:
> > > Should copy_to_guest() use pin_user_pages_unlocked() instead of
> > > gup_unlocked?
>
All callers want to fetch the full size of the pvec.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
include/linux/pagevec.h | 2 +-
mm/swap.c | 4 ++--
mm/truncate.c | 5 ++---
3 files changed, 5 insertions(+), 6
There is a lot of common code in find_get_entries(),
find_get_pages_range() and find_get_pages_range_tag(). Factor out
xas_find_get_entry() which simplifies all three functions.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
mm/filemap.c | 98
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
mm/shmem.c | 11 +--
1 file changed, 1 insertion(+), 10 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 537c137698f8..a33972126b60 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -842,7 +
Simplifies the callers and uses the existing functionality
in find_get_entries(). We can also drop the final argument of
truncate_exceptional_pvec_entries() and simplify the logic in that
function.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
All callers of find_get_entries() use a pvec, so pass it directly
instead of manipulating it in the caller.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
include/linux/pagemap.h | 3 +--
mm/filemap.c            | 21 +
mm
first and second loops through the address space.
After this patch, that functionality is left for the second loop, which
is arguably more appropriate since the first loop is supposed to run
through all the pages quickly, and splitting a page can sleep.
Signed-off-by: Matthew Wilcox (Oracle
This simplifies the callers and leads to a more efficient implementation
since the XArray has this functionality already.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
include/linux/pagemap.h | 4 ++--
mm/filemap.c            | 9
pagevec_lookup_entries() is now just a wrapper around find_get_entries()
so remove it and convert all its callers.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
include/linux/pagevec.h | 3 ---
mm/swap.c | 36
s for filesystems.
Matthew Wilcox (Oracle) (12):
mm: Make pagecache tagged lookups return only head pages
mm/shmem: Use pagevec_lookup in shmem_unlock_mapping
mm/filemap: Add helper for finding pages
mm/filemap: Add mapping_seek_hole_data
mm: Add and use find_lock_entries
mm: Add an 'end&
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
mm/filemap.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index d5e7c2029d16..edde5dc0d28f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2066,7
n add some more complex logic to restore
the optimisation if it proves to be worthwhile.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
---
mm/internal.h | 1 +
mm/shmem.c    | 97 ++---
mm/trunca
append to a pagevec with existing contents, although we
don't make use of that functionality anywhere yet.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
include/linux/pagemap.h | 2 --
mm/filemap.c
Rewrite shmem_seek_hole_data() and move it to filemap.c.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
---
include/linux/pagemap.h | 2 ++
mm/filemap.c            | 76 +
mm/shmem.c              | 72
On Thu, Oct 01, 2020 at 09:17:28AM +0200, Jan Kara wrote:
> > I have a followup patch which isn't part of this series which fixes it:
> >
> > http://git.infradead.org/users/willy/pagecache.git/commitdiff/364283163847d1c106463223b858308c730592a1
>
> Yeah, that looks good. How about partial THPs? T
On Sun, Oct 25, 2020 at 11:56:52AM -0400, Theodore Y. Ts'o wrote:
> On Sun, Oct 25, 2020 at 04:44:38AM +, Matthew Wilcox wrote:
> > @@ -3068,6 +3069,12 @@ static int submit_bh_wbc(int op, int op_flags,
> > struct buffer_head *bh,
> > }
>
On my laptop, I have about 31MB allocated to buffer_heads.
buffer_head       182728  299910    104   39    1 : tunables    0    0    0 : slabdata   7690   7690      0
Reducing the size of the buffer_head by 8 bytes gets us to 96 bytes,
which means we get 42 per page instead of 39 and saves me 2
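The objects-per-page numbers above can be sanity-checked directly. This is a deliberately simplified calculation (it ignores slab management overhead and alignment, which the real allocator accounts for), but it reproduces the figures quoted:

```c
/* Objects of a given size that fit in one 4096-byte page.
 * Simplified: ignores per-slab overhead and object alignment. */
static unsigned int objs_per_page(unsigned int obj_size)
{
	return 4096 / obj_size;
}
```

With the current 104-byte buffer_head this gives 39 objects per page; trimming 8 bytes to reach 96 gives 42, matching the slabinfo output above.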
s set
in sysfs for anon THPs. What do others think?
> Signed-off-by: Rik van Riel
> ---
> v4: rename alloc_hugepage_direct_gfpmask to vma_thp_gfp_mask (Matthew Wilcox)
> v3: fix NULL vma issue spotted by Hugh Dickins & tested
> v2: move gfp calculation to shmem_getpage_gfp
On Fri, Oct 23, 2020 at 04:47:08PM -0400, Rik van Riel wrote:
> +++ b/include/linux/gfp.h
> @@ -614,6 +614,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
> extern void pm_restrict_gfp_mask(void);
> extern void pm_restore_gfp_mask(void);
>
> +extern gfp_t alloc_hugepage_direct_gfpmask(struct
On Fri, Oct 23, 2020 at 09:13:35AM -0700, Eric Biggers wrote:
> On Fri, Oct 23, 2020 at 02:21:38PM +0100, Matthew Wilcox wrote:
> > I wonder about allocating bios that can accommodate more bvecs. Not sure
> > how often filesystems have adjacent blocks which go into non-adjace
On Fri, Oct 23, 2020 at 06:44:23PM +0300, Konstantin Komarov wrote:
> +
> +/*ntfs_readpage*/
> +/*ntfs_readpages*/
> +/*ntfs_writepage*/
> +/*ntfs_writepages*/
> +/*ntfs_block_truncate_page*/
What are these for?
> +int ntfs_readpage(struct file *file, struct page *page)
> +{
> + int err;
> +
On Fri, Oct 23, 2020 at 06:33:41PM +0200, Mauro Carvalho Chehab wrote:
> /**
> - * This helper is similar with the above one, except that it accounts for
> pages
> - * that are likely on a pagevec and count them in @nr_pagevec, which will
> used by
> + * invalidate_mapping_pagevec - This helper
On Thu, Oct 22, 2020 at 04:40:11PM -0700, Eric Biggers wrote:
> On Thu, Oct 22, 2020 at 10:22:25PM +0100, Matthew Wilcox (Oracle) wrote:
> > +static int readpage_submit_bhs(struct page *page, struct blk_completion
> > *cmpl,
> > + unsigned int nr, str
On Thu, Oct 22, 2020 at 11:40:53PM -0400, Rik van Riel wrote:
> On Thu, 2020-10-22 at 19:54 -0700, Hugh Dickins wrote:
> > Michal is right to remember pushback before, because tmpfs is a
> > filesystem, and "huge=" is a mount option: in using a huge=always
> > filesystem, the user has already decla
On Fri, Oct 23, 2020 at 08:22:07AM +0200, Hannes Reinecke wrote:
> On 10/22/20 5:22 PM, Matthew Wilcox wrote:
> Hmm. You are aware, of course, that hch et al are working on replacing bhs
> with iomap, right?
$ git shortlog --author=Wilcox origin/master -- fs/iomap |head -1
Matthew Wilco
Pass a bio to decrypt_bio instead of a buffer_head to decrypt_bh.
Another step towards doing decryption per-BIO instead of per-BH.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/buffer.c | 21 +++--
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/fs/buffer.c b/fs
Use the new decrypt_end_bio() instead of readpage_end_bio() if
fscrypt needs to be used. Remove the old end_buffer_async_read()
now that all BHs go through readpage_end_bio().
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/buffer.c | 198
1
This new data structure allows a task to wait for N things to complete.
Usually the submitting task will handle cleanup, but if it is killed,
the last completer will take care of it.
Signed-off-by: Matthew Wilcox (Oracle)
---
block/blk-core.c| 61
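The counted-completion pattern described above (submit N I/Os, wait for all of them, with the last completer handling cleanup if the waiter has gone away) can be modeled in user space. The sketch below uses pthreads and invented names such as `n_completion`; it illustrates the wait-for-N idea only and is not the actual blk_completion implementation:

```c
#include <pthread.h>

/* User-space model of a counted completion: the submitter records how
 * many events it expects, each completer decrements the count, and the
 * waiter wakes when the count reaches zero. */
struct n_completion {
	pthread_mutex_t lock;
	pthread_cond_t done;
	unsigned int pending;
};

static void n_completion_init(struct n_completion *c, unsigned int n)
{
	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->done, NULL);
	c->pending = n;
}

/* Called once per completed event, e.g. from each I/O's end handler. */
static void n_complete(struct n_completion *c)
{
	pthread_mutex_lock(&c->lock);
	if (--c->pending == 0)
		pthread_cond_broadcast(&c->done);
	pthread_mutex_unlock(&c->lock);
}

/* Block until all expected events have completed. */
static void n_wait(struct n_completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (c->pending > 0)
		pthread_cond_wait(&c->done, &c->lock);
	pthread_mutex_unlock(&c->lock);
}
```

The "last completer cleans up" behaviour would hang off the point where `pending` hits zero: whoever decrements it to zero knows no further completions are outstanding and can free the structure if the submitter was killed.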
If the filesystem returns an error from get_block, report it
instead of ineffectually setting PageError. Don't bother starting
any I/Os in this case since they won't bring the page Uptodate.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/buffer.c | 24 ++--
1 fi
I don't have a system with fscrypt enabled, so I'd appreciate some
testing from the fscrypt people.
Matthew Wilcox (Oracle) (6):
block: Add blk_completion
fs: Return error from block_read_full_page
fs: Convert block_read_full_page to be synchronous
fs: Hoist fscrypt decr
Use the new blk_completion infrastructure to wait for multiple I/Os.
Also coalesce adjacent buffer heads into a single BIO instead of
submitting one BIO per buffer head. This doesn't work for fscrypt yet,
so keep the old code around for now.
Signed-off-by: Matthew Wilcox (Oracle)
--
This is prep work for doing decryption at the BIO level instead of
the BH level. It still works on one BH at a time for now.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/buffer.c | 45 +
1 file changed, 21 insertions(+), 24 deletions(-)
diff --git
On Thu, Oct 22, 2020 at 11:35:26AM -0400, Qian Cai wrote:
> On Thu, 2020-10-22 at 01:49 +0100, Matthew Wilcox wrote:
> > On Wed, Oct 21, 2020 at 08:30:18PM -0400, Qian Cai wrote:
> > > Today's linux-next starts to trigger this wondering if anyone has any
> > >
On Thu, Oct 22, 2020 at 07:23:33AM -0600, William Kucharski wrote:
>
>
> > On Oct 21, 2020, at 6:49 PM, Matthew Wilcox wrote:
> >
> > On Wed, Oct 21, 2020 at 08:30:18PM -0400, Qian Cai wrote:
> >> Today's linux-next starts to trigger this wondering if a
On Thu, Oct 22, 2020 at 04:35:17PM +, David Laight wrote:
> Wait...
> readv(2) defines:
> ssize_t readv(int fd, const struct iovec *iov, int iovcnt);
It doesn't really matter what the manpage says. What does the AOSP
libc header say?
> But the syscall is defined as:
>
> SYSCALL_DEFINE
I'm working on making readpage synchronous so that it can actually return
errors instead of futilely setting PageError. Something that's common
between most of the block based filesystems is the need to submit N
I/Os and wait for them to all complete (some filesystems don't support
sub-page block
On Tue, Oct 20, 2020 at 01:25:59AM -0700, John Hubbard wrote:
> Should copy_to_guest() use pin_user_pages_unlocked() instead of gup_unlocked?
> We wrote a "Case 5" in Documentation/core-api/pin_user_pages.rst, just for
> this
> situation, I think:
>
>
> CASE 5: Pinning in order to write to the
On Wed, Oct 21, 2020 at 08:30:18PM -0400, Qian Cai wrote:
> Today's linux-next starts to trigger this wondering if anyone has any clue.
I've seen that occasionally too. I changed that BUG_ON to VM_BUG_ON_PAGE
to try to get a clue about it. Good to know it's not the THP patches
since they aren't
On Wed, Oct 21, 2020 at 03:57:45PM -0400, Kent Overstreet wrote:
> }
> -ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
> +ALLOW_ERROR_INJECTION(__add_to_page_cache, ERRNO);
[..]
> +int add_to_page_cache(struct page *page, struct address_space *mapping,
> + pgoff_t offs
On Wed, Oct 21, 2020 at 03:39:26PM -0400, Jeff Layton wrote:
> With the merge of 2e1692966034 (ceph: have ceph_writepages_start call
> pagevec_lookup_range_tag), nothing calls this anymore.
>
> Cc: Matthew Wilcox
> Signed-off-by: Jeff Layton
Reviewed-by: Matthew Wilcox (Oracle)
On Wed, Oct 21, 2020 at 03:55:55PM +0300, Sergei Shtepa wrote:
> The 10/21/2020 14:44, Matthew Wilcox wrote:
> > I don't understand why O_DIRECT gets to bypass the block filter. Nor do
> > I understand why anybody would place a block filter on the swap device.
> > B
> Fixes: a8cf7f272b5a ("mm: add find_lock_head")
> Signed-off-by: Mauro Carvalho Chehab
Reviewed-by: Matthew Wilcox (Oracle)
On Wed, Oct 21, 2020 at 09:21:36AM +, Damien Le Moal wrote:
> > + * submit_bio_direct - submit a bio to the block device layer for I/O
> > + * bypass filter.
> > + * @bio: The bio describing the location in memory and on the device.
> > *
> > + * Description:
You don't need this line.
> >
On Wed, Oct 21, 2020 at 11:55:57AM +0200, Mauro Carvalho Chehab wrote:
> Hi Matthew,
>
> Em Tue, 13 Oct 2020 13:26:54 +0100
> Matthew Wilcox escreveu:
>
> > On Tue, Oct 13, 2020 at 02:14:37PM +0200, Mauro Carvalho Chehab wrote:
> > > Changeset 6c8adf8446a3 ("m
Su (1):
radix-tree: fix the comment of radix_tree_next_slot()
Matthew Wilcox (Oracle) (8):
radix tree test suite: Fix compilation
ida: Free allocated bitmap in error path
XArray: Test two more things about xa_cmpxchg
XArray: Test marked multiorder iterations
On Tue, Oct 20, 2020 at 08:53:07AM +0100, Christoph Hellwig wrote:
> Hmm,
>
> what prevents us from killing of the last ->readpages instance?
> Leaving half-finished API conversions in the tree usually doesn't end
> well..
Dave's working on it. Git tree:
https://git.kernel.org/pub/scm/linux/ker
> Co-developed-by: Matthew Wilcox
Signed-off-by: Matthew Wilcox (Oracle)
> Signed-off-by: Richard Weinberger
And as a bonus,
$ grep PageTables /proc/meminfo
PageTables: 128720 kB
gets more accurate!
On Mon, Oct 19, 2020 at 02:59:11PM -0400, Kent Overstreet wrote:
> @@ -885,29 +886,30 @@ static int __add_to_page_cache_locked(struct page *page,
> page->mapping = NULL;
> /* Leave page->index set: truncation relies upon it */
> put_page(page);
> + __ClearPageLocked(page);
>
https://bugzilla.opensuse.org/show_bug.cgi?id=1175245
> Fixes: 9ae326a69004 ("CacheFiles: A cache that backs onto a mounted
> filesystem")
> Signed-off-by: Takashi Iwai
> Signed-off-by: David Howells
> cc: Matthew Wilcox (Oracle)
Acked-by: Matthew Wilcox (Oracle)
On Sun, Oct 18, 2020 at 08:12:52PM +0300, Mike Rapoport wrote:
> On Sun, Oct 18, 2020 at 04:01:46PM +0100, Matthew Wilcox wrote:
> > On Sun, Oct 18, 2020 at 04:39:27PM +0200, Geert Uytterhoeven wrote:
> > > Hi Matthew,
> > >
> > > On Sun, Oct 18, 2020 at
On Sun, Oct 18, 2020 at 12:13:35PM -0700, James Bottomley wrote:
> On Sun, 2020-10-18 at 19:59 +0100, Matthew Wilcox wrote:
> > On Sat, Oct 17, 2020 at 09:09:28AM -0700, t...@redhat.com wrote:
> > > clang has a number of useful, new warnings see
> > > https:
On Sat, Oct 17, 2020 at 09:09:28AM -0700, t...@redhat.com wrote:
> clang has a number of useful, new warnings see
> https://clang.llvm.org/docs/DiagnosticsReference.html
>
Please
On Sun, Oct 18, 2020 at 04:39:27PM +0200, Geert Uytterhoeven wrote:
> Hi Matthew,
>
> On Sun, Oct 18, 2020 at 4:25 PM Matthew Wilcox wrote:
> > On Sun, Oct 18, 2020 at 04:04:45PM +0200, Geert Uytterhoeven wrote:
> > > The test module to check that free_pages() does
On Sun, Oct 18, 2020 at 04:04:45PM +0200, Geert Uytterhoeven wrote:
> The test module to check that free_pages() does not leak memory does not
> provide any feedback whatsoever its state or progress, but may take some
> time on slow machines. Add the printing of messages upon starting each
> phase
On Thu, Oct 15, 2020 at 06:58:48PM +0100, Christoph Hellwig wrote:
> On Thu, Oct 15, 2020 at 05:43:33PM +0100, Matthew Wilcox wrote:
> > I prefer assigning ctx conditionally to propagating the knowledge
> > that !rac means synchronous. I've gone with this:
>
> And I
On Thu, Oct 15, 2020 at 10:42:03AM +0100, Christoph Hellwig wrote:
> > +static void iomap_read_page_end_io(struct bio_vec *bvec,
> > + struct completion *done, bool error)
>
> I really don't like the parameters here. Part of the problem is
> that ctx is only assigned to bi_private condi
On Thu, Oct 15, 2020 at 10:06:51AM +0100, Christoph Hellwig wrote:
> Don't we also need to handle the new return value in a few other places
> like cachefiles_read_reissue swap_readpage? Maybe those don't get
> called on the currently converted instances, but just leaving them
> without handling A
On Thu, Oct 15, 2020 at 10:02:42AM +0100, Christoph Hellwig wrote:
> On Fri, Oct 09, 2020 at 03:30:48PM +0100, Matthew Wilcox (Oracle) wrote:
> > Ideally all filesystems would return from ->readpage with the page
> > Uptodate and Locked, but it's a bit painful to convert a
On Wed, Oct 14, 2020 at 05:31:21PM -0700, Darrick J. Wong wrote:
> I would like to move all the generic helpers for the vfs remap range
> functionality (aka clonerange and dedupe) into a separate file so that
> they won't be scattered across the vfs and the mm subsystems. The
> eventual goal is to
On Tue, Oct 13, 2020 at 11:44:29AM -0700, Dan Williams wrote:
> On Fri, Oct 9, 2020 at 12:52 PM wrote:
> >
> > From: Ira Weiny
> >
> > The kmap() calls in this FS are localized to a single thread. To avoid
> > the over head of global PKRS updates use the new kmap_thread() call.
> >
> > Cc: Nicol
On Tue, Oct 13, 2020 at 04:09:41PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 13, 2020 at 02:11:16PM +0100, Matthew Wilcox wrote:
> > On Tue, Oct 13, 2020 at 02:52:06PM +0200, Peter Zijlstra wrote:
> > > On Tue, Oct 13, 2020 at 02:14:31PM +0200, Mauro Car
On Tue, Oct 13, 2020 at 02:52:06PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 13, 2020 at 02:14:31PM +0200, Mauro Carvalho Chehab wrote:
> > + = ===
> > + ``.`` acquired while irqs disabled and not in irq context
> > + ``-`` acquired in i
On Tue, Oct 13, 2020 at 02:14:37PM +0200, Mauro Carvalho Chehab wrote:
> Changeset 6c8adf8446a3 ("mm: add find_lock_head") renamed the
> index parameter, but forgot to update the kernel-doc markups
> accordingly.
The patch is correct (thank you!), but the description here references
a git commit i
On Mon, Oct 12, 2020 at 12:53:54PM -0700, Ira Weiny wrote:
> On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> > On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > > kmap_atomic() is always preferred over kmap()/kmap_thread().
> > > k
On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> kmap_atomic() is always preferred over kmap()/kmap_thread().
> kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> always CPU-local and never broadcast.
>
> So, basically, unless you *must* sleep while the mapping
On Mon, Oct 12, 2020 at 02:00:17PM +, linmiaohe wrote:
> Hi all:
>
> Many thanks for brilliant z3fold code. I am reading it and have some
> questions about it. It's very nice of you if you can explain it for me.
> 1.page->private is used in z3fold but PagePrivate flag is never set
s removed
from the cache again. But I have no problem with this approach.
I want to note that this is a silent data corruption for reads.
generic_file_buffered_read() has a reference to the page, so this
patch will fix it, but before it could be copying the wrong data
to userspace.
Reviewed-by: Matthew Wilcox (Oracle)
On Fri, Oct 09, 2020 at 02:34:34PM -0700, Eric Biggers wrote:
> On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.we...@intel.com wrote:
> > The kmap() calls in this FS are localized to a single thread. To avoid
> > the over head of global PKRS updates use the new kmap_thread() call.
> >
> > @@ -2410,
On Fri, Oct 09, 2020 at 08:57:55AM -0600, Jens Axboe wrote:
> > + if (unlikely(!cur_uring)) {
> > int ret;
> >
> > ret = io_uring_alloc_task_context(current);
> > if (unlikely(ret))
> > return ret;
> > }
>
> I think this is missing a:
Having this code inline helps the function read more easily.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 20
1 file changed, 4 insertions(+), 16 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 95b68ec1f22c..0ef06d515532 100644
--- a/mm/filemap.c
The cifs readpage implementation was already synchronous, so use
AOP_UPDATED_PAGE to avoid cycling the page lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/cifs/file.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index
The cramfs readpage implementation was already synchronous, so use
AOP_UPDATED_PAGE to avoid cycling the page lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/cramfs/inode.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index
The 9p readpage implementation was already synchronous, so use
AOP_UPDATED_PAGE to avoid cycling the page lock.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Dominique Martinet
---
fs/9p/vfs_addr.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/9p/vfs_addr.c b
page.
This patchset is against iomap-for-next. Andrew, it would make merging
the THP patchset much easier if you could merge at least the first patch
adding AOP_UPDATED_PAGE during the merge window which opens next week.
Matthew Wilcox (Oracle) (16):
mm: Add AOP_UPDATED_PAGE return value
mm: In
The fuse readpage implementation was already synchronous, so use
AOP_UPDATED_PAGE to avoid cycling the page lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/fuse/file.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 6611ef3269a8..7aa5626bc582
The udf inline data readpage implementation was already synchronous,
so use AOP_UPDATED_PAGE to avoid cycling the page lock.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
---
fs/udf/file.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/udf/file.c b/fs
iomap_set_range_uptodate() is the only caller of
iomap_iop_set_range_uptodate() and it makes future patches easier to
have it inline.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/iomap/buffered-io.c | 24 ++--
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a