On Mon, Jan 22, 2024 at 10:38:58AM -0500, Jeff Layton wrote:
> On Mon, 2024-01-22 at 12:38 +, David Howells wrote:
> > Filesystems should not be using folio->index not folio_index(folio) and
>
> I think you mean "should be" here.
Also these are not internal functions! They're just functions
Change this automagically with:
>
> perl -p -i -e 's/folio_mapping[(]([^)]*)[)]/\1->mapping/g' fs/erofs/*.c
> perl -p -i -e 's/folio_file_mapping[(]([^)]*)[)]/\1->mapping/g' fs/erofs/*.c
> perl -p -i -e 's/folio_index[(]([^)]*)[)]/\1->index/g' fs/erofs/*.c
>
> Repor
On Thu, Dec 21, 2023 at 04:57:04PM +0800, Yu Kuai wrote:
> @@ -3674,16 +3670,17 @@ struct btrfs_super_block
> *btrfs_read_dev_one_super(struct block_device *bdev,
>	 * Drop the page of the primary superblock, so later read will
>	 * always read from the device.
>
On Tue, Dec 05, 2023 at 08:37:15PM +0800, Yu Kuai wrote:
> +struct folio *bdev_read_folio(struct block_device *bdev, pgoff_t index)
> +{
> + return read_mapping_folio(bdev->bd_inode->i_mapping, index, NULL);
> +}
> +EXPORT_SYMBOL_GPL(bdev_read_folio);
I'm coming to the opinion that 'index' is
On Thu, Nov 09, 2023 at 10:50:45PM +0100, Andreas Gruenbacher wrote:
> On Tue, Nov 7, 2023 at 10:27 PM Matthew Wilcox (Oracle)
> wrote:
> > +static inline void folio_fill_tail(struct folio *folio, size_t offset,
> > + const char *from, size_t len)
>
On Wed, Nov 08, 2023 at 03:06:06PM -0800, Andrew Morton wrote:
> >
> > +/**
> > + * folio_zero_tail - Zero the tail of a folio.
> > + * @folio: The folio to zero.
> > + * @kaddr: The address the folio is currently mapped to.
> > + * @offset: The byte offset in the folio to start zeroing at.
>
>
Instead of unmapping the folio after copying the data to it, then mapping
it again to zero the tail, provide folio_zero_tail() to zero the tail
of an already-mapped folio.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/ext4/inline.c | 3 +--
include/linux/highmem.h | 38
er; these ones seemed like good
examples as they're already partly or completely converted to folios.
Matthew Wilcox (Oracle) (3):
mm: Add folio_zero_tail() and use it in ext4
mm: Add folio_fill_tail() and use it in iomap
gfs2: Convert stuffed_readpage() to stuffed_read_folio()
fs/ext4/inline
Use folio_fill_tail() to implement the unstuffing and folio_end_read()
to simultaneously mark the folio uptodate and unlock it. Unifies a
couple of code paths.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/gfs2/aops.c | 37 +
1 file changed, 17 insertions
The iomap code was limited to PAGE_SIZE bytes; generalise it to cover
an arbitrary-sized folio, and move it to be a common helper.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/iomap/buffered-io.c | 14 ++
include/linux/highmem.h | 38 ++
2 files
dit their driver.
The switch constructs have to be changed to if/else constructs to prevent
GCC from warning on builds with 3-level page tables where PMD_ORDER and
PUD_ORDER have the same value.
Signed-off-by: Matthew Wilcox (Oracle)
---
Documentation/filesystems/locking.rst |
On Sat, Jul 08, 2023 at 03:27:42AM +0900, Hyeonggon Yoo wrote:
> Hmm, was it UAF because it references wrong field ->mapping,
> instead of swapper address space?
Ooh, I know this one!
When a folio is in use as an anonymous page, ->mapping has the bottom
two bits set to 01b. The rest of the
On Fri, Jul 07, 2023 at 02:12:06PM -0400, David Wysochanski wrote:
> I think myself / Daire Byrne may have already tracked this down and I
> found a 1-liner that fixed a similar crash in his environment.
>
> Can you try this patch on top and let me know if it still crashes?
>
On Tue, Jul 04, 2023 at 07:06:26AM -0700, Bart Van Assche wrote:
> On 7/4/23 05:21, Jan Kara wrote:
> > +struct bdev_handle {
> > + struct block_device *bdev;
> > + void *holder;
> > +};
>
> Please explain in the patch description why a holder pointer is introduced
> in struct bdev_handle and
On Tue, Jul 04, 2023 at 02:21:28PM +0200, Jan Kara wrote:
> +struct bdev_handle *blkdev_get_handle_by_dev(dev_t dev, blk_mode_t mode,
> + void *holder, const struct blk_holder_ops *hops)
> +{
> + struct bdev_handle *handle = kmalloc(sizeof(struct bdev_handle),
> +
On Wed, Jun 28, 2023 at 05:34:54PM +0800, Yangtao Li wrote:
> Introduce queue_logical_block_mask() and bdev_logical_block_mask()
> to simplify code, which replace (queue_logical_block_size(q) - 1)
> and (bdev_logical_block_size(bdev) - 1).
The thing is that I know what queue_logical_block_size -
On Thu, Dec 22, 2022 at 03:02:02PM +, David Howells wrote:
> Make filemap_release_folio() check folio_has_private(). Then, in most
> cases, where a call to folio_has_private() is immediately followed by a
> call to filemap_release_folio(), we can get rid of the test in the pair.
>
> The same
On Fri, Dec 23, 2022 at 07:31:14AM -0800, Christoph Hellwig wrote:
> On Thu, Dec 22, 2022 at 03:02:29PM +, David Howells wrote:
> > Make filemap_release_folio() return one of three values:
> >
> > (0) FILEMAP_CANT_RELEASE_FOLIO
> >
> > Couldn't release the folio's private data, so the
On Sat, Sep 10, 2022 at 08:50:54AM +0200, Christoph Hellwig wrote:
> @@ -480,11 +487,14 @@ static inline int ra_alloc_folio(struct
> readahead_control *ractl, pgoff_t index,
> if (index == mark)
> folio_set_readahead(folio);
> err = filemap_add_folio(ractl->mapping,
On Mon, Mar 21, 2022 at 03:30:52PM +, David Howells wrote:
> Matthew Wilcox wrote:
>
> > Absolutely; just use xa_lock() to protect both setting & testing the
> > flag.
>
> How should Jeffle deal with xarray dropping the lock internally in order to do
> an allo
On Mon, Mar 21, 2022 at 11:18:05PM +0800, JeffleXu wrote:
> >> Besides, IMHO read-write lock shall be more performance friendly, since
> >> most cases are the read side.
> >
> > That's almost never true. rwlocks are usually a bad idea because you
> > still have to bounce the cacheline, so you
On Mon, Mar 21, 2022 at 10:08:47PM +0800, JeffleXu wrote:
> reqs_lock is also used to protect the check of cache->flags. Please
> refer to patch 4 [1] of this patchset.
Yes, that's exactly what I meant by "bad idea".
> ```
> + /*
> + * Enqueue the pending request.
> + *
> + *
On Wed, Mar 16, 2022 at 09:17:04PM +0800, Jeffle Xu wrote:
> +#ifdef CONFIG_CACHEFILES_ONDEMAND
> +	struct xarray	reqs;		/* xarray of pending on-demand requests */
> +	rwlock_t	reqs_lock;	/* Lock for reqs xarray */
Why do you
On Wed, Jan 12, 2022 at 05:02:13PM +0800, JeffleXu wrote:
> I'm afraid IDR can't be replaced by xarray here. Because we need an 'ID'
> for each pending read request, so that after fetching data from remote,
> user daemon could notify kernel which read request has finished by this
> 'ID'.
>
>
On Mon, Dec 27, 2021 at 08:54:40PM +0800, Jeffle Xu wrote:
> + spin_lock(&cache->reqs_lock);
> + ret = idr_alloc(&cache->reqs, req, 0, 0, GFP_KERNEL);
GFP_KERNEL while holding a spinlock?
You should be using an XArray instead of an IDR in new code anyway.
On Thu, Nov 04, 2021 at 11:09:19PM -0400, Theodore Ts'o wrote:
> On Thu, Nov 04, 2021 at 12:04:43PM -0700, Darrick J. Wong wrote:
> > > Note that I've avoided implementing read/write fops for dax devices
> > > partly out of concern for not wanting to figure out shared-mmap vs
> > > write coherence
As far as I can tell, the following filesystems support compressed data:
bcachefs, btrfs, erofs, ntfs, squashfs, zisofs
I'd like to make it easier and more efficient for filesystems to
implement compressed data. There are a lot of approaches in use today,
but none of them seem quite right to
On Thu, Jul 29, 2021 at 05:54:56AM +0200, Andreas Gruenbacher wrote:
> > - /* inline data must start page aligned in the file */
> > - if (WARN_ON_ONCE(offset_in_page(iomap->offset)))
> > - return -EIO;
>
> Maybe add a WARN_ON_ONCE(size > PAGE_SIZE - poff) here?
Sure!
Remove the restriction that inline data must start on a page boundary
in a file. This allows, for example, the first 2KiB to be stored out
of line and the trailing 30 bytes to be stored inline.
Signed-off-by: Matthew Wilcox (Oracle)
---
v2:
- Rebase on top of iomap: Support file tail packing
On Tue, Jul 27, 2021 at 10:20:42AM +0200, David Sterba wrote:
> On Mon, Jul 26, 2021 at 02:17:02PM +0200, Christoph Hellwig wrote:
> > > Subject: iomap: Support tail packing
> >
> > I can't say I like this "tail packing" language here when we have the
> > perfectly fine inline wording. Same for
Please make the Subject: 'iomap: Support file tail packing' as there
are clearly a number of ways to make the inline data support more
flexible ;-)
Other than that:
Reviewed-by: Matthew Wilcox (Oracle)
On Mon, Jul 26, 2021 at 01:06:11PM +0200, Andreas Gruenbacher wrote:
> @@ -671,11 +683,11 @@ static size_t iomap_write_end_inline(struct inode
> *inode, struct page *page,
> void *addr;
>
> WARN_ON_ONCE(!PageUptodate(page));
> - BUG_ON(pos + copied > PAGE_SIZE -
On Mon, Jul 26, 2021 at 12:16:39AM +0200, Andreas Gruenbacher wrote:
> @@ -247,7 +251,6 @@ iomap_readpage_actor(struct inode *inode, loff_t pos,
> loff_t length, void *data,
> sector_t sector;
>
> if (iomap->type == IOMAP_INLINE) {
> - WARN_ON_ONCE(pos);
>
On Sat, Jul 24, 2021 at 12:46:45PM +0800, Gao Xiang wrote:
> Hi Matthew,
>
> On Sat, Jul 24, 2021 at 04:44:35AM +0100, Matthew Wilcox (Oracle) wrote:
> > Remove the restriction that inline data must start on a page boundary
> > in a file. This allows, for example, the first
Remove the restriction that inline data must start on a page boundary
in a file. This allows, for example, the first 2KiB to be stored out
of line and the trailing 30 bytes to be stored inline.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/iomap/buffered-io.c | 18 --
1 file
tps://git.infradead.org/users/willy/pagecache.git/shortlog/refs/heads/folio
as usual. I haven't applied all the R-b yet (and I should probably
figure out which ones still apply since I did some substantial changes
to a couple of the patches).
Gao Xiang (1):
iomap: Support file tail packing
Matthew Wil
be used
for testing. It'd be better to be implemented if upcoming real users
care and provide a real pattern rather than leave untested dead code
around.
Tested-by: Huang Jianan # erofs
Signed-off-by: Gao Xiang
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/iomap/buffered-io.c | 42
pack a small tail adjacent to
the inode. Generalise inline data to allow for tail packing. Tails
may not cross a page boundary in memory.
... but I'm not sure that's necessarily better than what you've written
here.
> Cc: Christoph Hellwig
> Cc: Darrick J. Wong
> Cc: Matthew Wilcox
On Fri, Jul 23, 2021 at 11:23:38PM +0800, Gao Xiang wrote:
> Hi Matthew,
>
> On Fri, Jul 23, 2021 at 04:05:29PM +0100, Matthew Wilcox wrote:
> > On Thu, Jul 22, 2021 at 07:39:47AM +0200, Christoph Hellwig wrote:
> > > @@ -675,7 +676,7 @@ static size_t iomap_write
On Thu, Jul 22, 2021 at 07:39:47AM +0200, Christoph Hellwig wrote:
> @@ -675,7 +676,7 @@ static size_t iomap_write_end_inline(struct inode *inode,
> struct page *page,
>
> flush_dcache_page(page);
> addr = kmap_atomic(page);
> - memcpy(iomap->inline_data + pos, addr + pos,
On Thu, Jul 22, 2021 at 06:53:42PM +0200, Christoph Hellwig wrote:
> On Thu, Jul 22, 2021 at 09:51:09AM -0700, Darrick J. Wong wrote:
> > The commit message is a little misleading -- this adds support for
> > inline data pages at nonzero (but page-aligned) file offsets, not file
> > offsets into
On Tue, Jul 20, 2021 at 01:42:24PM -0700, Darrick J. Wong wrote:
> > - BUG_ON(page_has_private(page));
> > - BUG_ON(page->index);
> > - BUG_ON(size > PAGE_SIZE - offset_in_page(iomap->inline_data));
> > + /* inline source data must be inside a single page */
> > + BUG_ON(iomap->length >
On Mon, Jul 19, 2021 at 09:39:17AM +0100, Christoph Hellwig wrote:
> On Fri, Jul 16, 2021 at 06:28:10PM +0100, Matthew Wilcox wrote:
> > > > memcpy(addr, iomap->inline_data, size);
> > > > memset(addr + size, 0, PAGE_SIZE - size);
>
On Tue, Jul 20, 2021 at 12:11:49AM +0800, Gao Xiang wrote:
> On Mon, Jul 19, 2021 at 05:13:10PM +0200, Christoph Hellwig wrote:
> > On Mon, Jul 19, 2021 at 04:02:30PM +0100, Matthew Wilcox wrote:
> > > > + if (iomap->type == IOMAP_INLINE) {
> > > > +
On Mon, Jul 19, 2021 at 10:47:47PM +0800, Gao Xiang wrote:
> @@ -246,18 +245,19 @@ iomap_readpage_actor(struct inode *inode, loff_t pos,
> loff_t length, void *data,
> unsigned poff, plen;
> sector_t sector;
>
> - if (iomap->type == IOMAP_INLINE) {
> -
On Sat, Jul 17, 2021 at 11:15:58PM +0800, Gao Xiang wrote:
> Hi Matthew,
>
> On Sat, Jul 17, 2021 at 04:01:38PM +0100, Matthew Wilcox wrote:
> > On Sat, Jul 17, 2021 at 09:38:18PM +0800, Gao Xiang wrote:
> > > Sorry about some late. I've revised a version based on C
On Sat, Jul 17, 2021 at 09:38:18PM +0800, Gao Xiang wrote:
> Sorry about some late. I've revised a version based on Christoph's
> version and Matthew's thought above. I've preliminary checked with
> EROFS, if it does make sense, please kindly help check on the gfs2
> side as well..
I don't
On Fri, Jul 16, 2021 at 04:04:18PM +0100, Christoph Hellwig wrote:
> On Fri, Jul 16, 2021 at 04:00:32PM +0100, Matthew Wilcox (Oracle) wrote:
> > Inline data needs to be flushed from the kernel's view of a page before
> > it's mapped by userspace.
> >
> > Cc: sta..
Inline data needs to be flushed from the kernel's view of a page before
it's mapped by userspace.
Cc: sta...@vger.kernel.org
Fixes: 19e0c58f6552 ("iomap: generic inline data handling")
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/iomap/buffered-io.c | 1 +
1 file changed, 1 insertio
On Fri, Jul 16, 2021 at 09:56:23PM +0800, Gao Xiang wrote:
> Hi Matthew,
>
> On Fri, Jul 16, 2021 at 02:02:29PM +0100, Matthew Wilcox wrote:
> > On Fri, Jul 16, 2021 at 01:07:23PM +0800, Gao Xiang wrote:
> > > This tries to add tail packing inline read to iomap. Different
On Fri, Jul 16, 2021 at 02:47:35PM +0100, Matthew Wilcox wrote:
> I think it looks something like this ...
>
> @@ -211,23 +211,18 @@ struct iomap_readpage_ctx {
> };
>
> static void iomap_read_inline_data(struct inode *inode, struct folio *folio,
> -
On Fri, Jul 16, 2021 at 10:19:09AM +0100, Christoph Hellwig wrote:
> static void
> iomap_read_inline_data(struct inode *inode, struct page *page,
> - struct iomap *iomap)
> + struct iomap *iomap, loff_t pos, unsigned int size)
> {
> - size_t size =
On Fri, Jul 16, 2021 at 01:07:23PM +0800, Gao Xiang wrote:
> This tries to add tail packing inline read to iomap. Different from
> the previous approach, it only marks the block range uptodate in the
> page it covers.
Why? This path is called under two circumstances: readahead and readpage.
In
On Tue, Jul 06, 2021 at 02:32:53AM +0800, Gao Xiang wrote:
> In that way, pages can be accessed directly with xarray.
I didn't mean "open code readahead_page()". I meant "Wouldn't it be
great if z_erofs_do_read_page() used readahead_expand() in order to
allocate the extra pages in the extents
On Mon, Oct 12, 2020 at 12:53:54PM -0700, Ira Weiny wrote:
> On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> > On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > > kmap_atomic() is always preferred over kmap()/kmap_thread().
> > > k
On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> kmap_atomic() is always preferred over kmap()/kmap_thread().
> kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> always CPU-local and never broadcast.
>
> So, basically, unless you *must* sleep while the mapping
On Fri, Oct 09, 2020 at 02:34:34PM -0700, Eric Biggers wrote:
> On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.we...@intel.com wrote:
> > The kmap() calls in this FS are localized to a single thread. To avoid
> > the over head of global PKRS updates use the new kmap_thread() call.
> >
> > @@
On Fri, Jun 19, 2020 at 11:39:16AM +0200, Andreas Gruenbacher wrote:
> static int gfs2_readpage(struct file *file, struct page *page)
> {
> - struct address_space *mapping = page->mapping;
> - struct gfs2_inode *ip = GFS2_I(mapping->host);
> - struct gfs2_holder gh;
> int
On Thu, Jun 18, 2020 at 02:46:03PM +0200, Andreas Gruenbacher wrote:
> On Wed, Jun 17, 2020 at 4:22 AM Matthew Wilcox wrote:
> > On Wed, Jun 17, 2020 at 02:57:14AM +0200, Andreas Grünbacher wrote:
> > > Right, the approach from the following thread might fix this:
On Wed, Jun 17, 2020 at 02:57:14AM +0200, Andreas Grünbacher wrote:
> Am Mi., 17. Juni 2020 um 02:33 Uhr schrieb Matthew Wilcox
> :
> >
> > On Wed, Jun 17, 2020 at 12:36:13AM +0200, Andreas Gruenbacher wrote:
> > > Am Mi., 15. Apr. 2020 um 23:39 Uhr schrieb Matthew W
On Wed, Jun 17, 2020 at 12:36:13AM +0200, Andreas Gruenbacher wrote:
> Am Mi., 15. Apr. 2020 um 23:39 Uhr schrieb Matthew Wilcox
> :
> > From: "Matthew Wilcox (Oracle)"
> >
> > Implement the new readahead aop and convert all callers (block_dev,
> >
On Mon, Apr 20, 2020 at 01:14:17PM +0200, Miklos Szeredi wrote:
> > + for (;;) {
> > + struct fuse_io_args *ia;
> > + struct fuse_args_pages *ap;
> > +
> > + nr_pages = readahead_count(rac) - nr_pages;
>
> Hmm. I see what's going on here, but it's
On Tue, Apr 14, 2020 at 09:56:16PM -0700, Andrew Morton wrote:
> On Tue, 14 Apr 2020 19:18:08 -0700 Matthew Wilcox wrote:
> > Hmm. They don't seem that big to me.
>
> They're really big!
v5.7-rc1:  11636  636  224  12496  30d0  fs/iomap/buffered-io.o
readah
On Tue, Apr 14, 2020 at 06:17:05PM -0700, Andrew Morton wrote:
> On Tue, 14 Apr 2020 08:02:13 -0700 Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)"
> >
> > Filesystems which implement the upcoming ->readahead method will get
> &g
From: "Matthew Wilcox (Oracle)"
Simplify the callers by moving the check for nr_pages and the BUG_ON
into read_pages().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
Reviewed-by: John Hubbard
Reviewed-by: Christoph Hellwig
Reviewed-by: William Kucharski
---
mm/r
From: "Matthew Wilcox (Oracle)"
Ensure that memory allocations in the readahead path do not attempt to
reclaim file-backed pages, which could lead to a deadlock. It is
possible, though unlikely, that this is the root cause of a problem observed
by Cong Wang.
Signed-off-by: Matthew Wilc
From: "Matthew Wilcox (Oracle)"
Use the new readahead operation in ext4
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
Reviewed-by: Eric Biggers
---
fs/ext4/ext4.h | 3 +--
fs/ext4/inode.c | 21 +
fs/ext4/readp
From: "Matthew Wilcox (Oracle)"
ext4 and f2fs have duplicated the guts of the readahead code so
they can read past i_size. Instead, separate out the guts of the
readahead code so they can call it directly.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
From: "Matthew Wilcox (Oracle)"
Use the new readahead operation in iomap. Convert XFS and ZoneFS to
use it.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Reviewed-by: William Kucharski
---
fs/iomap/buffered
From: "Matthew Wilcox (Oracle)"
Implement the new readahead method in btrfs using the new
readahead_page_batch() function.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
---
fs/btrfs/extent_io.c | 43 ---
fs/btrfs/e
From: "Matthew Wilcox (Oracle)"
By reducing nr_to_read, we can eliminate this check from inside the loop.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
Reviewed-by: William Kucharski
---
mm/readahead.c | 14 --
1 file changed, 8 insertions(+), 6
From: "Matthew Wilcox (Oracle)"
Implement the new readahead aop and convert all callers (block_dev,
exfat, ext2, fat, gfs2, hpfs, isofs, jfs, nilfs2, ocfs2, omfs, qnx6,
reiserfs & udf). The callers are all trivial except for GFS2 & OCFS2.
Signed-off-by: Matthew Wilcox
From: "Matthew Wilcox (Oracle)"
Implement the new readahead operation in fuse by using __readahead_batch()
to fill the array of pages in fuse_args_pages directly. This lets us
inline fuse_readpages_fill() into fuse_readahead().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Da
From: "Matthew Wilcox (Oracle)"
If the page is already in cache, we don't set PageReadahead on it.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: William Kucharski
---
mm/readahead.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletion
From: "Matthew Wilcox (Oracle)"
This function now only uses the mapping argument to look up the inode,
and both callers already have the inode, so just pass the inode instead
of the mapping.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
Reviewed-by: Er
From: "Matthew Wilcox (Oracle)"
This replaces ->readpages with a saner interface:
- Return void instead of an ignored error code.
- Page cache is already populated with locked pages when ->readahead
is called.
- New arguments can be passed to the implementation without
From: "Matthew Wilcox (Oracle)"
The readahead code is part of the page cache so should be found in the
pagemap.h file. force_page_cache_readahead is only used within mm,
so move it to mm/internal.h instead. Remove the parameter names where
they add no value, and rename the ones
From: "Matthew Wilcox (Oracle)"
This series adds a readahead address_space operation to replace the
readpages operation. The key difference is that pages are added to the
page cache as they are allocated (and then looked up by the filesystem)
instead of passing them on a list to the
From: "Matthew Wilcox (Oracle)"
Change the type of page_idx to unsigned long, and rename it -- it's
just a loop counter, not a page index.
Suggested-by: John Hubbard
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Dave Chinner
Reviewed-by: William Kucharski
---
mm/reada
From: "Matthew Wilcox (Oracle)"
Use the new readahead operation in f2fs
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
Reviewed-by: Eric Biggers
Reviewed-by: Chao Yu
Acked-by: Jaegeuk Kim
---
fs/f2fs/data.c
From: "Matthew Wilcox (Oracle)"
When populating the page cache for readahead, mappings that use
->readpages must populate the page cache themselves as the pages are
passed on a linked list which would normally be used for the page cache's
LRU. For mappings that use ->readpage
From: "Matthew Wilcox (Oracle)"
Use the new readahead operation in erofs
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Gao Xiang
Reviewed-by: William Kucharski
Reviewed-by: Chao Yu
---
fs/erofs/data.c | 39 +---
fs/ero
From: "Matthew Wilcox (Oracle)"
Filesystems which implement the upcoming ->readahead method will get
their pages by calling readahead_page() or readahead_page_batch().
These functions support large pages, even though none of the filesystems
to be converted do yet.
Signed-off-by: M
From: "Matthew Wilcox (Oracle)"
ondemand_readahead has two callers, neither of which use the return value.
That means that both ra_submit and __do_page_cache_readahead() can return
void, and we don't need to worry that a present page in the readahead
window causes us to return a smalle
From: "Matthew Wilcox (Oracle)"
Use the new readahead operation in erofs.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Gao Xiang
Reviewed-by: Dave Chinner
Reviewed-by: William Kucharski
Reviewed-by: Chao Yu
---
fs/erofs/zdata.c | 29 +
1 file
From: "Matthew Wilcox (Oracle)"
Replace the page_offset variable with 'index + i'.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
Reviewed-by: Christoph Hellwig
Reviewed-by: William Kucharski
---
mm/readahead.c | 8 +++-
1 file changed, 3 insertions(+), 5
From: "Matthew Wilcox (Oracle)"
In this patch, only between __do_page_cache_readahead() and read_pages(),
but it will be extended in upcoming patches. The read_pages() function
becomes aops centric, as this makes the most sense by the end of the
patchset.
Signed-off-by: Matthew Wilc
From: "Matthew Wilcox (Oracle)"
We used to assign the return value to a variable, which we then ignored.
Remove the pretence of caring.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Dave Chinner
Reviewed-by: John Hubbard
Reviewed-by: William
On Wed, Mar 25, 2020 at 03:43:02PM +0100, Miklos Szeredi wrote:
> >
> > - while ((page = readahead_page(rac))) {
> > - if (fuse_readpages_fill(&data, page) != 0)
> > + nr_pages = min(readahead_count(rac), fc->max_pages);
>
> Missing fc->max_read clamp.
Yeah, I
On Wed, Mar 25, 2020 at 10:42:56AM +0100, Miklos Szeredi wrote:
> > + while ((page = readahead_page(rac))) {
> > + if (fuse_readpages_fill(&data, page) != 0)
>
> Shouldn't this unlock + put page on error?
We're certainly inconsistent between the two error exits from
From: "Matthew Wilcox (Oracle)"
Use the new readahead operation in fuse. Switching away from the
read_cache_pages() helper gets rid of an implicit call to put_page(),
so we can get rid of the get_page() call in fuse_readpages_fill().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewe