Remove the PG_fscache alias for PG_private_2 and use the latter directly.
Use of this flag for marking pages undergoing writing to the cache should
be considered deprecated; instead, the folios should be marked dirty and
the write done in ->writepages().
Note that PG_private_2 itself should be […]
Remove the deprecated use of PG_private_2 in netfslib.
Signed-off-by: David Howells
cc: Jeff Layton
cc: Matthew Wilcox (Oracle)
cc: linux-cach...@redhat.com
cc: linux-fsde...@vger.kernel.org
cc: linux...@kvack.org
---
fs/ceph/addr.c | 19 +-
fs/netfs/buffered_read.c | 8 +--
Use writepages-based flushing invalidation instead of
invalidate_inode_pages2() and ->launder_folio(). This will allow
->launder_folio() to be removed eventually.
Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
Implement a replacement for launder_folio. The key feature of
invalidate_inode_pages2() is that it locks each folio individually, unmaps
it to prevent mmap'd accesses interfering and calls the ->launder_folio()
address_space op to flush it. This has problems: firstly, each folio is
written […]
Make the netfs_io_request::subreq_counter, used to generate values for
netfs_io_subrequest::debug_index, into an atomic_t so that it can be used
from the retry thread at the same time as the app thread issuing writes.
Signed-off-by: David Howells
cc: Jeff Layton
cc: ne...@lists.linux.dev
Use the subreq_counter in netfs_io_request to allocate subrequest
debug_index values in read ops as well as write ops.
Signed-off-by: David Howells
cc: Jeff Layton
cc: ne...@lists.linux.dev
cc: linux-fsde...@vger.kernel.org
---
fs/netfs/io.c | 7 ++-
fs/netfs/objects.c | 1 +
Use writepages-based flushing invalidation instead of
invalidate_inode_pages2() and ->launder_folio(). This will allow
->launder_folio() to be removed eventually.
Signed-off-by: David Howells
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
cc: Christian Schoenebeck
When dirty data is being written to the cache, setting/waiting on/clearing
the fscache flag is always done in tandem with setting/waiting on/clearing
the writeback flag. The netfslib buffered write routines wait on and set
both flags and the write request cleanup clears both flags, so the fscache […]
fscache emits a lot of duplicate cookie warnings with cifs because the
index key for the fscache cookies does not include everything that the
cifs_find_inode() function does. The latter is used with iget5_locked() to
distinguish between inodes in the local inode cache.
Fix this by adding the […]
Hi Christian, Willy,
The primary purpose of these patches is to rework the netfslib writeback
implementation such that pages read from the cache are written to the cache
through ->writepages(), thereby allowing the fscache page flag to be
retired.
The reworking also:
(1) builds on top of the […]
Remove the kdoc for the removed 'req' member of the 9p_conn struct.
Remove a pair of set-but-not-used v9ses variables.
Signed-off-by: David Howells
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
cc: Christian Schoenebeck
cc: v...@lists.linux.dev
---
Update i_blocks when i_size is updated, when we finish making a write to the
pagecache, to reflect the amount of space we think will be consumed.
Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-c...@vger.kernel.org
Use mempools for allocating requests and subrequests in an effort to make
sure that allocation always succeeds so that when performing writeback we
can always make progress.
Signed-off-by: David Howells
cc: Jeff Layton
cc: ne...@lists.linux.dev
cc: linux-fsde...@vger.kernel.org
Remove support for ->launder_folio() from netfslib and expect filesystems
to use filemap_invalidate_inode() instead. netfs_launder_folio() can then
be removed.
Signed-off-by: David Howells
cc: Jeff Layton
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
cc: Christian Schoenebeck
Switch to using unsigned long long rather than loff_t in netfslib to avoid
problems with the sign flipping in the maths when we're dealing with the
byte at position 0x7fffffffffffffff.
Signed-off-by: David Howells
cc: Jeff Layton
cc: Ilya Dryomov
cc: Xiubo Li
cc: ne...@lists.linux.dev
Fix the error return in netfs_perform_write() acting in writethrough-mode
to return any cached error in the case that netfs_end_writethrough()
returns 0.
Signed-off-by: David Howells
cc: Jeff Layton
cc: ne...@lists.linux.dev
cc: linux-fsde...@vger.kernel.org
---
fs/netfs/buffered_write.c | 10
Use writepages-based flushing invalidation instead of
invalidate_inode_pages2() and ->launder_folio(). This will allow
->launder_folio() to be removed eventually.
Signed-off-by: David Howells
cc: Marc Dionne
cc: Jeff Layton
cc: linux-...@lists.infradead.org
cc: ne...@lists.linux.dev
Add some write-side stats to count buffered writes, buffered writethrough,
and writepages calls.
Whilst we're at it, clean up the naming on some of the existing stats
counters and organise the output into two sets.
Signed-off-by: David Howells
cc: Jeff Layton
cc: ne...@lists.linux.dev
Export writeback_iter() so that it can be used by netfslib as a module.
Signed-off-by: David Howells
cc: Matthew Wilcox (Oracle)
cc: Christoph Hellwig
cc: linux...@kvack.org
---
mm/page-writeback.c | 1 +
1 file changed, 1 insertion(+)
The current netfslib writeback implementation creates writeback requests of
contiguous folio data and then separately tiles subrequests over the space
twice, once for the server and once for the cache. This creates a few
issues:
(1) Every time there's a discontiguity or a change between writing […]
Implement the helpers for the new write code in cachefiles. There's now an
optional ->prepare_write() that allows the filesystem to set the parameters
for the next write, such as maximum size and maximum segment count, and an
->issue_write() that is called to initiate an (asynchronous) write
operation.
Cut over to using the new writeback code. The old code is #ifdef'd out or
otherwise removed from compilation to avoid conflicts and will be removed
in a future patch.
Signed-off-by: David Howells
cc: Jeff Layton
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
Remove the old writeback code.
Signed-off-by: David Howells
cc: Jeff Layton
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
cc: Christian Schoenebeck
cc: Marc Dionne
cc: v...@lists.linux.dev
cc: linux-...@lists.infradead.org
cc: ne...@lists.linux.dev
Implement the helpers for the new write code in afs. There's now an
optional ->prepare_write() that allows the filesystem to set the parameters
for the next write, such as maximum size and maximum segment count, and an
->issue_write() that is called to initiate an (asynchronous) write
operation.
Do a couple of miscellaneous tidy ups:
(1) Add a qualifier into a file banner comment.
(2) Put the writeback folio traces back into alphabetical order.
(3) Remove some unused folio traces.
Signed-off-by: David Howells
cc: Jeff Layton
cc: ne...@lists.linux.dev
Use a hook in the new writeback code's retry algorithm to rotate the keys
once all the outstanding subreqs have failed rather than doing it
separately on each subreq.
Signed-off-by: David Howells
cc: Marc Dionne
cc: Jeff Layton
cc: linux-...@lists.infradead.org
cc: ne...@lists.linux.dev
Implement the helpers for the new write code in 9p. There's now an
optional ->prepare_write() that allows the filesystem to set the parameters
for the next write, such as maximum size and maximum segment count, and an
->issue_write() that is called to initiate an (asynchronous) write
operation.