> more than a page's worth of bio_vec structs.
>
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> cc: linux-fsde...@vger.kernel.org
> cc: linux...@kvack.org
> ---
> fs/netfs/internal.h |
ly, if the I/O is asynchronous, we must copy the iov_iter describing
> the buffer before returning to the caller as it may be immediately
> deallocated.
>
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> cc: linux-fsde...@vger.kernel.org
-ERESTARTSYS if waits are interrupted.
>
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> cc: linux-fsde...@vger.kernel.org
> cc: linux...@kvack.org
> ---
> fs/netfs/Makefile | 1 +
> fs/netfs/locking.c| 209 ++
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> cc: linux-fsde...@vger.kernel.org
> cc: linux...@kvack.org
> ---
> fs/afs/file.c | 1 +
> fs/ceph/addr.c| 2 ++
> include/linux/netfs.h | 1 +
> 3 files changed, 4 insertions(+)
On Fri, 2023-10-13 at 16:56 +0100, David Howells wrote:
> Allow O_NONBLOCK to be noted in the netfs_io_request struct. Also add a
> flag, NETFS_RREQ_BLOCKED, to record if we did block.
>
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> cc: linux-fsde...@vger.kernel.org
> cc: linux...@kvack.org
> ---
> fs/9p/vfs_addr.c | 33 ++-
> fs/afs/file.c | 53 --
, any read from the server at or above the zero_point
> position will return all zeroes.
>
> The zero_point value can be stored in the cache, provided the above rules
> are applied to it by any code that culls part of the local cache.
>
> Signed-off-by: David Howells
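[Editorial note: the zero_point rule quoted above lends itself to a small sketch. This is a hedged userspace illustration only; the function name and parameters below are invented, not netfs API. The idea: any byte at or above zero_point is known to be zero on the server, so that part of a read never needs an RPC.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Illustrative only: past zero_point the server is known to hold only
 * zeroes, so that tail of the read can be satisfied locally.  Returns
 * the number of bytes that must still be fetched from the server; the
 * remainder of the buffer is zero-filled here.
 */
static size_t read_clamped_by_zero_point(char *buf, unsigned long long pos,
					 size_t len,
					 unsigned long long zero_point)
{
	size_t from_server = 0;

	if (pos < zero_point) {
		unsigned long long avail = zero_point - pos;

		from_server = len < avail ? len : (size_t)avail;
	}
	/* Everything at or above zero_point reads back as zeroes. */
	memset(buf + from_server, 0, len - from_server);
	return from_server;
}
```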
On Fri, 2023-10-13 at 16:56 +0100, David Howells wrote:
> Add a procfile, /proc/fs/netfs/requests, to list in-progress netfslib I/O
> requests.
>
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> cc: linux-fsde...@vger.kernel.org
>
On Mon, 2023-09-11 at 13:02 -0400, Jeff Layton wrote:
> On Thu, 2023-06-08 at 17:41 -0400, Dave Wysochanski wrote:
> > If a network filesystem using netfs implements a clamp_length()
> > function, it can set subrequest lengths smaller than a page size.
> > When we loop
> + if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {
> folio_start_fscache(folio);
> + folio_started = true;
> + }
> pg_failed |= subreq_failed;
> sreq_end = subreq->start + subreq->len - 1;
> if (pg_end < sreq_end)
The logic looks correct though.
Reviewed-by: Jeff Layton
--
Linux-cachefs mailing list
Linux-cachefs@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-cachefs
uid and gid before issuing the "bind"
> command and the cache must've been chown'd to those IDs.
>
> Signed-off-by: David Howells
> cc: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> cc: linux-er...@lists.ozlabs.org
> cc: linux-fsde...@
yet affect anything as cifs, the only current user, only
> passes in non-user-backed iterators.
>
> Fixes: 018584697533 ("netfs: Add a function to extract an iterator into a
> scatterlist")
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: Steve French
>
> - wake_up_bit(&volume->flags, FSCACHE_VOLUME_CREATING);
> + clear_and_wake_up_bit(FSCACHE_VOLUME_CREATING, &volume->flags);
> fscache_put_volume(volume, fscache_volume_put_create_work);
> }
>
Reviewed-by: Jeff Layton
> fscache_see_volume(cursor,
> fscache_volume_see_hash_wake);
> - clear_bit(FSCACHE_VOLUME_ACQUIRE_PENDING,
> &cursor->flags);
> - wake_up_bit(&cursor->flags,
> FSCACHE_VOLUME_ACQUIRE_PENDING);
> + clear_and_wake_up_bit(FSCACHE_VOLUME_ACQUIRE_PENDING,
> + &cursor->flags);
> return;
> }
> }
Reviewed-by: Jeff Layton
> __entry->flags = flags;
> __entry->source = source;
> __entry->why = why;
> - __entry->len= sreq->len;
> - __entry->start = sreq->start;
> - __entry->netfs_inode = sreq->rreq->inode->i_ino;
> + __entry->len= len;
> + __entry->start = start;
> __entry->cache_inode = cache_inode;
> ),
>
> - TP_printk("R=%08x[%u] %s %s f=%02x s=%llx %zx ni=%x B=%x",
> - __entry->rreq, __entry->index,
> + TP_printk("o=%08x %s %s f=%02x s=%llx %zx B=%x",
> + __entry->obj,
> __print_symbolic(__entry->source, netfs_sreq_sources),
> __print_symbolic(__entry->why,
> cachefiles_prepare_read_traces),
> __entry->flags,
> __entry->start, __entry->len,
> - __entry->netfs_inode, __entry->cache_inode)
> + __entry->cache_inode)
> );
>
> TRACE_EVENT(cachefiles_read,
Reviewed-by: Jeff Layton
> __entry->why= why;
> - __entry->len= sreq->len;
> - __entry->start = sreq->start;
> - __entry->netfs_inode = sreq->rreq->inode->i_ino;
> + __entry->len= len;
> + __entry->start = start;
> __entry->cache_inode = cache_inode;
> ),
>
> - TP_printk("R=%08x[%u] %s %s f=%02x s=%llx %zx ni=%x B=%x",
> - __entry->rreq, __entry->index,
> + TP_printk("o=%08x %s %s f=%02x s=%llx %zx B=%x",
> + __entry->obj,
> __print_symbolic(__entry->source, netfs_sreq_sources),
> __print_symbolic(__entry->why,
> cachefiles_prepare_read_traces),
> __entry->flags,
> __entry->start, __entry->len,
> - __entry->netfs_inode, __entry->cache_inode)
> + __entry->cache_inode)
> );
>
> TRACE_EVENT(cachefiles_read,
The rest looks pretty reasonable though.
--
Jeff Layton
> > > > completion
> > > > > > > > > > and
> > > > > > > > > > return code. In contrast, netfs is subrequest based, a
> > > > > > > > > > single
> > > > > > > > &
the issue by using folio_pos() and folio_size() to calculate the end
> position of the page.
>
> Fixes: 3d3c95046742 ("netfs: Provide readahead and readpage netfs helpers")
> Reported-by: Matthew Wilcox
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cach
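[Editorial note: the difference between deriving the end position from a fixed page size and deriving it from the folio itself is easy to demonstrate outside the kernel. Illustrative sketch only; DEMO_PAGE_SIZE and the helper names are stand-ins for the kernel's folio_pos()/folio_size(), not kernel API.]

```c
#include <assert.h>
#include <stddef.h>

/*
 * With large folios, a folio may cover several pages, so the end
 * position must come from the folio's own size, not PAGE_SIZE.
 */
#define DEMO_PAGE_SIZE 4096ULL

/* Wrong for multi-page folios: assumes every folio is one page. */
static unsigned long long folio_end_buggy(unsigned long long pos)
{
	return pos + DEMO_PAGE_SIZE - 1;
}

/* Equivalent of folio_pos() + folio_size() - 1. */
static unsigned long long folio_end_fixed(unsigned long long pos, size_t size)
{
	return pos + size - 1;
}
```

For a single-page folio the two agree; the divergence only shows up once folios span multiple pages.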
> > > > > > > caller
> > > > > > > > > > to
> > > > > > > > > > know how many RPCs will be sent and how the pages will be
> > > > > > > > > > broken
> > > > > >
T 501G 1.4T 27% /files
>
> benmaynard@bjmtesting-source:~$ cat /etc/exports
> /files
> 10.0.0.0/8(rw,sync,wdelay,no_root_squash,no_all_squash,no_subtree_check,sec=sys,secure,nohide)
>
>
> Kind Regards
> Benjamin Maynard
>
>
> Kind Regards
>
> Benjamin
code on failure. The only exception is that, the length of the range
> - * instead of the error code is returned on failure after netfs_io_request is
> - * allocated, so that .readahead() could advance rac accordingly.
> + * instead of the error code is returned on failure after request is
> allocated,
> + * so that .readahead() could advance rac accordingly.
> */
> static int erofs_fscache_data_read(struct address_space *mapping,
> loff_t pos, size_t len, bool *unlock)
> {
> struct inode *inode = mapping->host;
> struct super_block *sb = inode->i_sb;
> - struct netfs_io_request *rreq;
> + struct erofs_fscache_request *req;
> struct erofs_map_blocks map;
> struct erofs_map_dev mdev;
> struct iov_iter iter;
> @@ -314,13 +237,17 @@ static int erofs_fscache_data_read(struct address_space
> *mapping,
> if (ret)
> return ret;
>
> - rreq = erofs_fscache_alloc_request(mapping, pos, count);
> - if (IS_ERR(rreq))
> - return PTR_ERR(rreq);
> + req = erofs_fscache_req_alloc(mapping, pos, count);
> + if (IS_ERR(req))
> + return PTR_ERR(req);
>
> *unlock = false;
> - erofs_fscache_read_folios_async(mdev.m_fscache->cookie,
> - rreq, mdev.m_pa + (pos - map.m_la));
> + ret = erofs_fscache_read_folios_async(mdev.m_fscache->cookie,
> + req, mdev.m_pa + (pos - map.m_la), count);
> + if (ret)
> + req->error = ret;
> +
> + erofs_fscache_req_put(req);
> return count;
> }
>
--
Jeff Layton
__entry->rreq = sreq->rreq ? sreq->rreq->debug_id : 0;
> __entry->index = sreq->debug_index;
> __entry->flags = sreq->flags;
> __entry->source = source;
>
fs/nfs/iostat.h | 17 ---
> fs/nfs/nfstrace.h | 91 --
> fs/nfs/pagelist.c | 12 ++
> fs/nfs/pnfs.c | 12 +-
> fs/nfs/read.c | 110 +
> fs/nfs/super.c | 11 --
> fs/nfs/write.c
" in nfs_pageio_add_page(), and the
> corner case where NFS requests a full page at the end of the file,
> even when i_size reflects only a partial page (NFS overread).
>
> Suggested-by: Jeff Layton
> Signed-off-by: Dave Wysochanski
> ---
> fs/nfs/fscache.c | 232 +
the number of bytes successfully transferred. Cap
> the length passed to netfs_subreq_terminated() to the length of the
> subrequest, which prevents possible "Subreq overread" if NFS requests
> a full page at the end of the file, even when the i_size does not
> co
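[Editorial note: the capping described above amounts to a min(). Hedged userspace sketch with an invented helper name, not the NFS/netfs code itself.]

```c
#include <assert.h>
#include <stddef.h>

/*
 * Never report more bytes to the completion path than the subrequest
 * asked for, even if the server handed back a full page that extends
 * past i_size (the "Subreq overread" case).
 */
static size_t cap_transferred(size_t transferred, size_t subreq_len)
{
	return transferred < subreq_len ? transferred : subreq_len;
}
```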
On Sun, 2022-09-04 at 15:51 -0400, David Wysochanski wrote:
> On Sun, Sep 4, 2022 at 9:59 AM Jeff Layton wrote:
> >
> > On Sun, 2022-09-04 at 05:05 -0400, Dave Wysochanski wrote:
> > > Convert the NFS buffered read code paths to corresponding netfs APIs,
> > > bu
read_completion() to update the final error value and bytes
> read, and check the refcount to determine whether this is the final
> RPC completion. If this is the last RPC, then in the final put on
> the structure, call into netfs_subreq_terminated() with the final
> error value or the nu
t and VFS inode */
> +#else
> struct inode vfs_inode;
> +#endif
> +
>
> #ifdef CONFIG_NFS_V4_2
> struct nfs4_xattr_cache *xattr_cache;
> @@ -281,10 +287,25 @@ struct nfs4_copy_state {
> #define NFS_INO_LAYOUTSTATS (11)/* layoutstats inflight */
> #define NFS_INO_ODIRECT (12)/* I/O setting is
> O_DIRECT */
>
> +#ifdef CONFIG_NFS_FSCACHE
> +static inline struct inode *VFS_I(struct nfs_inode *nfsi)
> +{
> + return &nfsi->netfs.inode;
> +}
> +static inline struct nfs_inode *NFS_I(const struct inode *inode)
> +{
> + return container_of(inode, struct nfs_inode, netfs.inode);
> +}
> +#else
> +static inline struct inode *VFS_I(struct nfs_inode *nfsi)
> +{
> + return &nfsi->vfs_inode;
> +}
> static inline struct nfs_inode *NFS_I(const struct inode *inode)
> {
> return container_of(inode, struct nfs_inode, vfs_inode);
> }
> +#endif
>
> static inline struct nfs_server *NFS_SB(const struct super_block *s)
> {
> @@ -328,15 +349,6 @@ static inline int NFS_STALE(const struct inode *inode)
> return test_bit(NFS_INO_STALE, &NFS_I(inode)->flags);
> }
>
> -static inline struct fscache_cookie *nfs_i_fscache(struct inode *inode)
> -{
> -#ifdef CONFIG_NFS_FSCACHE
> - return NFS_I(inode)->fscache;
> -#else
> - return NULL;
> -#endif
> -}
> -
> static inline __u64 NFS_FILEID(const struct inode *inode)
> {
> return NFS_I(inode)->fileid;
Much nicer.
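[Editorial note: the round trip these helpers rely on is the standard container_of() pattern, which can be demonstrated standalone in userspace. The struct and helper names below are simplified stand-ins, not the kernel definitions of struct nfs_inode.]

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct demo_vfs_inode {
	unsigned long i_ino;
};

struct demo_nfs_inode {
	unsigned long long fileid;
	struct demo_vfs_inode vfs_inode;	/* embedded VFS inode */
};

/* Userspace rendering of the kernel's container_of(). */
#define demo_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* nfs_inode -> embedded VFS inode (the VFS_I() direction). */
static struct demo_vfs_inode *demo_VFS_I(struct demo_nfs_inode *nfsi)
{
	return &nfsi->vfs_inode;
}

/* VFS inode -> containing nfs_inode (the NFS_I() direction). */
static struct demo_nfs_inode *demo_NFS_I(struct demo_vfs_inode *inode)
{
	return demo_container_of(inode, struct demo_nfs_inode, vfs_inode);
}
```

The CONFIG_NFS_FSCACHE variants in the patch only change which embedded member the same pattern is applied to.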
Reviewed-by: Jeff Layton
On Thu, 2022-09-01 at 09:38 -0400, David Wysochanski wrote:
> On Thu, Sep 1, 2022 at 8:45 AM Jeff Layton wrote:
> >
> > On Wed, 2022-08-31 at 20:48 -0400, Dave Wysochanski wrote:
> > > Convert the NFS buffered read code paths to corresponding netfs APIs,
> > > bu
olio)
> if (NFS_STALE(inode))
> goto out_unlock;
>
> +#ifdef CONFIG_NFS_FSCACHE
> + if (netfs_inode(inode)->cache) {
> + ret = netfs_read_folio(file, folio);
> + goto out;
> + }
> +#endif
> if (file == NULL) {
return error;
> }
> @@ -355,6 +362,12 @@ int nfs_read_folio(struct file *file, struct folio
> *folio)
> if (NFS_STALE(inode))
> goto out_unlock;
>
> +#ifdef CONFIG_NFS_FSCACHE
> + if (netfs_inode(inode)->cache) {
> + ret = netfs_rea
(12)/* I/O setting is
> O_DIRECT */
>
> +static inline struct inode *NFSI_TO_INODE(struct nfs_inode *nfsi)
Not crazy about the name here. Maybe VFS_I()? ntfs and xfs have private
helpers named VFS_I that do something similar, so it seems more
idiomatic.
> +{
> +#ifdef CONFIG_NFS_FSCACHE
> + return &nfsi->netfs.inode;
> +#else
> + return &nfsi->vfs_inode;
> +#endif
> +}
> +
These are hard to read (and reason about) defined this way. I think I'd
rather see less #ifdef-ery here. Instead of having the #ifdefs inside
the functions, do:
#ifdef CONFIG_NFS_FSCACHE
/* define all static inlines here for fscache case */
#else
/* and here for the !fscache case */
#endif
> static inline struct nfs_inode *NFS_I(const struct inode *inode)
> {
> +#ifdef CONFIG_NFS_FSCACHE
> + return container_of(inode, struct nfs_inode, netfs.inode);
> +#else
> return container_of(inode, struct nfs_inode, vfs_inode);
> +#endif
> }
>
> static inline struct nfs_server *NFS_SB(const struct super_block *s)
> @@ -328,15 +349,6 @@ static inline int NFS_STALE(const struct inode *inode)
> return test_bit(NFS_INO_STALE, &NFS_I(inode)->flags);
> }
>
> -static inline struct fscache_cookie *nfs_i_fscache(struct inode *inode)
> -{
> -#ifdef CONFIG_NFS_FSCACHE
> - return NFS_I(inode)->fscache;
> -#else
> - return NULL;
> -#endif
> -}
> -
> static inline __u64 NFS_FILEID(const struct inode *inode)
> {
> return NFS_I(inode)->fileid;
--
Jeff Layton
> + ctx = get_nfs_open_context(nfs_file_open_context(file));
>
> - nfs_pageio_init_read(&desc.pgio, inode, false,
> + nfs_pageio_init_read(&pgio, inode, false,
>&nfs_async_read_completion_ops);
>
> while ((page = r
Signed-off-by: Jeff Layton
---
fs/fscache/cookie.c| 2 ++
include/trace/events/fscache.h | 2 ++
2 files changed, 4 insertions(+)
diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index 26a6d395737a..451d8a077e12 100644
--- a/fs/fscache/cookie.c
+++ b/fs/fscache/cookie.c
usly clear.
Also, ensure that we attempt to clear the bit when the cookie is
"FAILED" and put the reference to avoid an access leak.
Suggested-by: David Howells
Signed-off-by: Jeff Layton
---
fs/fscache/cookie.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a
usly clear.
Suggested-by: David Howells
Signed-off-by: Jeff Layton
---
fs/fscache/cookie.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index 74920826d8f6..8b1499be3d62 100644
--- a/fs/fscache/cookie.c
+++ b/fs/fscache/cookie.c
> + folio_put(*folio);
Don't you also need this?
*folio = NULL;
> + return -ESTALE;
> + }
> +
> + return 0;
> }
>
> static void afs_free_request(struct netfs_io_request *rreq)
--
Jeff Layton
> hlist_bl_unlock(h);
>
> - if (test_bit(FSCACHE_VOLUME_ACQUIRE_PENDING, &candidate->flags))
> + if (fscache_is_acquire_pending(candidate))
> fscache_wait_on_volume_collision(candidate, collidee_debug_id);
> return true;
>
Nice catch:
Reviewed-by: Jeff Layton
On Tue, 2022-07-05 at 14:21 +0100, David Howells wrote:
> Jeff Layton wrote:
>
> > I don't know here... I think it might be better to just expect that when
> > this function returns an error that the folio has already been unlocked.
> > Doing it this way will mean
R11: 0246 R12: 55944d3962f0
> kernel: R13: 0048 R14: 7f49905bb880 R15: 0048
> kernel:
>
>
>
> Xiubo Li (2):
> netfs: do not unlock and put the folio twice
> afs: unlock the folio when vnode is marked deleted
>
&
On Fri, 2022-07-01 at 10:29 +0800, xiu...@redhat.com wrote:
> From: Xiubo Li
>
> The lower layer filesystem should always make sure the folio is
> locked, and leave the unlock and put of the folio to the netfs layer.
>
> URL: https://tracker.ceph.com/issues/56423
> Signed-off-by: Xiubo Li
> ---
> fs/netf
ext4/xfs/btrfs/etc. but it always asks for
> whole pages to be written or read.
>
> Fixes: 7ff5062079ef ("iov_iter: Add ITER_XARRAY")
> Reported-by: Jeff Layton
> Signed-off-by: David Howells
> cc: Alexander Viro
> cc: Dominique Martinet
> cc: Mike Marshall
> cc:
> Also, rename ->cleanup() to ->free_request() to match the ->init_request()
> function.
>
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: Steve French
> cc: Dominique Martinet
> cc: Jeff Layton
> cc: David Wysochanski
> cc: Ilya Dryomov
>
On Thu, 2022-05-19 at 15:16 +0100, David Howells wrote:
> Export netfs_put_subrequest() and a couple of tracepoints.
>
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: linux-cachefs@redhat.com
> ---
>
> fs/netfs/main.c|3 +++
> fs/netfs/objects.c |
There's no reason that userland can't request to read beyond the EOF. A
short read is expected in that situation.
Signed-off-by: Jeff Layton
---
fs/netfs/io.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index fc3e1601..b94f2d27127e 100644
Signed-off-by: Jeff Layton
---
fs/ceph/addr.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e7a7b5d29c7d..0726494a0981 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -190,6 +190,8 @@ static bool ceph_netfs_clamp_length(struct
Signed-off-by: Jeff Layton
---
fs/ceph/inode.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index 1dad69a0ab70..8ea1b53b6ce9 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -450,6 +450,7 @@ static int ceph_fill_fragtree(struct
Signed-off-by: Jeff Layton
---
fs/ceph/addr.c | 41 +
fs/ceph/file.c | 3 +--
2 files changed, 30 insertions(+), 14 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 0726494a0981..bc575bbbf8b7 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph
re
feeding into -next? Then we can just base our -next feeder branch onto
yours.
[1]:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=netfs-lib
David Howells (1):
ceph: Use the provided iterator in ceph_netfs_issue_op()
Jeff Layton (4):
netfs: don't error out o
, volume->vcookie->debug_id, len);
>
> - len += sizeof(*buf);
> - buf = kmalloc(len, GFP_KERNEL);
> + buf = kmalloc(sizeof(*buf) + len, GFP_KERNEL);
> if (!buf)
> return false;
> buf->reserved = cpu_to_be32(0);
I hit this bug earlier today too.
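[Editorial note: the hazard the patch removes is that mutating len before the allocation makes every later use of len silently include the header. An illustrative userspace version with a made-up flexible-array struct; this is not the cachefiles coherency buffer itself.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Invented stand-in for a header-plus-payload buffer. */
struct demo_buf {
	uint32_t reserved;
	char	 data[];	/* 'len' bytes of payload follow */
};

/*
 * Allocate header + payload without clobbering len, mirroring the fix:
 *	kmalloc(sizeof(*buf) + len)
 * instead of
 *	len += sizeof(*buf); kmalloc(len);
 * so len still means "payload length" for every use after this point.
 */
static struct demo_buf *demo_buf_alloc(const void *payload, size_t len)
{
	struct demo_buf *buf = malloc(sizeof(*buf) + len);

	if (!buf)
		return NULL;
	buf->reserved = 0;
	memcpy(buf->data, payload, len);
	return buf;
}
```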
Reviewed-and-Tested-by: Jeff Layton
requirements[2].
>
> ver #2)
> - Adjust documentation to match.
> - Use "#if IS_ENABLED()" in netfs_i_cookie(), not "#ifdef".
> - Move the cap check from ceph_readahead() to ceph_init_request() to be
>called from netfslib.
>
c | 160 +++
> fs/netfs/read_helper.c | 1205 ---
> fs/netfs/stats.c |1 -
> fs/nfs/fscache.c|8 -
> include/linux/fscache.h | 14 +
> include/linux/netfs.h | 162 ++-
> include/trace/events/cachefiles.h |6 +-
> include/trace/events/netfs.h| 190 ++-
> 35 files changed, 1867 insertions(+), 1628 deletions(-)
> create mode 100644 fs/netfs/buffered_read.c
> create mode 100644 fs/netfs/io.c
> create mode 100644 fs/netfs/main.c
> create mode 100644 fs/netfs/objects.c
> delete mode 100644 fs/netfs/read_helper.c
>
>
I ran this through xfstests on ceph, with fscache enabled and it seemed
to do fine.
Tested-by: Jeff Layton
On Fri, 2022-03-11 at 13:49 +, David Howells wrote:
> Jeff Layton wrote:
>
> > > +static int ceph_init_request(struct netfs_io_request *rreq, struct file
> > > *file)
> > > +{
> > > + struct inode *inode = rreq->inode;
> > > + int go
> diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
> index f00e3e1821c8..beec534cbaab 100644
> --- a/include/trace/events/netfs.h
> +++ b/include/trace/events/netfs.h
> @@ -56,17 +56,18 @@
> EM(netfs_fail_check_write_begin,"check-write-begin")\
> EM(netfs_fail_copy_to_cache,"copy-to-cache")\
> EM(netfs_fail_read, "read") \
> - EM(netfs_fail_short_readpage, "short-readpage") \
> - EM(netfs_fail_short_write_begin,"short-write-begin")\
> + EM(netfs_fail_short_read, "short-read") \
> E_(netfs_fail_prepare_write,"prep-write")
>
> #define netfs_rreq_ref_traces\
> EM(netfs_rreq_trace_get_hold, "GET HOLD ") \
> EM(netfs_rreq_trace_get_subreq, "GET SUBREQ ") \
> EM(netfs_rreq_trace_put_complete, "PUT COMPLT ") \
> + EM(netfs_rreq_trace_put_discard,"PUT DISCARD") \
> EM(netfs_rreq_trace_put_failed, "PUT FAILED ") \
> EM(netfs_rreq_trace_put_hold, "PUT HOLD ") \
> EM(netfs_rreq_trace_put_subreq, "PUT SUBREQ ") \
> + EM(netfs_rreq_trace_put_zero_len, "PUT ZEROLEN") \
> E_(netfs_rreq_trace_new,"NEW")
>
> #define netfs_sreq_ref_traces\
>
>
Reviewed-by: Jeff Layton
On Thu, 2022-03-10 at 16:18 +, David Howells wrote:
> Add a netfs_i_context struct that should be included in the network
> filesystem's own inode struct wrapper, directly after the VFS's inode
> struct, e.g.:
>
> struct my_inode {
> struct {
> /* Thes
> &got);
> - if (ret < 0)
> - dout("start_read %p, error getting cap\n", inode);
> - else if (!(got & want))
> - dout("start_read %p, no cache cap\n", inode);
> -
> - if (ret <= 0)
> - return;
> - }
> - netfs_readahead(ractl, &ceph_netfs_read_ops, (void *)(uintptr_t)got);
> + netfs_readahead(ractl, &ceph_netfs_read_ops, NULL);
> }
>
> #ifdef CONFIG_CEPH_FSCACHE
>
>
--
Jeff Layton
> - op->store.i_size = max(pos + size, i_size);
> + op->store.i_size = max(pos + size, ictx->remote_i_size);
Ahh ok, so if i_size is larger than is represented by this write, you'll
have a zeroed out region until writeback catches up. Makes sense.
> op->store.laundering = laundering;
> op->mtime = vnode->vfs_inode.i_mtime;
> op->flags |= AFS_OPERATION_UNINTR;
>
>
Reviewed-by: Jeff Layton
notifications of size changes from the MDS. We'll have to consider
how to integrate this with what it does. Probably this will replace one
(or more?) of its fields.
We may need to offer up some guidance wrt locking.
Reviewed-by: Jeff Layton
is usable whether or not caching is enabled.
> - */
> -int netfs_write_begin(struct file *file, struct address_space *mapping,
> - loff_t pos, unsigned int len, unsigned int aop_flags,
> - struct folio **_folio, void **_fsdata)
> -{
> - struct netf
On Wed, 2022-03-09 at 19:23 +, David Howells wrote:
> Jeff Layton wrote:
>
> > > Add a netfs_i_context struct that should be included in the network
> > > filesystem's own inode struct wrapper, directly after the VFS's inode
> > > s
> --- a/include/trace/events/netfs.h
> +++ b/include/trace/events/netfs.h
> @@ -56,17 +56,18 @@
> EM(netfs_fail_check_write_begin,"check-write-begin")\
> EM(netfs_fail_copy_to_cache,"copy-to-cache")\
> EM(netfs_fail_read, "read") \
> - EM(netfs_fail_short_readpage, "short-readpage") \
> - EM(netfs_fail_short_write_begin,"short-write-begin")\
> + EM(netfs_fail_short_read, "short-read") \
> E_(netfs_fail_prepare_write,"prep-write")
>
> #define netfs_rreq_ref_traces\
> EM(netfs_rreq_trace_get_hold, "GET HOLD ") \
> EM(netfs_rreq_trace_get_subreq, "GET SUBREQ ") \
> EM(netfs_rreq_trace_put_complete, "PUT COMPLT ") \
> + EM(netfs_rreq_trace_put_discard,"PUT DISCARD") \
> EM(netfs_rreq_trace_put_failed, "PUT FAILED ") \
> EM(netfs_rreq_trace_put_hold, "PUT HOLD ") \
> EM(netfs_rreq_trace_put_subreq, "PUT SUBREQ ") \
> + EM(netfs_rreq_trace_put_zero_len, "PUT ZEROLEN") \
> E_(netfs_rreq_trace_new,"NEW")
>
> #define netfs_sreq_ref_traces\
>
>
Seems reasonable otherwise.
--
Jeff Layton
On Wed, 2022-03-09 at 15:49 +, David Howells wrote:
> Jeff Layton wrote:
>
> > Should you undef EM and E_ here after creating these?
>
> Maybe. So far it hasn't mattered...
>
I wasn't suggesting there was a bug there, more just a code hygiene
thing. With
On Tue, 2022-03-08 at 23:28 +, David Howells wrote:
> Add a netfs_i_context struct that should be included in the network
> filesystem's own inode struct wrapper, directly after the VFS's inode
> struct, e.g.:
>
> struct my_inode {
> struct {
> struct
goto error;
> + }
> rreq->no_unlock_folio = folio_index(folio);
> __set_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags);
> netfs_priv = NULL;
> diff --git a/include/linux/netfs.h b/include/linux/netfs.h
> index 7dc741d9b21b..4b99e38f73d9 100644
> --- a/include/linux/netfs.h
> +++ b/include/linux/netfs.h
> @@ -193,7 +193,7 @@ struct netfs_io_request {
> */
> struct netfs_request_ops {
> bool (*is_cache_enabled)(struct inode *inode);
> - void (*init_request)(struct netfs_io_request *rreq, struct file *file);
> + int (*init_request)(struct netfs_io_request *rreq, struct file *file);
> int (*begin_cache_operation)(struct netfs_io_request *rreq);
> void (*expand_readahead)(struct netfs_io_request *rreq);
> bool (*clamp_length)(struct netfs_io_subrequest *subreq);
>
>
Reviewed-by: Jeff Layton
trace, what)
>),
>
> @@ -182,8 +182,8 @@ TRACE_EVENT(netfs_sreq,
>
> TP_printk("R=%08x[%u] %s %s f=%02x s=%llx %zx/%zx e=%d",
> __entry->rreq, __entry->index,
> - __print
(
> - struct netfs_io_request *rreq)
> -{
> - struct netfs_io_subrequest *subreq;
> -
> - subreq = kzalloc(sizeof(struct netfs_io_subrequest), GFP_KERNEL);
> - if (subreq) {
> - INIT_LIST_HEAD(&subreq->rreq_link);
> - refcount_set(&subreq->usage, 2);
> - subreq->rreq = rreq;
> - netfs_get_request(rreq);
> - netfs_stat(&netfs_n_rh_sreq);
> - }
> -
> - return subreq;
> -}
> -
> -static void netfs_get_subrequest(struct netfs_io_subrequest *subreq)
> -{
> - refcount_inc(&subreq->usage);
> -}
> -
> -static void __netfs_put_subrequest(struct netfs_io_subrequest *subreq,
> -bool was_async)
> -{
> - struct netfs_io_request *rreq = subreq->rreq;
> -
> - trace_netfs_sreq(subreq, netfs_sreq_trace_free);
> - kfree(subreq);
> - netfs_stat_d(&netfs_n_rh_sreq);
> - netfs_put_request(rreq, was_async);
> -}
> -
> /*
> * Clear the unread part of an I/O request.
> */
> @@ -558,7 +442,7 @@ static void netfs_rreq_assess(struct netfs_io_request
> *rreq, bool was_async)
> netfs_rreq_completed(rreq, was_async);
> }
>
> -static void netfs_rreq_work(struct work_struct *work)
> +void netfs_rreq_work(struct work_struct *work)
> {
> struct netfs_io_request *rreq =
> container_of(work, struct netfs_io_request, work);
>
>
Reviewed-by: Jeff Layton
struct inode*inode; /* The file being accessed */
> struct address_space*mapping; /* The mapping being accessed */
> struct netfs_cache_resources cache_resources;
> - struct list_headsubrequests;/* Requests to fetch I/O from
> disk or net */
> + struct list_headsubrequests;/* Contributory I/O operations
> */
> void*netfs_priv;/* Private data for the netfs */
> unsigned intdebug_id;
> - atomic_tnr_outstanding; /* Number of read ops in
> progress */
> - atomic_tnr_copy_ops;/* Number of write ops in
> progress */
> + atomic_tnr_outstanding; /* Number of ops in progress */
> + atomic_tnr_copy_ops;/* Number of copy-to-cache ops
> in progress */
> size_t submitted; /* Amount submitted for I/O so
> far */
> size_t len;/* Length of the request */
> short error; /* 0 or error that occurred */
> @@ -171,7 +171,7 @@ struct netfs_io_request {
> refcount_t usage;
> unsigned long flags;
> #define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short
> or with error */
> -#define NETFS_RREQ_WRITE_TO_CACHE1 /* Need to write to the cache */
> +#define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */
> #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio
> on completion */
> #define NETFS_RREQ_DONT_UNLOCK_FOLIOS3 /* Don't unlock the
> folios on completion */
> #define NETFS_RREQ_FAILED4 /* The request failed */
> @@ -188,7 +188,7 @@ struct netfs_request_ops {
> int (*begin_cache_operation)(struct netfs_io_request *rreq);
> void (*expand_readahead)(struct netfs_io_request *rreq);
> bool (*clamp_length)(struct netfs_io_subrequest *subreq);
> - void (*issue_op)(struct netfs_io_subrequest *subreq);
> + void (*issue_read)(struct netfs_io_subrequest *subreq);
> bool (*is_still_valid)(struct netfs_io_request *rreq);
> int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
>struct folio *folio, void **_fsdata);
>
>
Another (mostly) mechanical change...
Reviewed-by: Jeff Layton
> +enum netfs_read_trace { netfs_read_traces } __mode(byte);
> +enum netfs_rreq_trace { netfs_rreq_traces } __mode(byte);
> +enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte);
> +enum netfs_failure { netfs_failures } __mode(byte);
> +
Should you undef EM and E_ here after creating these?
> +#endif
>
> /*
> * Export enum symbols via userspace.
>
>
Looks fine otherwise:
Acked-by: Jeff Layton
On Tue, 2022-03-08 at 23:25 +, David Howells wrote:
> Rename netfs_read_*request to netfs_io_*request so that the same structures
> can be used for the write helpers too.
>
> perl -p -i -e 's/netfs_read_(request|subrequest)/netfs_io_$1/g' \
>`git grep -l 'netfs_read_\(sub\|\)request'`
> pe
the netfs lib
> + * @cres: The cache resources for the read operation
> + *
> + * Clean up the resources at the end of the read request.
> + */
> +static inline void fscache_end_operation(struct netfs_cache_resources *cres)
> +{
> + const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
> +
> + if (ops)
> + ops->end_operation(cres);
> +}
> +
> /**
> * fscache_read - Start a read from the cache.
> * @cres: The cache resources to use
>
>
Reviewed-by: Jeff Layton
erency data at the moment.
>
> Fixes: 32e150037dce ("fscache, cachefiles: Store the volume coherency data")
> Reported-by: Rohith Surabattula
> Signed-off-by: David Howells
> cc: Steve French
> cc: Jeff Layton
> cc: linux-c...@vger.kernel.org
> cc: linux-cache
if (ret < 0) {
> trace_cachefiles_io_error(object, file_inode(file), ret,
>
> cachefiles_trace_fallocate_error);
>
>
Looks good!
I could often force the cache to fill up with the right fsstress run on
ceph, but with this in place I'm on the 5th
this change it
> shows "VOL OK" instead.
>
> Fixes: 32e150037dce ("fscache, cachefiles: Store the volume coherency data")
> Signed-off-by: David Howells
> cc: Jeff Layton
> cc: Steve French
> cc: linux-c...@vger.kernel.org
> cc: linux-cachefs@redhat.
| 126 +--
> fs/cifs/fscache.h | 79 ---
> include/linux/netfs.h | 7 +
> 7 files changed, 322 insertions(+), 188 deletions(-)
>
>
Acked-by: Jeff Layton
> + */
> + cache->tag = kstrdup("CacheFiles", GFP_KERNEL);
> + if (!cache->tag)
> + return -ENOMEM;
> + }
> +
> return cachefiles_add_cache(cache);
> }
>
>
>
Reviewed-by: Jeff Layton
> +++ b/fs/cachefiles/io.c
> @@ -264,7 +264,7 @@ static int cachefiles_write(struct netfs_cache_resources
> *cres,
> ki->term_func = term_func;
> ki->term_func_priv = term_func_priv;
> ki->was_async = true;
> - ki->b_writing = (len + (1 << cache->bshift)) >> cache->bshift;
> + ki->b_writing = (len + (1 << cache->bshift) - 1) >>
> cache->bshift;
>
> if (ki->term_func)
> ki->iocb.ki_complete = cachefiles_write_complete;
>
>
Reviewed-by: Jeff Layton
t;o=%08x B=%lx",
> + __entry->obj, __entry->inode)
> + );
> +
> TRACE_EVENT(cachefiles_mark_inactive,
> TP_PROTO(struct cachefiles_object *obj,
>struct inode *inode),
>
>
Reviewed-by: Jeff Layton
__entry->obj,
> __entry->backer,
> __print_symbolic(__entry->why, cachefiles_trunc_traces),
> @@ -549,7 +569,7 @@ TRACE_EVENT(cachefiles_mark_active,
> __entry->inode = inode->i_ino;
> ),
>
> - TP_printk("o=%08x i=%lx",
> + TP_printk("o=%08x B=%lx",
> __entry->obj, __entry->inode)
> );
>
> @@ -570,7 +590,7 @@ TRACE_EVENT(cachefiles_mark_inactive,
> __entry->inode = inode->i_ino;
> ),
>
> - TP_printk("o=%08x i=%lx",
> + TP_printk("o=%08x B=%lx",
> __entry->obj, __entry->inode)
> );
>
> @@ -594,7 +614,7 @@ TRACE_EVENT(cachefiles_vfs_error,
> __entry->where = where;
> ),
>
> - TP_printk("o=%08x b=%08x %s e=%d",
> + TP_printk("o=%08x B=%x %s e=%d",
> __entry->obj,
> __entry->backer,
> __print_symbolic(__entry->where, cachefiles_error_traces),
> @@ -621,7 +641,7 @@ TRACE_EVENT(cachefiles_io_error,
> __entry->where = where;
> ),
>
> - TP_printk("o=%08x b=%08x %s e=%d",
> + TP_printk("o=%08x B=%x %s e=%d",
> __entry->obj,
> __entry->backer,
> __print_symbolic(__entry->where, cachefiles_error_traces),
>
>
Reviewed-by: Jeff Layton
Signed-off-by: Jeff Layton
---
fs/fscache/cookie.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index 9bb1ab5fe5ed..f9ebaaca5eb5 100644
--- a/fs/fscache/cookie.c
+++ b/fs/fscache/cookie.c
@@ -372,17 +372,22 @@ static
e8e7d..44da7646f789 100644
> --- a/fs/cifs/file.c
> +++ b/fs/cifs/file.c
> @@ -376,8 +376,6 @@ static void cifsFileInfo_put_final(struct cifsFileInfo
> *cifs_file)
> struct cifsLockInfo *li, *tmp;
> struct super_block *sb = inode->i_sb;
>
> - cifs_fscache_release_inode_cookie(inode);
> -
> /*
>* Delete any outstanding lock records. We'll lose them when the file
>* is closed anyway.
>
Looks good.
Acked-by: Jeff Layton
, 1);
> __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
> - ops->init_rreq(rreq, file);
> + if (ops->init_rreq)
> + ops->init_rreq(rreq, file);
> netfs_stat(&netfs_n_rh_rreq);
> }
>
This looks reasonable to me, since ceph doesn't need anything here
anyway.
Reviewed-by: Jeff Layton
shed before being freed.
> - Fixed fscache to use remove_proc_subtree() to remove /proc/fs/fscache/.
>
> ver #2:
> - Fix an unused-var warning due to CONFIG_9P_FSCACHE=n.
> - Use gfpflags_allow_blocking() rather than using flag directly.
> - Fixed some error logging in a couple
test_bit(CACHEFILES_OBJECT_USING_TMPFILE, &object->flags))
> {
> + atomic_long_add(inode->i_blocks, &cache->b_released);
> + if (atomic_inc_return(&cache->f_released))
> + cachefiles_state_changed(c
t; + name[0] = 'E';
> + name[1] = '0' + pad;
> + len = 2;
> + kend = key + keylen;
> + do {
> + acc = *key++;
> + if (key < kend) {
> + acc |= *key++ << 8;
> + if (key < kend)
> + acc |= *key++ << 16;
> + }
> +
> + name[len++] = cachefiles_charmap[acc & 63];
> + acc >>= 6;
> + name[len++] = cachefiles_charmap[acc & 63];
> + acc >>= 6;
> + name[len++] = cachefiles_charmap[acc & 63];
> + acc >>= 6;
> + name[len++] = cachefiles_charmap[acc & 63];
> + } while (key < kend);
It might be good to eventually consolidate this code with the base64
scheme that fscrypt uses. Are they compatible? If so, then that can be
done in a later merge.
> +
> +success:
> + name[len] = 0;
> + object->d_name = name;
> + object->d_name_len = len;
> + _leave(" = %s", object->d_name);
> + return true;
> +}
>
>
--
Jeff Layton
struct fscache_cache_ops cachefiles_cache_ops = {
> + .name = "cachefiles",
> +};
> diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
> index 48768a3ab105..77e874c2bbe7 100644
> --- a/fs/cachefiles/internal.h
> +++ b/fs/cachefiles/internal.h
> @@
(unsigned int, obj )
> + __field(ino_t, inode )
> + ),
> +
> + TP_fast_assign(
> + __entry->obj= obj ? obj->debug_id : 0;
> + __entry->inode = inode->i_ino;
> +),
> +
> + TP_printk("o=%08x i=%lx",
> + __entry->obj, __entry->inode)
> + );
> +
> TRACE_EVENT(cachefiles_vfs_error,
> TP_PROTO(struct cachefiles_object *obj, struct inode *backer,
>int error, enum cachefiles_error_trace where),
>
>
--
Jeff Layton
))
> + clear_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags);
> +}
> +
> #endif /* _LINUX_FSCACHE_H */
>
>
Is this logic correct?
FSCACHE_COOKIE_HAVE_DATA gets set in cachefiles_write_complete, but will
that ever be called on a cookie that has no data? Will we ever call
cachefiles_write at all when there is no data to be written?
--
Jeff Layton
shed before being freed.
> - Fixed fscache to use remove_proc_subtree() to remove /proc/fs/fscache/.
>
> ver #2:
> - Fix an unused-var warning due to CONFIG_9P_FSCACHE=n.
> - Use gfpflags_allow_blocking() rather than using flag directly.
> - Fixed some error logging in a couple
ork,
> fscache_cookie_see_active,
> + fscache_cookie_see_lru_discard,
> + fscache_cookie_see_lru_do_one,
> fscache_cookie_see_relinquish,
> fscache_cookie_see_withdraw,
> fscache_cookie_see_work,
> @@ -68,6 +73,7 @@ enum fscache_access_trace {
> fscache_access_acquire_volume_end,
> fscache_access_cache_pin,
> fscache_access_cache_unpin,
> + fscache_access_lookup_cookie,
> fscache_access_lookup_cookie_end,
> fscache_access_lookup_cookie_end_failed,
> fscache_access_relinquish_volume,
> @@ -110,13 +116,18 @@ enum fscache_access_trace {
> EM(fscache_cookie_discard, "DISCARD ")\
> EM(fscache_cookie_get_hash_collision, "GET hcoll")\
> EM(fscache_cookie_get_end_access, "GQ endac")\
> + EM(fscache_cookie_get_lru, "GET lru ")\
> + EM(fscache_cookie_get_use_work, "GQ use ")\
> EM(fscache_cookie_new_acquire, "NEW acq ")\
> EM(fscache_cookie_put_hash_collision, "PUT hcoll")\
> + EM(fscache_cookie_put_lru, "PUT lru ")\
> EM(fscache_cookie_put_over_queued, "PQ overq")\
> EM(fscache_cookie_put_relinquish, "PUT relnq")\
> EM(fscache_cookie_put_withdrawn,"PUT wthdn")\
> EM(fscache_cookie_put_work, "PQ work ")\
> EM(fscache_cookie_see_active, "- activ")\
> + EM(fscache_cookie_see_lru_discard, "- x-lru")\
> + EM(fscache_cookie_see_lru_do_one, "- lrudo")\
> EM(fscache_cookie_see_relinquish, "- x-rlq")\
> EM(fscache_cookie_see_withdraw, "- x-wth")\
> E_(fscache_cookie_see_work, "- work ")
> @@ -126,6 +137,7 @@ enum fscache_access_trace {
> EM(fscache_access_acquire_volume_end, "END acq_vol")\
> EM(fscache_access_cache_pin,"PIN cache ")\
> EM(fscache_access_cache_unpin, "UNPIN cache ")\
> + EM(fscache_access_lookup_cookie,"BEGIN lookup ")\
> EM(fscache_access_lookup_cookie_end,"END lookup ")\
> EM(fscache_access_lookup_cookie_end_failed,"END lookupf") \
> EM(fscache_access_relinquish_volume,"BEGIN rlq_vol")\
>
>
--
Jeff Layton
uct fscache_cookie
> *cookie,
> enum fscache_cookie_trace where);
> extern void fscache_end_cookie_access(struct fscache_cookie *cookie,
> enum fscache_access_trace why);
> -extern void fscache_set_cookie_s
he_cache_put_alloc_volume, "PUT alvol")\
> EM(fscache_cache_put_cache, "PUT cache")\
> EM(fscache_cache_put_prep_failed, "PUT pfail")\
> - E_(fscache_cache_put_relinquish,"PUT relnq")
> + EM(fscache_cache_put_relinquish,"PUT relnq")\
> + E_(fscache_cache_put_volume,"PUT vol ")
> +
> +#define fscache_volume_traces
> \
> + EM(fscache_volume_collision,"*COLLIDE*")\
> + EM(fscache_volume_get_cookie, "GET cook ")\
> + EM(fscache_volume_get_create_work, "GET creat")\
> + EM(fscache_volume_get_hash_collision, "GET hcoll")\
> + EM(fscache_volume_free, "FREE ")\
> + EM(fscache_volume_new_acquire, "NEW acq ")\
> + EM(fscache_volume_put_cookie, "PUT cook ")\
> + EM(fscache_volume_put_create_work, "PUT creat")\
> + EM(fscache_volume_put_hash_collision, "PUT hcoll")\
> + EM(fscache_volume_put_relinquish, "PUT relnq")\
> + EM(fscache_volume_see_create_work, "SEE creat")\
> + E_(fscache_volume_see_hash_wake,"SEE hwake")
>
> /*
> * Export enum symbols via userspace.
> @@ -50,6 +83,7 @@ enum fscache_cache_trace {
> #define E_(a, b) TRACE_DEFINE_ENUM(a);
>
> fscache_cache_traces;
> +fscache_volume_traces;
>
> /*
> * Now redefine the EM() and E_() macros to map the enums to the strings that
> @@ -86,6 +120,31 @@ TRACE_EVENT(fscache_cache,
> __entry->usage)
> );
>
> +TRACE_EVENT(fscache_volume,
> + TP_PROTO(unsigned int volume_debug_id,
> + int usage,
> + enum fscache_volume_trace where),
> +
> + TP_ARGS(volume_debug_id, usage, where),
> +
> + TP_STRUCT__entry(
> + __field(unsigned int, volume )
> + __field(int,usage )
> + __field(enum fscache_volume_trace, where )
> + ),
> +
> + TP_fast_assign(
> + __entry->volume = volume_debug_id;
> + __entry->usage = usage;
> + __entry->where = where;
> +),
> +
> + TP_printk("V=%08x %s u=%d",
> + __entry->volume,
> + __print_symbolic(__entry->where, fscache_volume_traces),
> + __entry->usage)
> + );
> +
> #endif /* _TRACE_FSCACHE_H */
>
> /* This part must be outside protection */
>
>
--
Jeff Layton
d their string mappings for display.
> */
> +#define fscache_cache_traces \
> + EM(fscache_cache_collision, "*COLLIDE*")\
> + EM(fscache_cache_get_acquire, "GET acq ")\
> + EM(fscache_cache_new_acquire, "NEW acq ")\
> + EM(fscache_cache_put_cache, "PUT cache")\
> + EM(fscache_cache_put_prep_failed, "PUT pfail")\
> + E_(fscache_cache_put_relinquish,"PUT relnq")
>
> /*
> * Export enum symbols via userspace.
> @@ -33,6 +49,8 @@
> #define EM(a, b) TRACE_DEFINE_ENUM(a);
> #define E_(a, b) TRACE_DEFINE_ENUM(a);
>
> +fscache_cache_traces;
> +
> /*
> * Now redefine the EM() and E_() macros to map the enums to the strings that
> * will be printed in the output.
> @@ -43,6 +61,31 @@
> #define E_(a, b) { a, b }
>
>
> +TRACE_EVENT(fscache_cache,
> + TP_PROTO(unsigned int cache_debug_id,
> + int usage,
> + enum fscache_cache_trace where),
> +
> + TP_ARGS(cache_debug_id, usage, where),
> +
> + TP_STRUCT__entry(
> + __field(unsigned int, cache )
> + __field(int,usage )
> + __field(enum fscache_cache_trace, where )
> + ),
> +
> + TP_fast_assign(
> + __entry->cache = cache_debug_id;
> + __entry->usage = usage;
> + __entry->where = where;
> +),
> +
> + TP_printk("C=%08x %s r=%d",
> + __entry->cache,
> + __print_symbolic(__entry->where, fscache_cache_traces),
> + __entry->usage)
> + );
> +
> #endif /* _TRACE_FSCACHE_H */
>
> /* This part must be outside protection */
>
>
--
Jeff Layton
patchset), and
also adds support for writing to the cache as well.
Jeff Layton (2):
ceph: conversion to new fscache API
ceph: add fscache writeback support
fs/ceph/Kconfig | 2 +-
fs/ceph/addr.c | 101 +-
fs/ceph/cache.c | 218 +-
When updating the backing store from the pagecache (à la writepage or
writepages), write to the cache first. This allows us to keep caching
files even while they are being written, as long as we have appropriate
caps.
Signed-off-by: Jeff Layton
---
fs/ceph/addr.c
, ensure we resize the cached data on truncates, and invalidate the
cache in response to the appropriate events. This will allow us to
plumb in write support later.
Signed-off-by: Jeff Layton
---
fs/ceph/Kconfig | 2 +-
fs/ceph/addr.c | 34
fs/ceph/cache.c | 218
v)
> - ops->cleanup(netfs_priv, mapping);
> + ops->cleanup(mapping, netfs_priv);
> _leave(" = %d", ret);
> return ret;
> }
Ouch, good catch.
Reviewed-by: Jeff Layton
On Mon, 2021-12-06 at 09:57 +, David Howells wrote:
> Jeff Layton wrote:
>
> > if (!(gfp & __GFP_DIRECT_RECLAIM) || !(gfp & __GFP_FS))
>
> There's a function for the first part of this:
>
> if (!gfpflags_allow_blocking(gfp))