On Thu, 2019-02-14 at 19:49 +0800, Marvin Zhang wrote:
> Hi Jeff,
> Another question is about Client Caching when disabling delegation.
> I set a breakpoint on nfs4_op_read, the OP_READ handler in
> nfs-ganesha. Then I read a file and found that the breakpoint hits
> only the first time, which means later read operations on this file do
> not trigger OP_READ; the data is served from the client-side cache. Is
> that right?

Yes. In the absence of a delegation, the client will periodically query
for the inode attributes, and will serve reads from the cache if it
looks like the file hasn't changed.

> I also checked the NFS client code in the Linux kernel. Only when
> cache_validity includes NFS_INO_INVALID_DATA will it send OP_READ
> again, like this:
>     if (nfsi->cache_validity & NFS_INO_INVALID_DATA) {
>         ret = nfs_invalidate_mapping(inode, mapping);
>     }
> Think about this scenario: client1 connects to ganesha1 and client2
> connects to ganesha2. I read /1.txt on client1, and client1 caches the
> data. Then I modify this file on client2. At that point, how does
> client1 know the file is modified, and how does NFS_INO_INVALID_DATA
> get added to cache_validity?

Once you modify the file on client2, ganesha2 will request the necessary
caps from the ceph MDS, and client1 will have its caps revoked. ganesha2
will then make the change.

When client1 reads again it will issue a GETATTR against the file [1].
ganesha1 will then request caps to do the getattr, which will end up
revoking ganesha2's caps. client1 will then see the change in attributes
(the change attribute and mtime, most likely) and will invalidate the
mapping, causing it to reissue a READ on the wire.

[1]: There may be a window of time after you change the file on client2
where client1 doesn't see it. That's due to the fact that inode
attributes on the client are only revalidated after a timeout. You may
want to read over the DATA AND METADATA COHERENCE section of nfs(5) to
make sure you understand how the NFS client validates its caches.

Jeff Layton <jlay...@poochiereds.net>

ceph-users mailing list
