On Tue, Jul 9, 2013 at 6:33 AM, David Howells <[email protected]> wrote:
> Milosz Tanski <[email protected]> wrote:
>
>> It looks like both the cifs and NFS code do not bother with any
>> locking around cifs_fscache_set_inode_cookie. Is there no concern over
>> multiple open() calls racing to create the cookie in those
>> filesystems?
>
> Yeah... That's probably wrong. AFS obviates the need for special locking by
> doing it in afs_iget().
I'm going to add a mutex around the enable-cache / disable-cache paths in
the Ceph code, since the spinlock there is also wrong right now.
>
> Hmmm... I think I've just spotted what might be the cause of pages getting
> marked PG_fscache whilst belonging to the allocator.
>
> void nfs_fscache_set_inode_cookie(struct inode *inode, struct file *filp)
> {
>         if (NFS_FSCACHE(inode)) {
>                 nfs_fscache_inode_lock(inode);
>                 if ((filp->f_flags & O_ACCMODE) != O_RDONLY)
>                         nfs_fscache_disable_inode_cookie(inode);
>                 else
>                         nfs_fscache_enable_inode_cookie(inode);
>                 nfs_fscache_inode_unlock(inode);
>         }
> }
>
> can release the cookie whilst reads are in progress on it when an inode being
> read suddenly changes to an inode being written. We need some sort of
> synchronisation on that there.
So far my experience has been that synchronisation is the trickiest part
of implementing fscache for Ceph. Things work fine with a single shell
accessing data and break down under an HPC-style workload.
>
> David