If I add a file to the cephfs on one client, while the cephfs is exported 
via ganesha and NFS-mounted somewhere else, I can see the new file in the 
directory listing on the other NFS client, but trying to read it gives an 
Input/output error. Other, older files in the same directory read fine.

Has anyone else run into this?


nfs-ganesha-xfs-2.6.1-0.1.el7.x86_64
nfs-ganesha-2.6.1-0.1.el7.x86_64
nfs-ganesha-mem-2.6.1-0.1.el7.x86_64
nfs-ganesha-vfs-2.6.1-0.1.el7.x86_64
nfs-ganesha-rgw-2.6.1-0.1.el7.x86_64
nfs-ganesha-ceph-2.6.1-0.1.el7.x86_64

ceph-12.2.8-0.el7.x86_64
ceph-base-12.2.8-0.el7.x86_64
ceph-common-12.2.8-0.el7.x86_64
ceph-mds-12.2.8-0.el7.x86_64
ceph-mgr-12.2.8-0.el7.x86_64
ceph-mon-12.2.8-0.el7.x86_64
ceph-osd-12.2.8-0.el7.x86_64
ceph-radosgw-12.2.8-0.el7.x86_64
ceph-selinux-12.2.8-0.el7.x86_64
collectd-ceph-5.8.0-2.el7.x86_64
libcephfs2-12.2.8-0.el7.x86_64
nfs-ganesha-ceph-2.6.1-0.1.el7.x86_64
python-cephfs-12.2.8-0.el7.x86_64



## These are defaults for exports.  They can be overridden per-export.
EXPORT_DEFAULTS {
        ## Access type for clients.  Default is None, so some access must be
        ## given either here or in the export itself.
        Transports = TCP;
        Protocols = 4,3;
        Squash = root_id_squash;
        anonymous_uid = 500;
        anonymous_gid = 500;
        Access_Type = RW;
}

## Configure settings for the object handle cache
CACHEINODE {
        ## The point at which object cache entries will start being reused.
        Entries_HWMark = 1000000;
}
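
The EXPORT block itself is not pasted above; a minimal FSAL_CEPH export 
would look roughly like this (Export_ID, Path and Pseudo are placeholders, 
not the actual values used):

## Example only - minimal FSAL_CEPH export sketch, not the real config
EXPORT {
        Export_ID = 100;
        Path = "/";
        Pseudo = "/cephfs";
        FSAL {
                Name = CEPH;
        }
}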
