Looks like you're right, Jeff. Just tried to write into the dir and am now
getting the quota warning. So I guess it was the libcephfs cache as you
say. That's fine for me; I don't need the quotas to be too strict, just a
failsafe really.
Interestingly, if I create a new dir and set the same 100MB quota, I can
still write multiple files with "dd if=/dev/zero of=1G bs=1M count=1024
oflag=direct". Wouldn't that bypass the cache? I have the following in my
ganesha.conf, which I believe effectively disables Ganesha's caching:
CACHEINODE {
    Dir_Chunk = 0;
    NParts = 1;
    Cache_Size = 1;
}
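
For reference, the full sequence is roughly this (mount points are
illustrative; the quota is set via the fuse mount, the writes go through
the NFS mount):

mkdir /mnt/cephfs/quota-test
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/quota-test
dd if=/dev/zero of=/mnt/nfs/quota-test/file1 bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/mnt/nfs/quota-test/file2 bs=1M count=1024 oflag=direct

Both dd writes succeed, even though each one alone is well past the 100MB
quota.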
Thanks,
On Mon, Mar 4, 2019 at 2:50 PM Jeff Layton <[email protected]> wrote:
> On Mon, 2019-03-04 at 09:11 -0500, Jeff Layton wrote:
> > On Fri, 2019-03-01 at 15:49 +0000, David C wrote:
> > > Hi All
> > >
> > > I'm exporting cephfs with the CEPH_FSAL.
> > >
> > > I set the following on a dir:
> > >
> > > setfattr -n ceph.quota.max_bytes -v 100000000 /dir
> > > setfattr -n ceph.quota.max_files -v 10 /dir
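> > >
> > > For reference, the values can be read back to confirm they're set:
> > >
> > > getfattr -n ceph.quota.max_bytes /dir
> > > getfattr -n ceph.quota.max_files /dir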
> > >
> > > From an NFSv4 client, the quota.max_bytes appears to be completely
> > > ignored; I can go GBs over the quota in the dir. The quota.max_files
> > > DOES work, however: if I try to create more than 10 files, I get
> > > "Error opening file 'dir/new file': Disk quota exceeded" as expected.
> > >
> > > From a fuse mount on the same server that is running nfs-ganesha, I've
> > > confirmed ceph.quota.max_bytes is enforced; I'm unable to copy more
> > > than 100MB into the dir.
> > >
> > > According to [1] and [2] this should work.
> > >
> > > Cluster is Luminous 12.2.10
> > >
> > > Package versions on nfs-ganesha server:
> > >
> > > nfs-ganesha-rados-grace-2.7.1-0.1.el7.x86_64
> > > nfs-ganesha-2.7.1-0.1.el7.x86_64
> > > nfs-ganesha-vfs-2.7.1-0.1.el7.x86_64
> > > nfs-ganesha-ceph-2.7.1-0.1.el7.x86_64
> > > libcephfs2-13.2.2-0.el7.x86_64
> > > ceph-fuse-12.2.10-0.el7.x86_64
> > >
> > > My Ganesha export:
> > >
> > > EXPORT
> > > {
> > >     Export_ID=100;
> > >     Protocols = 4;
> > >     Transports = TCP;
> > >     Path = /;
> > >     Pseudo = /ceph/;
> > >     Access_Type = RW;
> > >     Attr_Expiration_Time = 0;
> > >     #Manage_Gids = TRUE;
> > >     Filesystem_Id = 100.1;
> > >     FSAL {
> > >         Name = CEPH;
> > >     }
> > > }
> > >
> > > My ceph.conf client section:
> > >
> > > [client]
> > > mon host = 10.10.10.210:6789, 10.10.10.211:6789, 10.10.10.212:6789
> > > client_oc_size = 8388608000
> > > #fuse_default_permission=0
> > > client_acl_type=posix_acl
> > > client_quota = true
> > > client_quota_df = true
> > >
> > > Related links:
> > >
> > > [1] http://tracker.ceph.com/issues/16526
> > > [2] https://github.com/nfs-ganesha/nfs-ganesha/issues/100
> > >
> > > Thanks
> > > David
> > >
> >
> > It looks like you're having ganesha do the mount as "client.admin", and
> > I suspect that may allow you to bypass quotas? You may want to try
> > creating a cephx user with fewer privileges, have ganesha connect as
> > that user, and see if it changes things?
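> >
> > Something along these lines would give ganesha a non-admin identity to
> > test with (caps and pool name are just an example):
> >
> > ceph auth get-or-create client.ganesha \
> >     mon 'allow r' \
> >     mds 'allow rw' \
> >     osd 'allow rw pool=cephfs_data'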
> >
>
> Actually, this may be wrong info.
>
> How are you testing the ability to write past the quota? Are you using
> O_DIRECT I/O? If not, it may just be that you're seeing the effect of the
> NFS client caching writes.
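>
> For example (path and size illustrative), something like:
>
> dd if=/dev/zero of=/mnt/nfs/dir/testfile bs=1M count=200 oflag=direct
>
> bypasses the client's page cache, whereas a buffered cp can appear to
> succeed long before any data reaches the server.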
> --
> Jeff Layton <[email protected]>
>
>