Re: [ceph-users] CephFS Quotas on Subdirectories

2019-02-26 Thread Ramana Raja
On Tue, Feb 26, 2019 at 1:38 PM, Hendrik Peyerl  wrote: 
> 
> Hello All,
> 
> I am having some trouble with CephFS quotas not working on subdirectories.
> I am using the following directory tree:
> 
> - customer
>   - project
> - environment
>   - application1
>   - application2
>   - applicationx
> 
> I set a quota on "environment", which works perfectly fine: the client sees
> the quota and does not breach it. The problem starts when I try to mount a
> subdirectory like "application1"; this directory does not have any quota at
> all. Is there a way to set a quota on "environment" so that the application
> directories cannot exceed it?

Can you set quotas on the application directories as well?

setfattr -n ceph.quota.max_bytes -v  /environment/application1
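
For example, a rough sketch (the 10 GiB value and the full client-visible
paths are just placeholders; adjust them to your layout):

# set a quota on each application directory under environment
setfattr -n ceph.quota.max_bytes -v 10737418240 /customer/project/environment/application1
setfattr -n ceph.quota.max_bytes -v 10737418240 /customer/project/environment/application2

# verify what is set
getfattr -n ceph.quota.max_bytes /customer/project/environment/application1

That way each mount point the clients use carries its own limit.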

> 
> Client Caps:
> 
> caps: [mds] allow rw path=/customer/project/environment
> caps: [mon] allow r
> caps: [osd] allow rw tag cephfs data=cephfs
> 
> 
> My Environment:
> 
> Ceph 13.2.4 on CentOS 7.6 with kernel 4.20.3-1 for both servers and clients
> 
> 
> Any help would be greatly appreciated.
> 
> Best Regards,
> 
> Hendrik


Re: [ceph-users] Problems getting nfs-ganesha with cephfs backend to work.

2017-07-19 Thread Ramana Raja
On 07/20/2017 at 12:02 AM, Daniel Gryniewicz  wrote:
> On 07/19/2017 05:27 AM, Micha Krause wrote:
> > Hi,
> > 
> >> Ganesha version 2.5.0.1 from the nfs-ganesha repo hosted on
> >> download.ceph.com 
> > 
> > I didn't know about that repo and compiled ganesha myself. The
> > developers in the #ganesha IRC channel pointed me to the libcephfs
> > version. After recompiling ganesha with a kraken libcephfs instead of a
> > jewel version, both errors went away.
> > 
> > I'm sure using a compiled version from the repo you mention would have
> > worked out of the box.
> > 
> > Micha Krause
> > 
> 
> These packages aren't quite ready for use yet; the packaging work is
> still underway. CCing Ali, who's doing the work.
> 
> Daniel

Ali told me that the rpm for Ganesha's CephFS FSAL (driver),
nfs-ganesha-ceph v2.5.0.1 (28th June) available at download.ceph.com,
was built using libcephfs2 from the Ceph Luminous release v12.0.3.

AFAIK, Ali is working on building the latest nfs-ganesha and nfs-ganesha-ceph
FSAL (v2.5.0.4) rpms and debs using libcephfs2 from the latest Luminous
release, 12.1.1. You can expect them at download.ceph.com/nfs-ganesha soon.
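
If you're building Ganesha yourself in the meantime, a quick sanity check
(just a sketch; the FSAL library path below is the usual packaged location
and may differ for a source build) is to confirm which libcephfs the CephFS
FSAL actually links against:

# path is an assumption; adjust for your build/install prefix
ldd /usr/lib64/ganesha/libfsalceph.so | grep cephfs

# and see which libcephfs package is installed
rpm -qa | grep libcephfs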

-Ramana


Re: [ceph-users] CephFS mount shows the entire cluster size as opposed to custom-cephfs-pool-size

2017-03-17 Thread Ramana Raja
On Friday, March 17, 2017 at 7:44 AM, Deepak Naidu  wrote:
> ... df always reports entire cluster size

... instead of the CephFS data pool's size.

This issue was recently recorded as a feature request:
http://tracker.ceph.com/issues/19109

> Not sure if this is still true with Jewel CephFS, i.e., that
> CephFS does not support any type of quota

If you are interested in setting quotas on directories
in a CephFS file system, you can do that. See the doc:
http://docs.ceph.com/docs/master/cephfs/quota/

You'd have to use the FUSE client (the kernel client
does not support quotas),
http://docs.ceph.com/docs/master/cephfs/fuse/
and set the client config option
client_quota = true
in Jewel releases (preferably the latest, v10.2.6).
An existing quota issue that was discussed recently
is here: http://tracker.ceph.com/issues/17939
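
Roughly, the client-side setup would look like this (a sketch only; the mon
address is the one from your df output, and the quota value and directory
are placeholders):

# ceph.conf on the client (Jewel):
#   [client]
#       client_quota = true

# mount with ceph-fuse instead of the kernel client
sudo ceph-fuse -m 192.168.1.101:6789 /mnt/cephfs

# limit a directory, e.g. to ~1.5 TiB
setfattr -n ceph.quota.max_bytes -v 1649267441664 /mnt/cephfs/data

IIRC, once a quota is set on the directory you mount, ceph-fuse's df reports
sizes based on that quota rather than the raw cluster size, which also works
around the df issue above.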

-Ramana

> 
> https://www.spinics.net/lists/ceph-users/msg05623.html
> 
> --
> Deepak
> 
> 
> From: Deepak Naidu
> Sent: Thursday, March 16, 2017 6:19 PM
> To: 'ceph-users'
> Subject: CephFS mount shows the entire cluster size as opposed to custom-cephfs-pool-size
> 
> Greetings,
> 
> I am trying to build a CephFS system. I have created my CRUSH map, which
> uses only certain OSDs, and I have pools created from them. But when I
> mount CephFS, the mount size is my entire Ceph cluster size. How is that?
> 
> Ceph cluster & pools:
> 
> [ceph-admin@storageAdmin ~]$ ceph df
> GLOBAL:
>     SIZE      AVAIL     RAW USED     %RAW USED
>     4722G     4721G     928M         0.02
> POOLS:
>     NAME                ID     USED     %USED     MAX AVAIL     OBJECTS
>     ecpool_disk1        22     0        0         1199G         0
>     rcpool_disk2        24     0        0         1499G         0
>     rcpool_cepfsMeta    25     4420     0         76682M        20
> 
> 
> CephFS volume & pool:
> 
> Here data0 is the volume/filesystem name,
> rcpool_cepfsMeta is the metadata pool, and
> rcpool_disk2 is the data pool.
> 
> [ceph-admin@storageAdmin ~]$ ceph fs ls
> name: data0, metadata pool: rcpool_cepfsMeta, data pools: [rcpool_disk2]
> 
> Command to mount CephFS:
> 
> sudo mount -t ceph mon1:6789:/ /mnt/cephfs/ -o name=admin,secretfile=admin.secret
> 
> Client host df -h output:
> 
> 192.168.1.101:6789:/  4.7T  928M  4.7T  1%  /mnt/cephfs
> 
> --
> Deepak
> 