Hello, John!

Thank you very much for your reply and for the information you provided! As
a follow-up to your email, a few more questions have arisen:

   - does the http://ceph.com/docs/master/cephfs/ page refer to the
   current release version (Giant) or to the HEAD (Hammer) version? If it
   refers to Giant -- are there any major improvements and fixes for CephFS
   included in the (upcoming) Hammer release?


   - the 'one filesystem per Ceph cluster' limitation sounds like a
   possible drawback from a flexibility point of view. Is this something
   which is currently being worked on, or planned?


   - regarding the system users created on CephFS -- since it is still not
   production ready (per the first bullet of your reply), I will try the
   Ceph block device (RBD) functionality instead, as it seems more
   appropriate for my needs; a rough sketch of what I plan to try is below.
   Of course, I will post any bugs to the bug tracker.
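
To make the plan concrete, here is a minimal sketch of the RBD workflow I
have in mind (the pool and image names, and the 10 GB size, are just
illustrative assumptions on my part):

    # create a 10 GB image in the default 'rbd' pool (--size is in MB)
    rbd create rbd/test-image --size 10240
    # map it to a local block device (appears as /dev/rbd0, with a
    # /dev/rbd/rbd/test-image symlink)
    rbd map rbd/test-image
    # put a regular filesystem on it and mount it
    mkfs.ext4 /dev/rbd/rbd/test-image
    mount /dev/rbd/rbd/test-image /mnt/test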

Thanks again!
Kind regards,
Bogdan


On Mon, Mar 23, 2015 at 12:47 PM, John Spray <[email protected]> wrote:

> On 22/03/2015 08:29, Bogdan SOLGA wrote:
>
>> Hello, everyone!
>>
>> I have a few questions related to the CephFS part of Ceph:
>>
>>   * is it production ready?
>>
> Like it says at http://ceph.com/docs/master/cephfs/: "CephFS currently
> lacks a robust ‘fsck’ check and repair function. Please use caution when
> storing important data as the disaster recovery tools are still under
> development".  That page was recently updated.
>
>>
>>   * can multiple CephFS be created on the same cluster? The CephFS
>>     creation <http://docs.ceph.com/docs/master/cephfs/createfs/> page
>>     describes how to create a CephFS using (at least) two pools, but
>>     the mounting <http://docs.ceph.com/docs/master/cephfs/kernel/>
>>     page does not refer to any pool, when mounting the FS;
>>
> Currently you can only have one filesystem per Ceph cluster.
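>
> For reference, creating that single filesystem looks roughly like this
> (the pool names and PG counts here are only illustrative):
>
>     ceph osd pool create cephfs_data 64
>     ceph osd pool create cephfs_metadata 64
>     ceph fs new cephfs cephfs_metadata cephfs_data
>
> The pools are only named at creation time; mounting (see the example
> further down) just points at the monitors, which is why the mounting
> page does not mention any pool.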
>
>>
>>   * besides the pool quota
>>     <http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-quotas>
>>     setting, are there any means by which a CephFS can have a quota
>>     defined? I have found this
>>     <https://wiki.ceph.com/Planning/Blueprints/Firefly/Cephfs_quota_support>
>>     document, which is from the Firefly release (and it seems only a
>>     draft), but no other references on the matter.
>>
> Yes, when using the fuse client there is a per-directory quota system
> available, although it is not guaranteed to be completely strict. I don't
> think there is any documentation for that, but you can see how to use it
> here:
> https://github.com/ceph/ceph/blob/master/qa/workunits/fs/quota/quota.sh
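>
> In short (following that script), quotas are set as virtual extended
> attributes on a directory of a fuse-mounted filesystem; the paths and
> values below are only an example:
>
>     # cap a directory at ~10 GB and 100k files
>     setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/somedir
>     setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/somedir
>     # read a limit back
>     getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir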
>
>>
>>   * this <http://docs.ceph.com/docs/master/man/8/mount.ceph/> page
>>     refers to 'mounting only a part of the namespace' -- what is the
>>     namespace referred in the page?
>>
> In this context namespace means the filesystem tree.  So "part of the
> namespace" means a subdirectory.
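>
> For example, assuming a monitor at 192.168.0.1 and a subdirectory
> /some/dir in the filesystem, mounting only that subtree would look like:
>
>     mount -t ceph 192.168.0.1:6789:/some/dir /mnt/mycephfs \
>         -o name=admin,secretfile=/etc/ceph/admin.secret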
>
>>
>>   * can a CephFS be mounted simultaneously from multiple clients?
>>
> Yes.
>
>>
>>   * what would be the recommended way of creating system users on a
>>     CephFS, if a quota is needed for each user? create a pool for each
>>     user? or?
>>
> No recommendation at this stage - it would be interesting for you to try
> some things and let us know how you get on.
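>
> One thing you could experiment with (just a sketch, combining the
> per-directory quotas mentioned above) is a directory per user, each with
> its own quota, rather than a pool per user:
>
>     mkdir -p /mnt/cephfs/home/alice
>     setfattr -n ceph.quota.max_bytes -v 5000000000 /mnt/cephfs/home/alice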
>
> Cheers,
> John
>
>