Looking to add CephFS to our Ceph cluster (10.2.3), and trying to plan
for that addition.

Currently we are only using RADOS on a single replicated (non-EC)
pool, with no RBD or RGW, segmenting logically with namespaces.
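
For context, our current segmentation looks roughly like the following
(pool and namespace names here are placeholders):

        # each logical tenant/app writes under its own namespace
        $ rados -p ourpool --namespace=app1 put obj1 ./obj1.bin
        $ rados -p ourpool --namespace=app1 ls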

No auth scoping at this time, but it is likely something we will move
to as our Ceph cluster grows in size and use.
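
When we do get there, I assume it would look something like
namespace-scoped cephx caps along these lines (client name, pool, and
namespace are placeholders):

        # restrict a client to one namespace of the shared pool
        $ ceph auth get-or-create client.app1 \
              mon 'allow r' \
              osd 'allow rw pool=ourpool namespace=app1'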

The main question at hand is bringing CephFS, by way of the kernel
driver, into our cluster. We are trying to be efficient with our PG
counts, and are questioning whether creating a namespace in the
existing pool, versus a completely separate pool, buys efficiency or
just adds unwanted complexity.

On top of that, how does the cephfs metadata pool/namespace factor
into that? Is this even feasible?
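
My understanding of the current (10.2.x) syntax is that it only takes
whole pool names, with no way to point at a namespace, which is what
prompts the question:

        # both arguments must be existing pool names; as far as I can
        # tell there is no flag to target a namespace instead
        $ ceph fs new ourfs cephfs_metadata cephfs_data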

If that isn't feasible, how do others plan pg_num for separate cephfs
data and metadata pools, compared to a standard object pool?
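
For what it's worth, the budgeting I have been sketching follows the
usual ~100-PGs-per-OSD rule of thumb; the OSD count and splits below
are hypothetical:

        # total PGs ~= (OSDs * 100) / replica size, rounded to a power
        # of two; e.g. 30 OSDs at size=3: 30 * 100 / 3 = 1000 -> 1024
        # metadata is small, so it gets a small slice of that budget,
        # with the remainder left for our existing object pool
        $ ceph osd pool create cephfs_metadata 64 64
        $ ceph osd pool create cephfs_data 512 512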

Hopefully someone has some experience with this and can comment.

TL;DR - is there a way to specify the cephfs_data and cephfs_metadata
'pools' as namespaces, rather than as entire pools? Something like:

        $ ceph fs new <fs_name> <metadata pool> <data pool> \
            --metadata-namespace <ns1> --data-namespace <ns2>

        <metadata pool> is the name of the pool where metadata is
        stored, and <ns1> the namespace within that pool; <data pool>
        and <ns2> are analogous on the data side.

Thanks,

Reed