Greetings,

I am trying to build a CephFS system. So far I have created a CRUSH map that 
uses only certain OSDs, and I have created pools on top of them. But when I 
mount the CephFS, the mounted size shown is that of my entire Ceph cluster. 
Why is that?
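
For context, the pools were tied to the restricted set of OSDs roughly as 
sketched below (the rule name, device class and PG counts are placeholders, 
and the exact syntax differs a bit between Ceph releases):

# replicated rule limited to a subset of OSDs, here selected by device class
ceph osd crush rule create-replicated disk2_rule default host hdd
ceph osd pool create rcpool_disk2 128 128 replicated disk2_rule
ceph osd pool create rcpool_cepfsMeta 64 64 replicated disk2_rule

# erasure-coded pool built from its own profile
ceph osd erasure-code-profile set ec_profile k=4 m=2
ceph osd pool create ecpool_disk1 128 128 erasure ec_profile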


Ceph cluster & pools

[ceph-admin@storageAdmin ~]$ ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    4722G     4721G         928M          0.02
POOLS:
    NAME                 ID     USED     %USED     MAX AVAIL     OBJECTS
    ecpool_disk1         22        0         0         1199G           0
    rcpool_disk2         24        0         0         1499G           0
    rcpool_cepfsMeta     25     4420         0        76682M          20


CephFS volume & pool

Here data0 is the volume/filesystem name,
rcpool_cepfsMeta is the metadata pool, and
rcpool_disk2 is the data pool.

[ceph-admin@storageAdmin ~]$ ceph fs ls
name: data0, metadata pool: rcpool_cepfsMeta, data pools: [rcpool_disk2 ]
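
For reference, the filesystem itself was created the usual way, something 
like:

ceph fs new data0 rcpool_cepfsMeta rcpool_disk2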


Command to mount CephFS
sudo mount -t ceph mon1:6789:/ /mnt/cephfs/ -o name=admin,secretfile=admin.secret


Client host df -h output
192.168.1.101:6789:/     4.7T  928M  4.7T   1% /mnt/cephfs
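
In case it is useful, one way to double-check which pool the data actually 
lands in (the test file name here is just an example):

touch /mnt/cephfs/testfile
getfattr -n ceph.file.layout.pool /mnt/cephfs/testfile

That should report rcpool_disk2, yet df still shows the 4.7T of the whole 
cluster rather than the ~1.5T MAX AVAIL of the data pool.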



--
Deepak




