My Ceph cluster has a CephFS file system backed by an erasure-coded data pool
(k=8, m=2), which shows 14 TiB used. The CephFS has 19 subvolumes, and each
subvolume automatically takes a snapshot every day and keeps it for 3 days.
The problem is that when I manually sum the disk usage of each subvolume
directory in CephFS (roughly as shown below), the total comes to only
8.4 TiB. I don't understand where the difference comes from. Do snapshots
take up that much space?
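
For reference, this is a sketch of how I measure per-subvolume usage, plus
the EC arithmetic I expect; the mount point /mnt/cephfs, the default
_nogroup subvolume group, and the volume name "cephfs" are placeholders for
my setup:

     # Logical (client-visible) bytes per subvolume via CephFS recursive stats:
     for d in /mnt/cephfs/volumes/_nogroup/*/; do
         printf '%s\t%s\n' "$(getfattr --only-values -n ceph.dir.rbytes "$d")" "$d"
     done

     # Stored vs. raw usage per pool; with EC k=8, m=2, raw = stored * (8+2)/8.
     # My summed 8.4 TiB would be ~10.5 TiB raw, still short of 14 TiB if that
     # figure is raw usage.
     ceph df detail

     # Snapshots currently kept for one subvolume:
     ceph fs subvolume snapshot ls cephfs <subvolume_name>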

     ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)

Thank you all!