On Fri, 27 Jul 2018 at 12:24, Anton Aleksandrov <an...@aleksandrov.eu> wrote:
> This might sound strange, but I could not find an answer on Google or in the
> docs; maybe it is called something else.
> I don't understand the pool capacity policy and how to set/define it. I have
> created a simple cluster for CephFS on 4 servers, each with a 30 GB disk, so
> 120 GB in total. On top of that I built a replicated metadata pool with size 3
> and an erasure-coded data pool with k=2, m=1. I made the CephFS and things
> look good, but "ceph df" shows that not all of the space is usable.
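> The setup was done roughly like this (reconstructing from memory, so the exact
> commands may differ a bit; the profile name and failure domain are just
> examples):
>
>   # erasure-code profile with k=2 data chunks and m=1 coding chunk
>   ceph osd erasure-code-profile set ec-21 k=2 m=1 crush-failure-domain=host
>   # replicated metadata pool (default size 3) and EC data pool
>   ceph osd pool create cephfs_metadata 8 8 replicated
>   ceph osd pool create cephfs_data 40 40 erasure ec-21
>   # needed so CephFS can write to the EC pool (BlueStore only)
>   ceph osd pool set cephfs_data allow_ec_overwrites true
>   # some releases want --force here for an EC default data pool
>   ceph fs new cephfs cephfs_metadata cephfs_data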
> *ceph df*
> GLOBAL:
>     SIZE     AVAIL      RAW USED     %RAW USED
>     119G     68941M     53922M           43.89
> POOLS:
>     NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
>     cephfs_data          1     28645M     42.22        39210M       61674
>     cephfs_metadata      2       171M      0.87        19605M        1089
> pg_num for metadata is 8
> pg_num for data is 40
> Am I doing anything wrong? I want as much space for data as possible.
Looks like it says that cephfs_data can take roughly another 39G. MAX AVAIL
already accounts for each pool's redundancy overhead (1.5x raw for your
k=2, m=1 erasure profile, 3x for the size-3 metadata pool), so together with
what is already stored the numbers add up to roughly your 120G of raw space.
How does this differ from your expectations?
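A quick back-of-the-envelope check of the numbers (assuming MAX AVAIL is just
the remaining raw budget divided by each pool's own overhead factor, which is
how I read the output):

  $ echo $(( 19605 * 3 ))      # metadata MAX AVAIL x replication size 3
  58815
  $ echo $(( 39210 * 3 / 2 ))  # data MAX AVAIL x 1.5 overhead for EC k=2, m=1
  58815

Both pools point at the same ~58G of raw headroom; the difference to the
68941M raw AVAIL is presumably the full-ratio safety margin and uneven OSD
utilisation, since as far as I know MAX AVAIL is projected from the OSD that
would fill up first. You can confirm the overhead factors with
"ceph osd pool get cephfs_metadata size" and
"ceph osd erasure-code-profile get <profile name>".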
May the most significant bit of your life be positive.