Hi All,

I have a cluster that I've been pushing data into in order to get an idea
of how full it can get before Ceph marks the cluster full. Unfortunately,
each time I fill the cluster I end up with one disk that hits the full
ratio (0.95) while all of the other disks still have anywhere from 20-40%
free space (my latest attempt resulted in the cluster being marked full at
only 60% total usage). Any idea why the OSDs would end up so unbalanced?
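
In case it's useful, here is a rough sketch of how I could check the
PG-to-OSD spread itself (it assumes "ceph pg dump -f json" returns a
"pg_stats" list whose entries carry an "up" set of OSD ids; field names
may differ between releases):

#!/usr/bin/env python
# Rough sketch: count how many PGs map to each OSD, to see whether the
# imbalance comes from the PG distribution itself or from uneven data
# per PG. Assumes "ceph pg dump -f json" exposes a "pg_stats" list whose
# entries have an "up" set of OSD ids (field names may differ by release).
import json
import subprocess
from collections import Counter

dump = json.loads(subprocess.check_output(["ceph", "pg", "dump", "-f", "json"]))

pgs_per_osd = Counter()
for pg in dump.get("pg_stats", []):
    for osd in pg.get("up", []):
        pgs_per_osd[osd] += 1

counts = sorted(pgs_per_osd.values())
print("OSDs with PGs: %d" % len(counts))
print("PGs per OSD: min=%d max=%d avg=%.1f" % (
    counts[0], counts[-1], float(sum(counts)) / len(counts)))

If the max is far above the average, the imbalance is already there at
the PG-placement level rather than coming from unusually large objects.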

A few notes on the cluster:

   - It has 6 storage hosts with 143 total OSDs (normally 144, but one
   failed disk has been removed from the cluster)
   - All OSDs are 4TB drives
   - All OSDs are set to the same weight
   - The CRUSH rules use host as the failure domain (host rules)
   - The cluster is running Ceph version 0.80.7 (Firefly)


In terms of the pool(s), I have been varying the number of pools from run
to run, following the PG calculator at http://ceph.com/pgcalc/ to determine
the number of placement groups. I have also tried a few runs with the
number of PGs bumped up, but that has only resulted in further imbalance.
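
As a rough sanity check on what spread to expect, something like the
following treats placement as uniformly random (CRUSH is not exactly
uniform, and the replica count of 3 and the pg_num values below are just
examples, not necessarily my pool settings):

#!/usr/bin/env python
# Back-of-envelope model: if PGs were assigned to OSDs uniformly at
# random (only an approximation of CRUSH), how much more than the
# average would the fullest OSD hold for a given pg_num?
# 143 OSDs matches my cluster; the replica count and pg_num values
# are only examples.
import random

def worst_overshoot(num_osds, pg_num, replicas, trials=20):
    mean = float(pg_num * replicas) / num_osds
    worst = 0.0
    for _ in range(trials):
        pgs_per_osd = [0] * num_osds
        for _ in range(pg_num):
            for osd in random.sample(range(num_osds), replicas):
                pgs_per_osd[osd] += 1
        worst = max(worst, max(pgs_per_osd) / mean)
    return worst

for pg_num in (2048, 4096, 8192):
    print("pg_num=%-5d fullest OSD holds ~%.2fx the average PG count"
          % (pg_num, worst_overshoot(143, pg_num, 3)))

In that model the overshoot shrinks as pg_num grows, which is why I
expected raising the PG count to help rather than make things worse.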

Any thoughts?

Thanks,

Matt