Hi,

We have Ceph clusters that are larger than 1 PB, using the CRUSH tree
bucket algorithm. The issue is data placement: when overall cluster
utilization is at 65%, some OSDs are already above 87%. We had to raise
the nearfull ratio to 0.90 to silence the warnings and bring the cluster
back to HEALTH_OK.
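For context, this is roughly how we adjusted the ratio. A minimal sketch, assuming a Luminous or later cluster where the ratios are stored in the OSDMap (on older releases these were mon config options instead):

```shell
# Inspect the current full/backfillfull/nearfull ratios stored in the OSDMap
ceph osd dump | grep ratio

# Raise the nearfull warning threshold from the default 0.85 to 0.90
ceph osd set-nearfull-ratio 0.90

# Confirm the cluster has returned to HEALTH_OK
ceph health
```

Note that this only suppresses the warning; it does not change placement, and it narrows the safety margin before OSDs hit the full ratio.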

How can we keep per-OSD utilization in line with overall cluster
utilization (the two percentages close to each other)? We want to use the
cluster to the max (above 80%) without unnecessarily adding nodes/OSDs.
Right now close to 400 TB of disk space sits unused because some OSDs are
above 87% while others are below 50%. If the OSDs above 87% reach 95%,
the cluster will have problems. What is the best way to mitigate this?
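In case it helps frame answers: the two approaches we are aware of are reweighting by utilization and the balancer module. A hedged sketch, assuming a Luminous+ cluster (the 110 threshold and upmap mode are illustrative choices, not recommendations):

```shell
# See per-OSD utilization spread to quantify the imbalance
ceph osd df tree

# Dry-run: show which OSDs would be reweighted if their utilization
# exceeds 110% of the cluster average, without changing anything
ceph osd test-reweight-by-utilization 110

# Apply the reweight (triggers data movement / backfill)
ceph osd reweight-by-utilization 110

# Alternatively, on Luminous+ with only Luminous+ clients, use the
# balancer module in upmap mode for continuous automatic balancing
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```

Upmap generally achieves a much tighter spread than reweighting, but requires all clients to speak the Luminous feature set.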

Thanks,

*Pardhiv Karri*
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
