The Ceph Enterprise default is 65% nearfull. Do not go above the 85%
nearfull default unless you are stuck while backfilling and need to raise
it temporarily to add or remove storage. Ceph needs that overhead to be
able to recover from situations where disks are lost. I always take into
account what the %full would become if I lost a full storage node, and I
use that as my target size.
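
As a rough sketch of that check (Python, with made-up host sizes; on a
real cluster the per-host used/total figures come from "ceph osd df tree"):

    # Project cluster %full after losing one host, assuming host is the
    # failure domain and recovery re-replicates all of the lost data.
    hosts = {
        # host: (used_TB, total_TB) -- hypothetical example values
        "node1": (30, 48),
        "node2": (29, 48),
        "node3": (31, 48),
        "node4": (30, 48),
    }

    used = sum(u for u, _ in hosts.values())
    total = sum(t for _, t in hosts.values())
    print(f"now: {100.0 * used / total:.1f}% full")

    # Worst case: lose the largest host; the same data now has to fit
    # on the remaining capacity.
    biggest = max(t for _, t in hosts.values())
    print(f"after one host lost: {100.0 * used / (total - biggest):.1f}% full")

With those numbers the cluster goes from 62.5% full to 83.3% full after a
single host failure, which is why a ~65% steady-state target is sensible.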

The first place to look when trying to use more space is balancing your
cluster so that every disk is close to the cluster's overall fullness. By
default, OSDs can drift far apart: some as low as 50% full while others
sit at 85%, even when the cluster as a whole is only in the upper 60s to
low 70s percent used.
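
A quick way to see how spread out you are (hypothetical %use values; the
real per-OSD numbers are the %USE column of "ceph osd df", and "ceph osd
reweight-by-utilization" is one built-in way to pull outliers back in):

    # Sketch: summarize the OSD utilization spread. The most-full OSD is
    # what matters -- once it hits the full ratio, writes stop
    # cluster-wide, however empty the other OSDs are.
    osd_util = [52.1, 61.0, 66.4, 70.2, 74.9, 85.3]  # made-up %use per OSD

    avg = sum(osd_util) / len(osd_util)
    print(f"avg {avg:.1f}%  min {min(osd_util):.1f}%  "
          f"max {max(osd_util):.1f}%  "
          f"spread {max(osd_util) - min(osd_util):.1f} points")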

The most full I've ever maintained a cluster was 77-80%, and it took far
more attention: any time a disk died or we added storage, it needed
constant babysitting to make sure things would finish backfilling. That
cluster had 50 nodes.
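
As a back-of-envelope answer to the question quoted below: with N
equal-sized hosts, host as the failure domain, and a perfectly balanced
cluster, surviving one host failure without hitting the full ratio means
keeping steady-state usage under full_ratio * (N - 1) / N. A sketch,
using 0.95 (the default mon_osd_full_ratio) as the full ratio:

    # usage <= full_ratio * (N - 1) / N  to absorb one host failure
    full_ratio = 0.95
    for n_hosts in (5, 10, 50):
        limit = full_ratio * (n_hosts - 1) / n_hosts
        print(f"{n_hosts} hosts: keep usage under {100 * limit:.1f}%")

    # 5 hosts: 76.0%, 10 hosts: 85.5%, 50 hosts: 93.1% -- which lines up
    # with 85% being tight for 10 hosts and generous for 50.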

On Thu, May 4, 2017, 8:30 AM Loic Dachary <[email protected]> wrote:

> Hi,
>
> In a cluster where the failure domain is the host and there are dozens
> of hosts, the 85% default for the nearfull ratio is fine. A host failing
> won't suddenly make the cluster 99% full. In smaller clusters, with 10
> hosts or fewer, it is likely not enough headroom. And in larger clusters,
> reserving 15% may be more than necessary, and a 90% nearfull ratio could
> be enough.
>
> Is there a way to calculate the optimum nearfull ratio for a given
> crushmap?
>
> Cheers
>
> --
> Loïc Dachary, Artisan Logiciel Libre