You'll find it said time and time again on the ML: avoid disks of
different sizes in the same cluster. It's a headache. It's not
impossible, and it's not even especially hard to pull off, but it's
very easy to make a mess and cause a lot of pain. It will also make
it harder to diagnose performance issues in the cluster.
That's not very practical for clusters that aren't new.
There is no way to fill all disks evenly with the same number of
bytes and then stop filling the small disks when they're full while
continuing to fill only the larger disks.
This is possible by adjusting CRUSH weights. Initially the smaller
drives are weighted more heavily, relative to their capacity, than the
larger drives. As data gets added, the weights are changed so that the
larger drives continue to fill while no drive becomes overfull.
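As a rough sketch of that idea (function name and numbers are mine, not an actual Ceph API): the target is a "water-filling" distribution, where every drive holds the same number of bytes until the small drives are full, and only the larger drives keep growing. The per-drive byte targets this produces are what you would feed into the relative CRUSH weights.

```python
def water_fill(capacities, total_data):
    """Water-filling: every drive holds the same number of bytes x,
    except drives smaller than x, which are completely full.
    Solves sum(min(c_i, x)) == total_data for x by bisection and
    returns the per-drive byte targets (proportional CRUSH weights)."""
    if total_data > sum(capacities):
        raise ValueError("more data than raw capacity")
    lo, hi = 0.0, float(max(capacities))
    for _ in range(60):  # bisect on the fill level x
        x = (lo + hi) / 2
        if sum(min(c, x) for c in capacities) < total_data:
            lo = x
        else:
            hi = x
    return [min(c, x) for c in capacities]

# Hypothetical mix: two 1 TB and two 4 TB drives, 6 TB of data.
# Water level x = 2 TB, so the targets come out [1, 1, 2, 2]:
# the 1 TB drives are full, the 4 TB drives are half full.
targets = water_fill([1, 1, 4, 4], 6)
```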
That means the crush weights were not adjusted correctly as the cluster
filled.
What will happen if you fill all disks evenly in bytes instead of by
percentage is that the small disks will fill up completely, and all
writes to the cluster will block until you do something to reduce the
amount of data on the full disks.
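To put a number on that (the drive mix here is invented for illustration): with equal bytes on every disk, the whole cluster blocks as soon as the smallest disk is full, leaving most of the raw capacity stranded.

```python
# Hypothetical mix: two 1 TB and two 4 TB OSDs.
caps_tb = [1, 1, 4, 4]

# Equal bytes per disk means writes block when the smallest disk fills:
bytes_when_blocked = min(caps_tb) * len(caps_tb)  # 1 TB on each of 4 disks
raw_capacity = sum(caps_tb)                       # 10 TB raw

# Only 4 of 10 TB are usable before the cluster stops accepting writes.
fraction_usable = bytes_when_blocked / raw_capacity
```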
But in this case you would have a steep drop-off in performance. When
you reach the fill level where the small drives no longer accept data,
you suddenly hit a performance cliff where only your larger disks are
doing new writes, and only the larger disks serve reads on new data.
Good point! Although if this is implemented by changing CRUSH weights,
adjusting the weights as the cluster fills will cause existing data to
churn, and new data will not be assigned only to the larger drives. :)
ceph-users mailing list