Hi all,

I'd like to update the tunables on our older Ceph cluster, which was created on 
Firefly and is now running Luminous. I need to update two tunables: 
chooseleaf_vary_r from 2 to 1, and chooseleaf_stable from 0 to 1. I'm going to 
do one tunable update at a time.
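
(For what it's worth, I'm checking the currently active values before and 
after each step with "ceph osd crush show-tunables"; I'm assuming its output 
reflects what's actually encoded in the crushmap:)

# ceph osd crush show-tunables | grep -E 'chooseleaf_vary_r|chooseleaf_stable'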

For the first one, I've dumped the current crushmap and compared it against a 
proposed crushmap with chooseleaf_vary_r set to 1 instead of 2. I need some 
help understanding the output:

# ceph osd getcrushmap -o crushmap-20190801-chooseleaf-vary-r-2
# crushtool -i crushmap-20190801-chooseleaf-vary-r-2 --set-chooseleaf-vary-r 1 -o crushmap-20190801-chooseleaf-vary-r-1
# crushtool -i crushmap-20190801-chooseleaf-vary-r-2 --compare crushmap-20190801-chooseleaf-vary-r-1
rule 0 had 9137/10240 mismatched mappings (0.892285)
rule 1 had 9152/10240 mismatched mappings (0.89375)
rule 4 had 9173/10240 mismatched mappings (0.895801)
rule 5 had 0/7168 mismatched mappings (0)
rule 6 had 0/7168 mismatched mappings (0)
warning: maps are NOT equivalent
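
Would it also make sense to cross-check this at the actual replica count we 
use, by dumping the test mappings from both maps and diffing them? Something 
like the following (untested; I'm using rule 0 and size 3 purely as an 
example, the output file names are just placeholders, and I'm assuming each 
differing line corresponds to one test input whose mapping would change):

# crushtool -i crushmap-20190801-chooseleaf-vary-r-2 --test --rule 0 --num-rep 3 --show-mappings > mappings-rule0-vary-r-2.txt
# crushtool -i crushmap-20190801-chooseleaf-vary-r-1 --test --rule 0 --num-rep 3 --show-mappings > mappings-rule0-vary-r-1.txt
# diff mappings-rule0-vary-r-2.txt mappings-rule0-vary-r-1.txt | grep -c '^>'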

From doing this sort of thing in the past, I've learned that if the maps are 
equivalent then there is no data movement. In this case I'm obviously 
expecting some data movement, but how much? Rules 0, 1 and 4 correspond to the 
three different device classes in this cluster.

Based on the above output, should I expect almost 90% of mappings to change? 
That's much bigger than I expected: in the earlier steps of walking 
chooseleaf_vary_r from 0 up to 5 and then down to 2, one step at a time 
(before I knew anything about crushtool --compare), I saw at most about 28% 
misplaced objects.
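
For what it's worth, my rough plan for actually applying the change is below, 
so I can see the impact before any data moves and back out if it's too much. 
I'm assuming norebalance/nobackfill are enough to hold off the movement while 
I count how many PGs go remapped; please tell me if that's a bad idea:

# ceph osd set norebalance
# ceph osd set nobackfill
# ceph osd setcrushmap -i crushmap-20190801-chooseleaf-vary-r-1
# ceph pg dump pgs_brief 2>/dev/null | grep -c remapped

Then, if the number looks acceptable:

# ceph osd unset nobackfill
# ceph osd unset norebalance

or, if not, roll back by re-injecting the old map:

# ceph osd setcrushmap -i crushmap-20190801-chooseleaf-vary-r-2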

Also, if you've done a similar change, please let me know how much data 
movement you encountered. Thanks!

Cheers,
Linh
