Hello Shawn,

Thanks for the info. I hadn't considered that each host also has a weight.
By default it seems to be the overall size of all the disks in that system. The SSDs that are getting full are on the hosts whose extra HDD capacity makes the host weight higher! If we keep the HDD-to-SSD ratio the same on each host, that should fix it.
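For anyone hitting the same thing: on Luminous (12.2.x), assuming device classes are in use, the per-class weights that CRUSH actually uses can be inspected directly, so this kind of HDD/SSD skew is visible without manual arithmetic. A minimal check:

# ceph osd crush tree --show-shadow
# ceph osd df tree

The first command lists the per-device-class shadow buckets (e.g. default~ssd) with their own weight for each host; the second shows utilization and weight per OSD grouped by the CRUSH tree.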
# ceph osd tree
ID  CLASS WEIGHT    TYPE NAME                      STATUS REWEIGHT PRI-AFF
 -1       534.60309 root default
-19        62.90637     host NAS-AUBUN-RK2-CEPH06
104   hdd   5.56000         osd.104                    up  1.00000 1.00000
105   hdd   5.56000         osd.105                    up  1.00000 1.00000
106   hdd   5.56000         osd.106                    up  1.00000 1.00000
107   hdd   5.56000         osd.107                    up  1.00000 1.00000
108   hdd   5.56000         osd.108                    up  1.00000 1.00000
109   hdd   5.56000         osd.109                    up  1.00000 1.00000
110   hdd   5.56000         osd.110                    up  1.00000 1.00000
111   hdd   5.56000         osd.111                    up  1.00000 1.00000
112   hdd   5.56000         osd.112                    up  1.00000 1.00000
113   hdd   5.56000         osd.113                    up  1.00000 1.00000
114   hdd   5.56000         osd.114                    up  1.00000 1.00000
115   ssd   0.43660         osd.115                    up  1.00000 1.00000
116   ssd   0.43660         osd.116                    up  1.00000 1.00000
117   ssd   0.43660         osd.117                    up  1.00000 1.00000
118   ssd   0.43660         osd.118                    up  1.00000 1.00000
-22       105.51169     host NAS-AUBUN-RK2-CEPH07
119   hdd   5.45749         osd.119                    up  1.00000 1.00000
120   hdd   5.45749         osd.120                    up  1.00000 1.00000
121   hdd   5.45749         osd.121                    up  1.00000 1.00000
122   hdd   5.45749         osd.122                    up  1.00000 1.00000
123   hdd   5.45749         osd.123                    up  1.00000 1.00000
124   hdd   5.45749         osd.124                    up  1.00000 1.00000
125   hdd   5.45749         osd.125                    up  1.00000 1.00000
126   hdd   5.45749         osd.126                    up  1.00000 1.00000
127   hdd   5.45749         osd.127                    up  1.00000 1.00000
128   hdd   5.45749         osd.128                    up  1.00000 1.00000
129   hdd   5.45749         osd.129                    up  1.00000 1.00000
130   hdd   5.45749         osd.130                    up  1.00000 1.00000
131   hdd   5.45749         osd.131                    up  1.00000 1.00000
132   hdd   5.45749         osd.132                    up  1.00000 1.00000
133   hdd   5.45749         osd.133                    up  1.00000 1.00000
134   hdd   5.45749         osd.134                    up  1.00000 1.00000
135   hdd   5.45749         osd.135                    up  1.00000 1.00000
136   hdd   5.45749         osd.136                    up  1.00000 1.00000
137   hdd   5.45749         osd.137                    up  1.00000 1.00000
138   ssd   0.90970         osd.138                    up  1.00000 1.00000
139   ssd   0.90970         osd.139                    up  1.00000 1.00000
-25       105.51169     host NAS-AUBUN-RK2-CEPH08
142   hdd   5.45749         osd.142                    up  1.00000 1.00000
143   hdd   5.45749         osd.143                    up  1.00000 1.00000
144   hdd   5.45749         osd.144                    up  1.00000 1.00000
145   hdd   5.45749         osd.145                    up  1.00000 1.00000
146   hdd   5.45749         osd.146                    up  1.00000 1.00000
147   hdd   5.45749         osd.147                    up  1.00000 1.00000
148   hdd   5.45749         osd.148                    up  1.00000 1.00000
149   hdd   5.45749         osd.149                    up  1.00000 1.00000
150   hdd   5.45749         osd.150                    up  1.00000 1.00000
151   hdd   5.45749         osd.151                    up  1.00000 1.00000
152   hdd   5.45749         osd.152                    up  1.00000 1.00000
153   hdd   5.45749         osd.153                    up  1.00000 1.00000
154   hdd   5.45749         osd.154                    up  1.00000 1.00000
155   hdd   5.45749         osd.155                    up  1.00000 1.00000
156   hdd   5.45749         osd.156                    up  1.00000 1.00000
157   hdd   5.45749         osd.157                    up  1.00000 1.00000
158   hdd   5.45749         osd.158                    up  1.00000 1.00000
159   hdd   5.45749         osd.159                    up  1.00000 1.00000
160   hdd   5.45749         osd.160                    up  1.00000 1.00000
140   ssd   0.90970         osd.140                    up  1.00000 1.00000
141   ssd   0.90970         osd.141                    up  1.00000 1.00000
 -3        56.32617     host NAS-AUBUN-RK3-CEPH01
  0   hdd   2.72899         osd.0                      up  1.00000 1.00000
  1   hdd   2.72899         osd.1                      up  1.00000 1.00000
  2   hdd   2.72899         osd.2                      up  1.00000 1.00000
  5   hdd   2.72899         osd.5                      up  1.00000 1.00000
  6   hdd   2.72899         osd.6                      up  1.00000 1.00000
  7   hdd   2.72899         osd.7                      up  1.00000 1.00000
  8   hdd   2.72899         osd.8                      up  1.00000 1.00000
  9   hdd   2.72899         osd.9                      up  1.00000 1.00000
 10   hdd   2.72899         osd.10                     up  1.00000 1.00000
 11   hdd   2.72899         osd.11                     up  1.00000 1.00000
 12   hdd   2.72899         osd.12                     up  1.00000 1.00000
 13   hdd   2.72899         osd.13                     up  1.00000 1.00000
 14   hdd   2.72899         osd.14                     up  1.00000 1.00000
 15   hdd   2.72899         osd.15                     up  1.00000 1.00000
 16   hdd   2.72899         osd.16                     up  1.00000 1.00000
 17   hdd   2.72899         osd.17                     up  1.00000 1.00000
 18   hdd   2.72899         osd.18                     up  1.00000 1.00000
 19   hdd   2.72899         osd.19                     up  1.00000 1.00000
 20   hdd   2.72899         osd.20                     up  1.00000 1.00000
 21   hdd   2.72899         osd.21                     up  1.00000 1.00000
 60   ssd   0.43660         osd.60                     up  1.00000 1.00000
 61   ssd   0.43660         osd.61                     up  1.00000 1.00000
 62   ssd   0.43660         osd.62                     up  1.00000 1.00000
 63   ssd   0.43660         osd.63                     up  1.00000 1.00000
 -5        56.32617     host NAS-AUBUN-RK3-CEPH02
  3   hdd   2.72899         osd.3                      up  1.00000 1.00000
 22   hdd   2.72899         osd.22                     up  1.00000 1.00000
 23   hdd   2.72899         osd.23                     up  1.00000 1.00000
 24   hdd   2.72899         osd.24                     up  1.00000 1.00000
 25   hdd   2.72899         osd.25                     up  1.00000 1.00000
 26   hdd   2.72899         osd.26                     up  1.00000 1.00000
 27   hdd   2.72899         osd.27                     up  1.00000 1.00000
 28   hdd   2.72899         osd.28                     up  1.00000 1.00000
 29   hdd   2.72899         osd.29                     up  1.00000 1.00000
 30   hdd   2.72899         osd.30                     up  1.00000 1.00000
 31   hdd   2.72899         osd.31                     up  1.00000 1.00000
 32   hdd   2.72899         osd.32                     up  1.00000 1.00000
 33   hdd   2.72899         osd.33                     up  1.00000 1.00000
 34   hdd   2.72899         osd.34                     up  1.00000 1.00000
 35   hdd   2.72899         osd.35                     up  1.00000 1.00000
 36   hdd   2.72899         osd.36                     up  1.00000 1.00000
 37   hdd   2.72899         osd.37                     up  1.00000 1.00000
 38   hdd   2.72899         osd.38                     up  1.00000 1.00000
 39   hdd   2.72899         osd.39                     up  1.00000 1.00000
 40   hdd   2.72899         osd.40                     up  1.00000 1.00000
 64   ssd   0.43660         osd.64                     up  1.00000 1.00000
 65   ssd   0.43660         osd.65                     up  1.00000 1.00000
 66   ssd   0.43660         osd.66                     up  1.00000 1.00000
 67   ssd   0.43660         osd.67                     up  1.00000 1.00000
 -7        56.32617     host NAS-AUBUN-RK3-CEPH03
  4   hdd   2.72899         osd.4                      up  1.00000 1.00000
 41   hdd   2.72899         osd.41                     up  1.00000 1.00000
 42   hdd   2.72899         osd.42                     up  1.00000 1.00000
 43   hdd   2.72899         osd.43                     up  1.00000 1.00000
 44   hdd   2.72899         osd.44                     up  1.00000 1.00000
 45   hdd   2.72899         osd.45                     up  1.00000 1.00000
 46   hdd   2.72899         osd.46                     up  1.00000 1.00000
 47   hdd   2.72899         osd.47                     up  1.00000 1.00000
 48   hdd   2.72899         osd.48                     up  1.00000 1.00000
 49   hdd   2.72899         osd.49                     up  1.00000 1.00000
 50   hdd   2.72899         osd.50                     up  1.00000 1.00000
 51   hdd   2.72899         osd.51                     up  1.00000 1.00000
 52   hdd   2.72899         osd.52                     up  1.00000 1.00000
 53   hdd   2.72899         osd.53                     up  1.00000 1.00000
 54   hdd   2.72899         osd.54                     up  1.00000 1.00000
 55   hdd   2.72899         osd.55                     up  1.00000 1.00000
 56   hdd   2.72899         osd.56                     up  1.00000 1.00000
 57   hdd   2.72899         osd.57                     up  1.00000 1.00000
 58   hdd   2.72899         osd.58                     up  1.00000 1.00000
 59   hdd   2.72899         osd.59                     up  1.00000 1.00000
 68   ssd   0.43660         osd.68                     up  1.00000 1.00000
 69   ssd   0.43660         osd.69                     up  1.00000 1.00000
 70   ssd   0.43660         osd.70                     up  1.00000 1.00000
 71   ssd   0.43660         osd.71                     up  1.00000 1.00000
-13        45.84741     host NAS-AUBUN-RK3-CEPH04
 80   hdd   3.63869         osd.80                     up  1.00000 1.00000
 81   hdd   3.63869         osd.81                     up  1.00000 1.00000
 82   hdd   3.63869         osd.82                     up  1.00000 1.00000
 83   hdd   3.63869         osd.83                     up  1.00000 1.00000
 84   hdd   3.63869         osd.84                     up  1.00000 1.00000
 85   hdd   3.63869         osd.85                     up  1.00000 1.00000
 86   hdd   3.63869         osd.86                     up  1.00000 1.00000
 87   hdd   3.63869         osd.87                     up  1.00000 1.00000
 88   hdd   3.63869         osd.88                     up  1.00000 1.00000
 89   hdd   3.63869         osd.89                     up  1.00000 1.00000
 90   hdd   3.63869         osd.90                     up  1.00000 1.00000
 91   hdd   3.63869         osd.91                     up  1.00000 1.00000
 72   ssd   0.54579         osd.72                     up  1.00000 1.00000
 73   ssd   0.54579         osd.73                     up  1.00000 1.00000
 76   ssd   0.54579         osd.76                     up  1.00000 1.00000
 77   ssd   0.54579         osd.77                     up  1.00000 1.00000
-16        45.84741     host NAS-AUBUN-RK3-CEPH05
 92   hdd   3.63869         osd.92                     up  1.00000 1.00000
 93   hdd   3.63869         osd.93                     up  1.00000 1.00000
 94   hdd   3.63869         osd.94                     up  1.00000 1.00000
 95   hdd   3.63869         osd.95                     up  1.00000 1.00000
 96   hdd   3.63869         osd.96                     up  1.00000 1.00000
 97   hdd   3.63869         osd.97                     up  1.00000 1.00000
 98   hdd   3.63869         osd.98                     up  1.00000 1.00000
 99   hdd   3.63869         osd.99                     up  1.00000 1.00000
100   hdd   3.63869         osd.100                    up  1.00000 1.00000
101   hdd   3.63869         osd.101                    up  1.00000 1.00000
102   hdd   3.63869         osd.102                    up  1.00000 1.00000
103   hdd   3.63869         osd.103                    up  1.00000 1.00000
 74   ssd   0.54579         osd.74                     up  1.00000 1.00000
 75   ssd   0.54579         osd.75                     up  1.00000 1.00000
 78   ssd   0.54579         osd.78                     up  1.00000 1.00000
 79   ssd   0.54579         osd.79                     up  1.00000 1.00000

Kind regards,
Glen Baars

From: Shawn Iverson <[email protected]>
Sent: Saturday, 21 July 2018 9:21 PM
To: Glen Baars <[email protected]>
Cc: ceph-users <[email protected]>
Subject: Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

Glen,

Correction: I was looking at the wrong column for the weights, my bad. You have varying weights, but the process is still the same: balance your buckets (hosts) in your CRUSH map, and balance your OSDs in each bucket (host).

On Sat, Jul 21, 2018 at 9:14 AM, Shawn Iverson <[email protected]> wrote:

Glen,

It appears you have 447G, 931G, and 558G disks in your cluster, all with a weight of 1.0. This means that although the new disks are bigger, they are not going to be utilized by PGs any more than any other disk.

I would suggest reweighting your other, smaller disks so that the cluster balances. Do this gradually over time, preferably during off-peak hours, so the remapping does not affect operations.

I do a little math, first taking the total cluster capacity and dividing it by the total capacity of each bucket. I then do the same thing within each bucket, until everything is proportioned appropriately down to the OSDs.
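To make that arithmetic concrete with the numbers from the tree above: NAS-AUBUN-RK2-CEPH07 holds 105.51 of the 534.60 total CRUSH weight (about 20%), but only 2 x 0.90970 = 1.82 of the roughly 15.0 total SSD weight (about 12%). If the SSD pool's rule selects hosts by their total weight rather than their per-class weight, a disproportionate share of that pool lands on those two SSDs. A sketch of correcting a weight one step at a time; osd.138 and the 0.80 target are illustrative only, not a recommendation for this cluster:

# ceph osd crush reweight osd.138 0.80
# ceph -s

Lower a weight by a small step, let `ceph -s` show recovery/backfill settle, and repeat off-peak until the distribution looks right in `ceph osd df`.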
On Fri, Jul 20, 2018 at 8:43 PM, Glen Baars <[email protected]> wrote:

Hello Ceph Users,

We added more SSD storage to our Ceph cluster last night: 4 x 1TB drives. The available space went from 1.6TB to 0.6TB (in `ceph df` for the SSD pool). I would assume that the weights need to be changed, but I didn't think I would need to do that. Should I change them from 0.9 to 0.75, and hopefully it will rebalance correctly?
# ceph osd tree | grep -v hdd
ID  CLASS WEIGHT    TYPE NAME                      STATUS REWEIGHT PRI-AFF
 -1       534.60309 root default
-19        62.90637     host NAS-AUBUN-RK2-CEPH06
115   ssd   0.43660         osd.115                    up  1.00000 1.00000
116   ssd   0.43660         osd.116                    up  1.00000 1.00000
117   ssd   0.43660         osd.117                    up  1.00000 1.00000
118   ssd   0.43660         osd.118                    up  1.00000 1.00000
-22       105.51169     host NAS-AUBUN-RK2-CEPH07
138   ssd   0.90970         osd.138                    up  1.00000 1.00000   Added
139   ssd   0.90970         osd.139                    up  1.00000 1.00000   Added
-25       105.51169     host NAS-AUBUN-RK2-CEPH08
140   ssd   0.90970         osd.140                    up  1.00000 1.00000   Added
141   ssd   0.90970         osd.141                    up  1.00000 1.00000   Added
 -3        56.32617     host NAS-AUBUN-RK3-CEPH01
 60   ssd   0.43660         osd.60                     up  1.00000 1.00000
 61   ssd   0.43660         osd.61                     up  1.00000 1.00000
 62   ssd   0.43660         osd.62                     up  1.00000 1.00000
 63   ssd   0.43660         osd.63                     up  1.00000 1.00000
 -5        56.32617     host NAS-AUBUN-RK3-CEPH02
 64   ssd   0.43660         osd.64                     up  1.00000 1.00000
 65   ssd   0.43660         osd.65                     up  1.00000 1.00000
 66   ssd   0.43660         osd.66                     up  1.00000 1.00000
 67   ssd   0.43660         osd.67                     up  1.00000 1.00000
 -7        56.32617     host NAS-AUBUN-RK3-CEPH03
 68   ssd   0.43660         osd.68                     up  1.00000 1.00000
 69   ssd   0.43660         osd.69                     up  1.00000 1.00000
 70   ssd   0.43660         osd.70                     up  1.00000 1.00000
 71   ssd   0.43660         osd.71                     up  1.00000 1.00000
-13        45.84741     host NAS-AUBUN-RK3-CEPH04
 72   ssd   0.54579         osd.72                     up  1.00000 1.00000
 73   ssd   0.54579         osd.73                     up  1.00000 1.00000
 76   ssd   0.54579         osd.76                     up  1.00000 1.00000
 77   ssd   0.54579         osd.77                     up  1.00000 1.00000
-16        45.84741     host NAS-AUBUN-RK3-CEPH05
 74   ssd   0.54579         osd.74                     up  1.00000 1.00000
 75   ssd   0.54579         osd.75                     up  1.00000 1.00000
 78   ssd   0.54579         osd.78                     up  1.00000 1.00000
 79   ssd   0.54579         osd.79                     up  1.00000 1.00000

# ceph osd df | grep -v hdd
ID  CLASS WEIGHT  REWEIGHT SIZE  USE  AVAIL  %USE  VAR  PGS
115   ssd 0.43660  1.00000  447G 250G  196G 56.00 1.72 103
116   ssd 0.43660  1.00000  447G 191G  255G 42.89 1.32  84
117   ssd 0.43660  1.00000  447G 213G  233G 47.79 1.47  92
118   ssd 0.43660  1.00000  447G 208G  238G 46.61 1.43  85
138   ssd 0.90970  1.00000  931G 820G  111G 88.08 2.71 216   Added
139   ssd 0.90970  1.00000  931G 771G  159G 82.85 2.55 207   Added
140   ssd 0.90970  1.00000  931G 709G  222G 76.12 2.34 197   Added
141   ssd 0.90970  1.00000  931G 664G  267G 71.31 2.19 184   Added
 60   ssd 0.43660  1.00000  447G 275G  171G 61.62 1.89 100
 61   ssd 0.43660  1.00000  447G 237G  209G 53.04 1.63  90
 62   ssd 0.43660  1.00000  447G 275G  171G 61.58 1.89  95
 63   ssd 0.43660  1.00000  447G 260G  187G 58.15 1.79  97
 64   ssd 0.43660  1.00000  447G 232G  214G 52.08 1.60  83
 65   ssd 0.43660  1.00000  447G 207G  239G 46.36 1.42  75
 66   ssd 0.43660  1.00000  447G 217G  230G 48.54 1.49  84
 67   ssd 0.43660  1.00000  447G 252G  195G 56.36 1.73  92
 68   ssd 0.43660  1.00000  447G 248G  198G 55.56 1.71  94
 69   ssd 0.43660  1.00000  447G 229G  217G 51.25 1.57  84
 70   ssd 0.43660  1.00000  447G 259G  187G 58.01 1.78  87
 71   ssd 0.43660  1.00000  447G 267G  179G 59.83 1.84  97
 72   ssd 0.54579  1.00000  558G 217G  341G 38.96 1.20 100
 73   ssd 0.54579  1.00000  558G 283G  275G 50.75 1.56 121
 76   ssd 0.54579  1.00000  558G 286G  272G 51.33 1.58 129
 77   ssd 0.54579  1.00000  558G 246G  312G 44.07 1.35 104
 74   ssd 0.54579  1.00000  558G 273G  285G 48.91 1.50 122
 75   ssd 0.54579  1.00000  558G 281G  276G 50.45 1.55 114
 78   ssd 0.54579  1.00000  558G 289G  269G 51.80 1.59 133
 79   ssd 0.54579  1.00000  558G 276G  282G 49.39 1.52 119

Kind regards,
Glen Baars
BackOnline Manager
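Worth noting from the output above: the four newly added 931G SSDs sit at 71-88% utilization while the 447G and 558G SSDs sit around 39-62%, so an interim override reweight can shed PGs from the hottest OSDs while the CRUSH weights are being sorted out. A sketch, assuming 12.2.x; the OSD id and value are illustrative only:

# ceph osd reweight osd.138 0.85
# ceph osd test-reweight-by-utilization

The override REWEIGHT column (0-1) is temporary and separate from the CRUSH weight, and test-reweight-by-utilization dry-runs an automatic version of the same adjustment. On Luminous the mgr balancer module can also do this continuously:

# ceph mgr module enable balancer
# ceph balancer mode crush-compat
# ceph balancer on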
--
Shawn Iverson, CETL
Director of Technology
Rush County Schools
765-932-3901 x1171
[email protected]
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
