Re: [ceph-users] Degraded data redundancy: NUM pgs undersized

2018-09-04 Thread Lothar Gesslein
…where n is the pool size, so here 3-1 = 2) leafs in each datacenter". You will need at least two OSDs in each DC for this, because it is random (with respect to the weights) in which DC the two copies will be placed and which DC gets the remaining copy. Best regards, Lothar -- Lothar Gesslein Linu…
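For context, a CRUSH rule of the kind discussed in the thread might look like the sketch below (rule name, id, and bucket types are hypothetical, not taken from the message; this assumes a size-3 replicated pool spread over two datacenters):

```
# Hypothetical CRUSH rule: pick 2 datacenters, then up to 2 (= pool size - 1)
# OSD leafs in each, so no single DC can ever hold all 3 copies.
rule replicated_across_dcs {
    id 1
    type replicated
    min_size 3
    max_size 3
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}
```

With fewer than two usable OSD leafs in a datacenter, such a rule cannot place the second copy there, and the affected PGs stay undersized, which matches the subject of the thread.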

Re: [ceph-users] Unexpected behaviour after monitors upgrade from Jewel to Luminous

2018-08-22 Thread Lothar Gesslein
7f10de377700 0 log_channel(cluster) log [INF] : pgmap 6208 pgs: 12 unknown, 114 remapped+peering, 5 activating, 481 peering, 5596 active+clean; 261 TB data, 385 TB used, 225 TB / 611 TB avail; 177 MB/s rd, 173 MB/s wr, 317 op/s 2018-08-21 21:05:07.025200 7f…

Re: [ceph-users] Error creating compat weight-set with mgr balancer plugin

2018-07-24 Thread Lothar Gesslein
…from straw to straw2 will result in a reasonably small amount of data movement, depending on how much the bucket item weights vary from each other. When the weights are all the same, no data will move; when item weights vary significantly, there will be more movement. Best, Lothar -- Lothar Gesslei…
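For reference, the straw-to-straw2 change is just the `alg` field of each bucket in the decompiled CRUSH map; a minimal sketch (bucket name, id, and weights are hypothetical):

```
# One bucket from a decompiled CRUSH map (crushtool -d), after conversion:
host node1 {
    id -2
    alg straw2        # was: alg straw
    hash 0            # rjenkins1
    item osd.0 weight 1.000
    item osd.1 weight 1.000
}
```

On Luminous and later the whole map can be converted in one step with `ceph osd crush set-all-straw-buckets-to-straw2`; alternatively, decompile with `crushtool -d`, edit the `alg` lines, and recompile with `crushtool -c` before injecting the map.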