Your crush rule is ok:
step chooseleaf firstn 0 type host
You are replicating host-wise, not rack-wise.
This is what I would suggest for your cluster, but keep in mind that a
whole-rack outage will leave some PGs incomplete.
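For comparison, a rule that replicates rack-wise would use "type rack" in
the chooseleaf step -- roughly like this, with the name and id only as
placeholders:

rule replicated_rack {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}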
Regarding the straw2 change causing 12% data movement -- in this ca
Hi Dan
Indeed, at the moment I have only 5 OSD nodes across 3 racks.
The crush-map is attached.
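(For reference, a crush map can be dumped in text form with something
like the following -- file names here are just placeholders:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt )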
Are you suggesting replicating only between nodes and not between racks
(given the very limited resources)?
Thanks, Massimo
On Mon, Jan 14, 2019 at 3:29 PM Dan van der Ster wrote:
This [*] is the output of "ceph osd df".
Thanks a lot !
Massimo
[*]
[root@ceph-mon-01 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
30 hdd 5.45609 1.0 5587G 1875G 3711G 33.57 0.65 140
31 hdd 5.45609 1.0 5587G 3951G 1635G 70.72 1.38 144
32 hdd 5.45609
On Mon, Jan 14, 2019 at 3:18 PM Massimo Sgaravatto wrote:
>
> Thanks for the prompt reply
>
> Indeed I have different racks with different weights.
Are you sure you're replicating across racks? You have only 3 racks,
one of which is half the size of the other two -- if yes, then your
cluster will fill very unevenly, because the smaller rack still has to
hold one full replica and its OSDs will end up roughly twice as full as
the others.
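You can double-check which rule a pool is actually using with something
like the following (the pool name is a placeholder):

ceph osd pool get <poolname> crush_rule
ceph osd crush rule dump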
On 1/14/19 3:18 PM, Massimo Sgaravatto wrote:
> Thanks for the prompt reply
>
> Indeed I have different racks with different weights.
> Below the "ceph osd tree" output
>
Can you also show the output of 'ceph osd df'?
The number of PGs might be on the low side, which also causes this imbalance.
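As a very rough rule of thumb you want on the order of 100 PGs per OSD,
so with 50 OSDs and 3x replication that is about 50 * 100 / 3 ~= 1700 PGs
summed over all pools, usually rounded to a power of two such as 2048.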
Thanks for the prompt reply
Indeed I have different racks with different weights.
Below the "ceph osd tree" output:
[root@ceph-mon-01 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 272.80426 root default
-7 109.12170 rack Rack11-PianoAlto
-
On Mon, Jan 14, 2019 at 3:06 PM Massimo Sgaravatto wrote:
>
> I have a ceph luminous cluster running on CentOS7 nodes.
> This cluster has 50 OSDs, all with the same size and all with the same weight.
>
> Since I noticed that there was a quite "unfair" usage of OSD nodes (some used
> at 30 %, some
I have a ceph luminous cluster running on CentOS7 nodes.
This cluster has 50 OSDs, all with the same size and all with the same
weight.
Since I noticed quite an "unfair" usage of the OSD nodes (some used at
30 %, some used at 70 %), I tried to activate the balancer.
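For reference, on Luminous the usual sequence to enable it is something
like the following (the mode shown is only an example; upmap requires
all clients to be luminous or newer):

ceph mgr module enable balancer
ceph balancer mode crush-compat
ceph balancer on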
But the balancer does