On 23.02.2018 at 01:05, Gregory Farnum wrote:
>
> On Wed, Feb 21, 2018 at 2:46 PM Oliver Freyermuth wrote:
>
> Dear Cephalopodians,
>
> in a Luminous 12.2.3 cluster with a pool with:
> - 192 Bluestore OSDs
On 24.02.2018 at 07:14, David Turner wrote:
> There was another part to my suggestion, which was to set the initial crush
> weight to 0 in ceph.conf. After you add all of your OSDs, you could download
> the crush map, weight the new OSDs to what they should be, and upload the
> crush map to give them all the ability to take PGs at the same time.
In my case we don't even use the "default" ruleset in CRUSH - no pool has this
ruleset associated with it. Adding OSDs then doesn't lead to any PG
recalculation or data movement; that is triggered only after modifying the
CRUSH map and placing the OSDs in the appropriate failure domain.
This way you can add any number of OSDs without causing data movement until
you are ready.
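A sketch of that workflow, assuming the standard `osd crush initial weight` option and the `ceph`/`crushtool` CLI; the OSD id and weight below are illustrative, not from the thread:

```shell
# In ceph.conf on the OSD hosts: new OSDs are created with CRUSH
# weight 0, so they are mapped no PGs until explicitly reweighted.
# [osd]
# osd crush initial weight = 0

# After all new OSDs have been created, edit the CRUSH map offline:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt: set the weight of each new osd.N ...
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# Alternatively, reweight OSDs one by one (each command can start
# data movement immediately, unlike the single setcrushmap above):
# ceph osd crush reweight osd.192 3.64
```

Uploading one edited map makes all new OSDs eligible for PGs at the same time, which is the point of the suggestion above.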
On 22.02.2018 at 02:54, David Turner wrote:
> You could set the flag noin to prevent the new OSDs from being calculated by
> crush until you are ready for all of them in the host to be marked in.
> You can also set initial crush weight to 0 for new OSDs so that they won't
> receive any PGs until you're ready for it.
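A minimal sketch of the `noin` approach; the flag and commands are standard Luminous CLI, but the OSD id range assumes the 32-OSDs-per-host layout from the original message:

```shell
ceph osd set noin          # new OSDs boot "up" but stay "out"
# ... create and start all 32 OSDs on the new host ...
ceph osd unset noin
# Mark them in together so backfill targets all of them at once:
for id in $(seq 192 223); do
    ceph osd in "osd.$id"
done
```

With `noin` set, the OSDs register in the cluster but CRUSH does not map PGs to them until they are marked in.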
On Wed, Feb 21, 2018, 5:46 PM Oliver Freyermuth wrote:
Dear Cephalopodians,
in a Luminous 12.2.3 cluster with a pool with:
- 192 Bluestore OSDs total
- 6 hosts (32 OSDs per host)
- 2048 total PGs
- EC profile k=4, m=2
- CRUSH failure domain = host
which results in 2048*6/192 = 64 PGs per OSD on average, I run into issues with
PG overdose protection.
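The average above can be checked with a little shell arithmetic; the 200-PG limit mentioned in the comment is Luminous's default `mon_max_pg_per_osd`, which is an assumption about this cluster's configuration, not a figure from the thread:

```shell
pgs=2048; k=4; m=2; osds=192
# Each EC PG stores k+m shards, one per failure domain (host),
# so total placements = pgs * (k + m) spread over all OSDs:
echo $(( pgs * (k + m) / osds ))   # average PGs per OSD: 64
# While a host is down or being added, backfill can push per-OSD
# counts well above this average; compare against the overdose
# limit mon_max_pg_per_osd (default 200 in Luminous).
```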