Hi Konstantin,
thanks! "set-all-straw-buckets-to-straw2" was what I was
looking for. Didn't see it in the docs. Thanks again!
Cheers,
Oliver
On 23.06.2018 06:39, Konstantin Shalygin wrote:
Yes, I know that section of the docs, but can't find how
to change the crush rules after "ceph osd crush tunables ...".
Could you give me a hint?
What do you mean? All you need after the upgrade to Luminous is:
ceph osd crush tunables optimal
ceph osd crush set-all-straw-buckets-to-straw2
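(To double-check afterwards, assuming a Luminous ceph CLI, the
read-only commands

ceph osd crush show-tunables
ceph osd crush dump | grep alg

should report the new tunables profile and "straw2" as the bucket
algorithm.)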
k
Another question, if I may: Would you recommend going from
my ancient tunables to hammer directly (or even to jewel,
if I can get the clients updated)?
Yeah, your tunables are ancient. Probably wouldn't have happened with
modern ones.
If this were my cluster I would probably update the clients and then
the tunables (caution: lots of data movement!),
but I know how annoying it can be to chase down everyone who runs ancient
clients.
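Just as a sketch of that path (not something I have tested against your
setup): "ceph features" shows the releases/feature bits of the clients
currently connected, and once everything reports jewel or newer,
something along the lines of

ceph osd crush tunables hammer               # or "jewel" once all clients support it
ceph osd set-require-min-compat-client jewel # optional: refuse older clients from then on

would switch the profile; the tunables change is the step that triggers
the large data movement mentioned above.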
For comparison, this
Hi Paul,
ah, right, "ceph pg dump | grep remapped", that's what I was looking
for. I added the output and the result of the pg query at the end of
https://gist.github.com/oschulz/7d637c7a1dfa28660b1cdd5cc5dffbcb
Hi,
have a look at "ceph pg dump" to see which ones are stuck in remapped.
But my guess here is that you are running a CRUSH rule to distribute
across 3 racks and you only have 3 racks in total.
CRUSH will sometimes fail to find a mapping in this scenario. There
are a few parameters that you can tune.
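The first one I would look at is choose_total_tries, which controls how
many attempts CRUSH makes before giving up on a placement. Roughly
(file names here are just examples, and injecting a new map can move
data):

ceph osd getcrushmap -o crushmap.bin        # export the current CRUSH map
crushtool -d crushmap.bin -o crushmap.txt   # decompile it to text
# raise "tunable choose_total_tries 50" to e.g. 100 in crushmap.txt
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject the modified map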
Can you post the full output of "ceph -s", "ceph health detail", and
"ceph osd df tree"?
Also please run "ceph pg X.YZ query" on one of the PGs not backfilling.
Paul
2018-06-20 15:25 GMT+02:00 Oliver Schulz:
Dear all,
we (somewhat) recently extended our Ceph cluster,
and updated it to Luminous. By now, the fill level
on some OSDs is quite high again, so I'd like to
re-balance via "OSD reweight".
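(By "OSD reweight" I mean the temporary override weight, i.e.
something like

ceph osd reweight 12 0.85              # OSD id and weight are just examples
ceph osd reweight-by-utilization 120   # or let Ceph pick overfull OSDs itself

rather than the permanent CRUSH weight.)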
I'm running into the following problem, however:
No matter what I do (reweight a little, or a lot,
or