* osd.34, last time : 2019-10-18 06:24:26
* osd.20, last time : 2019-10-27 18:12:31
* osd.28, last time : 2019-10-28 12:57:47
Whether the data came from osd.25 or osd.30, I get the same
error. It seems this PG/object tries to recover to a healthy state but
shuts down my OSDs one by one…
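For anyone following along, the up/acting OSD sets of a given PG can be
checked with the standard Ceph CLI (the PG id and pool/object names below
are placeholders, not values from this cluster):

    # Show which OSDs a PG maps to (up set and acting set)
    ceph pg map 2.5
    # Map a pool/object pair to its PG and OSDs
    ceph osd map mypool myobject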
Thus spake Brad Hubbard (bhubb...@redhat.com) on Wednesday 30 October 2019 at
12:50:50:
> Maybe you should set nodown and noout while you do these maneuvers?
> That will minimise peering and recovery (data movement).
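For reference, those flags are set and cleared like this (standard Ceph CLI):

    # Prevent OSDs from being marked down/out during the maneuvers
    ceph osd set nodown
    ceph osd set noout
    # ... do the work, then clear the flags again ...
    ceph osd unset nodown
    ceph osd unset noout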
As the commands don't take too long, I just had a few slow requests before
the
Thus spake Brad Hubbard (bhubb...@redhat.com) on Tuesday 29 October 2019 at
08:20:31:
> Yes, try and get the pgs healthy, then you can just re-provision the down
> OSDs.
>
> Run a scrub on each of these pgs and then use the commands on the
> following page to find out more information for each
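Presumably something along these lines (standard Ceph CLI; 2.5 is a
placeholder PG id, not one of mine):

    # Request a (deep) scrub of an inconsistent PG
    ceph pg scrub 2.5
    ceph pg deep-scrub 2.5
    # List the inconsistent objects found by the last deep scrub
    rados list-inconsistent-obj 2.5 --format=json-pretty
    # Detailed peering/recovery state of the PG
    ceph pg 2.5 query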
Hello,
For several weeks, I have had some OSDs flapping before being marked out of
the cluster by Ceph…
I was hoping for some Ceph magic and just gave it some time to auto-heal
(and be able to do all the side work…) but it was a bad idea (what a
surprise :D). I also got some inconsistent PGs, but I was