Thanks Linh Vu, so it sounds like I should be prepared to bounce the OSDs
and/or hosts, but I haven't heard anyone say yet that it won't work, so I
guess there's that...
On Tue, Dec 14, 2021 at 7:48 PM Linh Vu wrote:
I haven't tested this in Nautilus 14.2.22 (or any Nautilus), but in Luminous
or older, if you went from a bigger size to a smaller size, there was either
a bug or a "feature-not-bug" that prevented the OSDs from automatically
purging the redundant PGs with data copies. I did this on a size=5 to size=3
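For reference, the size change itself is a single pool setting. A minimal
sketch, with "mypool" standing in for the real pool name, and the OSD
restart only as a fallback if stray PG copies don't purge on their own:

    # check the current replica count
    ceph osd pool get mypool size
    # drop the pool from 3 replicas to 2
    ceph osd pool set mypool size 2
    # fallback: restart an affected OSD to force a fresh peering pass
    systemctl restart ceph-osd@<osd-id>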
Hi Joachim,
Understood on the risks. Aside from the alt. cluster, we have 3 other
copies of the data outside of Ceph, so I feel pretty confident that it's a
question of repopulation time, not data loss.
That said, I would be interested in your experience with what I'm trying to
do if you've at
Hi Martin,
Agreed on the min_size of 2. I have no intention of worrying about uptime
in the event of a host failure. Once the size of 2 takes effect (and I'm
unsure how long that will take), we intend to evacuate all OSDs on one of
the 4 hosts in order to migrate that host to the new cluster, wher
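A sketch of that evacuation step, assuming osd.0 through osd.3 are the
OSDs on the departing host (the ids are placeholders); marking them out
drains their PGs onto the remaining hosts:

    # mark each OSD on the departing host out so its PGs backfill elsewhere
    ceph osd out osd.0 osd.1 osd.2 osd.3
    # watch the backfill progress until the cluster is clean again
    ceph -s
    # confirm an OSD holds no needed data before removing it
    ceph osd safe-to-destroy osd.0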
Hello,
avoid size 2 whenever you can. As long as you know that you might lose
data, it can be an acceptable risk while migrating the cluster. We have had
that in the past multiple times, and it is a valid use case in our opinion.
However, make sure to monitor the state and recover as fast as possible.
Lea
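For that monitoring, the stock status commands are enough; the main thing
to watch during the size=2 window is undersized/degraded PGs, since any OSD
failure then leaves a single copy:

    # overall cluster state and health warnings
    ceph -s
    ceph health detail
    # PGs currently running below the replica count are the ones at risk
    ceph pg dump_stuck undersized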
I would avoid doing this. Size 2 is not where you want to be. Maybe you can
give more details about your cluster size and shape and what you are trying
to accomplish, and another solution could be proposed. The output of "ceph
osd tree" and "ceph df" would help.
Respectfully,
*Wes Dillingham*
w