Hi David,
It looks like we are affected by the same bug, thanks for the hint.
We're running Pacific 16.2.0, and I'm looking forward to upgrading to
the latest Pacific version, but the last upgrade I tried was not
successful. In hindsight, it was the same bug causing the problem.
Now, my
It may be this:
https://tracker.ceph.com/issues/50526
https://github.com/alfredodeza/remoto/issues/62
Which we resolved with: https://github.com/alfredodeza/remoto/pull/63
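For anyone hitting this thread later: the remoto issue above is a pipe-buffer deadlock, the classic subprocess pitfall where the parent drains one of the child's pipes to EOF while the child blocks writing to the other, full, pipe. The sketch below is not remoto's actual code, just a minimal illustration of the safe pattern (draining both pipes concurrently via communicate()), assuming Python 3 and a child that writes more than one pipe buffer (~64 KiB on Linux) to stderr:

```python
import subprocess
import sys

def run_safely(cmd):
    """Run cmd, draining stdout and stderr concurrently.

    If we instead read proc.stdout to EOF before touching proc.stderr,
    a child that fills the stderr pipe buffer blocks in write(), we block
    in read(), and both processes hang forever. communicate() reads both
    pipes at once, so it cannot deadlock this way.
    """
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    out, err = proc.communicate()
    return proc.returncode, out, err

if __name__ == "__main__":
    # Child writes well past one pipe buffer to stderr, then one line to stdout.
    child = [
        sys.executable, "-c",
        "import sys; sys.stderr.write('x' * 200000); print('done')",
    ]
    rc, out, err = run_safely(child)
    print(rc, out.strip(), len(err))  # → 0 done 200000
```

With the pre-fix remoto, the same hang showed up as the stuck `cephadm ... ceph-volume` process mentioned further down in this thread whenever ceph-volume's output grew large enough.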
What version of ceph are you running, and is it impacted by the above?
David
On Thu, Sep 2, 2021 at 9:53 AM fcid wrote:
Hi Sebastian,
Following your suggestion, I've found this process:
/usr/bin/python3
/var/lib/ceph//cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6
--image
ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a
ceph-volume --fsid -- lvm list
Am 31.08.21 um 04:05 schrieb fcid:
Hi ceph community,
I'm having some trouble trying to delete an OSD.
I've been using cephadm in one of our clusters and it works fine,
but lately, after an OSD failure, I cannot delete it using the
orchestrator. Since the orchestrator is not working (for