Quick update:
Unfortunately, it seems we are still having issues.
"ceph orch apply osd --all-available-devices" now enumerates all 60
available 10TB drives in the host, and the OSDs don't flap, allowing all 60
to be defined and marked in. Once all 60 complete, the OSDs begin to
activ
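For reference, a sketch of the commands under discussion (assumes a bootstrapped cluster and a node with the admin keyring; the 60-drive count comes from the report above):

```shell
# Tell the orchestrator to create an OSD on every eligible blank device:
ceph orch apply osd --all-available-devices

# Watch the OSD daemons come up and confirm they stay in:
ceph orch ps
ceph osd tree
```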
Peter and fellow Ceph users,
I just wanted to update this forum on our interim findings so far, but
first and foremost, a HUGE thank you to David Orman for all of his help.
We're in the process of staging our testing on bare metal now, but I wanted
to confirm that for us at least, the showstopper
ormandj/ceph:v16.2.4-mgrfix <-- pushed to Docker Hub.
Try bootstrapping with --image "docker.io/ormandj/ceph:v16.2.4-mgrfix" if
you want to give it a shot, or you can set CEPHADM_IMAGE. We think
both should work with any cephadm command, even if the
documentation doesn't make that clear.
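The two ways of pointing cephadm at the patched image, sketched below (the MON IP is a placeholder; only the image tag comes from this thread):

```shell
# Option 1: pass the patched image explicitly at bootstrap time
# (placeholder MON IP, shown commented out since it needs a real host):
#   cephadm --image "docker.io/ormandj/ceph:v16.2.4-mgrfix" bootstrap --mon-ip <mon-ip>

# Option 2: export CEPHADM_IMAGE so every subsequent cephadm invocation
# picks up the patched image:
export CEPHADM_IMAGE="docker.io/ormandj/ceph:v16.2.4-mgrfix"
echo "$CEPHADM_IMAGE"
```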
On Tue
I do not believe it was in 16.2.4. I will build another patched version of the
image tomorrow based on that version. I agree; I feel this breaks new
deploys as well as existing ones, and hope a point release that includes
the fix will come soon.
> On May 31, 2021, at 15:33, Marco Pizzolo wrote:
David,
What I can confirm is that if this fix is already in 16.2.4 and 15.2.13,
then there's another issue resulting in the same situation, as it continues
to happen in the latest available images.
We are going to try and see if we can install a 15.2.x release and
subsequently upgrade using a fixe
Does the image we built fix the problem for you? That's how we worked
around it. Unfortunately, it even bites you with fewer OSDs if you have
DB/WAL on other devices; we have 24 rotational drives/OSDs, but split
DB/WAL onto multiple NVMes. We're hoping the remoto fix (since it's
merged upstream and
Unfortunately Ceph 16.2.4 is still not working for us. We continue to have
issues where the 26th OSD is not fully created and started. We've
confirmed that we do get the flock as described in:
https://tracker.ceph.com/issues/50526
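One way to confirm the stuck flock described in the tracker issue is to list file locks on the affected host; this is a generic sketch using util-linux tools, not a command from the thread:

```shell
# List held file locks and look for the cephadm/remoto process holding one;
# lslocks may report nothing if no cephadm process is currently stuck:
lslocks | grep -i cephadm || true

# /proc/locks shows the same information by inode if lslocks is unavailable:
cat /proc/locks
```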
-
*I have verified in our labs a way to reproduce easily th
I've actually managed to get a little further with my problem.
As I've said before, these servers are slightly distorted in config:
63 drives and only 48G of memory.
Once I create about 15-20 OSDs, it continues to format the disks but won't
actually create the containers or start any service.
Wor
Thanks David
We will investigate the bugs as per your suggestion, and then will look to
test with the custom image.
Appreciate it.
On Sat, May 29, 2021, 4:11 PM David Orman wrote:
> You may be running into the same issue we ran into (make sure to read
> the first issue, there's a few mingled in
You may be running into the same issue we ran into (make sure to read
the first issue, there's a few mingled in there), for which we
submitted a patch:
https://tracker.ceph.com/issues/50526
https://github.com/alfredodeza/remoto/issues/62
If you're brave (YMMV, test first non-prod), we pushed an i