On 2025-08-18 13:43, Eugen Block wrote:
Hi,

the last message you sent is normal for an OSD that hasn't reported back its status yet.

I would check the ceph-volume.log and the cephadm.log, and maybe an OSD log as well if it tried to boot. If this is a test cluster, did you properly wipe all disks before trying to deploy OSDs? For me, 'cephadm ceph-volume lvm zap --destroy /dev/sdx /dev/sdy /dev/sdz ...' (run locally on the host) has worked great for years. You can zap them with the orchestrator as well, but only one disk at a time, so a for loop would be useful; see the sketch below.
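Something along these lines should work as a minimal sketch (untested here; it assumes the 'ceph orch device zap <hostname> <path>' form, and you would replace the host name and device paths with your own):

  for dev in /dev/sdx /dev/sdy /dev/sdz; do
      ceph orch device zap myhost "$dev" --force
  done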

There have been reports every now and then from users who tried to deploy many disks per host; I don't have a link available right now. And I haven't had a chance yet to deploy multiple hosts/many OSDs with 19.2.3, so there might be a regression in ceph-volume.

Regards,
Eugen

Yes, I have zapped all drives before each try...

I'm looking at the ceph-volume logs, but I don't have much history.
I think I'll have to try again and watch the logs as it happens...
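For reference, these are the files I'm checking on the host (assuming the default cephadm log locations; <fsid> stands for the cluster fsid):

  /var/log/ceph/cephadm.log
  /var/log/ceph/<fsid>/ceph-volume.log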