> > - 36 data disks, registered as mpath-devices
> > - 2 NVMEs, which act as block.db for all 36 spinning disks.
That's an unusually high ratio. Which model exactly are the NVMe SSDs? And why
are you using multipathing with the HDDs?

> I hacked a workaround, which removes the last conditional check
> (occupied_slots < fast_slots_per_device) and hard-codes the expected size of
> each block.db in my deployment file:

Honestly, that's what I often do, especially when applying the orch service to
systems with OSDs that were previously deployed under a different scheme.
Hybrid OSDs can be messy.

> service_type: osd
> service_id: delta2024_osd
> service_name: osd.delta2024_osd
> placement:
>   label: delta2024
> spec:
>   block_db_size: 397G
>   data_devices:
>     rotational: 1
>     size: '15T:'
>   db_devices:
>     rotational: 0
>   encrypted: true
>   filter_logic: AND
>   objectstore: bluestore
>
> This gives the expected results.
>
> In my opinion, cephadm sends a wrong "ceph-volume lvm batch" command to the
> OSD node. It should always include all of the disks, since running it is
> promised to be idempotent. With the full list of disks, ceph-volume should
> be able to calculate correct slots for block.db.
>
> Did I find a bug here or is this expected behavior?

Good question. Please enter a ticket at tracker.ceph.com.

> ---------------------------
> M.Sc. Alex Walender
> Institut für Bio- und Geowissenschaften
> IBG 5 - Computergestützte Metagenomik /
> de.NBI Cloud Site Bielefeld
> Office: Universität Bielefeld (UHG), N7-101
> Tel.: +49-521-106-2907
>
> Forschungszentrum Jülich GmbH
> 52425 Jülich
> Registered office: Jülich
> Registered in the commercial register of the Amtsgericht Düren, No. HR B 3498
> Chairman of the Supervisory Board: MinDir Stefan Müller
> Management Board: Prof. Dr. Astrid Lambrecht (Chair),
> Dr. Stephanie Bauer (Deputy Chair), Prof. Dr. Ir.
> Pieter Jansens

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
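[For context on the slot arithmetic discussed above: the sketch below is my own
back-of-the-envelope illustration, not ceph-volume's actual code, and the NVMe
capacity used is a placeholder, not the poster's hardware.]

```python
import math

# With 36 HDDs sharing 2 NVMe db_devices, each NVMe must be carved
# into 18 equal block.db slots. If cephadm hands ceph-volume only a
# subset of the data disks, the slot count -- and therefore the
# per-slot block.db size -- comes out wrong, which is why hard-coding
# block_db_size in the service spec sidesteps the problem.

def db_slots_per_fast_device(num_data_devices: int, num_db_devices: int) -> int:
    """How many block.db slots each fast (db) device must provide."""
    return math.ceil(num_data_devices / num_db_devices)

def db_size_per_slot(db_device_bytes: int, slots: int) -> int:
    """Size of each block.db when a db device is split into equal slots."""
    return db_device_bytes // slots

slots = db_slots_per_fast_device(36, 2)               # 18 slots per NVMe
size = db_size_per_slot(8_000_000_000_000, slots)     # placeholder 8 TB NVMe
print(slots, size)                                    # ~444 GB per block.db
```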
