Hey Anthony,

The NVMe drives are Micron 7500 Pro with 7.6 TB capacity (MTFDKCC7T6TGP-1BK1DABYYR).

We didn't exactly aim for multipath disks. Let's say they were ordered
by "accident" ;)

Nevertheless, I was able to pinpoint the issue. It is indeed related to the
multipath disks.
The issue is described here: https://tracker.ceph.com/issues/63862

TL;DR: The ceph-volume inventory command was not able to list mpath disks
that were already in use as OSDs. They were incorrectly filtered out because
they were detected as mapper devices with LVs on top.
I attached a patch file for this: https://tracker.ceph.com/attachments/8006
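
For context, the kind of check involved looks roughly like this. This is an
illustration only; the helper names and dict keys are hypothetical, the real
logic lives in ceph-volume/util/disk.py, and the actual change is in the
attached patch:

# Illustration only, not the actual ceph-volume code. The dict keys are
# hypothetical stand-ins for the checks ceph-volume performs via
# sysfs/lsblk when building the inventory.

def should_list_device(dev):
    # Old behaviour (roughly): any device-mapper device carrying LVs was
    # dropped from the inventory, which also drops mpath disks that
    # already back an OSD.
    return not (dev["is_mapper"] and dev["has_lvs"])

def should_list_device_patched(dev):
    # Patched behaviour (roughly): multipath devices stay in the inventory
    # even when LVs sit on top of them.
    return dev["is_mpath"] or not (dev["is_mapper"] and dev["has_lvs"])

if __name__ == "__main__":
    mpath_osd = {"is_mapper": True, "has_lvs": True, "is_mpath": True}
    print(should_list_device(mpath_osd))          # False: disk vanishes
    print(should_list_device_patched(mpath_osd))  # True: disk is listed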

It appears that this only affects Ceph Reef and older; the code in
ceph-volume/util/disk.py has changed since then.


---------------------------
M.Sc Alex Walender
Institut für Bio- und Geowissenschaften
IBG 5 - Computergestützte Metagenomik /
        de.NBI Cloud Site Bielefeld
Büro : Universität Bielefeld (UHG), N7-101
Tel. :  +49-521-106-2907

Forschungszentrum Jülich GmbH
52425 Jülich
Sitz der Gesellschaft: Jülich
Eingetragen im Handelsregister des Amtsgerichts Düren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Stefan Müller
Geschäftsführung: Prof. Dr. Astrid Lambrecht (Vorsitzende),
Dr. Stephanie Bauer (stellv. Vorsitzende), Prof. Dr. Ir. Pieter Jansens

> Am 11.03.2026 um 15:43 schrieb Anthony D'Atri via ceph-users 
> <[email protected]>:
> 
>> 
>> - 36 data disks, registered as mpath-devices
>> - 2 NVMEs, which act as block.db for all 36 spinning disks.
> 
> That’s an unusually high ratio.  Which model exactly are the NVMe SSDs?  And 
> why are you using multipathing with the HDDs?
> 
>> I hacked a workaround that removes the last conditional check
>> (occupied_slots < fast_slots_per_device) and hard-codes the expected size of
>> each block.db in my deployment file:
> 
> Honestly that’s what I often do, especially when applying the orch service to 
> systems with OSDs that were previously deployed under a different scheme. 
> Hybrid OSDs can be messy.
> 
>> service_type: osd
>> service_id: delta2024_osd
>> service_name: osd.delta2024_osd
>> placement:
>>   label: delta2024
>> spec:
>>   block_db_size: 397G
>>   data_devices:
>>     rotational: 1
>>     size: '15T:'
>>   db_devices:
>>     rotational: 0
>>   encrypted: true
>>   filter_logic: AND
>>   objectstore: bluestore
>> 
>> This gives the expected results.
>> In my opinion, cephadm sends a wrong "ceph-volume lvm batch" command to the
>> OSD node. It should always include all of the disks, since running it is
>> promised to be idempotent. With the full list of disks, ceph-volume should
>> be able to calculate the correct slots for block.db.
>> 
>> Did I find a bug here or is this expected behavior?
> 
> Good question.  Please enter a ticket at tracker.ceph.com.
> 

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
