Thanks for the confirmation, that’s great to know. I’ll try that next
time I have a chance.
Zitat von "GLE, Vivien" <[email protected]>:
Thanks for your help. I used size and rotational filters, as in Stephan's
config, and it works perfectly. (I didn't use db_slots.)
Vivien
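
For reference, a spec along those lines, filtering on rotational and size
rather than on device paths, might look roughly like this (a sketch only;
the host pattern, the size threshold and the service_id are placeholders,
not the values actually used):

service_type: osd
service_id: osd_hdd_with_ssd_db
placement:
  host_pattern: 'host*'    # placeholder pattern, adjust to your hosts
spec:
  data_devices:
    rotational: 1          # data/block on spinning disks
  db_devices:
    rotational: 0          # DB+WAL on flash
    size: '100G:'          # only devices of at least 100G; threshold is illustrative
  filter_logic: AND
  objectstore: bluestore
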
________________________________
From: Stephan Hohn <[email protected]>
Sent: Wednesday, October 1, 2025 09:32:41
To: Eugen Block
Cc: [email protected]
Subject: [ceph-users] Re: help OSD deploying via yaml
This is currently Reef, but we started with it on Quincy.
Yes, disk replacement works.
On Tue, Sep 30, 2025 at 17:21, Eugen Block <[email protected]> wrote:
That's really surprising. Have you also already had to replace drives,
and were they redeployed as expected?
I must admit, it's been some time since I last tried something similar
(maybe Pacific), and when it failed I decided to do it differently.
Which version is this?
Quoting Stephan Hohn <[email protected]>:
> In our deployments it works like this.
>
> It applies the specs in the order you get from "ceph orch ls", from top to bottom.
> That's why we use the numbering (osd_1... osd_2... osd_3...).
> Also, db_slots works quite well.
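>
> For reference, that list order is what the orchestrator prints with:
>
> ceph01:~ # ceph orch ls osd
> ceph01:~ # ceph orch ls osd --export
>
> (ceph01 here is just an example host prompt.)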
>
>
>
> On Tue, Sep 30, 2025 at 15:57, Eugen Block <[email protected]> wrote:
>
>> That's not going to work as you expect: if the same host is applicable
>> to multiple osd specs, only one of the specs will be applied (I think
>> it's the last one in the list of 'ceph orch ls osd --export').
>>
>> I would also recommend not using device paths; those can change after
>> a reboot (although cephadm works with LVM and labels).
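>>
>> The device properties that cephadm can filter on (rotational, size,
>> model, and so on) can be inspected per host, for example with:
>>
>> ceph01:~ # ceph orch device ls --wide
>> ceph01:~ # ceph orch device ls host1
>>
>> (host1 is taken from the spec below; ceph01 is just an example prompt.)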
>>
>> The config option "db_slots" has never worked reliably, it's mentioned
>> in the docs somewhere, so it most likely won't work either as you
>> expect.
>>
>> If you can distinguish your OSDs using models, sizes or some other
>> parameter, you could use several osd specs. If you can't, you might
>> need to create the daemons manually like this:
>>
>> ceph01:~ # ceph orch daemon add osd \
>>   ceph01:data_devices=/dev/vdb,/dev/vdc,/dev/vdd,db_devices=/dev/vde,block_db_size=64G
>>
>> (I'm not sure about that last parameter; I'm writing this off the top of
>> my head.)
>>
>> But note that these OSDs will be displayed as "unmanaged" in 'ceph
>> orch ls osd' output.
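>>
>> Either way, a spec file can be previewed before it is applied, which
>> shows which devices each spec would actually pick up, for example:
>>
>> ceph01:~ # ceph orch apply -i osd_specs.yaml --dry-run
>>
>> (osd_specs.yaml is just a placeholder file name.)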
>>
>>
>> Zitat von "GLE, Vivien" <[email protected]>:
>>
>> > Hi,
>> >
>> >
>> > For testing purposes we need to deploy:
>> >
>> >
>> > - 1 pool of 6 SSD OSDs
>> >
>> > - 1 pool of 6 HDD OSDs
>> >
>> > - 1 pool of 6 HDD OSDs with 2 SSDs for DB+WAL
>> >
>> >
>> > I tried to apply this YAML with 'ceph orch apply', but it doesn't work as expected.
>> >
>> >
>> > OSD part of the YAML:
>> >
>> >
>> > service_type: osd
>> > service_id: osd_spec
>> > placement:
>> >   hosts:
>> >     - host1
>> >     - host2
>> >     - host3
>> >     - host4
>> >     - host5
>> >     - host6
>> > data_devices:
>> >   paths:
>> >     - /dev/sda
>> >     - /dev/sdc
>> > spec:
>> >   data_devices:
>> >     all: true
>> >   filter_logic: AND
>> >   objectstore: bluestore
>> > ---
>> > service_type: osd
>> > service_id: osd_spec_wall
>> > placement:
>> >   hosts:
>> >     - host1
>> >     - host2
>> >     - host3
>> >     - host4
>> >     - host5
>> >     - host6
>> >
>> > spec:
>> >   data_devices:
>> >     paths:
>> >       - /dev/sdf
>> >   db_devices:
>> >     paths:
>> >       - /dev/sde
>> >     limit: 2
>> >   db_slots: 3
>> >
>> >
>> >
>> > Only one DB on /dev/sde from host1 was created, and this OSD
>> > showed up as half full at its creation:
>> >
>> > ceph osd df | grep 25
>> >
>> > 25   hdd  0.63669  1.00000  652 GiB  373 GiB  1.5 MiB  1 KiB  38 MiB  279 GiB  57.15  35.93  0  up
>> >
>> > ceph-volume lvm list
>> >
>> > ====== osd.25 ======
>> >
>> >   [block]  /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
>> >
>> >       block device          /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
>> >       block uuid            vqJGf8-V5g0-S1cA-BAcN-Qm9D-9VTx-xsc8wk
>> >       cephx lockbox secret
>> >       cluster fsid          id
>> >       cluster name          ceph
>> >       crush device class
>> >       db device             /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
>> >       db uuid               Rtj5KA-Qxjk-IFmY-3ffQ-gSAu-Snte-uku8HC
>> >       encrypted             0
>> >       osd fsid              2f009760-fc2b-46d5-984d-e8200dfd9d9d
>> >       osd id                25
>> >       osdspec affinity      osd_spec_wall
>> >       type                  block
>> >       vdo                   0
>> >       with tpm              0
>> >       devices               /dev/sdf
>> >
>> >   [db]  /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
>> >
>> >       block device          /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
>> >       block uuid            vqJGf8-V5g0-S1cA-BAcN-Qm9D-9VTx-xsc8wk
>> >       cephx lockbox secret
>> >       cluster fsid          id
>> >       cluster name          ceph
>> >       crush device class
>> >       db device             /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
>> >       db uuid               Rtj5KA-Qxjk-IFmY-3ffQ-gSAu-Snte-uku8HC
>> >       encrypted             0
>> >       osd fsid              2f009760-fc2b-46d5-984d-e8200dfd9d9d
>> >       osd id                25
>> >       osdspec affinity      osd_spec_wall
>> >       type                  db
>> >       vdo                   0
>> >       with tpm              0
>> >       devices               /dev/sde
>> >
>> >
>> >
>> > The other /dev/sde devices showed up as data_devices instead of
>> > db_devices (example here on host2):
>> >
>> > ceph-volume lvm list
>> >
>> > ====== osd.17 ======
>> >
>> >   [block]  /dev/ceph-629f98b0-5ed4-4e75-81b9-e85ca76afb15/osd-block-5d43d683-1f7f-4dc1-935e-6a79745252f9
>> >
>> >       block device          /dev/ceph-629f98b0-5ed4-4e75-81b9-e85ca76afb15/osd-block-5d43d683-1f7f-4dc1-935e-6a79745252f9
>> >       block uuid            HQpp1l-x7IB-kA2W-6gWO-BGlM-VN2k-vYf43R
>> >       cephx lockbox secret
>> >       cluster fsid          id
>> >       cluster name          ceph
>> >       crush device class
>> >       encrypted             0
>> >       osd fsid              5d43d683-1f7f-4dc1-935e-6a79745252f9
>> >       osd id                17
>> >       osdspec affinity      osd_spec
>> >       type                  block
>> >       vdo                   0
>> >       with tpm              0
>> >       devices               /dev/sde
>> >
>> > Thanks for your help,
>> > Vivien
>> >
>> >
>> >
>> >
>> >
>> >
>> >
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]