Not really, it's on an air-gapped/secure network and I cannot copy and paste from
it.  What are you looking for?  This cluster has 720 OSDs across 18 storage
nodes.
I think we have identified the problem, and it may not be a Ceph issue, but we
need to investigate further.  It has something to do with the SSD devices that
are being ignored - they are slightly different from the other ones.
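In case it helps anyone searching the archives later, here is a rough sketch of
the kind of mismatch we suspect (the service_id, host_pattern, and filter values
below are placeholders, not our actual spec).  If an OSD service spec selects
db_devices by a narrow attribute such as model or size, SSDs whose attributes
differ even slightly will not match the filter, and the orchestrator falls back
to putting data and db/wal on the same device:

    service_type: osd
    service_id: osd_hdd_with_ssd_db      # placeholder name
    placement:
      host_pattern: 'storage*'           # placeholder; match your storage nodes
    spec:
      data_devices:
        rotational: 1                    # HDDs carry the data
      db_devices:
        rotational: 0                    # SSDs carry the db/wal
        # a stricter filter here, e.g. model: 'SOME-SSD-MODEL' or a size
        # range, would silently exclude SSDs that are slightly different

Comparing the filters against "ceph orch device ls" (which also shows reject
reasons), and re-applying the spec with "ceph orch apply osd -i <spec.yml>
--dry-run", should show which devices the spec would actually pick up.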
________________________________
From: Eugen Block <[email protected]>
Sent: Wednesday, January 11, 2023 3:27 AM
To: [email protected] <[email protected]>
Subject: [ceph-users] Re: adding OSD to orchestrated system, ignoring osd 
service spec.

Hi,

can you share the output of

storage01:~ # ceph orch ls osd

Thanks,
Eugen

Quoting Wyll Ingersoll <[email protected]>:

> When adding a new OSD to a ceph orchestrated system (16.2.9) on a
> storage node that has a specification profile that dictates which
> devices to use as the db_devices (SSDs), the newly added OSDs seem
> to be ignoring the db_devices (there are several available) and
> putting the data and db/wal on the same device.
>
> We installed the new disk (HDD) and then ran "ceph orch device zap
> /dev/xyz --force" to initialize the addition process.
> The OSDs that were added originally on that node were laid out
> correctly, but the new ones seem to be ignoring the OSD service spec.
>
> How can we make sure the new devices added are laid out correctly?
>
> thanks,
> Wyllys
>
>
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]


_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]