Hi Cephers,

I'm building a new Squid cluster with cephadm on Ubuntu 24.04.
After expanding my cluster in the Dashboard (adding my 7 hosts),
I chose the throughput_optimized profile, which created a generic spec for hybrid HDD/SSD:

service_type: osd
service_id: throughput_optimized
service_name: osd.throughput_optimized
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  encrypted: true
  filter_logic: AND
  objectstore: bluestore
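
For completeness, I pulled the spec above with (I think) the export command below; I'm also including the dry-run command in case a preview of the placement would help diagnose this:

  # export the OSD service spec the Dashboard generated
  ceph orch ls --service-type osd --export > osd_spec.yml

  # preview what the orchestrator would do with it (nothing is applied)
  ceph orch apply -i osd_spec.yml --dry-run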

The cluster is for a LAB environment: on each of the 7 nodes I have 17 SAS HDDs of 1.2 TB and 1 enterprise SAS SSD of 400 GB. On my first try, only 28 OSDs were created (out of the 119); the others appeared as down but wouldn't start, and I didn't find any systemd units created on the hosts. However, the VGs and LVs were created: there are 17 LVs on the SSD for the WAL/DB of the 17 HDDs (yes, small: 29 GB each).
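
Roughly what I checked on one of the hosts (from memory, so the exact commands may differ slightly):

  # OSD daemons as seen by the orchestrator
  ceph orch ps --daemon-type osd

  # on the host itself: no matching systemd units show up
  systemctl list-units 'ceph-*@osd.*'

  # but the LVM side is there (run on the host, through cephadm)
  cephadm ceph-volume lvm list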

On my second try, it created 72 OSDs, but it still stops at some point and never tries to continue or to re-create the down OSDs.
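
This is how I am counting them (standard status commands, nothing fancy):

  ceph osd stat       # overall count: total / up / in
  ceph osd tree down  # only the OSDs currently marked down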

I haven't managed to find them again, but I think I saw some OSD creation timeouts in the logs...
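
I was digging through the cephadm logs roughly like this (happy to raise the log level and retry if that helps):

  # recent cephadm entries from the cluster log
  ceph log last 200 info cephadm

  # raise cephadm logging to debug before the next attempt
  ceph config set mgr mgr/cephadm/log_to_cluster_level debug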

What can I do to get my missing OSDs created?

I tried restarting and redeploying the orch OSD service, but it only restarts/redeploys the OSDs it has already created...
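
For reference, these are (roughly) the commands I tried:

  ceph orch restart osd.throughput_optimized
  ceph orch redeploy osd.throughput_optimized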
