Hi,

Just my 2 cents, so please wait for a second opinion; I wouldn't want to give 
incorrect or incomplete advice.

It's best practice to present Ceph with the raw block devices on which you're 
going to deploy your OSDs. No (active) RAID controller in between, just the 
raw device. You can use a RAID controller, but configure it so that the OS 
sees the raw device, not a virtual disk the controller itself presents to the 
OS. I think it's called IT mode or HBA mode.
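
If you're deploying with cephadm (just an assumption on my side), a quick way 
to sanity-check what the OS and Ceph actually see is something like this 
(rough sketch, the exact columns and output vary per version):

    lsblk -d -o NAME,SIZE,MODEL,TRAN,ROTA   # what the OS sees (ROTA=1 means spinning disk)
    ceph orch device ls                     # what cephadm considers usable for OSDs

If the controller is really in IT/HBA mode, the individual disk models and 
serials should show up here, not a single virtual volume.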

Please someone correct me if I'm wrong, but AFAIK it doesn't really matter 
which block devices you run your OS on, including where the Ceph binaries 
and/or /var/lib/ceph are located. Just make sure it's reliable and 
sufficiently fast storage, like you would spec for your regular servers.
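
Nothing Ceph-specific, but if you ever wonder which device your Ceph state 
actually lives on on an existing node, a plain

    df -h /var/lib/ceph    # shows the filesystem/device holding the Ceph state

will tell you.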

Also make sure to choose suitable block devices for the OSDs! HDDs are 
definitely possible and used in Ceph clusters, but they result in a relatively 
slow cluster unless you've got a massive number of them. E.g. if you want to 
use the cluster for archival purposes and performance doesn't really matter, 
they might work. For SSDs, definitely go enterprise class with PLP (power-loss 
protection)! Don't skimp on PLP and don't go for consumer-class SSDs. It's a 
big, big no-no you definitely want to avoid!

And in case you skipped the paragraph above: don't compromise on PLP, and go 
for enterprise-class SSDs, pretty please 😉🙂.
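
If you want to double-check drives you already have in a box, pulling the 
model and firmware and then looking up the datasheet for PLP is usually enough 
(the device name below is just an example, and nvme list needs nvme-cli 
installed):

    nvme list                    # model, serial and firmware of every NVMe drive
    smartctl -i /dev/nvme0n1     # identify a single drive in more detail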

And while we're on the subject of SSDs: our Ceph support partner once told me 
there are certain SSD models that are known to cause problems. I don't know 
the specifics, but once you're speccing your cluster, I think it's a good idea 
to post your proposal here or in the Slack channel for review. Who knows, 
someone might chime in with actual experience with your hardware.

Tentacle has just been released. While it's a stable release, you might not 
want to move to Tentacle just yet, at least not until there's been at least 
one point release.

Then there's Squid, but you might want to be careful and disable 
bluestore_elastic_shared_blobs before you start deploying OSDs, especially if 
you're looking to use erasure-coded pools rather than replicated pools. I 
wasn't aware of the bluestore_elastic_shared_blobs bug while deploying my 
cluster, and I ended up redeploying all my OSDs just to be on the safe side, 
which took around a week to complete.
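
If I remember correctly it's a plain config option, so something along these 
lines before creating any OSDs should do the trick (this is from memory, so 
please double-check it against the Squid release notes/tracker first):

    ceph config set osd bluestore_elastic_shared_blobs false
    ceph config get osd bluestore_elastic_shared_blobs    # verify it's actually off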

And last but not least: here's the official documentation, which gives a good 
overview of what you might want hardware-wise: 
https://docs.ceph.com/en/tentacle/start/hardware-recommendations/ You'll 
probably pick up some more dos and don'ts there.

Cheers and happy holidays!

Wannes


________________________________
From: Nekopilot via ceph-users <[email protected]>
Sent: Wednesday, December 24, 2025 11:12
To: [email protected] <[email protected]>
Subject: [ceph-users] Ceph system disk on non-raid drive


Dear ceph users

I've been discussing some practices for Ceph node hardware configuration, 
specifically running the system disk on a non-RAID drive (e.g. an internal 
M.2).

Failure domain is host or rack; it's an NVMe cluster.

Is it OK, or at least one of the possible and "adopted" ways, to deploy Ceph 
nodes with the system disk not on RAID1 (hw or sw)? Do you have any experience 
with such nodes?
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
