Hi,

>
> My mission is to make a ceph cluster production ready.
> We already have a mock-up of 6 nodes deployed via cephadm to test ceph's 
> features
> We want to install a ceph cluster to get off our dependencies to SAN and to 
> store around 400 TB of data. We'll use mostly block storage for VMs/k8 and a 
> bit of CephFS and Object storage
>
> Are those specs good for our purpose ?
>



> - 3 MGR/MON/RGW 128 GB RAM 32 cores (is it recommended to put RGW daemon in 
> the same node as MGR/MON ?)
> - 2 MDS 16 GB RAM 8 cores
> - * OSD 8 GB RAM 8 cores (with high IOPS per core)
>It is very common to have a single type of node, a converged architecture, 
>especially with smaller deployments.
>The above three lines RAM/cores for entire nodes?


This is per node (and, for the OSD line, per daemon).
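On the 8 GB-per-OSD-daemon figure: BlueStore's default `osd_memory_target` is 4 GiB, so budgeting 8 GB per OSD leaves room to raise it. A sketch (my own numbers, not from the thread; leave headroom for the OS and colocated daemons):

```shell
# Raise the per-OSD memory target from the 4 GiB default to 8 GiB.
# The value is in bytes; it is a target the OSD trims toward, not a hard cap.
ceph config set osd osd_memory_target 8589934592
```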



>I would suggest
>Run 5 mons, not 3.  That lets you survive a double failure.  It happens.
>Run one RGW / ingress on every cluster node.  RGW scales better horizontally 
>than vertically.
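The mon-count advice above follows from quorum arithmetic: Ceph monitors need a strict majority to form a quorum, so the failures a cluster survives can be sketched as:

```python
def mon_failures_tolerated(n_mons: int) -> int:
    """Monitor failures survivable while a strict-majority quorum still holds."""
    quorum = n_mons // 2 + 1
    return n_mons - quorum

# 3 mons survive a single failure; 5 mons survive a double failure.
print(mon_failures_tolerated(3))  # 1
print(mon_failures_tolerated(5))  # 2
```

Note that 4 mons tolerate no more failures than 3, which is why odd counts are the norm.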


Should I run RGW on the OSD nodes as well?
Can we run MDS alongside other daemons?
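Colocating daemons is the normal pattern with cephadm: placement specs pin services to hosts by label. A sketch, assuming hypothetical labels `rgw` and `mds` that you would first assign with `ceph orch host label add <host> <label>`:

```yaml
# rgw.yaml -- one RGW on every host carrying the 'rgw' label
service_type: rgw
service_id: main
placement:
  label: rgw
---
# mds.yaml -- two MDS daemons, colocated with whatever else runs on labeled hosts
service_type: mds
service_id: cephfs
placement:
  label: mds
  count: 2
```

Apply with `ceph orch apply -i <file>.yaml`.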

>18x nodes:
>- 10x 7.6 TB SSD
>- NVMe-only chassis, no SAS/SATA, no RAID HBA
>- M.2 boot
>- bonded 25GE
>- 128GB RAM
>- at least 32 physical cores / 64 hyperthreads per node.  If using AMD 
>processors, disable IOMMU and consider a 1S chassis and an XXXX-P SKU

>or


Is it a good idea to use dual CPUs for nodes that carry OSDs?

>9x nodes like the above but with 15 TB SSDs
>Enterprise SSDs, not client-class.
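Both layouts comfortably cover the 400 TB target at 3x replication; a quick sizing sketch (my own arithmetic, ignoring nearfull/full-ratio headroom):

```python
def usable_tb(nodes: int, drives_per_node: int, drive_tb: float,
              replicas: int = 3) -> float:
    """Raw capacity divided by the replication factor (ignores full-ratio headroom)."""
    return nodes * drives_per_node * drive_tb / replicas

# 18 nodes x 10 x 7.6 TB = 1368 TB raw -> 456 TB usable at 3x replication
print(usable_tb(18, 10, 7.6))
# 9 nodes x 10 x 15 TB = 1350 TB raw -> 450 TB usable at 3x replication
print(usable_tb(9, 10, 15.0))
```

In practice you would keep utilization well under these figures so the cluster can rebalance after a node failure.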


Thank you for all this information.

Vivien

>
> We are pretty new to ceph so any advice would be appreciate !
>
> Thanks !
> Vivien
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

________________________________
From: Anthony D'Atri <a...@dreamsnake.net>
Sent: Thursday, July 17, 2025 15:01:46
To: GLE, Vivien
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Hardware recommendation
