A deep archive cluster benefits from NVMe too. You can use QLC drives up to
60TB in size; 32 of those in one RU is roughly 1.9 PB raw per rack unit, which
makes for a cluster that doesn't take up the whole DC.

> On Apr 21, 2024, at 5:42 AM, Darren Soothill <darren.sooth...@croit.io> wrote:
> 
> Hi Niklaus,
> 
> Lots of questions here, but let me try to get through some of them.
> 
> Personally, unless a cluster is for deep archive, I would never suggest 
> configuring or deploying it without RocksDB and the WAL on NVMe.
> There are a number of benefits in terms of performance and recovery: small 
> writes land on the NVMe first before being written to the HDD, and many 
> recovery operations become far more efficient.
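> 
> As a minimal sketch (assuming a cephadm-managed cluster; the service_id and 
> host pattern below are placeholders), an OSD service spec along these lines 
> puts the data on the HDDs and the RocksDB/WAL on the NVMe:
> 
>     service_type: osd
>     service_id: hdd-data-nvme-db       # hypothetical name
>     placement:
>       host_pattern: '*'                # adjust to the hosts that carry HDDs
>     spec:
>       data_devices:
>         rotational: 1                  # HDDs hold the main OSD data
>       db_devices:
>         rotational: 0                  # NVMe holds RocksDB and the WAL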
> 
> As to how much faster it makes things, that very much depends on the type of 
> workload you have on the system. Lots of small writes will see a significant 
> difference; very large writes, not so much.
> Things like RocksDB compactions are also a lot faster, as they now run from 
> NVMe rather than from the HDD.
> 
> We normally work with up to a 1:12 ratio, i.e. 1 NVMe for every 12 HDDs. This 
> assumes the NVMes being used are good mixed-use enterprise drives with 
> power-loss protection.
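> 
> To hold that ratio with a spec like the sketch above, the drive group spec 
> has (to my understanding) a db_slots field, e.g.:
> 
>     spec:
>       db_slots: 12    # split each NVMe into 12 DB/WAL slices, one per HDD OSD
> 
> As a rough sizing check, assuming for example a 7.68 TB NVMe, 12 slices works 
> out to about 640 GB of DB/WAL space per OSD.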
> 
> As to failures: yes, a failure of the NVMe would mean the loss of 12 OSDs, 
> but that is no worse than the failure of an entire node, which is something 
> Ceph is designed to handle.
> 
> I certainly wouldn’t put the NVMes into RAID sets, as that would degrade 
> their performance when the whole point of using them is to get better 
> performance.
> 
> 
> 
> Darren Soothill
> 
> 
> Looking for help with your Ceph cluster? Contact us at https://croit.io/
> 
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
> Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx
> 
> 
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io