Re: [ceph-users] NVMe disk - size

2019-11-17 Thread Lars Täuber
Hi Kristof, may I add another choice? I configured my SSDs this way: every OSD host has two fast and durable SSDs. Both SSDs are in one RAID1, which is then split up into LVs. I took 58 GB for DB & WAL (plus headroom for a special action by the DB (was it compaction?)) for each OSD. Then there
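
A minimal sketch of that layout, assuming an mdadm RAID1 underneath LVM; the device names (nvme0n1, nvme1n1, sda, sdb) and the VG/LV names are hypothetical, only the 58 GB size comes from the post:

    # mirror the two fast SSDs (hypothetical device names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

    # put LVM on the mirror and carve one DB/WAL LV per OSD
    pvcreate /dev/md0
    vgcreate db_vg /dev/md0
    lvcreate -L 58G -n db-osd0 db_vg
    lvcreate -L 58G -n db-osd1 db_vg

    # create the OSDs, pointing block.db at the LVs (data disks are examples)
    ceph-volume lvm create --data /dev/sda --block.db db_vg/db-osd0
    ceph-volume lvm create --data /dev/sdb --block.db db_vg/db-osd1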

Re: [ceph-users] NVMe disk - size

2019-11-17 Thread jesper
Is c) the bcache solution? Real-life experience: unless you are really beating an enterprise SSD with writes, they last very, very long, and even when a failure does happen you can typically see it coming in the SMART wear levels months before. I would go for c), but if possible add one more NVMe to
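
For reference, one way to check those wear levels with smartctl (from smartmontools); the device names and the exact attribute names vary by vendor and are only examples:

    # NVMe: "Percentage Used" is the wear indicator
    smartctl -a /dev/nvme0n1 | grep -i 'percentage used'

    # SATA/SAS SSDs: look for a wear-leveling / media-wearout attribute
    smartctl -A /dev/sda | grep -i -e wear -e wearout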

Re: [ceph-users] NVMe disk - size

2019-11-17 Thread Kristof Coucke
Hi all, Thanks for the feedback. Though, just to be sure: 1. There is no 30 GB limit, if I understand correctly, for the RocksDB size. If the metadata crosses that barrier, will the L4 part spill over to the primary device? Or will it just move the RocksDB completely? Or will it just stop and indicate
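
One way to see whether an OSD has actually spilled DB data onto the primary (slow) device, assuming a release that reports it; osd.0 is just an example, and the perf-dump command has to run on the host that owns the OSD (admin socket):

    # cluster-wide health warning raised when BlueFS spills onto the slow device
    ceph health detail | grep BLUESTORE_SPILLOVER

    # per-OSD view: slow_used_bytes > 0 means the DB has spilled over
    ceph daemon osd.0 perf dump | grep -e db_used_bytes -e slow_used_bytes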