Re: [ceph-users] Benchmark does not show gains with DB on SSD

2018-09-12 Thread Ján Senko
0GB DB for a 12TB OSD. If you're setting up your OSDs with a 30GB DB, you're just going to fill that up really quickly and spill over onto the HDD, and have wasted your money on the SSDs. On Wed, Sep 12, 2018 at 11:07 AM Ján Senko wrote: We are benchmarking a test machine
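The spill-over warning above follows the commonly cited BlueStore guideline that block.db should be roughly 4% of the data device. A minimal sketch of that arithmetic (the 4% ratio is the guideline from the Ceph documentation; the function name is illustrative):

```python
# Sketch of the ~4%-of-data-device guideline for sizing BlueStore block.db,
# illustrating why a 30GB DB is badly undersized for a 12TB OSD.

def recommended_db_gb(osd_capacity_tb: float, ratio: float = 0.04) -> float:
    """Return a suggested block.db size in GB for a given OSD capacity."""
    return osd_capacity_tb * 1000 * ratio  # TB -> GB, then apply the ratio

print(recommended_db_gb(12))  # a 12TB OSD suggests a ~480GB DB partition
```

By this rule of thumb, a 30GB partition covers well under 1% of a 12TB OSD, so RocksDB metadata spills back to the HDD and the SSD stops helping.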

[ceph-users] Benchmark does not show gains with DB on SSD

2018-09-12 Thread Ján Senko
We are benchmarking a test machine which has: 8 cores, 64GB RAM, 12 × 12TB HDD (SATA), 2 × 480GB SSD (SATA), 1 × 240GB SSD (NVMe), running Ceph Mimic. Baseline benchmark for HDD only (Erasure Code 4+2): Write 420 MB/s, 100 IOPS, 150ms latency; Read 1040 MB/s, 260 IOPS, 60ms latency. Now we moved WAL to the
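The 4+2 erasure-coded pool in the baseline splits each object into k=4 data chunks and m=2 coding chunks, so only k/(k+m) of the raw capacity is usable. A quick sketch of the raw-to-usable math for this test machine:

```python
# Usable-capacity math for an erasure-coded k=4, m=2 pool: each object is
# stored as 4 data chunks plus 2 coding chunks across 6 OSDs.

def usable_fraction(k: int, m: int) -> float:
    """Fraction of raw capacity that holds user data under EC k+m."""
    return k / (k + m)

raw_tb = 12 * 12  # 12 HDDs of 12TB each, as in the test machine
print(round(raw_tb * usable_fraction(4, 2), 1))  # 96.0 TB usable of 144 raw
```

The same ratio applies to write amplification: every client write lands on six OSDs, which is part of why EC write IOPS on HDDs are low to begin with.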

Re: [ceph-users] wal and db device on SSD partitions?

2018-03-21 Thread Ján Senko
2018-03-21 8:56 GMT+01:00 Caspar Smit: 2018-03-21 7:20 GMT+01:00 ST Wong (ITSC): Hi all, We got some decommissioned servers from other projects for setting up OSDs. They’ve 10 2TB SAS disks with 4 2TB SSD. We try to
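For hardware like the above (10 SAS OSDs sharing 4 SSDs for their DB partitions), one practical question is how big each block.db partition can be when the SSDs are split evenly. A small sketch, assuming an even spread of OSDs across SSDs (the function and layout are illustrative, not from the thread; note that when only block.db is placed on the SSD, BlueStore keeps the WAL inside the DB partition as well):

```python
# Sketch: split shared SSDs into equal block.db partitions for a set of
# HDD-backed OSDs, assuming OSDs are spread as evenly as possible.

def db_partition_gb(ssd_count: int, ssd_gb: int, osd_count: int) -> int:
    """Size of each DB partition when OSDs are spread across the SSDs."""
    osds_per_ssd = -(-osd_count // ssd_count)  # ceiling division
    return ssd_gb // osds_per_ssd

# 4 x 2TB SSDs serving 10 OSDs -> at most 3 OSDs per SSD
print(db_partition_gb(4, 2000, 10))  # 666GB per DB partition
```

With 2TB SSDs that leaves generous DB partitions; with small SSDs the same arithmetic quickly shows when spill-over to the HDD becomes unavoidable.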

Re: [ceph-users] New Ceph cluster design

2018-03-09 Thread Ján Senko
e count. How many nodes will this cluster have? On Mar 9, 2018 4:16 AM, "Ján Senko" <jan.se...@gmail.com> wrote: I am planning a new Ceph deployment and I have a few questions that I could not find good answers for yet. Our nodes will be using Xeon-D mac

[ceph-users] New Ceph cluster design

2018-03-09 Thread Ján Senko
I am planning a new Ceph deployment and I have a few questions for which I could not find good answers yet. Our nodes will be using Xeon-D machines with 12 HDDs and 64GB RAM each. Our target is to use 10TB drives for 120TB capacity per node. 1. We want to have a small number of SSDs in the machines.
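One sanity check for the proposed node (12 OSDs on 64GB RAM) is the memory budget: BlueStore OSDs target roughly 4GB each by default (the osd_memory_target setting), so a quick sketch of the headroom left for the OS and recovery spikes:

```python
# RAM budget sketch for a 12-OSD node with 64GB RAM, assuming the common
# ~4GB-per-BlueStore-OSD guideline (osd_memory_target default).

OSD_COUNT = 12
RAM_GB = 64
PER_OSD_GB = 4  # default osd_memory_target is 4GB per OSD

headroom = RAM_GB - OSD_COUNT * PER_OSD_GB
print(headroom)  # 16GB left for OS, networking, and recovery overhead
```

16GB of slack is workable but tight; memory use rises during backfill and recovery, which is why dense HDD nodes often provision more than the bare per-OSD minimum.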