Hi,

Between tests we destroyed the OSDs and created them from scratch. We used a
Docker image to deploy Ceph on one machine.
I've seen that there are WAL/DB partitions created on the disks.
Should I also check somewhere in the Ceph config that it actually uses them?

If you created them from scratch, you should be fine.

If you want to check anyway, you can run something like this:

ceph@host1:~> ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-1/block | grep path
        "path_block.db": "/dev/ceph-journals/bluefsdb-1",

If you also use LVM with Ceph, you should check the LVM tags of the volumes behind the OSD's block and block.db symlinks; they should match.
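
For example (just a sketch; OSD IDs and device paths will differ on your
system), you can see those tags with ceph-volume or plain LVM tools:

ceph@host1:~> ceph-volume lvm list          # lists each OSD with its block/db devices and tags
ceph@host1:~> sudo lvs -o lv_name,lv_tags   # raw LVM tags (e.g. ceph.db_device)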

You could also run something like iostat on the SSD/NVMe devices to see whether any I/O is actually hitting them.
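
Something like this (the device names are just examples) prints extended
per-device statistics every 5 seconds while the benchmark runs:

ceph@host1:~> iostat -xm 5 sdm sdn nvme0n1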

Regards,
Eugen


Quote from Ján Senko <jan.se...@gmail.com>:

Eugen:
Between tests we destroyed the OSDs and created them from scratch. We used a
Docker image to deploy Ceph on one machine.
I've seen that there are WAL/DB partitions created on the disks.
Should I also check somewhere in the Ceph config that it actually uses them?

David:
We used 4MB writes.

I know about the recommended DB size, however this is the machine we
have at the moment.
For final production I can change the size of the SSD (if it makes sense).
The benchmark hasn't filled the 30GB DB in the time it was running, so I
doubt that a properly sized DB would change the results.
(It wrote 38GB per minute of testing; with the 50% EC overhead that is about
57GB/min of raw writes spread across 12 disks, i.e. roughly 5GB per minute per disk.)

Jan

On Wed, Sep 12, 2018 at 17:36 David Turner <drakonst...@gmail.com> wrote:

If your writes are small enough (64k or smaller), they're being placed on
the WAL device regardless of where your DB is.  If you change your testing
to use larger writes, you should see a difference from adding the DB.
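
For example (pool name, block sizes and thread count are only placeholders),
comparing a small-block run with a large-block run in rados bench should make
that difference visible:

rados -p testpool bench 60 write -b 65536 -t 16 --no-cleanup    # 64k writes
rados -p testpool bench 60 write -b 4194304 -t 16 --no-cleanup  # 4M writes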

Please note that the community has never recommended using less than a 120GB
DB for a 12TB OSD, and the docs have now officially said that you should use
at least a 480GB DB for a 12TB OSD.  If you're setting up your OSDs with a
30GB DB, you're just going to fill it up really quickly, spill over onto the
HDD, and have wasted your money on the SSDs.
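
To check whether a DB has already spilled over onto the HDD, the bluefs
counters of the OSD show how much data sits on the DB device vs. the slow
device (osd.1 is just an example; run this on the host or in the container
of that OSD):

ceph daemon osd.1 perf dump | grep -E '"db_used_bytes"|"slow_used_bytes"'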

On Wed, Sep 12, 2018 at 11:07 AM Ján Senko <jan.se...@gmail.com> wrote:

We are benchmarking a test machine which has:
8 cores, 64GB RAM
12 * 12 TB HDD (SATA)
2 * 480 GB SSD (SATA)
1 * 240 GB SSD (NVMe)
Ceph Mimic

Baseline benchmark for HDD only (Erasure Code 4+2)
Write 420 MB/s, 100 IOPS, 150ms latency
Read 1040 MB/s, 260 IOPS, 60ms latency

Now we moved the WAL to the SSD (all 12 WALs on a single SSD, default size
(512MB)):
Write 640 MB/s, 160 IOPS, 100ms latency
Read identical to above.

Nice boost, we thought, so we moved WAL+DB to the SSD (assigned 30GB for the
DB).
All results are the same as above!
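
For reference (the device paths are placeholders and this isn't necessarily
what our deployment tool runs under the hood), one way to create an OSD with
a separate block.db and block.wal is via ceph-volume:

ceph-volume lvm create --bluestore --data /dev/sdb \
    --block.db /dev/sdm1 --block.wal /dev/nvme0n1p1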

Q: This is suspicious, right? Why is the DB on SSD not helping with our
benchmark? We use *rados bench*

We tried putting the WAL on the NVMe, and again, the results are the same as
on the SSD.
Same for WAL+DB on the NVMe.

Again, the same speed. Any ideas why we don't gain speed by using faster
HW here?

Jan

--
Jan Senko, Skype janos-
Phone in Switzerland: +41 774 144 602
Phone in Czech Republic: +420 777 843 818



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
