Hi,
perhaps it's due to IOs from the journal?
You can test with iostat (like "iostat -dm 5 sdg").

On Debian, iostat is in the sysstat package.
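
For example (an untested sketch; sdg is just the device from your test, and
the exact columns depend on your sysstat version - adjust to your layout):

====
# Debian: install sysstat to get iostat
apt-get install sysstat

# per-device throughput in MB/s, refreshed every 5 seconds
iostat -dm 5 sdg

# -p additionally breaks the numbers down per partition, so you can
# see how much of the write load hits the journal partitions on sdg
iostat -dm -p sdg 5

# -x adds await and %util, which show whether the disk is saturated
iostat -dmx 5 sdg
====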

Udo

On 28.04.2014 07:38, Indra Pramana wrote:
> Hi Craig,
> 
> Good day to you, and thank you for your enquiry.
> 
> As per your suggestion, I have created a 3rd partition on the SSDs and ran
> the dd test directly against the device, and the result is very slow.
> 
> ====
> root@ceph-osd-08:/mnt# dd bs=1M count=128 if=/dev/zero of=/dev/sdg3
> conv=fdatasync oflag=direct
> 128+0 records in
> 128+0 records out
> 134217728 bytes (134 MB) copied, 19.5223 s, 6.9 MB/s
> 
> root@ceph-osd-08:/mnt# dd bs=1M count=128 if=/dev/zero of=/dev/sdf3
> conv=fdatasync oflag=direct
> 128+0 records in
> 128+0 records out
> 134217728 bytes (134 MB) copied, 5.34405 s, 25.1 MB/s
> ====
> 
> I did the same test on another server with exactly the same specification
> and the same SSD drive (Seagate SSD 100 GB), but one that has not been
> added to the cluster yet (thus no load), and the result is fast:
> 
> ====
> root@ceph-osd-09:/home/indra# dd bs=1M count=128 if=/dev/zero of=/dev/sdf1
> conv=fdatasync oflag=direct
> 128+0 records in
> 128+0 records out
> 134217728 bytes (134 MB) copied, 0.742077 s, 181 MB/s
> ====
> 
> Does the Ceph journal load really take up that much of the SSD's
> resources? I don't understand how the performance can drop so
> significantly, especially since the two Ceph journals only occupy the
> first 20 GB of the SSD's 100 GB total capacity.
> 
> Any advice is greatly appreciated.
> 
> Looking forward to your reply, thank you.
> 
> Cheers.
> 
> 
> 
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
