Hi
For sequential reads, here are the latencies:
lat (usec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.02%, 100=0.03%
lat (usec) : 250=1.02%, 500=87.09%, 750=7.47%, 1000=1.50%
lat (msec) : 2=0.76%, 4=1.72%, 10=0.19%, 20=0.19%
Random reads:
lat (usec) : 10=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.02%, 20=0.03%, 50=0.55%
lat (msec) : 100=99.31%, 250=0.08%
100 msec seems like a lot to me.
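One way to see whether it is uniformly slow or just a long tail would be to re-run the same job with per-I/O latency logging, for example (assuming the job file is saved as random-read.fio; the log prefix and output name are arbitrary):

fio random-read.fio --write_lat_log=randread --output=randread-full.txt

That should produce randread_lat.*.log files with one entry per I/O, which would show whether a few very slow ops are pulling the distribution up or whether everything sits around 100 msec.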
Br,T
-----Original Message-----
From: Mark Nelson [mailto:[email protected]]
Sent: 30 June 2015 22:01
To: Tuomas Juntunen; [email protected]
Subject: Re: [ceph-users] Very low 4k randread performance ~1000iops
Seems reasonable. What does the latency distribution look like in your fio
output file? Would be useful to know if it's universally slow or if some
ops are taking much longer to complete than others.
Mark
On 06/30/2015 01:27 PM, Tuomas Juntunen wrote:
> I created a file which has the following parameters
>
>
> [random-read]
> rw=randread
> size=128m
> directory=/root/asd
> ioengine=libaio
> bs=4k
> #numjobs=8
> iodepth=64
>
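> (For reference, a job file like the above is run by passing its path to fio, e.g.
>
> fio random-read.fio
>
> where random-read.fio is just an example name for the file.)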
>
> Br,T
> -----Original Message-----
> From: ceph-users [mailto:[email protected]] On Behalf
> Of Mark Nelson
> Sent: 30 June 2015 20:55
> To: [email protected]
> Subject: Re: [ceph-users] Very low 4k randread performance ~1000iops
>
> Hi Tuomas,
>
> Can you paste the command you ran to do the test?
>
> Thanks,
> Mark
>
> On 06/30/2015 12:18 PM, Tuomas Juntunen wrote:
>> Hi
>>
>> It's probably not hitting the disks, but that really doesn't matter.
>> The point is we have very responsive VMs while writing, and that is
>> what the users will see.
>>
>> The IOPS we get with sequential reads are good, but random reads are
>> way too low.
>>
>> Is using SSDs as OSDs the only way to get it up, or is there some
>> tunable which would improve it? I would assume Linux caches reads in
>> memory and serves them from there, but at least for now we don't see it.
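>>
>> I guess one way to check where the reads are actually served from is to
>> watch the data disks on the OSD nodes while the random read job runs,
>> e.g. with plain iostat from sysstat:
>>
>> iostat -x 1
>>
>> If the OSD disks show almost no read activity during the run, the reads
>> are coming from some cache; if they are busy with small reads, the
>> latency is coming from the disks themselves.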
>>
>> Br,
>>
>> Tuomas
>>
>> *From:* Somnath Roy [mailto:[email protected]]
>> *Sent:* 30 June 2015 19:24
>> *To:* Tuomas Juntunen; 'ceph-users'
>> *Subject:* RE: [ceph-users] Very low 4k randread performance
>> ~1000iops
>>
>> Break it down: try fio-rbd to see what performance you are getting.
>>
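>> A minimal job for fio's rbd engine could look roughly like this (fio needs
>> to be built with rbd support, and the pool/image/client names below are
>> just placeholders for an existing test image):
>>
>> [rbd-randread]
>> ioengine=rbd
>> clientname=admin
>> pool=rbd
>> rbdname=fio_test
>> rw=randread
>> bs=4k
>> iodepth=64
>>
>> That takes QEMU and the guest filesystem out of the path and tests librbd
>> directly.
>>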
>> But I am really surprised you are getting > 100k IOPS for writes; did
>> you check whether it is hitting the disks?
>>
>> Thanks & Regards
>>
>> Somnath
>>
>> *From:* ceph-users [mailto:[email protected]] *On
>> Behalf Of* Tuomas Juntunen
>> *Sent:* Tuesday, June 30, 2015 8:33 AM
>> *To:* 'ceph-users'
>> *Subject:* [ceph-users] Very low 4k randread performance ~1000iops
>>
>> Hi
>>
>> I have been trying to figure out why our 4k random reads in VMs are
>> so bad. I am using fio to test this.
>>
>> Write : 170k iops
>>
>> Random write : 109k iops
>>
>> Read : 64k iops
>>
>> Random read : 1k iops
>>
>> Our setup is:
>>
>> 3 nodes with 36 OSDs and 18 SSDs (one SSD per two OSDs); each node has
>> 64 GB of memory & 2x 6-core CPUs
>>
>> 4 monitors running on other servers
>>
>> 40 Gbit InfiniBand with IPoIB
>>
>> OpenStack: QEMU/KVM for the VMs
>>
>> Any help would be appreciated
>>
>> Thank you in advance.
>>
>> Br,
>>
>> Tuomas
>>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com