You're not really testing only an RBD device there - you're testing:
1) the O_DIRECT implementation in the kernel version you have (it differs between 
versions)
- try different kernels in the guest
2) the cache implementation in qemu (and possibly the virtio block driver) - if 
it's enabled
- disable it completely for this test (cache=none - see the example after this list)
3) the O_DIRECT implementation on the filesystem where your "test" file lives - the 
most important thing is preallocation!
- I'm not sure you can "dd" into an existing file without truncating it, but you 
should first create (and fully allocate) the file with something like the sketch 
after this list:
dd if=/dev/zero of=test bs=1M count=<test size in MB>
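
For 2), the cache mode is part of the drive definition. A rough sketch, assuming a 
raw RBD-backed virtio drive (the pool/image name is just a placeholder; with 
libvirt the equivalent is cache='none' on the disk's <driver> element):

-drive file=rbd:rbd/testimage,format=raw,if=virtio,cache=none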
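
For 3), this is what I mean by preallocating first and then testing against the 
existing file - the sizes here are arbitrary, and conv=notrunc should keep dd from 
truncating the file:

# fully allocate a 4G test file first
dd if=/dev/zero of=test bs=1M count=4096 conv=fsync

# then run the O_DIRECT test against the already-allocated file
dd if=/dev/zero of=test bs=1M count=4096 oflag=direct conv=notrunc
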
It's better to create a new virtual drive and attach it to this machine, then 
test it directly (and while dd is fine for a quick baseline, I recommend you use 
fio - see the example below)
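
Something like this as a starting point for fio, assuming the new drive shows up 
as /dev/vdb (the device, block sizes and runtimes are placeholders - adjust as 
needed, and note this writes directly to the device):

# sequential direct writes, large blocks - throughput
fio --name=seq-write --filename=/dev/vdb --rw=write --bs=4M --direct=1 \
    --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting

# random direct writes, small blocks - IOPS
fio --name=rand-write --filename=/dev/vdb --rw=randwrite --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting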

Btw I find the 4MB test pretty consistent - it seems to oscillate around ~50MB/s. 
In the beginning the cluster was probably busy doing something else (scrubbing? 
cleanup of something? cron scripts?).

Jan

> On 07 Aug 2015, at 03:31, Steve Dainard <sdain...@spd1.com> wrote:
> 
> Trying to get an understanding why direct IO would be so slow on my cluster.
> 
> Ceph 0.94.1
> 1 Gig public network
> 10 Gig public network
> 10 Gig cluster network
> 
> 100 OSDs, 4TB disks, 5GB SSD journals.
> 
> As of this morning I had no SSD journal and was finding direct IO was
> sub 10MB/s so I decided to add journals today.
> 
> Afterwards I started running tests again and wasn't very impressed.
> Then for no apparent reason the write speeds increased significantly.
> But I'm finding they vary wildly.
> 
> Currently there is a bit of background ceph activity, but only my
> testing client has an rbd mapped/mounted:
>           election epoch 144, quorum 0,1,2 mon1,mon3,mon2
>     osdmap e181963: 100 osds: 100 up, 100 in
>            flags noout
>      pgmap v2852566: 4144 pgs, 7 pools, 113 TB data, 29179 kobjects
>            227 TB used, 135 TB / 363 TB avail
>                4103 active+clean
>                  40 active+clean+scrubbing
>                   1 active+clean+scrubbing+deep
> 
> Tests:
> 1M block size: http://pastebin.com/LKtsaHrd throughput has no consistency
> 4k block size: http://pastebin.com/ib6VW9eB throughput is amazingly consistent
> 
> Thoughts?
