Mark,

If it's any help, we've done a small, totally unreliable benchmark on our
end. For a KVM instance, we had:
260MB/s write, 200MB/s read on local SAS disks, attached as LVM LVs,
250MB/s write, 90MB/s read on RBD, 32 OSDs, all SATA.

All sequential, over a 10G network. It's more than enough for us currently,
but we'd like to improve RBD read performance.
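
In case anyone wants to try something similar, below is a minimal Python
sketch of the kind of sequential read check that could be run inside a
guest. The device path, block size and total size are assumptions, and it
does not bypass the page cache, so treat the numbers as rough. For anything
serious, fio with direct I/O would be the better choice.

#!/usr/bin/env python3
# Rough sequential-read check from inside a guest (sketch only).
# /dev/vdb is an assumption -- point it at the RBD-backed disk in your VM.
# Reads go through the page cache, so repeated runs will look optimistic.
import os
import time

DEV = "/dev/vdb"              # hypothetical RBD-backed block device
BLOCK = 4 * 1024 * 1024       # 4 MiB per read
TOTAL = 1024 * 1024 * 1024    # stop after 1 GiB

fd = os.open(DEV, os.O_RDONLY)
done = 0
start = time.time()
while done < TOTAL:
    buf = os.read(fd, BLOCK)
    if not buf:
        break
    done += len(buf)
os.close(fd)

elapsed = time.time() - start
print("{:.0f} MB/s sequential read".format(done / elapsed / 1e6))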

Cheers,


On Sat, Mar 9, 2013 at 7:27 AM, Andrew Thrift <[email protected]> wrote:

> Mark,
>
>
> I would just like to add, we too are seeing the same behavior with
> QEMU/KVM/RBD.  Maybe it is a common symptom of high IO with this setup.
>
>
>
> Regards,
>
>
>
>
>
> Andrew
>
>
> On 3/8/2013 12:46 AM, Mark Nelson wrote:
>
>> On 03/07/2013 05:10 AM, Wolfgang Hennerbichler wrote:
>>
>>>
>>>
>>> On 03/06/2013 02:31 PM, Mark Nelson wrote:
>>>
>>>> If you are doing sequential reads, you may benefit by increasing the
>>>> read_ahead_kb value for each device in /sys/block/<device>/queue on the
>>>> OSD hosts.
>>>>
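
(For anyone wanting to try that suggestion, here is a minimal Python sketch
of what raising read_ahead_kb on an OSD host could look like. The device
names and the 4096 value are assumptions to adjust for your own disks; run
it as root on each OSD host.)

#!/usr/bin/env python3
# Sketch: raise read_ahead_kb for a set of OSD data disks.
# "sdb", "sdc", "sdd" are hypothetical device names; the kernel default
# is usually 128 KiB, so 4096 (4 MiB) is a fairly aggressive bump.
devices = ["sdb", "sdc", "sdd"]
value = "4096"

for dev in devices:
    path = "/sys/block/{}/queue/read_ahead_kb".format(dev)
    with open(path, "w") as f:
        f.write(value)
    print("{} -> {}".format(path, value))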
>>>
>>> Thanks, that didn't really help. It seems the VM has to handle too much
>>> I/O; even the mouse cursor is jerky when connecting via VNC. I guess
>>> this is the wrong list, but it somehow has to do with librbd in
>>> combination with KVM, as the same machine on LVM works just fine.
>>>
>>
>> Thanks for the heads up, Wolfgang. I'm going to be looking into QEMU/KVM
>> RBD performance in the coming weeks, so I'll try to watch out for this
>> behaviour.
>>
>>
>>> Wolfgang
>>>
>
>



-- 
erdem agaoglu
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
