Thanks. What I am talking about is IOPS, which also reflects the
bandwidth in the fio testing. I have not set the readahead parameters
yet; I will try that. One more question: for a pure 4k random read or
write pattern, compared with using a single bigger rbd image, will
performance (IOPS) improve if I distribute the I/O across a virtual
volume built from multiple rbd images in a striped fashion, for example
by running fio against several images at once (see the sketch below)?
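
Concretely, something like the following rough fio job file is what I
have in mind: it spreads the same 4k random read workload over two rbd
images through the librbd engine. The pool name, the image names
testimg0/testimg1, the client name and the queue depth are only
placeholders for illustration, not a tested recipe.

    ; rough sketch only -- pool, image names and values are placeholders
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    ; rw could equally be randwrite for the 4k write case
    rw=randread
    bs=4k
    iodepth=32
    runtime=60
    time_based
    group_reporting

    ; one job per image, so I/O stays outstanding on both images at once
    [img0]
    rbdname=testimg0

    [img1]
    rbdname=testimg1

Whether that actually gives higher IOPS than one image of the same
total size is exactly what I would like to understand.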

Thanks.


2016-02-02 22:41 GMT+08:00 Mark Nelson <[email protected]>:

> If testing with fio and librbd, you may also find that increasing the
> thresholds for RBD readahead will help significantly.  Specifically, set
> "rbd readahead disable after bytes" to 0 so rbd readahead stays enabled.
> In most cases with buffered reads on a real client volume, rbd readahead
> isn't necessary, but with fio and the librbd engine this can make a big
> difference, especially with newstore.
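(Noting this down for when I try it: as I read the above, the
client-side ceph.conf settings would look roughly like the block below.
Only setting "rbd readahead disable after bytes" to 0 comes from Mark's
mail; the other two lines are the related readahead options with
example values that I still have to verify against my release.)

    [client]
    # keep rbd readahead enabled regardless of how much has been read
    rbd readahead disable after bytes = 0
    # related threshold options -- example values only, to be verified
    rbd readahead trigger requests = 10
    rbd readahead max bytes = 4194304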
>
> Mark
>
> On 02/02/2016 07:29 AM, Wade Holler wrote:
>
>> Could you share the fio command and your read_ahead_kb setting for the
>> OSD devices?  "performance is better" is a little too general.  I
>> understand that we usually mean higher IOPS or higher aggregate
>> throughput when we say performance is better.  However, application
>> random read performance "generally" implies an interest in lower latency
>> - which of course is much more involved from a testing perspective.
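(Noting for myself how I will collect the read_ahead_kb value asked for
above -- the device name sdb below is only a placeholder for one of the
OSD data disks:)

    # current readahead of an OSD data disk, in KB
    cat /sys/block/sdb/queue/read_ahead_kb
    # temporarily raise it for a test run, e.g. to 4096 KB
    echo 4096 | sudo tee /sys/block/sdb/queue/read_ahead_kb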
>>
>> Cheers
>> Wade
>>
>>
>> On Tue, Feb 2, 2016 at 7:28 AM min fang <[email protected]> wrote:
>>
>>     Hi, I ran fio testing on my ceph cluster and found that ceph random
>>     read performance is better than sequential read. Is that what you
>>     see in your setups as well?
>>
>>     Thanks.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
