Thanks, with your help I set the read-ahead parameter. What are the cache
parameters for the kernel rbd module?
Such as:
1) What is the cache size?
2) Does it support write-back?
3) Will read-ahead be disabled once a maximum number of bytes has been read
into the cache? (Similar to the concept of "rbd_readahead_disable_after_bytes".)
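(For reference, one quick way to see what the kernel rbd module actually exposes is to look under sysfs. A minimal sketch, assuming a host with the rbd module loaded; on most kernels the parameters listed there are mapping-related, not cache knobs:)

```shell
# List the module parameters the kernel rbd driver exposes
ls /sys/module/rbd/parameters/
# Print each parameter together with its current value
grep -r . /sys/module/rbd/parameters/
```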

thanks again.

2016-03-01 21:31 GMT+08:00 Adrien Gillard <[email protected]>:

> As Tom stated, RBD cache only works if your client is using librbd (KVM
> clients for instance).
> Using the kernel RBD client, one of the parameters you can tune to optimize
> sequential read is increasing /sys/class/block/rbd4/queue/read_ahead_kb
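A minimal sketch of that tuning, assuming the device is /dev/rbd4 as above (adjust the device name to your mapping; writing the sysfs file requires root):

```shell
# Show the current readahead window, in KiB
cat /sys/class/block/rbd4/queue/read_ahead_kb
# Raise it, e.g. to 4 MiB, to favor large sequential reads
echo 4096 > /sys/class/block/rbd4/queue/read_ahead_kb
```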
>
> Adrien
>
>
>
> On Tue, Mar 1, 2016 at 12:48 PM, min fang <[email protected]> wrote:
>
>> I can use the following command to change a parameter, for example as
>> follows, but I am not sure whether it will work:
>>
>>  ceph --admin-daemon /var/run/ceph/ceph-mon.openpower-0.asok config set
>> rbd_readahead_disable_after_bytes 0
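To check whether the value was applied, the same admin socket can read it back; a sketch assuming the same socket path as above:

```shell
ceph --admin-daemon /var/run/ceph/ceph-mon.openpower-0.asok \
    config get rbd_readahead_disable_after_bytes
```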
>>
>> 2016-03-01 15:07 GMT+08:00 Tom Christensen <[email protected]>:
>>
>>> If you are mapping the RBD with the kernel driver then you're not using
>>> librbd, so these settings will have no effect, I believe.  The kernel driver
>>> does its own caching, but I don't believe there are any settings to change
>>> its default behavior.
>>>
>>>
>>> On Mon, Feb 29, 2016 at 9:36 PM, Shinobu Kinjo <[email protected]>
>>> wrote:
>>>
>>>> You may want to set "ioengine=rbd", I guess.
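A sketch of such an fio job using the rbd engine (this path goes through librbd, so the rbd cache settings apply); the pool and image names here are placeholders:

```shell
fio --name=rbdtest --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=image4 \
    --direct=1 --rw=read --bs=4k --iodepth=64 \
    --runtime=300 --group_reporting
```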
>>>>
>>>> Cheers,
>>>>
>>>> ----- Original Message -----
>>>> From: "min fang" <[email protected]>
>>>> To: "ceph-users" <[email protected]>
>>>> Sent: Tuesday, March 1, 2016 1:28:54 PM
>>>> Subject: [ceph-users]  rbd cache did not help improve performance
>>>>
>>>> Hi, I set the following parameters in ceph.conf
>>>>
>>>> [client]
>>>> rbd cache=true
>>>> rbd cache size= 25769803776
>>>> rbd readahead disable after bytes=0
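(As a sanity check on the units: rbd cache size is in bytes, and the value above works out to 24 GiB:)

```python
# rbd cache size from the ceph.conf fragment above, in bytes
cache_size_bytes = 25769803776
# 1 GiB = 1024**3 bytes
print(cache_size_bytes / 1024**3)  # 24.0
```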
>>>>
>>>>
>>>> I then map an rbd image to an rbd device and run fio 4k read testing with
>>>> the command:
>>>> ./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread -rw=read
>>>> -ioengine=aio -bs=4K -size=500G -numjobs=32 -runtime=300 -group_reporting
>>>> -name=mytest2
>>>>
>>>> Comparing the results between rbd cache=false and cache enabled, I did not
>>>> see any performance improvement from the librbd cache.
>>>>
>>>> Is my setting wrong, or is it true that the ceph librbd cache will not
>>>> benefit 4k sequential reads?
>>>>
>>>> thanks.
>>>>
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> [email protected]
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
