Hi, 

I haven't tested rbd readahead yet, but you may be hitting a qemu limit.
(By default, qemu uses only one thread / one core to manage I/Os; check
your qemu CPU usage.)
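One way to check this is to look at per-thread CPU usage of the qemu process and see whether a single I/O thread is pegged near 100%. A hedged sketch (the binary name qemu-system-x86_64 is an assumption; it falls back to the current shell's PID just so the command runs on a host without a VM):

```shell
#!/bin/sh
# Show per-thread CPU usage for the qemu process, to spot one
# I/O thread saturating a single core. qemu-system-x86_64 is an
# assumed binary name; adjust to your distribution.
pid="$(pidof qemu-system-x86_64 2>/dev/null || echo $$)"
# -L lists threads; tid=,pcpu=,comm= prints thread id, %CPU and
# command name without headers.
ps -L -o tid=,pcpu=,comm= -p "$pid"
```

If one thread sits at ~100% while the VM's reads are slow, you are likely serialized on that single qemu I/O thread.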

Do you have any performance results? How many iops?
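To get comparable numbers, you could run a 4K sequential read benchmark inside the guest with fio. A minimal job-file sketch; the device path /dev/vdb, the queue depth and the 60 s runtime are assumptions, so adjust them to your rbd-backed disk:

```ini
; fio job: 4K sequential reads against the guest's rbd-backed disk.
; /dev/vdb, iodepth=32 and runtime=60 are illustrative assumptions.
[seqread-4k]
filename=/dev/vdb
rw=read
bs=4k
ioengine=libaio
iodepth=32
direct=1
time_based=1
runtime=60
```

Run it with `fio seqread-4k.fio` and report the iops line from the output.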


That said, I did get a 4x improvement in qemu-kvm with virtio-scsi +
num_queues + a recent kernel. (The 4K sequential reads were coalesced in
qemu, so it issued bigger I/Os to ceph.)

libvirt : <controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/>


Regards,

Alexandre
----- Original Message -----

From: "duan xufeng" <duan.xuf...@zte.com.cn>
To: "ceph-users" <ceph-us...@ceph.com>
Cc: "si dawei" <si.da...@zte.com.cn>
Sent: Friday, November 21, 2014 03:58:38
Subject: [ceph-users] RBD read-ahead didn't improve 4K read performance


hi, 

I upgraded Ceph to 0.87 for rbd readahead, but I can't see any performance
improvement in 4K seq read in the VM.
How can I know whether the readahead is taking effect?

thanks. 

ceph.conf 
[client] 
rbd_cache = true 
rbd_cache_size = 335544320 
rbd_cache_max_dirty = 251658240 
rbd_cache_target_dirty = 167772160 

rbd readahead trigger requests = 1 
rbd readahead max bytes = 4194304 
rbd readahead disable after bytes = 0 
--------------------------------------------------------

_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
