>>25000 iops 4K read : auth_client: cephx, rbd_cache: off (100% cpu - there
>>seems to be a read lock with rbd_cache=true)

Ok, this one was a regression bug in 0.85, and it has been fixed in the latest master.

----- Original Message ----- 

From: "Alexandre DERUMIER" <aderum...@odiso.com> 
To: "Dietmar Maurer" <diet...@proxmox.com> 
Cc: pve-devel@pve.proxmox.com 
Sent: Sunday 19 October 2014 20:47:04 
Subject: Re: [pve-devel] backup ceph high iops and slow 

>>How should we do read-ahead inside qemu? manually? 

This is managed automatically by the Linux kernel: 

/sys/class/block/sda/queue/read_ahead_kb 
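
For example, to check and raise the window for a given disk (a minimal
sketch, assuming the disk is sda; 4096 kB is just an illustrative value,
not a recommendation):

    # show the current read-ahead window for sda
    cat /sys/class/block/sda/queue/read_ahead_kb

    # raise it to 4 MB (needs root)
    echo 4096 > /sys/class/block/sda/queue/read_ahead_kb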



Also, about ceph performance: 

another problem is that qemu is single-threaded for block access, and 
ceph/librbd cpu usage is high, so it's possible to become cpu-bound on 
a single core. 

I still need to send a patch, but with virtio-scsi it's possible to do 
multi-queue and scale across cores, using the "num_queues" parameter of 
the virtio-scsi device. I'm not sure it helps for the block jobs, but it 
really helps guest IOs (see the sketch below). 
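
As an illustration, a hypothetical qemu command-line fragment (not the
one pve generates; the pool/image and id names here are made up, but the
virtio-scsi property is really spelled num_queues):

    # illustrative only: 4 virtio-scsi queues so guest I/O can spread over 4 cores
    -device virtio-scsi-pci,id=scsi0,num_queues=4 \
    -drive file=rbd:rbd/vm-100-disk-1,if=none,id=drive-scsi0,cache=none \
    -device scsi-hd,bus=scsi0.0,drive=drive-scsi0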

Here are some bench results on the upcoming ceph giant release (0.86) 
with different tunings: 

8 cores (CPU E5-2603 v2 @ 1.80GHz): 

15000 iops 4K read : auth_client: cephx, rbd_cache: on (50% cpu) 
25000 iops 4K read : auth_client: cephx, rbd_cache: off (100% cpu - there 
seems to be a read lock with rbd_cache=true) 

40000 iops 4K read : auth_client: none, rbd_cache: off (100% cpu - cephx 
auth is really cpu intensive) 


And with 1 core, I can only get 7000 iops (same inside the VM with virtio-blk). 
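
For reference, here is roughly where those knobs live in ceph.conf (a
sketch, not the exact config used for this bench; disabling cephx
affects the whole cluster, so all three auth options have to change
together):

    [global]
    # cephx vs. none in the results above
    auth cluster required = none
    auth service required = none
    auth client required = none

    [client]
    # rbd_cache on/off in the results above
    rbd cache = false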

----- Original Message ----- 

From: "Dietmar Maurer" <diet...@proxmox.com> 
To: "Alexandre DERUMIER" <aderum...@odiso.com> 
Cc: pve-devel@pve.proxmox.com, "VELARTIS Philipp Dürhammer" 
<p.duerham...@velartis.at>, "Dmitry Petuhov" <mityapetu...@gmail.com> 
Sent: Sunday 19 October 2014 18:07:30 
Subject: RE: [pve-devel] backup ceph high iops and slow 

> +RBD supports read-ahead/prefetching to optimize small, sequential reads. 
> +This should normally be handled by the guest OS in the case of a VM, 

How should we do read-ahead inside qemu? manually? 
_______________________________________________ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 