>> How should we do read-ahead inside qemu? manually?

This is managed automatically by the Linux kernel inside the guest; the read-ahead window is tunable via:

/sys/class/block/sda/queue/read_ahead_kb
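
For example, inside a guest where the disk shows up as sda, the current window can be checked and enlarged like this (the 4096 KiB value is only an illustration, not a recommendation):

# show the current read-ahead window (in KiB)
cat /sys/class/block/sda/queue/read_ahead_kb
# enlarge it for sequential read workloads, e.g. to 4 MiB
echo 4096 > /sys/class/block/sda/queue/read_ahead_kb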



Also, about ceph performance:

Another problem is that qemu is single-threaded for block access, and ceph/librbd CPU usage is high, so it's possible to be CPU-bound on a single core.

I need to send a patch, but with virtio-scsi it's possible to use multi-queue and scale across multiple cores, with the "num_queues" parameter of the virtio-scsi device.
I'm not sure it helps for the block jobs, but it really helps guest I/O.
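
As a rough sketch (pool/image names, IDs and the queue count below are illustrative, not taken from a real setup), a multi-queue virtio-scsi disk backed by rbd could look like this on the qemu command line:

qemu-system-x86_64 -m 1024 -enable-kvm \
  -device virtio-scsi-pci,id=scsi0,num_queues=4 \
  -drive file=rbd:rbd/vm-100-disk-1,if=none,id=drive-scsi0,cache=none \
  -device scsi-hd,bus=scsi0.0,drive=drive-scsi0

num_queues=4 exposes 4 request queues to the guest, so I/O submission can be spread over several vCPUs.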




Here are some bench results on the upcoming ceph Giant release (0.86), with different tunings:

8 cores (CPU E5-2603 v2 @ 1.80GHz): 

15000 iops 4K read : auth_client: cephx  rbd_cache: on  (50% cpu)
25000 iops 4K read : auth_client: cephx  rbd_cache: off (100% cpu - there seems to be a read lock with rbd_cache=true)
40000 iops 4K read : auth_client: none   rbd_cache: off (100% cpu - cephx auth is really cpu intensive)
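
For reference, the knobs above map roughly to ceph.conf settings like the following (a sketch of the fastest variant, auth none + cache off; disabling cephx also disables authentication entirely, so it is a security trade-off):

[global]
# "auth_client: none" above; cephx is the default
auth_cluster_required = none
auth_service_required = none
auth_client_required = none

[client]
# "rbd_cache: off" above
rbd_cache = false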


And with 1 core, I can only get 7000 iops (same inside the VM with virtio-blk).
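
As an illustration, a 4K random-read test of this kind can be run with fio (the tool and the parameters below are my example, not necessarily what produced the numbers above; device path, iodepth and runtime are placeholders):

fio --name=randread-4k --filename=/dev/sdb --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k \
    --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting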

----- Original Message -----

De: "Dietmar Maurer" <[email protected]> 
À: "Alexandre DERUMIER" <[email protected]> 
Cc: [email protected], "VELARTIS Philipp Dürhammer" 
<[email protected]>, "Dmitry Petuhov" <[email protected]> 
Envoyé: Dimanche 19 Octobre 2014 18:07:30 
Objet: RE: [pve-devel] backup ceph high iops and slow 

> +RBD supports read-ahead/prefetching to optimize small, sequential reads. 
> +This should normally be handled by the guest OS in the case of a VM, 

How should we do read-ahead inside qemu? manually? 
_______________________________________________
pve-devel mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
