Re: [Users] VirtIO disk latency

2014-01-16 Thread Blaster

On Jan 9, 2014, at 3:16 AM, Markus Stockhausen stockhau...@collogia.de wrote:

 
> We see quite a heavy latency penalty using KVM VirtIO disks in comparison
> to ESX. Doing one I/O onto disk inside a VM usually adds 370us of overhead in
> the virtualisation layer. This has been tested with VirtIO-SCSI and a Windows
> guest (2K3). More here (still no answer yet):

It would be interesting to see you do the same tests on virtio-blk instead of 
virtio-scsi.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VirtIO disk latency

2014-01-09 Thread Sander Grendelman
On Thu, Jan 9, 2014 at 10:16 AM, Markus Stockhausen
stockhau...@collogia.de wrote:
> ...
> - access NFS inside the hypervisor - 12,000 I/Os per second - or 83us latency
> - access DISK inside ESX VM that resides on NFS - 8,000 I/Os per second - or 125us latency
> - access DISK inside oVirt VM that resides on NFS - 2,200 I/Os per second - or 450us latency

I can do a bit of testing on local disk and FC (with some extra setup
maybe also NFS).
What is your exact testing method? (commands, file sizes, software
versions, mount options, etc.)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VirtIO disk latency

2014-01-09 Thread Markus Stockhausen
> From: sander.grendel...@gmail.com
> Sent: Thursday, January 9, 2014 10:32
> To: Markus Stockhausen
> Cc: users@ovirt.org
> Subject: Re: [Users] VirtIO disk latency
>
> On Thu, Jan 9, 2014 at 10:16 AM, Markus Stockhausen
> stockhau...@collogia.de wrote:
> ...
>> - access NFS inside the hypervisor - 12,000 I/Os per second - or 83us latency
>> - access DISK inside ESX VM that resides on NFS - 8,000 I/Os per second - or 125us latency
>> - access DISK inside oVirt VM that resides on NFS - 2,200 I/Os per second - or 450us latency
>
> I can do a bit of testing on local disk and FC (with some extra setup
> maybe also NFS).
> What is your exact testing method? (commands, file sizes, software
> versions, mount options, etc.)

Thanks for taking the time to help.

I have used several tools to measure latencies, but it
always boils down to the same numbers. The software
components and their releases should not matter for a
first overview. The important thing is to ensure that a
read request issued by a test inside the VM really
passes through the QEMU layer.

The simplest test I can think of (at least in our case) is to
start a Windows VM and attach a very small NFS disk of
1GB to it. Boot the VM, install the HDTune trial, and run the
random access test against the small disk. Another option
is to run some kind of direct-I/O-based read test inside the
VM, as sketched below.
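
For illustration, such a direct-I/O read test could look roughly
like the following. This is only a minimal sketch, assuming a
Linux guest and a 1GB test file; the 512-byte request size and
the iteration count are assumptions picked to match the
small-packet numbers in this thread. Compile with
gcc -O2 -o odirect_test odirect_test.c (add -lrt on older glibc):

  #define _GNU_SOURCE        /* for O_DIRECT */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <unistd.h>

  #define BLOCK 512                          /* small request, as in the 512-byte case */
  #define COUNT 10000                        /* number of random reads to time */
  #define FILE_SIZE (1024LL * 1024 * 1024)   /* 1GB test file */

  int main(int argc, char **argv)
  {
      if (argc < 2) {
          fprintf(stderr, "usage: %s <file on the NFS disk>\n", argv[0]);
          return 1;
      }

      /* O_DIRECT bypasses the guest page cache, so every read
         really has to pass through QEMU to the backing storage. */
      int fd = open(argv[1], O_RDONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      /* O_DIRECT requires an aligned buffer. */
      void *buf;
      if (posix_memalign(&buf, 4096, BLOCK)) { perror("posix_memalign"); return 1; }

      long long nblocks = FILE_SIZE / BLOCK;
      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (int i = 0; i < COUNT; i++) {
          /* rand() is crude but good enough for a rough latency test */
          off_t off = (off_t)(rand() % nblocks) * BLOCK;
          if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); return 1; }
      }
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("%d reads in %.3f s -> %.0f IOPS, %.1f us avg latency\n",
             COUNT, secs, COUNT / secs, secs / COUNT * 1e6);
      free(buf);
      close(fd);
      return 0;
  }

Run it once or twice to warm the NFS server's cache; after that
the averaged numbers mostly reflect network plus virtualization
overhead.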

During each test I can see the packets flowing between
the NFS server and the hypervisor, so I know that the
requests are not served from a cache inside the VM or QEMU.

After one or two runs the file cache in the RAM of our NFS
server holds all the hot data and latency drops into the
microsecond range. From that we can derive the
penalty of the virtualization layer.
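
Spelled out with the numbers quoted earlier (assuming a
single outstanding I/O, so latency is roughly 1/IOPS):

  NFS from the hypervisor: 1 / 12,000 IOPS  =  ~83us
  NFS from the oVirt VM:   1 / 2,200 IOPS   = ~455us
  virtualization penalty:  455us - 83us     = ~370us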

Whatever I try to optimize, I only reach a quarter of the
I/Os that ESX achieves for very small requests (512 bytes
or 1K), and that with the same (migrated) VM on the same
NFS topology and the same test programs.

The baseline numbers for the hypervisor are an average
from running direct-I/O-based test tools on files residing
on the same NFS share.

Markus

P.S. I'm not complaining about that performance.
Running an IPoIB environment, you get used to wasting
bandwidth and latency. But it is always good to know
where it comes from.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users