[ovirt-users] NFS sync vs async

2018-01-29 Thread Jean-Francois Courteau

Hello there,

At first I thought I had a performance problem with virtio-scsi on 
Windows, but after thorough experimentation I found that the problem is 
actually in the way I share my storage over NFS.


Using the settings suggested on the oVirt website for the /etc/exports 
file, I implemented the following line:
   /storage *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
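
For completeness, the options actually in effect for each export 
(including defaults like wdelay that never appear in /etc/exports) can 
be confirmed on the host with:

   # show active exports with their effective option list
   exportfs -v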


The underlying filesystem is ext4.

In the end, whatever VM I run through this NFS export, I get extremely 
poor write performance, below 100 IOPS (my disks can usually do 
800-1000). Under the hood, iotop shows that my host's I/O is entirely 
taken up by jbd2 which, if I understand correctly, is the ext4 
journaling process.
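
For anyone wanting to verify the same thing on their own host, the 
journal and the mount options can be checked with standard tools 
(assuming the array is /dev/md0, as in my setup):

   # confirm the filesystem carries a journal
   tune2fs -l /dev/md0 | grep -i journal
   # show the options /storage is actually mounted with
   grep /storage /proc/mounts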


I have read that using the "async" option in my NFS export is unsafe: 
if the host crashes during a write operation, it could corrupt my VM 
disks.
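
For reference, the async variant would just swap the flag; with async 
the server acknowledges writes before they hit the disk, which is 
exactly where the corruption risk comes from:

   /storage *(rw,async,no_subtree_check,all_squash,anonuid=36,anongid=36)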


What is the best combination of filesystem and settings if I want to 
stay with NFS sync? Is anyone getting good performance with the same 
options as me? If so, why do I get such abysmal IOPS numbers?
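
In case anyone wants to compare numbers directly: a small synchronous 
4k random-write test with fio, run on the exported filesystem itself, 
should reproduce the jbd2 bottleneck (the test file path is just an 
example):

   # 4k random writes, direct I/O, fsync after every write, 60s run
   fio --name=synctest --filename=/storage/fio.test --size=1G \
       --rw=randwrite --bs=4k --direct=1 --fsync=1 \
       --runtime=60 --time_based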


Thanks!

J-F Courteau

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Windows 2012R2 virtio-scsi abysmal disk write performance

2018-01-26 Thread Jean-Francois Courteau

Hello there,

I just installed oVirt on brand new machines: the Engine runs on a 
VirtualBox VM in my current infrastructure, with a single dedicated 
CentOS host attached to it.

I am getting extremely poor write performance in my oVirt Windows 
2012R2 VM, whatever virtio or virtio-scsi device I use, and whatever 
the virtio Windows driver version.


Here are my host specs (srvhc02):

CPU: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz (4C 8T)
Mem: 32GB DDR3
Disks:
 - 2 x WD Black 2TB (RAID1), where CentOS is installed
 - 4 x WD Gold 4TB (RAID10), dedicated to a storage domain (/dev/md0, 
ext4, mounted on /storage and exported over NFS with default settings)
NIC: Intel 4-port Gigabit. One port for VMs, one port for everything 
else, two ports unused.

OS: Freshly installed CentOS 7 minimal latest (yum updated)
oVirt deployed from the Engine using root account with a password


Here are my Engine specs (srvvmmgmt):

VirtualBox guest VM on another physical computer, for test purposes only.
CPU: 2 vCPU
Mem: 8GB
Disk: 50GB VDI on a RAID10 array
NIC: Virtual Intel Pro 1000 (bridged over a physical Intel 4-port 
Gigabit NIC)

OS: Freshly installed CentOS 7 minimal latest (yum updated)
oVirt Engine 4.2 deployed from the repo, yum updated yesterday
Storage domain: NFS - srvhc02:/storage
ISO domain: NFS - srvhc02:/iso


Physical network is Gigabit on Cisco switches, no VLAN tagging.


My (barely usable) Windows 2012R2 guest (srvweb03):
CPU: 2 vCPU
Mem: 8GB
Disk 1: 100GB on the storage domain
Disk 2: 200GB on the storage domain (this is the one whose controller I 
was changing for testing purposes)

NIC: virtio bridged over the VM port


I have tried every possible combination of disk controller 
(virtio-scsi, virtio, IDE) and virtio drivers (stable and latest from 
the Fedora website, plus others found in sometimes obscure places), and 
the disk write performance is simply unacceptable. Of course, I 
rebooted between each driver update and controller change.


I compared with VirtualBox installed on the same host after cleaning up 
oVirt, and the host itself is clearly not the problem. Here is the 
comparison:



                  oVirt               VirtualBox
---------------------------------------------------
SEQUENTIAL READ  *3000MB/s            406MB/s
SEQUENTIAL WRITE  31MB/s              164MB/s
4k RANDOM READ   *400MB/s             7.18MB/s
4k RANDOM WRITE   0.3MB/s (80 IOPS)   3MB/s (747 IOPS)

*ridiculously high; oVirt is probably reading from RAM
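
For what it's worth, the cache effect on the read numbers can be ruled 
out by re-running the benchmark with direct (unbuffered) I/O; fio is 
available for Windows as well (the file name below is just an example):

   # 4k random reads bypassing the cache
   fio --name=readtest --filename=fio.test --size=4G \
       --rw=randread --bs=4k --direct=1 --runtime=60 --time_based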

These numbers are with the virtio-scsi controller in oVirt (no driver 
version did better) and the SATA controller in VirtualBox. Needless to 
say, this is frustrating.


I have also tried oVirt's virtio-blk driver ("virtio" in the 
interface); it gave me a 10-15% improvement, but nowhere near the level 
I would expect from an enterprise-grade solution. I tried IDE as well: 
I could not even write at 10 kbps, getting roughly 10 IOPM (yes, I/Os 
per minute!).



Is there something I am missing?


--
https://nexcess.ca
JEAN-FRANÇOIS COURTEAU
_Président, Directeur Général_
C: jean-francois.court...@nexcess.ca
T: +1 (418) 558-5169

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

