Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-31 Thread Steve Dainard
I've reconfigured my setup (good success below, but need clarity on a gluster
option):

Two nodes total, both running virt and glusterfs storage (2 node replica,
quorum).

I've created an NFS storage domain pointed at the first node's IP address.
I've launched a 2008 R2 SP1 install with a virtio-scsi disk and the SCSI
pass-through driver, on the same node the NFS domain points at.

The Windows guest install has been running for roughly 1.5 hours and is still
at "Expanding Windows files (55%)..."

top is showing:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 3609 root  20   0 1380m  33m 2604 S 35.4  0.1 231:39.75 glusterfsd

21444 qemu  20   0 6362m 4.1g 6592 S 10.3  8.7  10:11.53 qemu-kvm


This is a 2-socket, 6-core Xeon machine with 48GB of RAM and 6x 7200rpm
enterprise SATA disks in RAID5, so I don't think we're hitting hardware
limitations.

dd on xfs (no gluster)

time dd if=/dev/zero of=test bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 4.15787 s, 516 MB/s

real 0m4.351s
user 0m0.000s
sys 0m1.661s


time dd if=/dev/zero of=test bs=1k count=2000000
2000000+0 records in
2000000+0 records out
2048000000 bytes (2.0 GB) copied, 4.06949 s, 503 MB/s

real 0m4.260s
user 0m0.176s
sys 0m3.991s
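A note on the dd numbers above: writing from /dev/zero without a sync or
direct-I/O flag mostly exercises the page cache, so these local xfs figures
are optimistic. A minimal sketch of a more representative run (standard GNU
dd flags; the test file path is just an example):

time dd if=/dev/zero of=test bs=1M count=2048 conv=fdatasync   # flush file data before dd reports
time dd if=/dev/zero of=test bs=1M count=2048 oflag=direct     # bypass the page cache entirely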


I've enabled nfs.trusted-sync
(http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync)
on the gluster volume, and the speed difference is immeasurable. Can anyone
explain what this option does, and what the risks are with a 2-node gluster
replica volume with quorum enabled?
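For reference, a minimal sketch of how the option is toggled and verified from
the gluster CLI (the volume name "rep2" is an assumption based on my mount
paths):

gluster volume set rep2 nfs.trusted-sync on
gluster volume info rep2   # the option should appear under "Options Reconfigured"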

Thanks,
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-31 Thread Steve Dainard


 I've enabled nfs.trusted-sync (
 http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync
 ) on the gluster volume, and the speed difference is immeasurable. Can
 anyone explain what this option does, and what the risks are with a 2
 node gluster replica volume with quorum enabled?


Sorry, I understand async; I meant the nfs.trusted-write option, and whether
it would help this situation.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-31 Thread Vadim Rozenfeld
On Fri, 2014-01-31 at 11:37 -0500, Steve Dainard wrote:
 I've reconfigured my setup (good success below, but need clarity on a
 gluster option):
 
 
 Two nodes total, both running virt and glusterfs storage (2 node
 replica, quorum).
 
 
 I've created an NFS storage domain pointed at the first node's IP
 address. I've launched a 2008 R2 SP1 install with a virtio-scsi disk
 and the SCSI pass-through driver, on the same node the NFS domain
 points at.
 
 
 Windows guest install has been running for roughly 1.5 hours, still
 Expanding Windows files (55%) ...

[VR]
Does it work faster with IDE?
Do you have kvm enabled?
Thanks,
Vadim.
 
 
 
 top is showing:
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 
  3609 root  20   0 1380m  33m 2604 S 35.4  0.1 231:39.75
 glusterfsd  
 21444 qemu  20   0 6362m 4.1g 6592 S 10.3  8.7  10:11.53 qemu-kvm

 
 
 This is a 2 socket, 6 core xeon machine with 48GB of RAM, and 6x
 7200rpm enterprise sata disks in RAID5 so I don't think we're hitting
 hardware limitations.
 
 
 dd on xfs (no gluster)
 
 
 time dd if=/dev/zero of=test bs=1M count=2048
 2048+0 records in
 2048+0 records out
 2147483648 bytes (2.1 GB) copied, 4.15787 s, 516 MB/s
 
 
 real 0m4.351s
 user 0m0.000s
 sys 0m1.661s
 
 
 
 
 time dd if=/dev/zero of=test bs=1k count=2000000
 2000000+0 records in
 2000000+0 records out
 2048000000 bytes (2.0 GB) copied, 4.06949 s, 503 MB/s
 
 
 real 0m4.260s
 user 0m0.176s
 sys 0m3.991s
 
 
 
 
 I've enabled nfs.trusted-sync
 (http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync)
  on the gluster volume, and the speed difference is immeasurable. Can anyone
 explain what this option does, and what the risks are with a 2 node gluster 
 replica volume with quorum enabled?
 
 
 Thanks,


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-31 Thread Steve Dainard
IDE is just as slow. Just over 2 hours for 2008R2 install.

Is this what you mean by kvm?
lsmod | grep kvm
kvm_intel  54285  3
kvm   332980  1 kvm_intel
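For what it's worth, a couple of other quick host-side sanity checks (a
minimal sketch using standard kernel paths, nothing oVirt-specific):

ls -l /dev/kvm                        # the KVM device node should exist
egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means hardware virt extensions are exposed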


*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  |  **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |  Twitter
https://twitter.com/miovision  |  Facebook
https://www.facebook.com/miovision*
--
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Fri, Jan 31, 2014 at 8:20 PM, Vadim Rozenfeld vroze...@redhat.com wrote:

 On Fri, 2014-01-31 at 11:37 -0500, Steve Dainard wrote:
  I've reconfigured my setup (good success below, but need clarity on a
  gluster option):
 
 
  Two nodes total, both running virt and glusterfs storage (2 node
  replica, quorum).
 
 
  I've created an NFS storage domain pointed at the first node's IP
  address. I've launched a 2008 R2 SP1 install with a virtio-scsi disk
  and the SCSI pass-through driver, on the same node the NFS domain is
  pointing at.
 
 
  Windows guest install has been running for roughly 1.5 hours, still
  Expanding Windows files (55%) ...

 [VR]
 Does it work faster with IDE?
 Do you have kvm enabled?
 Thanks,
 Vadim.

 
 
  top is showing:
    PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 
   3609 root  20   0 1380m  33m 2604 S 35.4  0.1 231:39.75
  glusterfsd
  21444 qemu  20   0 6362m 4.1g 6592 S 10.3  8.7  10:11.53 qemu-kvm
 
 
 
  This is a 2 socket, 6 core xeon machine with 48GB of RAM, and 6x
  7200rpm enterprise sata disks in RAID5 so I don't think we're hitting
  hardware limitations.
 
 
  dd on xfs (no gluster)
 
 
  time dd if=/dev/zero of=test bs=1M count=2048
  2048+0 records in
  2048+0 records out
  2147483648 bytes (2.1 GB) copied, 4.15787 s, 516 MB/s
 
 
  real 0m4.351s
  user 0m0.000s
  sys 0m1.661s
 
 
 
 
  time dd if=/dev/zero of=test bs=1k count=2000000
  2000000+0 records in
  2000000+0 records out
  2048000000 bytes (2.0 GB) copied, 4.06949 s, 503 MB/s
 
 
  real 0m4.260s
  user 0m0.176s
  sys 0m3.991s
 
 
 
 
  I've enabled nfs.trusted-sync
  (
 http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync)
 on the gluster volume, and the speed difference is immeasurable. Can
 anyone explain what this option does, and what the risks are with a 2 node
 gluster replica volume with quorum enabled?
 
 
  Thanks,



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-30 Thread Vadim Rozenfeld
On Wed, 2014-01-29 at 12:35 -0500, Steve Dainard wrote:
 On Wed, Jan 29, 2014 at 5:11 AM, Vadim Rozenfeld vroze...@redhat.com
 wrote:
 On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote:
  Adding the virtio-scsi developers.
  Anyhow, virtio-scsi is newer and less established than
 viostor (the
  block device), so you might want to try it out.
 
 
 [VR]
 Was it SCSI Controller or SCSI pass-through controller?
 If it's SCSI Controller then it will be viostor (virtio-blk)
 device
 driver.
 
 
 
 
 SCSI Controller is listed in device manager.
 
 
 Hardware ID's: 
 PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
 PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4

There is something strange here. Subsystem ID 0008
means it is a virtio SCSI pass-through controller,
and you shouldn't be able to install the "SCSI Controller"
device driver (viostor.sys) on top of the SCSI pass-through
controller.

vioscsi.sys should be installed on top of
VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00

viostor.sys should be installed on top of
VEN_1AF4&DEV_1001&SUBSYS_00021AF4&REV_00

 PCI\VEN_1AF4&DEV_1004&CC_01
 PCI\VEN_1AF4&DEV_1004&CC_0100
 
 
  
 
  A disclaimer: There are time and patches gaps between RHEL
 and other
  versions.
 
  Ronen.
 
  On 01/28/2014 10:39 PM, Steve Dainard wrote:
 
   I've had a bit of luck here.
  
  
   Overall IO performance is very poor during Windows
 updates, but a
   contributing factor seems to be the SCSI Controller
 device in the
   guest. This last install I didn't install a driver for
 that device,
 
 
 [VR]
 Does it mean that your system disk is IDE and the data disk
 (virtio-blk)
 is not accessible?
 
 
 In Ovirt 3.3.2-1.el6 I do not have an option to add a virtio-blk
 device:
 Screenshot here:
 https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%
 202014-01-29%2010%3A04%3A57.png
my guess is that VirtIO means virtio-blk, and you should use viostor.sys
for it.
VirtIO-SCSI is for virtio-scsi, and you need to install vioscsi.sys to make it
work in Windows.
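If it helps, a quick way to see which of those two storage drivers the guest
actually loaded (a sketch using stock Windows tools; the filter strings are
just the driver module names mentioned above):

C:\> driverquery | findstr /i "viostor vioscsi"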
 
 
 VM disk drive is Red Hat VirtIO SCSI Disk Device, storage controller
 is listed as Red Hat VirtIO SCSI Controller as shown in device
 manager.
 Screenshot here:
 https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%
 202014-01-29%2009%3A57%3A24.png
 
 
 
 In Ovirt manager the disk interface is listed as VirtIO.
 Screenshot
 here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%
 202014-01-29%2009%3A58%3A35.png
  
 
and my performance is much better. Updates still chug
 along quite
   slowly, but I seem to have more than the 100KB/s write
 speeds I
   was seeing previously.
  
  
   Does anyone know what this device is for? I have the Red
 Hat VirtIO
   SCSI Controller listed under storage controllers.
 
 
 [VR]
 It's a virtio-blk device. OS cannot see this volume unless you
 have
 viostor.sys driver installed on it.
 
 
 Interesting that my VM's can see the controller, but I can't add a
 disk for that controller in Ovirt. Is there a package I have missed on
 install?
 
 
 rpm -qa | grep ovirt
 ovirt-host-deploy-java-1.1.3-1.el6.noarch
 ovirt-engine-backend-3.3.2-1.el6.noarch
 ovirt-engine-lib-3.3.2-1.el6.noarch
 ovirt-engine-restapi-3.3.2-1.el6.noarch
 ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
 ovirt-log-collector-3.3.2-2.el6.noarch
 ovirt-engine-dbscripts-3.3.2-1.el6.noarch
 ovirt-engine-webadmin-portal-3.3.2-1.el6.noarch
 ovirt-host-deploy-1.1.3-1.el6.noarch
 ovirt-image-uploader-3.3.2-1.el6.noarch
 ovirt-engine-websocket-proxy-3.3.2-1.el6.noarch
 ovirt-engine-userportal-3.3.2-1.el6.noarch
 ovirt-engine-setup-3.3.2-1.el6.noarch
 ovirt-iso-uploader-3.3.2-1.el6.noarch
 ovirt-engine-cli-3.3.0.6-1.el6.noarch
 ovirt-engine-3.3.2-1.el6.noarch
 ovirt-engine-tools-3.3.2-1.el6.noarch
 
 
 
  
   I've setup a NFS storage domain on my
 desktops SSD.
   I've re-installed
   win 2008 r2 and initially it was running
 smoother.
  
   Disk performance peaks at 100MB/s.
  
   If I copy a 250MB file from a share into
 the Windows
   VM, it writes out
 
 [VR]
 Do you copy it with Explorer or any other copy program?
 
 
 Windows Explorer only.
  
 Do you have HPET enabled?
 
 
 I can't find it in the guest 'system devices'. On the hosts the
 current clock source is 'tsc', although 'hpet' is an available option.
  
 How does it work with if you copy from/to local (non-NFS)
 

Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-29 Thread Ronen Hod

Adding the virtio-scsi developers.
Anyhow, virtio-scsi is newer and less established than viostor (the block 
device), so you might want to try it out.
A disclaimer: There are time and patches gaps between RHEL and other versions.

Ronen.

On 01/28/2014 10:39 PM, Steve Dainard wrote:

I've had a bit of luck here.

Overall IO performance is very poor during Windows updates, but a contributing factor seems 
to be the SCSI Controller device in the guest. This last install I didn't 
install a driver for that device, and my performance is much better. Updates still chug 
along quite slowly, but I seem to have more than the 100KB/s write speeds I was seeing
previously.

Does anyone know what this device is for? I have the Red Hat VirtIO SCSI 
Controller listed under storage controllers.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, 
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If you 
are not the intended recipient, please delete the e-mail and any attachments 
and notify us immediately.


On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim ih...@redhat.com wrote:

On 01/26/2014 02:37 AM, Steve Dainard wrote:

Thanks for the responses everyone, really appreciate it.

I've condensed the other questions into this reply.


Steve,
What is the CPU load of the GlusterFS host when comparing the raw
brick test to the gluster mount point test? Give it 30 seconds and
see what top reports. You'll probably have to significantly increase
the count on the test so that it runs that long.

- Nick



Gluster mount point:

*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s


Top reported this right away:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  1826 root  20   0  294m  33m 2540 S 27.2  0.4 0:04.31 glusterfs
  2126 root  20   0 1391m  31m 2336 S 22.6  0.4  11:25.48 glusterfsd

Then at about 20+ seconds top reports this:
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  1826 root  20   0  294m  35m 2660 R 141.7  0.5 1:14.94 glusterfs
  2126 root  20   0 1392m  31m 2344 S 33.7  0.4  11:46.56 glusterfsd

*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s


  7750 root  20   0  102m  648  544 R 50.3  0.0 0:01.52 dd
  7719 root  20   0 000 D  1.0  0.0 0:01.50 flush-253:2

Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s


   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  2126 root  20   0 1396m  31m 2360 S 40.5  0.4  13:28.89 glusterfsd


Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s


PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  2141 root  20   0  550m 184m 2840 R 84.6  2.3  16:43.10 glusterfs
  2126 root  20   0 1407m  30m 2368 S 49.8  0.4  13:49.07 glusterfsd

   

Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-29 Thread Vadim Rozenfeld
On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote:
 Adding the virtio-scsi developers.
 Anyhow, virtio-scsi is newer and less established than viostor (the
 block device), so you might want to try it out.

[VR]
Was it SCSI Controller or SCSI pass-through controller?
If it's SCSI Controller then it will be viostor (virtio-blk) device
driver.


 A disclaimer: There are time and patches gaps between RHEL and other
 versions.
 
 Ronen.
 
 On 01/28/2014 10:39 PM, Steve Dainard wrote:
 
  I've had a bit of luck here. 
  
  
  Overall IO performance is very poor during Windows updates, but a
  contributing factor seems to be the SCSI Controller device in the
  guest. This last install I didn't install a driver for that device,

[VR]
Does it mean that your system disk is IDE and the data disk (virtio-blk)
is not accessible? 

   and my performance is much better. Updates still chug along quite
  slowly, but I seem to have more than the 100KB/s write speeds I
  was seeing previously.
  
  
  Does anyone know what this device is for? I have the Red Hat VirtIO
  SCSI Controller listed under storage controllers.

[VR]
It's a virtio-blk device. OS cannot see this volume unless you have
viostor.sys driver installed on it.

  
  Steve Dainard 
  IT Infrastructure Manager
  Miovision | Rethink Traffic
  519-513-2407 ex.250
  877-646-8476 (toll-free)
  
  Blog  |  LinkedIn  |  Twitter  |  Facebook 
  
  Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
  Kitchener, ON, Canada | N2C 1L3
  This e-mail may contain information that is privileged or
  confidential. If you are not the intended recipient, please delete
  the e-mail and any attachments and notify us immediately.
  
  
  On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim ih...@redhat.com
  wrote:
  On 01/26/2014 02:37 AM, Steve Dainard wrote:
  
  Thanks for the responses everyone, really appreciate
  it.
  
  I've condensed the other questions into this reply.
  
  
  Steve,
  What is the CPU load of the GlusterFS host when
  comparing the raw
  brick test to the gluster mount point test? Give
  it 30 seconds and
  see what top reports. You’ll probably have to
  significantly increase
  the count on the test so that it runs that long.
  
  - Nick
  
  
  
  Gluster mount point:
  
  *4K* on GLUSTER host
  [root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
  500000+0 records in
  500000+0 records out

  2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s
  
  
  Top reported this right away:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM
   TIME+  COMMAND
1826 root  20   0  294m  33m 2540 S 27.2  0.4
  0:04.31 glusterfs
2126 root  20   0 1391m  31m 2336 S 22.6  0.4
   11:25.48 glusterfsd
  
  Then at about 20+ seconds top reports this:
 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM
 TIME+  COMMAND
1826 root  20   0  294m  35m 2660 R 141.7  0.5
  1:14.94 glusterfs
2126 root  20   0 1392m  31m 2344 S 33.7  0.4
   11:46.56 glusterfsd
  
  *4K* Directly on the brick:
  dd if=/dev/zero of=test1 bs=4k count=500000
  500000+0 records in
  500000+0 records out

  2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s
  
  
7750 root  20   0  102m  648  544 R 50.3  0.0
  0:01.52 dd
7719 root  20   0 000 D  1.0  0.0
  0:01.50 flush-253:2
  
  Same test, gluster mount point on OVIRT host:
  dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
  500000+0 records in
  500000+0 records out

  2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s
  
  
 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM
 TIME+  COMMAND
2126 root

Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-29 Thread Steve Dainard
On Wed, Jan 29, 2014 at 5:11 AM, Vadim Rozenfeld vroze...@redhat.com wrote:

 On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote:
  Adding the virtio-scsi developers.
  Anyhow, virtio-scsi is newer and less established than viostor (the
  block device), so you might want to try it out.

 [VR]
 Was it SCSI Controller or SCSI pass-through controller?
 If it's SCSI Controller then it will be viostor (virtio-blk) device
 driver.


SCSI Controller is listed in device manager.

Hardware ID's:
PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4
PCI\VEN_1AF4&DEV_1004&CC_01
PCI\VEN_1AF4&DEV_1004&CC_0100




  A disclaimer: There are time and patches gaps between RHEL and other
  versions.
 
  Ronen.
 
  On 01/28/2014 10:39 PM, Steve Dainard wrote:
 
   I've had a bit of luck here.
  
  
   Overall IO performance is very poor during Windows updates, but a
   contributing factor seems to be the SCSI Controller device in the
   guest. This last install I didn't install a driver for that device,

 [VR]
 Does it mean that your system disk is IDE and the data disk (virtio-blk)
 is not accessible?


In Ovirt 3.3.2-1.el6 I do not have an option to add a virtio-blk device:
Screenshot here:
https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2010%3A04%3A57.png

VM disk drive is Red Hat VirtIO SCSI Disk Device, storage controller is
listed as Red Hat VirtIO SCSI Controller as shown in device manager.
Screenshot here:
https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2009%3A57%3A24.png

In Ovirt manager the disk interface is listed as VirtIO.
Screenshot here:
https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2009%3A58%3A35.png



and my performance is much better. Updates still chug along quite
   slowly, but I seem to have more than the 100KB/s write speeds I
   was seeing previously.
  
  
   Does anyone know what this device is for? I have the Red Hat VirtIO
   SCSI Controller listed under storage controllers.

 [VR]
 It's a virtio-blk device. OS cannot see this volume unless you have
 viostor.sys driver installed on it.


Interesting that my VM's can see the controller, but I can't add a disk for
that controller in Ovirt. Is there a package I have missed on install?

rpm -qa | grep ovirt
ovirt-host-deploy-java-1.1.3-1.el6.noarch
ovirt-engine-backend-3.3.2-1.el6.noarch
ovirt-engine-lib-3.3.2-1.el6.noarch
ovirt-engine-restapi-3.3.2-1.el6.noarch
ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
ovirt-log-collector-3.3.2-2.el6.noarch
ovirt-engine-dbscripts-3.3.2-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.2-1.el6.noarch
ovirt-host-deploy-1.1.3-1.el6.noarch
ovirt-image-uploader-3.3.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.2-1.el6.noarch
ovirt-engine-userportal-3.3.2-1.el6.noarch
ovirt-engine-setup-3.3.2-1.el6.noarch
ovirt-iso-uploader-3.3.2-1.el6.noarch
ovirt-engine-cli-3.3.0.6-1.el6.noarch
ovirt-engine-3.3.2-1.el6.noarch
ovirt-engine-tools-3.3.2-1.el6.noarch


  
   I've setup a NFS storage domain on my desktops SSD.
   I've re-installed
   win 2008 r2 and initially it was running smoother.
  
   Disk performance peaks at 100MB/s.
  
   If I copy a 250MB file from a share into the Windows
   VM, it writes out
 [VR]
 Do you copy it with Explorer or any other copy program?


Windows Explorer only.


 Do you have HPET enabled?


I can't find it in the guest 'system devices'. On the hosts the current
clock source is 'tsc', although 'hpet' is an available option.
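For reference, the clock source values above come from sysfs (standard kernel
paths, nothing oVirt-specific):

cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

As far as I know, whether the guest gets an HPET is controlled by a <timer
name='hpet' .../> element in the VM's libvirt domain XML.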


 How does it work if you copy from/to local (non-NFS) storage?


Not sure, this is a royal pain to setup. Can I use my ISO domain in two
different data centers at the same time? I don't have an option to create
an ISO / NFS domain in the local storage DC.

When I use the import option with the default DC's ISO domain, I get the
error "There is no storage domain under the specified path. Check event log
for more details." VDSM logs show "Resource namespace
0e90e574-b003-4a62-867d-cf274b17e6b1_imageNS already registered", so I'm
guessing the answer is no.

I tried to deploy with WDS, but the 64bit drivers apparently aren't signed,
and on x86 I get an error about the NIC not being supported even with the
drivers added to WDS.



 What is your virtio-win drivers package origin and version?


virtio-win-0.1-74.iso -
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/



 Thanks,
 Vadim.



Appreciate it,
Steve
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-29 Thread Itamar Heim

On 01/29/2014 07:35 PM, Steve Dainard wrote:
...

In Ovirt 3.3.2-1.el6 I do not have an option to add a virtio-blk device:
Screenshot here:
https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2010%3A04%3A57.png


virtio is virtio-blk (in the beginning, there was only one virtio, 
virtio-blk)
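A quick way to confirm the mapping from the host side is to look at the disk
bus in the VM's libvirt XML (a minimal sketch; the VM name is a placeholder):

virsh -r dumpxml <vm-name> | grep -A3 '<disk'
# bus='virtio' on the <target> element means virtio-blk (viostor.sys in the guest);
# a virtio-scsi disk shows bus='scsi' plus a <controller type='scsi' model='virtio-scsi'/>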



...
Do you have HPET enabled?


I can't find it in the guest 'system devices'. On the hosts the current
clock source is 'tsc', although 'hpet' is an available option.

How does it work with if you copy from/to local (non-NFS) storage?


Not sure, this is a royal pain to setup. Can I use my ISO domain in two
different data centers at the same time? I don't have an option to
create an ISO / NFS domain in the local storage DC.


an iso domain can be associated to two data centers (or more, or 
different engines, etc.)




When I use the import option with the default DC's ISO domain, I get the
error "There is no storage domain under the specified path. Check event
log for more details." VDSM logs show "Resource namespace
0e90e574-b003-4a62-867d-cf274b17e6b1_imageNS already registered", so I'm
guessing the answer is no.


the answer is yes, please open a separate thread on this issue to make 
it easier to troubleshoot it.


thanks



I tried to deploy with WDS, but the 64bit drivers apparently aren't
signed, and on x86 I get an error about the NIC not being supported even
with the drivers added to WDS.

What is your virtio-win drivers package origin and version?


virtio-win-0.1-74.iso -
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/


Thanks,
Vadim.



Appreciate it,
Steve


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-29 Thread Alan Murrell
I noticed updates on a Win7 VM I created previously were *really* slow,
but when I logged in to it remotely for daily use, it seemed pretty  
snappy.  I did not do any significant data transfers, however.  I had  
the same latest virtio-win drivers installed, and in oVirt, the disk  
was of type VIRTIO (and not VIRTIO-SCSI).


For other reasons, I have rebuilt my test host, and am going to be  
installing a new Windows 7 VM.  Is there anything I can do in this  
process to provide more data and help with this troubleshooting?


-Alan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-28 Thread Steve Dainard
I've had a bit of luck here.

Overall IO performance is very poor during Windows updates, but a
contributing factor seems to be the SCSI Controller device in the guest.
This last install I didn't install a driver for that device, and my
performance is much better. Updates still chug along quite slowly, but I
seem to have more than the 100KB/s write speeds I was seeing previously.

Does anyone know what this device is for? I have the Red Hat VirtIO SCSI
Controller listed under storage controllers.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  |  **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |  Twitter
https://twitter.com/miovision  |  Facebook
https://www.facebook.com/miovision*
--
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim ih...@redhat.com wrote:

 On 01/26/2014 02:37 AM, Steve Dainard wrote:

 Thanks for the responses everyone, really appreciate it.

 I've condensed the other questions into this reply.


 Steve,
 What is the CPU load of the GlusterFS host when comparing the raw
 brick test to the gluster mount point test? Give it 30 seconds and
 see what top reports. You'll probably have to significantly increase
 the count on the test so that it runs that long.

 - Nick



 Gluster mount point:

 *4K* on GLUSTER host
 [root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
 500000+0 records in
 500000+0 records out
 2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s


 Top reported this right away:
 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   1826 root  20   0  294m  33m 2540 S 27.2  0.4   0:04.31 glusterfs
   2126 root  20   0 1391m  31m 2336 S 22.6  0.4  11:25.48 glusterfsd

 Then at about 20+ seconds top reports this:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   1826 root  20   0  294m  35m 2660 R 141.7  0.5   1:14.94 glusterfs
   2126 root  20   0 1392m  31m 2344 S 33.7  0.4  11:46.56 glusterfsd

 *4K* Directly on the brick:
 dd if=/dev/zero of=test1 bs=4k count=500000
 500000+0 records in
 500000+0 records out
 2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s


   7750 root  20   0  102m  648  544 R 50.3  0.0   0:01.52 dd
   7719 root  20   0 000 D  1.0  0.0   0:01.50 flush-253:2

 Same test, gluster mount point on OVIRT host:
 dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
 500000+0 records in
 500000+0 records out
 2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s


 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   2126 root  20   0 1396m  31m 2360 S 40.5  0.4  13:28.89 glusterfsd


 Same test, on OVIRT host but against NFS mount point:
 dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
 500000+0 records in
 500000+0 records out
 2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s


 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   2141 root  20   0  550m 184m 2840 R 84.6  2.3  16:43.10 glusterfs
   2126 root  20   0 1407m  30m 2368 S 49.8  0.4  13:49.07 glusterfsd

 Interesting - It looks like if I use a NFS mount point, I incur a cpu
 hit on two processes instead of just the daemon. I also get much better
 performance if I'm not running dd (fuse) on the GLUSTER host.


 The storage servers are a bit older, but are both dual socket
 quad core

 opterons with 4x 7200rpm drives.


 A block size of 4k is quite small so that the context switch
 overhead involved with fuse would be more perceivable.

 Would it be possible to increase the block size for dd and test?



 I'm in the process of setting up a share from my desktop and
 I'll see if

 I can bench between the two systems. Not sure if my ssd will
 impact the

 tests, I've heard there isn't an advantage using ssd storage for
 glusterfs.


 Do you have any pointers to this source of information? Typically
 glusterfs performance for virtualization work loads is bound by the
 slowest element in the entire stack. Usually storage/disks happen to
 be the bottleneck and ssd storage does benefit glusterfs.

 -Vijay


 I had a couple technical calls with RH (re: RHSS), and when I asked if
 SSD's could add any benefit I was told no. The context may have been in
 a product comparison to other storage vendors, where they use SSD's for
 read/write caching, versus having an all SSD storage domain (which I'm
 not proposing, but 

Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-25 Thread Steve Dainard
Thanks for the responses everyone, really appreciate it.

I've condensed the other questions into this reply.


Steve,
 What is the CPU load of the GlusterFS host when comparing the raw brick
 test to the gluster mount point test? Give it 30 seconds and see what top
 reports. You’ll probably have to significantly increase the count on the
 test so that it runs that long.

 - Nick



Gluster mount point:

*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s

Top reported this right away:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 1826 root  20   0  294m  33m 2540 S 27.2  0.4   0:04.31 glusterfs

 2126 root  20   0 1391m  31m 2336 S 22.6  0.4  11:25.48 glusterfsd

Then at about 20+ seconds top reports this:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 1826 root  20   0  294m  35m 2660 R 141.7  0.5   1:14.94 glusterfs

 2126 root  20   0 1392m  31m 2344 S 33.7  0.4  11:46.56 glusterfsd

*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s

 7750 root  20   0  102m  648  544 R 50.3  0.0   0:01.52 dd

 7719 root  20   0 000 D  1.0  0.0   0:01.50 flush-253:2

Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 2126 root  20   0 1396m  31m 2360 S 40.5  0.4  13:28.89 glusterfsd


Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 2141 root  20   0  550m 184m 2840 R 84.6  2.3  16:43.10 glusterfs

 2126 root  20   0 1407m  30m 2368 S 49.8  0.4  13:49.07 glusterfsd


Interesting - It looks like if I use a NFS mount point, I incur a cpu hit
on two processes instead of just the daemon. I also get much better
performance if I'm not running dd (fuse) on the GLUSTER host.


The storage servers are a bit older, but are both dual socket quad core

opterons with 4x 7200rpm drives.


A block size of 4k is quite small so that the context switch overhead
involved with fuse would be more perceivable.

Would it be possible to increase the block size for dd and test?



 I'm in the process of setting up a share from my desktop and I'll see if

I can bench between the two systems. Not sure if my ssd will impact the

tests, I've heard there isn't an advantage using ssd storage for glusterfs.


Do you have any pointers to this source of information? Typically glusterfs
performance for virtualization work loads is bound by the slowest element
in the entire stack. Usually storage/disks happen to be the bottleneck and
ssd storage does benefit glusterfs.

-Vijay


I had a couple technical calls with RH (re: RHSS), and when I asked if
SSD's could add any benefit I was told no. The context may have been in a
product comparison to other storage vendors, where they use SSD's for
read/write caching, versus having an all SSD storage domain (which I'm not
proposing, but which is effectively what my desktop would provide).

Increasing bs against NFS mount point (gluster backend):
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
16000+0 records in
16000+0 records out
2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s


GLUSTER host top reports:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 2141 root  20   0  550m 183m 2844 R 88.9  2.3  17:30.82 glusterfs

 2126 root  20   0 1414m  31m 2408 S 46.1  0.4  14:18.18 glusterfsd


So roughly the same performance as 4k writes remotely. I'm guessing if I
could randomize these writes we'd see a large difference.
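If it helps, a randomized-write run could look something like this sketch (fio
with standard options; the file path, size and runtime are example values, not
something tuned):

fio --name=randwrite --filename=/mnt/rep2-nfs/fiotest --rw=randwrite \
    --bs=4k --size=1G --ioengine=libaio --direct=1 --numjobs=1 \
    --runtime=60 --group_reporting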


Check this thread out,
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
it's
quite dated but I remember seeing similar figures.

In fact when I used FIO on a libgfapi mounted VM I got slightly faster
read/write speeds than on the physical box itself (I assume because of some
level of caching). On NFS it was close to half.. You'll probably get a
little more interesting results using FIO opposed to dd

( -Andrew)


Sorry Andrew, I meant to reply to your other message - it looks like CentOS
6.5 can't use libgfapi right now, I stumbled across this info in a couple
threads. Something about how the CentOS build has different flags set on
build for RHEV snapshot support then RHEL, so native gluster storage
domains are disabled because snapshot support is assumed and would break
otherwise. I'm assuming this is still valid as I cannot get a storage 

Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-25 Thread Itamar Heim

On 01/26/2014 02:37 AM, Steve Dainard wrote:

Thanks for the responses everyone, really appreciate it.

I've condensed the other questions into this reply.


Steve,
What is the CPU load of the GlusterFS host when comparing the raw
brick test to the gluster mount point test? Give it 30 seconds and
see what top reports. You’ll probably have to significantly increase
the count on the test so that it runs that long.

- Nick



Gluster mount point:

*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s

Top reported this right away:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  1826 root  20   0  294m  33m 2540 S 27.2  0.4   0:04.31 glusterfs
  2126 root  20   0 1391m  31m 2336 S 22.6  0.4  11:25.48 glusterfsd

Then at about 20+ seconds top reports this:
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  1826 root  20   0  294m  35m 2660 R 141.7  0.5   1:14.94 glusterfs
  2126 root  20   0 1392m  31m 2344 S 33.7  0.4  11:46.56 glusterfsd

*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s

  7750 root  20   0  102m  648  544 R 50.3  0.0   0:01.52 dd
  7719 root  20   0 000 D  1.0  0.0   0:01.50 flush-253:2

Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2126 root  20   0 1396m  31m 2360 S 40.5  0.4  13:28.89 glusterfsd


Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2141 root  20   0  550m 184m 2840 R 84.6  2.3  16:43.10 glusterfs
  2126 root  20   0 1407m  30m 2368 S 49.8  0.4  13:49.07 glusterfsd

Interesting - It looks like if I use a NFS mount point, I incur a cpu
hit on two processes instead of just the daemon. I also get much better
performance if I'm not running dd (fuse) on the GLUSTER host.


The storage servers are a bit older, but are both dual socket
quad core

opterons with 4x 7200rpm drives.


A block size of 4k is quite small so that the context switch
overhead involved with fuse would be more perceivable.

Would it be possible to increase the block size for dd and test?



I'm in the process of setting up a share from my desktop and
I'll see if

I can bench between the two systems. Not sure if my ssd will
impact the

tests, I've heard there isn't an advantage using ssd storage for
glusterfs.


Do you have any pointers to this source of information? Typically
glusterfs performance for virtualization work loads is bound by the
slowest element in the entire stack. Usually storage/disks happen to
be the bottleneck and ssd storage does benefit glusterfs.

-Vijay


I had a couple technical calls with RH (re: RHSS), and when I asked if
SSD's could add any benefit I was told no. The context may have been in
a product comparison to other storage vendors, where they use SSD's for
read/write caching, versus having an all SSD storage domain (which I'm
not proposing, but which is effectively what my desktop would provide).

Increasing bs against NFS mount point (gluster backend):
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
16000+0 records in
16000+0 records out
2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s


GLUSTER host top reports:
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2141 root  20   0  550m 183m 2844 R 88.9  2.3  17:30.82 glusterfs
  2126 root  20   0 1414m  31m 2408 S 46.1  0.4  14:18.18 glusterfsd

So roughly the same performance as 4k writes remotely. I'm guessing if I
could randomize these writes we'd see a large difference.


Check this thread out,

http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ 
it's
quite dated but I remember seeing similar figures.

In fact when I used FIO on a libgfapi mounted VM I got slightly
faster read/write speeds than on the physical box itself (I assume
because of some level of caching). On NFS it was close to half..
You'll probably get a little more interesting results using FIO
opposed to dd

( -Andrew)


Sorry Andrew, I meant to reply to your other message - it looks like
CentOS 6.5 can't use libgfapi right now, I stumbled across this info in
a couple 

Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-24 Thread Steve Dainard
Not sure what a good method to bench this would be, but:

An NFS mount point on virt host:
[root@ovirt001 iso-store]# dd if=/dev/zero of=test1 bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.95399 s, 104 MB/s

Raw brick performance on gluster server (yes, I know I shouldn't write
directly to the brick):
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.06743 s, 134 MB/s

Gluster mount point on gluster server:
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 19.5766 s, 20.9 MB/s

The storage servers are a bit older, but are both dual socket quad core
opterons with 4x 7200rpm drives.

I'm in the process of setting up a share from my desktop and I'll see if I
can bench between the two systems. Not sure if my ssd will impact the
tests, I've heard there isn't an advantage using ssd storage for glusterfs.

Does anyone have a hardware reference design for glusterfs as a backend for
virt? Or is there a benchmark utility?

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  |  **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |  Twitter
https://twitter.com/miovision  |  Facebook
https://www.facebook.com/miovision*
--
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Thu, Jan 23, 2014 at 7:18 PM, Andrew Cathrow acath...@redhat.com wrote:

 Are we sure that the issue is the guest I/O - what's the raw performance
 on the host accessing the gluster storage?

 --

 *From: *Steve Dainard sdain...@miovision.com
 *To: *Itamar Heim ih...@redhat.com
 *Cc: *Ronen Hod r...@redhat.com, users users@ovirt.org, Sanjay
 Rao s...@redhat.com
 *Sent: *Thursday, January 23, 2014 4:56:58 PM
 *Subject: *Re: [Users] Extremely poor disk access speeds in Windows guest


 I have two options, virtio and virtio-scsi.

 I was using virtio, and have also attempted virtio-scsi on another Windows
 guest with the same results.

 Using the newest drivers, virtio-win-0.1-74.iso.

 *Steve Dainard *
 IT Infrastructure Manager
 Miovision http://miovision.com/ | *Rethink Traffic*
 519-513-2407 ex.250
 877-646-8476 (toll-free)

 *Blog http://miovision.com/blog  |  **LinkedIn
 https://www.linkedin.com/company/miovision-technologies  |  Twitter
 https://twitter.com/miovision  |  Facebook
 https://www.facebook.com/miovision*
 --
  Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener,
 ON, Canada | N2C 1L3
 This e-mail may contain information that is privileged or confidential. If
 you are not the intended recipient, please delete the e-mail and any
 attachments and notify us immediately.


 On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim ih...@redhat.com wrote:

 On 01/23/2014 07:46 PM, Steve Dainard wrote:

 Backing Storage: Gluster Replica
 Storage Domain: NFS
 Ovirt Hosts: CentOS 6.5
 Ovirt version: 3.3.2
 Network: GigE
 # of VM's: 3 - two Linux guests are idle, one Windows guest is
 installing updates.

 I've installed a Windows 2008 R2 guest with virtio disk, and all the
 drivers from the latest virtio iso. I've also installed the spice agent
 drivers.

 Guest disk access is horribly slow, Resource monitor during Windows
 updates shows Disk peaking at 1MB/sec (scale never increases) and Disk
 Queue Length Peaking at 5 and looks to be sitting at that level 99% of
 the time. 113 updates in Windows has been running solidly for about 2.5
 hours and is at 89/113 updates complete.


 virtio-block or virtio-scsi?
 which windows guest driver version for that?


 I can't say my Linux guests are blisteringly fast, but updating a guest
 from RHEL 6.3 fresh install to 6.5 took about 25 minutes.

 If anyone has any ideas, please let me know - I haven't found any tuning
 docs for Windows guests that could explain this issue.

 Thanks,


 *Steve Dainard *



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-24 Thread Nicholas Kesick
Steve,

What is the CPU load of the GlusterFS host when comparing the raw brick test to 
the gluster mount point test? Give it 30 seconds and see what top reports. 
You’ll probably have to significantly increase the count on the test so that it 
runs that long.






- Nick





From: Sanjay Rao
Sent: Friday, January 24, 2014 3:35 PM
To: Steve Dainard
Cc: Bob Sibley, oVirt Mailing List, Ronen Hod






Adding Bob Sibley to this thread. 










From: Steve Dainard sdain...@miovision.com
To: Andrew Cathrow acath...@redhat.com
Cc: Ronen Hod r...@redhat.com, users users@ovirt.org, Sanjay Rao 
s...@redhat.com, Itamar Heim ih...@redhat.com
Sent: Friday, January 24, 2014 3:01:25 PM
Subject: Re: [Users] Extremely poor disk access speeds in Windows guest




Not sure what a good method to bench this would be, but:



An NFS mount point on virt host:


[root@ovirt001 iso-store]# dd if=/dev/zero of=test1 bs=4k count=100000

100000+0 records in

100000+0 records out

409600000 bytes (410 MB) copied, 3.95399 s, 104 MB/s




Raw brick performance on gluster server (yes, I know I shouldn't write directly 
to the brick):


[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000

100000+0 records in

100000+0 records out

409600000 bytes (410 MB) copied, 3.06743 s, 134 MB/s




Gluster mount point on gluster server:


[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000

100000+0 records in

100000+0 records out

409600000 bytes (410 MB) copied, 19.5766 s, 20.9 MB/s




The storage servers are a bit older, but are both dual socket quad core 
opterons with 4x 7200rpm drives. 




I'm in the process of setting up a share from my desktop and I'll see if I can 
bench between the two systems. Not sure if my ssd will impact the tests, I've 
heard there isn't an advantage using ssd storage for glusterfs.




Does anyone have a hardware reference design for glusterfs as a backend for 
virt? Or is there a benchmark utility?




Steve Dainard 
IT Infrastructure Manager
Miovision | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)

Blog  |  LinkedIn  |  Twitter  |  Facebook 


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, 
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If you 
are not the intended recipient, please delete the e-mail and any attachments 
and notify us immediately.





On Thu, Jan 23, 2014 at 7:18 PM, Andrew Cathrow acath...@redhat.com wrote:



Are we sure that the issue is the guest I/O - what's the raw performance on the 
host accessing the gluster storage?






From: Steve Dainard sdain...@miovision.com
To: Itamar Heim ih...@redhat.com
Cc: Ronen Hod r...@redhat.com, users users@ovirt.org, Sanjay Rao 
s...@redhat.com
Sent: Thursday, January 23, 2014 4:56:58 PM
Subject: Re: [Users] Extremely poor disk access speeds in Windows guest






I have two options, virtio and virtio-scsi.



I was using virtio, and have also attempted virtio-scsi on another Windows 
guest with the same results.




Using the newest drivers, virtio-win-0.1-74.iso.




Steve Dainard 
IT Infrastructure Manager
Miovision | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)

Blog  |  LinkedIn  |  Twitter  |  Facebook 


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, 
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If you 
are not the intended recipient, please delete the e-mail and any attachments 
and notify us immediately.





On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim ih...@redhat.com wrote:


On 01/23/2014 07:46 PM, Steve Dainard wrote:

Backing Storage: Gluster Replica
Storage Domain: NFS
Ovirt Hosts: CentOS 6.5
Ovirt version: 3.3.2
Network: GigE
# of VM's: 3 - two Linux guests are idle, one Windows guest is
installing updates.

I've installed a Windows 2008 R2 guest with virtio disk, and all the
drivers from the latest virtio iso. I've also installed the spice agent
drivers.

Guest disk access is horribly slow, Resource monitor during Windows
updates shows Disk peaking at 1MB/sec (scale never increases) and Disk
Queue Length Peaking at 5 and looks to be sitting at that level 99% of
the time. 113 updates in Windows has been running solidly for about 2.5
hours and is at 89/113 updates complete.


virtio-block or virtio-scsi?
which windows guest driver version for that?




I can't say my Linux guests are blisteringly fast, but updating a guest
from RHEL 6.3 fresh install to 6.5 took about 25 minutes.

If anyone has any ideas, please let me know - I haven't found any tuning
docs for Windows guests that could explain this issue.

Thanks,



*Steve Dainard *



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-24 Thread Vijay Bellur

On 01/25/2014 01:31 AM, Steve Dainard wrote:

Not sure what a good method to bench this would be, but:

An NFS mount point on virt host:
[root@ovirt001 iso-store]# dd if=/dev/zero of=test1 bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.95399 s, 104 MB/s

Raw brick performance on gluster server (yes, I know I shouldn't write
directly to the brick):
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.06743 s, 134 MB/s

Gluster mount point on gluster server:
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 19.5766 s, 20.9 MB/s

The storage servers are a bit older, but are both dual socket quad core
opterons with 4x 7200rpm drives.


A block size of 4k is quite small so that the context switch overhead 
involved with fuse would be more perceivable.


Would it be possible to increase the block size for dd and test?



I'm in the process of setting up a share from my desktop and I'll see if
I can bench between the two systems. Not sure if my ssd will impact the
tests, I've heard there isn't an advantage using ssd storage for glusterfs.


Do you have any pointers to this source of information? Typically 
glusterfs performance for virtualization work loads is bound by the 
slowest element in the entire stack. Usually storage/disks happen to be 
the bottleneck and ssd storage does benefit glusterfs.


-Vijay




Does anyone have a hardware reference design for glusterfs as a backend
for virt? Or is there a benchmark utility?

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  | Twitter
https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener,
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential.
If you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Thu, Jan 23, 2014 at 7:18 PM, Andrew Cathrow acath...@redhat.com wrote:

Are we sure that the issue is the guest I/O - what's the raw
performance on the host accessing the gluster storage?



*From: *Steve Dainard sdain...@miovision.com
*To: *Itamar Heim ih...@redhat.com
*Cc: *Ronen Hod r...@redhat.com,
users users@ovirt.org, Sanjay Rao
s...@redhat.com
*Sent: *Thursday, January 23, 2014 4:56:58 PM
*Subject: *Re: [Users] Extremely poor disk access speeds in
Windows guest


I have two options, virtio and virtio-scsi.

I was using virtio, and have also attempted virtio-scsi on
another Windows guest with the same results.

Using the newest drivers, virtio-win-0.1-74.iso.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |
Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us immediately.


On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim ih...@redhat.com wrote:

On 01/23/2014 07:46 PM, Steve Dainard wrote:

Backing Storage: Gluster Replica
Storage Domain: NFS
Ovirt Hosts: CentOS 6.5
Ovirt version: 3.3.2
Network: GigE
# of VM's: 3 - two Linux guests are idle, one Windows
guest is
installing updates.

I've installed a Windows 2008 R2 guest with virtio disk,
and all the
drivers from the latest virtio iso. I've also installed
the spice agent
drivers.

Guest disk access is horribly slow, Resource

[Users] Extremely poor disk access speeds in Windows guest

2014-01-23 Thread Steve Dainard
Backing Storage: Gluster Replica
Storage Domain: NFS
Ovirt Hosts: CentOS 6.5
Ovirt version: 3.3.2
Network: GigE
# of VM's: 3 - two Linux guests are idle, one Windows guest is installing
updates.

I've installed a Windows 2008 R2 guest with virtio disk, and all the
drivers from the latest virtio iso. I've also installed the spice agent
drivers.

Guest disk access is horribly slow. Resource Monitor during Windows updates
shows Disk peaking at 1MB/sec (the scale never increases) and Disk Queue Length
peaking at 5, where it looks to be sitting 99% of the time. A batch of 113
Windows updates has been running solidly for about 2.5 hours and is at
89/113 updates complete.

I can't say my Linux guests are blisteringly fast, but updating a guest
from RHEL 6.3 fresh install to 6.5 took about 25 minutes.

If anyone has any ideas, please let me know - I haven't found any tuning
docs for Windows guests that could explain this issue.

Thanks,


*Steve Dainard *
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-23 Thread Itamar Heim

On 01/23/2014 07:46 PM, Steve Dainard wrote:

Backing Storage: Gluster Replica
Storage Domain: NFS
Ovirt Hosts: CentOS 6.5
Ovirt version: 3.3.2
Network: GigE
# of VM's: 3 - two Linux guests are idle, one Windows guest is
installing updates.

I've installed a Windows 2008 R2 guest with virtio disk, and all the
drivers from the latest virtio iso. I've also installed the spice agent
drivers.

Guest disk access is horribly slow, Resource monitor during Windows
updates shows Disk peaking at 1MB/sec (scale never increases) and Disk
Queue Length Peaking at 5 and looks to be sitting at that level 99% of
the time. 113 updates in Windows has been running solidly for about 2.5
hours and is at 89/113 updates complete.


virtio-block or virtio-scsi?
which windows guest driver version for that?



I can't say my Linux guests are blisteringly fast, but updating a guest
from RHEL 6.3 fresh install to 6.5 took about 25 minutes.

If anyone has any ideas, please let me know - I haven't found any tuning
docs for Windows guests that could explain this issue.

Thanks,


*Steve Dainard *



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-23 Thread Steve Dainard
I have two options, virtio and virtio-scsi.

I was using virtio, and have also attempted virtio-scsi on another Windows
guest with the same results.

Using the newest drivers, virtio-win-0.1-74.iso.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  |  **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |  Twitter
https://twitter.com/miovision  |  Facebook
https://www.facebook.com/miovision*
--
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim ih...@redhat.com wrote:

 On 01/23/2014 07:46 PM, Steve Dainard wrote:

 Backing Storage: Gluster Replica
 Storage Domain: NFS
 Ovirt Hosts: CentOS 6.5
 Ovirt version: 3.3.2
 Network: GigE
 # of VM's: 3 - two Linux guests are idle, one Windows guest is
 installing updates.

 I've installed a Windows 2008 R2 guest with virtio disk, and all the
 drivers from the latest virtio iso. I've also installed the spice agent
 drivers.

 Guest disk access is horribly slow, Resource monitor during Windows
 updates shows Disk peaking at 1MB/sec (scale never increases) and Disk
 Queue Length Peaking at 5 and looks to be sitting at that level 99% of
 the time. 113 updates in Windows has been running solidly for about 2.5
 hours and is at 89/113 updates complete.


 virtio-block or virtio-scsi?
 which windows guest driver version for that?


 I can't say my Linux guests are blisteringly fast, but updating a guest
 from RHEL 6.3 fresh install to 6.5 took about 25 minutes.

 If anyone has any ideas, please let me know - I haven't found any tuning
 docs for Windows guests that could explain this issue.

 Thanks,


 *Steve Dainard *



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-23 Thread Andrew Cathrow
Are we sure that the issue is the guest I/O - what's the raw performance on the 
host accessing the gluster storage? 

- Original Message -

 From: Steve Dainard sdain...@miovision.com
 To: Itamar Heim ih...@redhat.com
 Cc: Ronen Hod r...@redhat.com, users users@ovirt.org, Sanjay
 Rao s...@redhat.com
 Sent: Thursday, January 23, 2014 4:56:58 PM
 Subject: Re: [Users] Extremely poor disk access speeds in Windows
 guest

 I have two options, virtio and virtio-scsi.

 I was using virtio, and have also attempted virtio-scsi on another
 Windows guest with the same results.

 Using the newest drivers, virtio-win-0.1-74.iso.

 Steve Dainard
 IT Infrastructure Manager
 Miovision | Rethink Traffic
 519-513-2407 ex.250
 877-646-8476 (toll-free)

 Blog | LinkedIn | Twitter | Facebook

 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
 Kitchener, ON, Canada | N2C 1L3
 This e-mail may contain information that is privileged or
 confidential. If you are not the intended recipient, please delete
 the e-mail and any attachments and notify us immediately.

 On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim  ih...@redhat.com 
 wrote:

  On 01/23/2014 07:46 PM, Steve Dainard wrote:
 

   Backing Storage: Gluster Replica
  
 
   Storage Domain: NFS
  
 
   Ovirt Hosts: CentOS 6.5
  
 
   Ovirt version: 3.3.2
  
 
   Network: GigE
  
 
   # of VM's: 3 - two Linux guests are idle, one Windows guest is
  
 
   installing updates.
  
 

   I've installed a Windows 2008 R2 guest with virtio disk, and all
   the
  
 
   drivers from the latest virtio iso. I've also installed the spice
   agent
  
 
   drivers.
  
 

   Guest disk access is horribly slow, Resource monitor during
   Windows
  
 
   updates shows Disk peaking at 1MB/sec (scale never increases) and
   Disk
  
 
   Queue Length Peaking at 5 and looks to be sitting at that level
   99%
   of
  
 
   the time. 113 updates in Windows has been running solidly for
   about
   2.5
  
 
   hours and is at 89/113 updates complete.
  
 

  virtio-block or virtio-scsi?
 
  which windows guest driver version for that?
 

   I can't say my Linux guests are blisteringly fast, but updating a
   guest
  
 
   from RHEL 6.3 fresh install to 6.5 took about 25 minutes.
  
 

   If anyone has any ideas, please let me know - I haven't found any
   tuning
  
 
   docs for Windows guests that could explain this issue.
  
 

   Thanks,
  
 

   *Steve Dainard *
  
 

  ___
  
 
   Users mailing list
  
 
   Users@ovirt.org
  
 
   http://lists.ovirt.org/mailman/listinfo/users
  
 

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users