Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-29 Thread Ronen Hod

Adding the virtio-scsi developers.
Anyhow, virtio-scsi is newer and less established than viostor (the block 
device), so you might want to try it out.
A disclaimer: there are timing and patch gaps between RHEL and other versions.

Ronen.
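
For anyone who wants to compare the two on a test guest, here is a minimal sketch of how to see which bus a disk currently uses at the libvirt level. The VM name is hypothetical, and on an oVirt-managed host the disk interface should really be changed per disk in the web admin UI, since VDSM regenerates the domain XML:

# Sketch only; "win2008-vm" is a hypothetical VM name.
# Show the disk targets and any SCSI controller the guest has:
virsh dumpxml win2008-vm | grep -E "<target dev|controller type='scsi'"
# viostor (virtio-blk) disks look like:  <target dev='vda' bus='virtio'/>
# virtio-scsi disks look like:           <target dev='sda' bus='scsi'/>
#   together with:                       <controller type='scsi' model='virtio-scsi'/>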

On 01/28/2014 10:39 PM, Steve Dainard wrote:

I've had a bit of luck here.

Overall IO performance is very poor during Windows updates, but a contributing factor seems
to be the SCSI Controller device in the guest. On this last install I didn't
install a driver for that device, and my performance is much better. Updates still chug
along quite slowly, but I'm seeing better than the ~100 KB/s write speeds I was seeing
previously.

Does anyone know what this device is for? I have the Red Hat VirtIO SCSI 
Controller listed under storage controllers.
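
A hedged way to confirm what that device is from the host side (the VM name below is hypothetical): dump the guest's libvirt XML and check whether a virtio-scsi controller is defined and whether any disk actually sits on it. An exposed but unused controller would explain why leaving its Windows driver uninstalled costs nothing:

# Sketch; "win-guest" is a hypothetical VM name.
virsh dumpxml win-guest | grep -E "controller type='scsi'|<target dev"
# If all disks show bus='virtio' (viostor) and the scsi controller has no
# disks attached, the "SCSI Controller" in Device Manager is most likely an
# idle virtio-scsi controller added to the VM definition (an assumption
# worth verifying against your cluster version).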

Steve Dainard
IT Infrastructure Manager
Miovision | http://miovision.com/ | Rethink Traffic
519-513-2407 ex. 250
877-646-8476 (toll-free)



On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim <ih...@redhat.com> wrote:

On 01/26/2014 02:37 AM, Steve Dainard wrote:

Thanks for the responses everyone, really appreciate it.

I've condensed the other questions into this reply.


Steve,
What is the CPU load of the GlusterFS host when comparing the raw
brick test to the gluster mount point test? Give it 30 seconds and
see what top reports. You'll probably have to significantly increase
the count on the test so that it runs that long.

- Nick



Gluster mount point:

*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s


Top reported this right away:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  1826 root  20   0  294m  33m 2540 S 27.2  0.4 0:04.31 glusterfs
  2126 root  20   0 1391m  31m 2336 S 22.6  0.4  11:25.48 glusterfsd

Then at about 20+ seconds top reports this:
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  1826 root  20   0  294m  35m 2660 R 141.7  0.5 1:14.94 glusterfs
  2126 root  20   0 1392m  31m 2344 S 33.7  0.4  11:46.56 glusterfsd

*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s


  7750 root  20   0  102m  648  544 R 50.3  0.0 0:01.52 dd
  7719 root  20   0     0    0    0 D  1.0  0.0 0:01.50 flush-253:2

Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s


   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  2126 root  20   0 1396m  31m 2360 S 40.5  0.4  13:28.89 glusterfsd


Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s


PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  2141 root  20   0  550m 184m 2840 R 84.6  2.3  16:43.10 glusterfs
  2126 root  20   0 1407m  30m 2368 S 49.8  0.4  13:49.07 glusterfsd
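
One caveat with the numbers above: the raw-brick run can be served largely from the page cache, while the FUSE and NFS mounts pay per-operation network latency. A hedged sketch to make the comparison more like-for-like (the brick path is hypothetical; the mount paths match the tests above):

# Direct I/O takes the page cache out of the picture; a smaller count keeps
# run times sane (~400 MB per run).
dd if=/dev/zero of=/path/to/brick/test-direct bs=4k count=100000 oflag=direct   # brick path is hypothetical
dd if=/dev/zero of=/mnt/rep2/test-direct      bs=4k count=100000 oflag=direct
dd if=/dev/zero of=/mnt/rep2-nfs/test-direct  bs=4k count=100000 oflag=direct

# A larger block size shows whether the gap is per-request latency rather
# than raw bandwidth:
dd if=/dev/zero of=/mnt/rep2/test-1m bs=1M count=2000 oflag=direct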

   

Re: [Users] Nodes lose storage at random

2014-02-24 Thread Ronen Hod

On 02/24/2014 11:48 AM, Nir Soffer wrote:

- Original Message -

From: Johan Kooijman m...@johankooijman.com
To: Nir Soffer nsof...@redhat.com
Cc: users users@ovirt.org
Sent: Monday, February 24, 2014 2:45:59 AM
Subject: Re: [Users] Nodes lose storage at random

Interestingly enough - same thing happened today, around the same time.
Logs from this host are attached.

Around 1:10 AM stuff starts to go wrong again. Same pattern - we reboot the
node and the node is fine again.

So we made some progress: we know that it is not a problem with an old kernel.

In messages we see the same picture:

1. sanlock fails to renew the lease
2. after 80 seconds, sanlock kills vdsm
3. sanlock and vdsm cannot access the storage
4. the kernel complains about NFS server timeouts
(which explains why sanlock failed to renew the lease)
5. after a reboot, NFS is accessible again
6. after a few days, go to step 1

This looks like a kernel NFS issue.

It could also be a KVM issue (running BSD on one of the VMs?).

It could also be some incompatibility with the NFS server - maybe you are using
esoteric configuration options?
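
A hedged sketch of what to collect from the affected node before the next occurrence (log paths are the usual EL6 defaults, so treat them as assumptions):

# Effective NFS mount options for the storage domain (soft mounts, short
# timeo/retrans, or UDP would be worth ruling out):
nfsstat -m

# Correlate sanlock renewal failures with kernel NFS timeout messages:
grep -iE 'renewal error|delta_renew' /var/log/sanlock.log | tail -20
grep 'nfs: server' /var/log/messages | tail -20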

CCing Ronen, in case this is related to kvm.


It does not seem to be related to KVM.
Adding Ric Wheeler.

Ronen.



thread: http://lists.ovirt.org/pipermail/users/2014-February/021507.html

Nir




Re: [Users] Stack trace caused by FreeBSD client

2014-02-24 Thread Ronen Hod

On 02/23/2014 10:13 PM, Nir Soffer wrote:

- Original Message -

From: Johan Kooijman m...@johankooijman.com
To: users users@ovirt.org
Sent: Sunday, February 23, 2014 8:22:41 PM
Subject: [Users] Stack trace caused by FreeBSD client

Hi all,

Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with
virtio drivers, both disk and net.

The VM works fine, but when I connect over SSH to the VM, I see this stack
trace in messages on the node:

This warning may be interesting to qemu/kvm/kernel developers, ccing Ronen.


Probably nobody has bothered to productize FreeBSD support.
You can try using e1000 instead of virtio.

Ronen.
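
If you want to test the e1000 suggestion outside of the oVirt UI, here is a minimal sketch at the libvirt level. The VM and interface names are hypothetical; on an oVirt-managed guest the vNIC type should be changed in the web admin UI instead, since the engine owns the domain XML:

# Sketch; "freebsd10" is a hypothetical VM name.
virsh dumpxml freebsd10 | grep -A3 '<interface'
#   change:  <model type='virtio'/>
#   to:      <model type='e1000'/>
# via "virsh edit freebsd10" on a plain libvirt host.

# A host-side workaround sometimes reported for skb_warn_bad_offload warnings
# (untested here; the interface name is hypothetical) is disabling
# segmentation offloads on the NIC carrying the bridge:
ethtool -K eth0 gso off tso off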




Feb 23 19:19:42 hv3 kernel: [ cut here ]
Feb 23 19:19:42 hv3 kernel: WARNING: at net/core/dev.c:1907
skb_warn_bad_offload+0xc2/0xf0() (Tainted: G W --- )
Feb 23 19:19:42 hv3 kernel: Hardware name: X9DR3-F
Feb 23 19:19:42 hv3 kernel: igb: caps=(0x12114bb3, 0x0) len=5686
data_len=5620 ip_summed=0
Feb 23 19:19:42 hv3 kernel: Modules linked in: ebt_arp nfs lockd fscache
auth_rpcgss nfs_acl sunrpc bonding 8021q garp ebtable_nat ebtables bridge
stp llc xt_physdev ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_multiport
iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6
xt_state nf_conntrack ip6table_filter ip6_tables ipv6 dm_round_robin
dm_multipath vhost_net macvtap macvlan tun kvm_intel kvm iTCO_wdt
iTCO_vendor_support sg ixgbe mdio sb_edac edac_core lpc_ich mfd_core
i2c_i801 ioatdma igb dca i2c_algo_bit i2c_core ptp pps_core ext4 jbd2
mbcache sd_mod crc_t10dif 3w_sas ahci isci libsas scsi_transport_sas
dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
Feb 23 19:19:42 hv3 kernel: Pid: 15280, comm: vhost-15276 Tainted: G W
--- 2.6.32-431.5.1.el6.x86_64 #1
Feb 23 19:19:42 hv3 kernel: Call Trace:
Feb 23 19:19:42 hv3 kernel: IRQ [81071e27] ?
warn_slowpath_common+0x87/0xc0
Feb 23 19:19:42 hv3 kernel: [81071f16] ?
warn_slowpath_fmt+0x46/0x50
Feb 23 19:19:42 hv3 kernel: [a016c862] ? igb_get_drvinfo+0x82/0xe0
[igb]
Feb 23 19:19:42 hv3 kernel: [8145b1d2] ?
skb_warn_bad_offload+0xc2/0xf0
Feb 23 19:19:42 hv3 kernel: [814602c1] ?
__skb_gso_segment+0x71/0xc0
Feb 23 19:19:42 hv3 kernel: [81460323] ? skb_gso_segment+0x13/0x20
Feb 23 19:19:42 hv3 kernel: [814603cb] ?
dev_hard_start_xmit+0x9b/0x480
Feb 23 19:19:42 hv3 kernel: [8147bf5a] ?
sch_direct_xmit+0x15a/0x1c0
Feb 23 19:19:42 hv3 kernel: [81460a58] ? dev_queue_xmit+0x228/0x320
Feb 23 19:19:42 hv3 kernel: [a035a898] ?
br_dev_queue_push_xmit+0x88/0xc0 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035a928] ?
br_forward_finish+0x58/0x60 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035a9da] ? __br_forward+0xaa/0xd0
[bridge]
Feb 23 19:19:42 hv3 kernel: [814897b6] ? nf_hook_slow+0x76/0x120
Feb 23 19:19:42 hv3 kernel: [a035aa5d] ? br_forward+0x5d/0x70
[bridge]
Feb 23 19:19:42 hv3 kernel: [a035ba6b] ?
br_handle_frame_finish+0x17b/0x2a0 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035bd3a] ?
br_handle_frame+0x1aa/0x250 [bridge]
Feb 23 19:19:42 hv3 kernel: [8145b7c9] ?
__netif_receive_skb+0x529/0x750
Feb 23 19:19:42 hv3 kernel: [8145ba8a] ? process_backlog+0x9a/0x100
Feb 23 19:19:42 hv3 kernel: [81460d43] ? net_rx_action+0x103/0x2f0
Feb 23 19:19:42 hv3 kernel: [8107a8e1] ? __do_softirq+0xc1/0x1e0
Feb 23 19:19:42 hv3 kernel: [8100c30c] ? call_softirq+0x1c/0x30
Feb 23 19:19:42 hv3 kernel: EOI [8100fa75] ? do_softirq+0x65/0xa0
Feb 23 19:19:42 hv3 kernel: [814611c8] ? netif_rx_ni+0x28/0x30
Feb 23 19:19:42 hv3 kernel: [a01a0749] ? tun_sendmsg+0x229/0x4ec
[tun]
Feb 23 19:19:42 hv3 kernel: [a027bcf5] ? handle_tx+0x275/0x5e0
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a027c095] ? handle_tx_kick+0x15/0x20
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a027955c] ? vhost_worker+0xbc/0x140
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a02794a0] ? vhost_worker+0x0/0x140
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [8109aee6] ? kthread+0x96/0xa0
Feb 23 19:19:42 hv3 kernel: [8100c20a] ? child_rip+0xa/0x20
Feb 23 19:19:42 hv3 kernel: [8109ae50] ? kthread+0x0/0xa0
Feb 23 19:19:42 hv3 kernel: [8100c200] ? child_rip+0x0/0x20
Feb 23 19:19:42 hv3 kernel: ---[ end trace e93142595d6ecfc7 ]---

This is 100% reproducible, every time. The login itself works just fine. Some
more info:

[root@hv3 ~]# uname -a
Linux hv3.ovirt.gs.cloud.lan 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12
00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@hv3 ~]# rpm -qa | grep vdsm
vdsm-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch

--
Met vriendelijke groeten / With kind regards,
Johan Kooijman

E m...@johankooijman.com


Re: [Users] BSOD's on Server 2008 Guests

2013-01-16 Thread Ronen Hod

On 01/16/2013 02:23 PM, Itamar Heim wrote:

On 01/16/2013 01:18 PM, Neil wrote:

Hi guys,

I have 3 Server 2008 guests running on my oVirt 3.1 system. If there
is an incorrect shutdown (power failure) of the entire system and
guests, or even sometimes a normal reboot of the guests, the 2008
servers all start with blue screens, and I have to reboot them multiple
times before they eventually boot into Windows as if nothing was ever
wrong. The Linux guests all behave perfectly on the system, so I highly
doubt there are any hardware issues.

Does this problem sound familiar to anyone? I don't want to go ahead
and run all the latest updates and possibly risk bigger issues, unless
there is a good reason to.

These are the details of my system.

CentOS 6.3 64-bit on nodes and engine, using the dreyou repo.

ovirt-engine-3.1.0-3.8.el6.noarch
vdsm-4.10.0-0.44.14.el6.x86_64
qemu-kvm-0.12.1.2-2.295.el6_3.2.x86_64
libvirt-0.9.10-21.el6_3.5.x86_64

2x Dell R720 nodes with Xeon E5-2620 CPUs running the guests
An FC SAN for storage
An HP MicroServer for the engine

Please shout if any other details will help.

Thanks.

Regards.

Neil Wilson.



sounds like a qemu-kvm issue.
ronen - rings any bells?



Yes, it rings a bell, but in order to handle it properly we would rather get a 
proper bug report in our Bugzilla, ask questions, receive dumps ...
OTOH, since I assume that you are not a Red Hat customer, we will not handle it.

Regards, Ronen.



Re: [Users] windows 8 guest support

2013-01-16 Thread Ronen Hod

On 01/16/2013 05:38 PM, Itamar Heim wrote:

On 01/16/2013 05:31 PM, Jithin Raju wrote:

Hi Itamar,

I tried installing Windows 8 Enterprise 32-bit as an oVirt guest, but the
bootloader failed to load with error 0x5D.
The host machine supports VT and VT-d.
The same thing happens while installing Windows 8 on a CentOS 6.2 host.
Windows 8 is not listed as a supported guest option in oVirt 3.1/RHEL 6.3.

Some Google results suggest this is because Windows 8 does not support IDE.
There is no option for a SATA controller in any of the above
virtualization solutions.
If it is happening due to something else, could you please let me know
how to make it work?


First, please open a bug about not allowing an IDE device for Windows 8 (2012?) OSs
(since 3.2 added them to the list of OSs).
Second, Ronen - are there community-available virtio-block drivers for Windows
8 that Jithin can try to install with?
(Jithin, you would need to attach a virtual floppy (vfd) to the guest during
guest OS install to provide it with the driver.)
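
A hedged sketch of the upload step (the file name and ISO domain name are hypothetical); the floppy itself is then attached from the "Run Once" dialog before starting the Windows installer:

# Upload the virtio-win driver floppy image to the oVirt ISO domain:
engine-iso-uploader upload -i ISO_DOMAIN virtio-win_x86.vfd
# Then: Run Once -> Boot Options -> Attach Floppy -> select the .vfd,
# and point the Windows installer at it when it asks for a storage driver.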



Win8 support is still premature, and we are working on it. We will publish the
drivers as usual once they are ready.

Ronen.







Thanks,
Jithin


On Wed, Jan 16, 2013 at 5:46 PM, Itamar Heim <ih...@redhat.com> wrote:

On 01/16/2013 02:14 PM, Jithin Raju wrote:

Hi All,

Is a Windows 8 guest supported in oVirt?
Are there any plans, since it has some issues with QEMU SATA support?


can you please elaborate on the issue?






