Hi,

we had the same issue (oVirt 3.0 and oVirt 3.2.1) when using balance-rr for VM
networking: Windows VMs could not access the network.

After changing the bond mode to 4 (LACP), Windows VM networking worked.

Cheers,
Sven.

Sven Knohsalla | System Administration | Netbiscuits

Office +49 631 68036 433 | Fax +49 631 68036 111 | E-Mail
[email protected] | Skype: netbiscuits.admin
Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY

Von: [email protected] [mailto:[email protected]] Im Auftrag von 
Karli Sjöberg
Gesendet: Freitag, 20. September 2013 13:11
An: SULLIVAN, Chris (WGK)
Cc: [email protected]
Betreff: Re: [Users] oVirt 3.3/F19 - Windows guest unable to access network

On Fri 2013-09-20 at 10:50 +0000, SULLIVAN, Chris (WGK) wrote:



Hi,



Just following up on this issue. It turns out the network problems were caused
by the bond0 interface.

The initial configuration was two NICs teamed as bond0, which was then bridged
to the ovirtmgmt interface. With this configuration, RHEL guests could access
the network normally but Windows guests (XP, 7, 2008 R2) could not. After
deactivating the bond0 interface and bridging one of the NICs directly to the
ovirtmgmt interface, both RHEL and Windows guests have fully functioning
networks.



I am not sure exactly why the bond0 interface was not working as intended. The
initial configuration had the mode set to balance-rr; is this known to cause
problems? My intention was to use balance-alb; however, the bonding driver in
F19 seems to completely ignore any BONDING_OPTS settings in the ifcfg-bond0
file. Attempts to change the bonding mode directly via
/sys/class/net/bond0/bonding/mode repeatedly failed due to 'the bond having
slaves', even after the bond had been taken down via ifconfig. I was not able
to remove the bond0 definition either, even after removing the ifcfg-bond0 file
and the modprobe.d alias.
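[Editor's note: the kernel's bonding driver refuses a mode change while slaves are still enslaved, and taking the bond down with ifconfig alone does not release them, which matches the error above. A sketch of the sequence that should be accepted, assuming hypothetical slave NICs em1 and em2 (adjust names to your system; requires root):

```shell
# Take the bond down first; the driver rejects mode changes while it is up.
ip link set bond0 down

# Release each slave explicitly -- downing the bond does not do this.
echo -em1 > /sys/class/net/bond0/bonding/slaves
echo -em2 > /sys/class/net/bond0/bonding/slaves

# With no slaves left, the mode change should now be accepted.
echo balance-alb > /sys/class/net/bond0/bonding/mode

# Re-enslave the NICs and bring the bond back up.
echo +em1 > /sys/class/net/bond0/bonding/slaves
echo +em2 > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up

# To remove the bond definition entirely (after releasing slaves):
echo -bond0 > /sys/class/net/bonding_masters
```

These sysfs controls are the interface documented by the kernel bonding driver; whether vdsm then recreates the bond on the next network sync is a separate question.]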



Is there a recommended/tested bonding configuration HOWTO for oVirt on F19?

Well, for what it's worth, here's our ifcfg-bond0:
DEVICE="bond0"
NM_CONTROLLED="no"
USERCTL="no"
BOOTPROTO="none"
BONDING_OPTS="mode=4 miimon=100"
TYPE="Ethernet"

We also have VLAN interfaces on top of that, and then bridges on top of those,
with no issues so far.
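[Editor's note: the layering described here (VLAN on bond, bridge on VLAN) might look like the following ifcfg fragments; the VLAN ID 100 and the bridge name brvlan100 are made-up examples, not taken from the thread:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0.100 -- hypothetical VLAN 100 on bond0
DEVICE="bond0.100"
VLAN="yes"
BRIDGE="brvlan100"
NM_CONTROLLED="no"
BOOTPROTO="none"
ONBOOT="yes"

# /etc/sysconfig/network-scripts/ifcfg-brvlan100 -- bridge on top of the VLAN
DEVICE="brvlan100"
TYPE="Bridge"
NM_CONTROLLED="no"
BOOTPROTO="none"
ONBOOT="yes"
```

The IP configuration would then live on the bridge, which is also where oVirt attaches the VM networks.]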

Found the mode definitions at http://www.linuxhorizon.ro/bonding.html:
mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first 
available slave through the last. This mode provides load balancing and fault 
tolerance.

mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave 
becomes active if, and only if, the active slave fails. The bond's MAC address 
is externally visible on only one port (network adapter) to avoid confusing the 
switch. This mode provides fault tolerance. The primary option affects the 
behavior of this mode.

mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC 
address) modulo slave count]. This selects the same slave for each destination 
MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode 
provides fault tolerance.

mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share 
the same speed and duplex settings. Utilizes all slaves in the active 
aggregator according to the 802.3ad specification.
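[Editor's note: whichever mode ends up active, it is worth confirming what the driver actually negotiated rather than trusting the config file, given the BONDING_OPTS issue above. A quick check, assuming the device is named bond0:

```shell
# Full runtime state: active mode, per-slave link status, and for 802.3ad
# the aggregator/partner details.
cat /proc/net/bonding/bond0

# Just the current mode, e.g. "802.3ad 4":
cat /sys/class/net/bond0/bonding/mode
```

For mode 4 specifically, the /proc output also shows whether the switch actually completed LACP negotiation.]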

HTH!

/Karli

Joop: Responses as follows:

 - Windows firewall was disabled in each Windows VM
 - Changing the CPU setting and starting the VM directly on the host via QEMU
   (i.e. not through oVirt) did not seem to affect the behavior

Thanks,

Chris

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of
[email protected]
Sent: Thursday, September 19, 2013 3:31 PM
To: [email protected]
Subject: Users Digest, Vol 24, Issue 93

------------------------------

Message: 3
Date: Thu, 19 Sep 2013 09:13:43 +0200
From: noc <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: [Users] oVirt 3.3/F19 - Windows guest unable to access network
Message-ID: <[email protected]>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"



On 18-9-2013 23:08, SULLIVAN, Chris (WGK) wrote:
> Hi,
>
> I'm having network issues with a Windows Server 2008 R2 guest running
> on an F19 host. The guest has a static configuration and is able to
> ping itself and the host it is running on, however it cannot ping the
> gateway, any other hosts on the local network, or external hosts. A
> RHEL 6.4 guest on the same host with a similar static configuration
> works normally.
>
> Iptables/firewalld on the host are switched off and the network
> definitions in the XML for each VM (Windows/RHEL) are the same. The
> virtio network drivers are installed in the guest. The guest was
> created from a Win 2008 R2 template, which was created from a VM
> imported from oVirt 3.2. Software versions below.

Just to be sure, iptables/firewalld != Windows Firewall. Is there a rule
in the Windows firewall to allow ping, or is it disabled?

> Are there any manual configuration steps required on the host to
> support Windows guests? Are there any particular diagnostic steps I
> could take to try and narrow down the cause?

I don't think so; I just converted a Windows 2008 R2 Datacenter guest from
VMware to oVirt and it ran, after adding virtio drivers or using e1000
and/or IDE disks.

> Versions:
>
> - oVirt 3.3.0-4
> - F19 3.10.11-200
> - QEMU 1.4.2-9
> - Libvirt 1.1.2-1
> - VDSM 4.12.1-2
> - virtio-win 0.1-52

Your problem looks like the problem René had with his Solaris guest; it's
a recent thread. It turned out that oVirt setting -cpu Nehalem caused
networking in the Solaris guest to fail.
I don't think this is your problem though, since lots of people run Windows
guests without problems.

Regards,

Joop

_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users

--
Best regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
[email protected]

