Re: [ovirt-users] 3.6.1 HE install on CentOS 7.2 resulted in unsync'd network

2015-12-21 Thread Dan Kenigsberg
On Sun, Dec 20, 2015 at 12:13:15PM +0200, Yedidyah Bar David wrote:
> On Sat, Dec 19, 2015 at 12:53 PM, Gianluca Cecchi
>  wrote:
> > On Sat, Dec 19, 2015 at 1:08 AM, John Florian 
> > wrote:
> >>
> >> I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs
> >> 101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt and 1 more
> >> (VID 1) for everything else.  Because I know of no way to manipulate the
> >> network configuration from the management GUI once the HE is running and
> >> with only a single Host, I made the OS configuration as close as possible 
> >> to
> >> what I'd want when done.  This looks like:
> >
> >
> > Why do you think this pre-work is necessary? I configured (in 3.6.0) an
> > HE environment on a single host too, and I only preconfigured my bond1
> > in 802.3ad mode with the interfaces I planned to use for ovirtmgmt; I
> > left the other interfaces unconfigured, so that nothing is managed by
> > NetworkManager.
> > During the "hosted-engine --deploy" setup I got these prompts:
> >
> >--== NETWORK CONFIGURATION ==--
> >
> >   Please indicate a nic to set ovirtmgmt bridge on: (em1, bond1,
> > em2) [em1]: bond1
> >   iptables was detected on your computer, do you wish setup to
> > configure it? (Yes, No)[Yes]:
> >   Please indicate a pingable gateway IP address [10.4.168.254]:
> >
> > and then on preview of configuration to apply:
> >
> >   --== CONFIGURATION PREVIEW ==--
> >
> >   Bridge interface   : bond1
> >   Engine FQDN: ractorshe.mydomain.local
> >   Bridge name: ovirtmgmt
> >
> > After setup I configured my VLAN-based networks for my VMs from the GUI
> > in the usual way, so now I have this bond0, created by the oVirt GUI on
> > the other two interfaces (em1 and em2):
> >
> > [root@ractor ~]# cat /proc/net/bonding/bond0
> > Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
> >
> > Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> > Transmit Hash Policy: layer2 (0)
> > MII Status: up
> > MII Polling Interval (ms): 100
> > Up Delay (ms): 0
> > Down Delay (ms): 0
> >
> > 802.3ad info
> > LACP rate: fast
> > Min links: 0
> > Aggregator selection policy (ad_select): stable
> > Active Aggregator Info:
> > Aggregator ID: 2
> > Number of ports: 2
> > Actor Key: 17
> > Partner Key: 8
> > Partner Mac Address: 00:01:02:03:04:0c
> >
> > Slave Interface: em1
> > MII Status: up
> > Speed: 1000 Mbps
> > Duplex: full
> > Link Failure Count: 0
> > Permanent HW addr: 00:25:64:ff:0b:f0
> > Aggregator ID: 2
> > Slave queue ID: 0
> >
> > Slave Interface: em2
> > MII Status: up
> > Speed: 1000 Mbps
> > Duplex: full
> > Link Failure Count: 0
> > Permanent HW addr: 00:25:64:ff:0b:f2
> > Aggregator ID: 2
> > Slave queue ID: 0
> >
> > And then "ip a" command returns:
> >
> > 9: bond0.65@bond0:  mtu 1500 qdisc noqueue
> > master vlan65 state UP
> > link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
> > 10: vlan65:  mtu 1500 qdisc noqueue state
> > UP
> > link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
> >
> > with
> > [root@ractor ~]# brctl show
> > bridge name     bridge id           STP enabled  interfaces
> > ;vdsmdummy;     8000.               no
> > ovirtmgmt       8000.002564ff0bf4   no           bond1
> >                                                  vnet0
> > vlan65          8000.002564ff0bf0   no           bond0.65
> >                                                  vnet1
> >                                                  vnet2
> >
> > vnet1 and vnet2 being the virtual network interfaces of my two running VMs.
> >
> > The only note I can offer is that when you set up a network in the oVirt
> > GUI with a mode=4 (802.3ad) bond, it defaults to "lacp_rate=0" (slow),
> > which I think is a bad choice, based on what I have read in many
> > articles (but I'm not a network guru at all).
> > So I chose custom mode in the GUI and specified "mode=4 lacp_rate=1" in
> > the options, and this was reflected in my configuration, as you can see
> > in the bond0 output above.
> >
> > Can we set lacp_rate=1 as a default option for mode=4 in oVirt?
> 
> No idea, adding Dan. I guess you can always open an RFE bz...
> Dan - any specific reason for the current defaults?

lacp_rate=0 ('slow') is the default of the bonding module in mode=4, and
we do not change that (even though we could).

Please open an RFE, citing the articles that recommend the use of the
faster rate. Until then, configure the Engine's custom bond option to your liking.
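
For example (the option string here is just an illustration -- adapt it to
your environment): in the "Setup Host Networks" dialog choose the "Custom"
bond mode and enter

    mode=4 miimon=100 lacp_rate=1

then verify on the host that the faster rate took effect:

[root@ractor ~]# grep "LACP rate" /proc/net/bonding/bond0
LACP rate: fast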


Re: [ovirt-users] 3.6.1 HE install on CentOS 7.2 resulted in unsync'd network

2015-12-20 Thread Yedidyah Bar David
On Sat, Dec 19, 2015 at 12:53 PM, Gianluca Cecchi
 wrote:
> On Sat, Dec 19, 2015 at 1:08 AM, John Florian 
> wrote:
>>
>> I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs
>> 101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt and 1 more
>> (VID 1) for everything else.  Because I know of no way to manipulate the
>> network configuration from the management GUI once the HE is running and
>> with only a single Host, I made the OS configuration as close as possible to
>> what I'd want when done.  This looks like:
>
>
> Why do you think this pre-work is necessary? I configured (in 3.6.0) an
> HE environment on a single host too, and I only preconfigured my bond1
> in 802.3ad mode with the interfaces I planned to use for ovirtmgmt; I
> left the other interfaces unconfigured, so that nothing is managed by
> NetworkManager.
> During the "hosted-engine --deploy" setup I got these prompts:
>
>--== NETWORK CONFIGURATION ==--
>
>   Please indicate a nic to set ovirtmgmt bridge on: (em1, bond1,
> em2) [em1]: bond1
>   iptables was detected on your computer, do you wish setup to
> configure it? (Yes, No)[Yes]:
>   Please indicate a pingable gateway IP address [10.4.168.254]:
>
> and then on preview of configuration to apply:
>
>   --== CONFIGURATION PREVIEW ==--
>
>   Bridge interface   : bond1
>   Engine FQDN: ractorshe.mydomain.local
>   Bridge name: ovirtmgmt
>
> After setup I configured my VLAN-based networks for my VMs from the GUI
> in the usual way, so now I have this bond0, created by the oVirt GUI on
> the other two interfaces (em1 and em2):
>
> [root@ractor ~]# cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2 (0)
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
>
> 802.3ad info
> LACP rate: fast
> Min links: 0
> Aggregator selection policy (ad_select): stable
> Active Aggregator Info:
> Aggregator ID: 2
> Number of ports: 2
> Actor Key: 17
> Partner Key: 8
> Partner Mac Address: 00:01:02:03:04:0c
>
> Slave Interface: em1
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 00:25:64:ff:0b:f0
> Aggregator ID: 2
> Slave queue ID: 0
>
> Slave Interface: em2
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 00:25:64:ff:0b:f2
> Aggregator ID: 2
> Slave queue ID: 0
>
> And then "ip a" command returns:
>
> 9: bond0.65@bond0:  mtu 1500 qdisc noqueue
> master vlan65 state UP
> link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
> 10: vlan65:  mtu 1500 qdisc noqueue state
> UP
> link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
>
> with
> [root@ractor ~]# brctl show
> bridge name     bridge id           STP enabled  interfaces
> ;vdsmdummy;     8000.               no
> ovirtmgmt       8000.002564ff0bf4   no           bond1
>                                                  vnet0
> vlan65          8000.002564ff0bf0   no           bond0.65
>                                                  vnet1
>                                                  vnet2
>
> vnet1 and vnet2 being the virtual network interfaces of my two running VMs.
>
> The only note I can offer is that when you set up a network in the oVirt
> GUI with a mode=4 (802.3ad) bond, it defaults to "lacp_rate=0" (slow),
> which I think is a bad choice, based on what I have read in many
> articles (but I'm not a network guru at all).
> So I chose custom mode in the GUI and specified "mode=4 lacp_rate=1" in
> the options, and this was reflected in my configuration, as you can see
> in the bond0 output above.
>
> Can we set lacp_rate=1 as a default option for mode=4 in oVirt?

No idea, adding Dan. I guess you can always open an RFE bz...
Dan - any specific reason for the current defaults?
-- 
Didi


Re: [ovirt-users] 3.6.1 HE install on CentOS 7.2 resulted in unsync'd network

2015-12-20 Thread John Florian
On 12/19/2015 05:53 AM, Gianluca Cecchi wrote:
> On Sat, Dec 19, 2015 at 1:08 AM, John Florian  > wrote:
>
> I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs
> (VIDs 101-104) for storage networks, 1 VLAN (VID 100) for
> ovirtmgmt and 1 more (VID 1) for everything else.  Because I know
> of no way to manipulate the network configuration from the
> management GUI once the HE is running and with only a single Host,
> I made the OS configuration as close as possible to what I'd want
> when done.  This looks like:
>
>
> Why do you think this pre-work is necessary?

Because my storage is iSCSI and I need the VLAN configuration in place
for the Host to access it on behalf of the HE.  Otherwise, yes, I agree
it would be easier to let the hosted-engine script deal with the setup.
I've done a workable setup before, letting the script do everything,
but the mode 4 bonding only gave me half the possible performance
because in effect one NIC on the NAS did all the transmitting while the
other NIC did all the receiving.  So I really need all of the storage
network setup in place prior to starting the HE deployment.
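
(As an aside: the default transmit hash policy for bonding is layer2, as also
visible in the bond0 output quoted further down, and with a single pair of
MAC addresses between host and NAS a layer2 hash puts every flow on the same
slave, which would explain the one-way behaviour I saw.  If I revisit the
bonded approach I might experiment with a layer3+4 hash -- untested here,
something along the lines of

BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"

in the bond's ifcfg file, or the equivalent custom bond option in the GUI.)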

It seems like it should be trivial to convince the engine that the two
netmasks are indeed equivalent.  I tried changing the '"prefix": "24"'
setting in /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt to
'"netmask": "255.255.255.0"' and running
/usr/share/vdsm/vdsm-restore-net-config, but that didn't seem to change
anything with respect to the network being out of sync.
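
(For the record, the persistence file is plain JSON; the edit was roughly the
following, where every field other than "prefix"/"netmask" is illustrative
rather than copied verbatim from my file:

[root@orthosie ~]# cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
{
    "bonding": "bond0",
    "bridged": true,
    "ipaddr": "192.168.100.102",
    "netmask": "255.255.255.0"
}
[root@orthosie ~]# /usr/share/vdsm/vdsm-restore-net-config

The "netmask" line is what originally read '"prefix": "24"'.)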

> I configured (in 3.6.0) an HE environment on a single host too, and I
> only preconfigured my bond1 in 802.3ad mode with the interfaces I
> planned to use for ovirtmgmt; I left the other interfaces unconfigured,
> so that nothing is managed by NetworkManager.
> During the "hosted-engine --deploy" setup I got these prompts:
>
>--== NETWORK CONFIGURATION ==--
> 
>   Please indicate a nic to set ovirtmgmt bridge on: (em1,
> bond1, em2) [em1]: bond1
>   iptables was detected on your computer, do you wish setup to
> configure it? (Yes, No)[Yes]:  
>   Please indicate a pingable gateway IP address [10.4.168.254]:
>
> and then on preview of configuration to apply:
>
>   --== CONFIGURATION PREVIEW ==--
>
>   Bridge interface   : bond1
>   Engine FQDN: ractorshe.mydomain.local
>   Bridge name: ovirtmgmt
>
> After setup I configured my VLAN-based networks for my VMs from the GUI
> in the usual way, so now I have this bond0, created by the oVirt GUI on
> the other two interfaces (em1 and em2):
>
> [root@ractor ~]# cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2 (0)
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
>
> 802.3ad info
> LACP rate: fast
> Min links: 0
> Aggregator selection policy (ad_select): stable
> Active Aggregator Info:
> Aggregator ID: 2
> Number of ports: 2
> Actor Key: 17
> Partner Key: 8
> Partner Mac Address: 00:01:02:03:04:0c
>
> Slave Interface: em1
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 00:25:64:ff:0b:f0
> Aggregator ID: 2
> Slave queue ID: 0
>
> Slave Interface: em2
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 00:25:64:ff:0b:f2
> Aggregator ID: 2
> Slave queue ID: 0
>
> And then "ip a" command returns:
>
> 9: bond0.65@bond0:  mtu 1500 qdisc
> noqueue master vlan65 state UP
> link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
> 10: vlan65:  mtu 1500 qdisc noqueue
> state UP
> link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
>
> with
> [root@ractor ~]# brctl show
> bridge name     bridge id           STP enabled  interfaces
> ;vdsmdummy;     8000.               no
> ovirtmgmt       8000.002564ff0bf4   no           bond1
>                                                  vnet0
> vlan65          8000.002564ff0bf0   no           bond0.65
>                                                  vnet1
>                                                  vnet2
>
> vnet1 and vnet2 being the virtual network interfaces of my two running
> VMs.
>
> The only note I can offer is that when you set up a network in the oVirt
> GUI with a mode=4 (802.3ad) bond, it defaults to "lacp_rate=0" (slow),
> which I think is a bad choice, based on what I have read in many
> articles (but I'm not a network guru at all).
> So I chose custom mode in the GUI and specified "mode=4 lacp_rate=1" in
> the options, and this was reflected in my configuration, as you can see
> in the bond0 output above.
>
> Can we set lacp_rate=1 as a default option for mode=4 in oVirt?
>
> HIH,
> Gianluca


-- 
John Florian


Re: [ovirt-users] 3.6.1 HE install on CentOS 7.2 resulted in unsync'd network

2015-12-19 Thread Gianluca Cecchi
On Sat, Dec 19, 2015 at 1:08 AM, John Florian 
wrote:

> I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs
> 101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt and 1 more
> (VID 1) for everything else.  Because I know of no way to manipulate the
> network configuration from the management GUI once the HE is running and
> with only a single Host, I made the OS configuration as close as possible
> to what I'd want when done.  This looks like:
>

Why do you think this pre-work is necessary? I configured (in 3.6.0) an
HE environment on a single host too, and I only preconfigured my bond1
in 802.3ad mode with the interfaces I planned to use for ovirtmgmt; I
left the other interfaces unconfigured, so that nothing is managed by
NetworkManager.
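
The pre-configuration itself was just the usual CentOS 7 ifcfg files, roughly
like the sketch below (NIC names and options here only illustrate the idea,
they are not my exact files):

# /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100"
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
# plus IPADDR/PREFIX/GATEWAY for the host's management address

# /etc/sysconfig/network-scripts/ifcfg-em3   (one such file per slave NIC)
DEVICE=em3
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
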
During the "hosted-engine --deploy" setup I got these prompts:

   --== NETWORK CONFIGURATION ==--

  Please indicate a nic to set ovirtmgmt bridge on: (em1, bond1,
em2) [em1]: bond1
  iptables was detected on your computer, do you wish setup to
configure it? (Yes, No)[Yes]:
  Please indicate a pingable gateway IP address [10.4.168.254]:

and then on preview of configuration to apply:

  --== CONFIGURATION PREVIEW ==--

  Bridge interface   : bond1
  Engine FQDN: ractorshe.mydomain.local
  Bridge name: ovirtmgmt

After setup I configured my VLAN-based networks for my VMs from the GUI
in the usual way, so now I have this bond0, created by the oVirt GUI on
the other two interfaces (em1 and em2):

[root@ractor ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 17
Partner Key: 8
Partner Mac Address: 00:01:02:03:04:0c

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f0
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f2
Aggregator ID: 2
Slave queue ID: 0

And then "ip a" command returns:

9: bond0.65@bond0:  mtu 1500 qdisc noqueue
master vlan65 state UP
link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
10: vlan65:  mtu 1500 qdisc noqueue state
UP
link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff

with
[root@ractor ~]# brctl show
bridge name     bridge id           STP enabled  interfaces
;vdsmdummy;     8000.               no
ovirtmgmt       8000.002564ff0bf4   no           bond1
                                                 vnet0
vlan65          8000.002564ff0bf0   no           bond0.65
                                                 vnet1
                                                 vnet2

vnet1 and vnet2 being the virtual network interfaces of my two running VMs.

The only note I can offer is that when you set up a network in the oVirt
GUI with a mode=4 (802.3ad) bond, it defaults to "lacp_rate=0" (slow),
which I think is a bad choice, based on what I have read in many
articles (but I'm not a network guru at all).
So I chose custom mode in the GUI and specified "mode=4 lacp_rate=1" in
the options, and this was reflected in my configuration, as you can see
in the bond0 output above.

Can we set lacp_rate=1 as a default option for mode=4 in oVirt?

HIH,
Gianluca


[ovirt-users] 3.6.1 HE install on CentOS 7.2 resulted in unsync'd network

2015-12-18 Thread John Florian
I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs
101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt and 1 more
(VID 1) for everything else.  Because I know of no way to manipulate the
network configuration from the management GUI once the HE is running and
with only a single Host, I made the OS configuration as close as
possible to what I'd want when done.  This looks like:

[root@orthosie ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: bond0:  mtu 1500 qdisc
noqueue master ovirtmgmt state UP
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
3: em1:  mtu 1500 qdisc mq master
bond0 state UP qlen 1000
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
4: em2:  mtu 1500 qdisc mq master
bond0 state UP qlen 1000
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
5: em3:  mtu 1500 qdisc mq master
bond0 state UP qlen 1000
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
6: em4:  mtu 1500 qdisc mq master
bond0 state UP qlen 1000
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
8: bond0.1@bond0:  mtu 1500 qdisc
noqueue state UP
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
inet 172.16.7.8/24 brd 172.16.7.255 scope global bond0.1
   valid_lft forever preferred_lft forever
inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
   valid_lft forever preferred_lft forever
9: bond0.101@bond0:  mtu 1500 qdisc
noqueue state UP
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.101.203/24 brd 192.168.101.255 scope global bond0.101
   valid_lft forever preferred_lft forever
inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
   valid_lft forever preferred_lft forever
10: bond0.102@bond0:  mtu 1500 qdisc
noqueue state UP
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.102.203/24 brd 192.168.102.255 scope global bond0.102
   valid_lft forever preferred_lft forever
inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
   valid_lft forever preferred_lft forever
11: bond0.103@bond0:  mtu 1500 qdisc
noqueue state UP
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.103.203/24 brd 192.168.103.255 scope global bond0.103
   valid_lft forever preferred_lft forever
inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
   valid_lft forever preferred_lft forever
12: bond0.104@bond0:  mtu 1500 qdisc
noqueue state UP
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.104.203/24 brd 192.168.104.255 scope global bond0.104
   valid_lft forever preferred_lft forever
inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
   valid_lft forever preferred_lft forever
13: ovirtmgmt:  mtu 1500 qdisc noqueue
state UP
link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.102/24 brd 192.168.100.255 scope global ovirtmgmt
   valid_lft forever preferred_lft forever
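
Each of those bond0.NNN devices is just an ordinary VLAN sub-interface; the
ifcfg files behind them look roughly like this (illustrative, not verbatim):

# /etc/sysconfig/network-scripts/ifcfg-bond0.101
DEVICE=bond0.101
VLAN=yes
BOOTPROTO=none
IPADDR=192.168.101.203
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no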

The hosted-engine deploy script got stuck near the end when it wanted
the HA broker to take over.  It said the ovirtmgmt network was
unavailable on the Host and suggested trying to activate it within the
GUI.  Though I had my bonding and bridging all configured prior to any
HE deployment attempt (as shown above), the GUI didn’t see it that way. 
It knew of the bond, and the 4 IFs of course, but it showed all 4 IFs as
down, and the required ovirtmgmt network was off on the right side,
effectively not yet associated with the physical devices.  I dragged the
ovirtmgmt net over to the left to associate it with the 4 IFs and pressed
Save.  The GUI now shows all 4 IFs up with ovirtmgmt assigned.  But it
is not in sync -- specifically, the netmask property on the host is
"255.255.255.0" while on the DC it's "24".  They're saying the same
thing, just in different ways.
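
A /24 prefix is simply 24 one-bits -- 11111111.11111111.11111111.00000000 --
which is 255.255.255.0 in dotted-quad form, and, for instance, the ipcalc
utility that ships with CentOS 7 agrees:

[root@orthosie ~]# ipcalc -m 192.168.100.102/24
NETMASK=255.255.255.0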

Since I only have the one Host, how can I sync this?

-- 
John Florian
