Re: [Users] Setting up logical storage networks

2012-06-23 Thread Mike Kolesnik
> 
> If I keep my ovirtmgmt interface on a 100Mbps subnet, and my VM
> Networks on a 1Gbps network, is there anything special I have to do in
> routing or anything to prevent traffic of the VMs from following the
> default route defined in ovirtmgmt?
> 
> I'm also experiencing an issue with bonds that may be related.  I
> create the bond and set it to Mode 5, yet the ifcfg-bond0 seems to
> reflect Mode 4.
> 
> DEVICE=bond0
> ONBOOT=yes
> BOOTPROTO=none
> BONDING_OPTS='mode=802.3ad miimon=150'
> NM_CONTROLLED=no
> MTU=9000
> 
> 
> Here's what looks relevant in the vdsm.log
> 
> 
> Thread-55232::DEBUG::2012-06-22
> 16:56:56,242::BindingXMLRPC::872::vds::(wrapper) client
> [128.194.76.185]::call setupNetworks with ({'stor0': {'bonding':
> 'bond0', 'bridged': 'false', 'mtu': '9000'}}, {'bond0': {'nics':
> ['eth3', 'eth2'], 'BONDING_OPTS': 'mode=5'}}, {'connectivityCheck':
> 'true', 'connectivityTimeout': '6'}) {} flowID [39d484a3]
> Thread-55233::DEBUG::2012-06-22
> 16:56:56,242::BindingXMLRPC::872::vds::(wrapper) client
> [128.194.76.185]::call ping with () {} flowID [39d484a3]
> Thread-55233::DEBUG::2012-06-22
> 16:56:56,244::BindingXMLRPC::879::vds::(wrapper) return ping with
> {'status': {'message': 'Done', 'code': 0}}
> MainProcess|Thread-55232::DEBUG::2012-06-22
> 16:56:56,270::configNetwork::1061::setupNetworks::(setupNetworks)
> Setting up network according to configuration: networks:{'stor0':
> {'bonding': 'bond0', 'bridged': 'false', 'mtu': '9000'}},
> bondings:{'bond0': {'nics': ['eth3', 'eth2'], 'BONDING_OPTS':
> 'mode=5'}}, options:{'connectivityCheck': 'true',
> 'connectivityTimeout': '6'}
> MainProcess|Thread-55232::DEBUG::2012-06-22
> 16:56:56,270::configNetwork::1065::root::(setupNetworks) Validating
> configuration
> Thread-55234::DEBUG::2012-06-22
> 16:56:56,294::BindingXMLRPC::872::vds::(wrapper) client
> [128.194.76.185]::call ping with () {} flowID [39d484a3]
> Thread-55234::DEBUG::2012-06-22
> 16:56:56,295::BindingXMLRPC::879::vds::(wrapper) return ping with
> {'status': {'message': 'Done', 'code': 0}}
> MainProcess|Thread-55232::DEBUG::2012-06-22
> 16:56:56,297::configNetwork::1070::setupNetworks::(setupNetworks)
> Applying...
> MainProcess|Thread-55232::DEBUG::2012-06-22
> 16:56:56,297::configNetwork::1099::setupNetworks::(setupNetworks)
> Adding network 'stor0'
> MainProcess|Thread-55232::DEBUG::2012-06-22
> 16:56:56,322::configNetwork::582::root::(addNetwork) validating
> bridge...
> MainProcess|Thread-55232::INFO::2012-06-22
> 16:56:56,323::configNetwork::591::root::(addNetwork) Adding network
> stor0 with vlan=None, bonding=bond0, nics=['eth3', 'eth2'],
> bondingOptions=None, mtu=9000, bridged=False, options={}
> 
> 
> Looking at the code I think I see where things are going wrong.  Looks
> like network['bonding']['BONDING_OPTS'] is being passed when the code
> is looking for network['bonding']['options'].

Yes indeed, this bug has been fixed and the fix should be available in the
next release of oVirt.
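
For anyone following along, the problem Trey spotted is exactly that: the
bond options were sent under the 'BONDING_OPTS' key while the setup code
only read the 'options' key. A minimal Python sketch of that kind of key
mismatch (the function names and lookup here are illustrative, not the
actual vdsm source):

# The client sends {'bond0': {'nics': [...], 'BONDING_OPTS': 'mode=5'}},
# but the buggy lookup consults only 'options', so bondingOptions falls
# back to None and the stale mode=802.3ad line survives in ifcfg-bond0.
def bond_options_buggy(bond):
    return bond.get('options')

def bond_options_fixed(bond):
    # accept the key the client actually sent
    return bond.get('options') or bond.get('BONDING_OPTS')

bond = {'nics': ['eth3', 'eth2'], 'BONDING_OPTS': 'mode=5'}
assert bond_options_buggy(bond) is None
assert bond_options_fixed(bond) == 'mode=5'

With the fix, ifcfg-bond0 should come out with BONDING_OPTS='mode=5'
(balance-tlb) rather than keeping the old mode=802.3ad (mode 4) line.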

> 
> - Trey
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up logical storage networks

2012-06-22 Thread Itamar Heim

On 06/23/2012 01:22 AM, Trey Dockendorf wrote:

If I keep my ovirtmgmt interface on a 100Mbps subnet, and my VM
Networks on a 1Gbps network, is there anything special I have to do in
routing or anything to prevent traffic of the VMs from following the
default route defined in ovirtmgmt?


just on this point - if you keep your ovirtmgmt on 100Mbps, I'm not sure
you would be able to live migrate VMs

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up logical storage networks

2012-06-22 Thread Trey Dockendorf
On Thu, Jun 21, 2012 at 7:17 AM, Dan Kenigsberg wrote:
> On Wed, Jun 20, 2012 at 02:52:19PM -0400, Mike Kolesnik wrote:
>> > Thanks for the response, see responses inline.
>>
>> You're welcome, responses inline.
>>
>> >
>> > On Wed, Jun 20, 2012 at 2:54 AM, Mike Kolesnik wrote:
>> > > Hi,
>> > >
>> > > Please see reply in-line.
>> > >
>> > >> In ovirt-engine-3.1 I'm attempting to set up the base logical
>> > >> networks
>> > >> and have run into 2 major issues.
>> > >
>> > > Are you using cluster version 3.0 or 3.1?
>> > >
>> >
>> > I have been using 3.1 as it's the default.  Is the difference just the
>> > API updates?  All I could really find related to 3.0 vs 3.1
>> > pertaining
>> > to networking was this document
>> > http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks
>> >
>>
>> As Itamar replied, there are a few more network features in 3.1 other than
>> this one.
>>
>> For a Host which is in a 3.1 cluster there should be a "Setup Networks"
>> button which indeed enables the functionality described in that wiki.
>> This is a new feature for 3.1 & up which allows you to do several network
>> changes in an atomic manner, with an improved UI experience.
>>
>> However, from the logs it looks like you're using the old commands to edit
>> the networks on the Host, so if you have this button (you should) then you
>> can try using it.
>>
>> 
>>
>> > >
>> > > Unfortunately, oVirt supports setting only the default gateway of
>> > > the Host
>> > > (This is the field you saw in the management network).
>> > >
>> > > We could theoretically use initscripts' static routing files, but
>> > > that is left for
>> > > future development.
>> > >
>> >
>> > So for now, is it then easier to just run all public interfaces
>> > through the same subnet/gateway?  The main reason to run management
>> > via 100Mbps and everything else 1Gbps was that our campus is out of
>> > IPs so we're attempting to conserve gigabit IPs.
>>
>> Yes, currently the only gateway you can specify is the default one which
>> is set on the management network.
>
> However, it is worth mentioning that VM networks should generally not
> have an IP address (or gateway) of their own. At best, they serve as
> layer-2-only entities. Putting the management network in one subnet and
> VMs on a different one makes a lot of sense.
>
>>
>> 
>>
> 
>> >
>> >
>> > So in the host interface eth5 I set the following via web interface
>> >
>> > Network: private1
>> > Boot Protocol: Static
>> > IP: 10.20.1.241
>> > Subnet Mask: 255.0.0.0
>> > Check: Save network configuration
>> >
>> > After the save the node's ifcfg-eth5 is touched (based on modified
>> > date in ls -la) but this is all it contains
>> > DEVICE=eth5
>> > ONBOOT=yes
>> > BOOTPROTO=none
>> > HWADDR=00:1b:21:1d:33:f1
>> > NM_CONTROLLED=no
>> > MTU=9000
>> >
>> >
>> > As far as I can tell the only setting from ovirt-engine that made it
>> > to that file was the MTU setting defined when creating the logical
>> > network for the cluster.
>> >
>> > Is my process somehow wrong or am I missing a step?  I've done this
>> > with the node being in both "Up" status and "Maintenance", same
>> > results.
>>
>> No, it looks like a bug that should be taken care of.
>
> And a serious one that hinders the usability of non-VM networks, and
> which I consider an oVirt-3.1 release blocker:
>
>  Bug 834281 - [vdsm][bridgeless] BOOTPROTO/IPADDR/NETMASK options are
>  not set on interface
>
> Thanks for reporting it.
>
> Dan.

If I keep my ovirtmgmt interface on a 100Mbps subnet, and my VM
Networks on a 1Gbps network, is there anything special I have to do in
routing or anything to prevent traffic of the VMs from following the
default route defined in ovirtmgmt?

I'm also experiencing an issue with bonds that may be related.  I
create the bond and set it to Mode 5, yet the ifcfg-bond0 seems to
reflect Mode 4.

DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS='mode=802.3ad miimon=150'
NM_CONTROLLED=no
MTU=9000


Here's what looks relevant in the vdsm.log


Thread-55232::DEBUG::2012-06-22
16:56:56,242::BindingXMLRPC::872::vds::(wrapper) client
[128.194.76.185]::call setupNetworks with ({'stor0': {'bonding':
'bond0', 'bridged': 'false', 'mtu': '9000'}}, {'bond0': {'nics':
['eth3', 'eth2'], 'BONDING_OPTS': 'mode=5'}}, {'connectivityCheck':
'true', 'connectivityTimeout': '6'}) {} flowID [39d484a3]
Thread-55233::DEBUG::2012-06-22
16:56:56,242::BindingXMLRPC::872::vds::(wrapper) client
[128.194.76.185]::call ping with () {} flowID [39d484a3]
Thread-55233::DEBUG::2012-06-22
16:56:56,244::BindingXMLRPC::879::vds::(wrapper) return ping with
{'status': {'message': 'Done', 'code': 0}}
MainProcess|Thread-55232::DEBUG::2012-06-22
16:56:56,270::configNetwork::1061::setupNetworks::(setupNetworks)
Setting up network according to configuration: networks:{'stor0':
{'bonding': 'bond0', 'bridged': 'false', 'mtu': '9000'}},
bondings:{'bond0': {'nics': ['eth3', 'eth2'], 'BONDING_OPTS':
'mode=5'}}, options:{'connectivity

Re: [Users] Setting up logical storage networks

2012-06-21 Thread Dan Kenigsberg
On Wed, Jun 20, 2012 at 02:52:19PM -0400, Mike Kolesnik wrote:
> > Thanks for the response, see responses inline.
> 
> You're welcome, responses inline.
> 
> >
> > On Wed, Jun 20, 2012 at 2:54 AM, Mike Kolesnik wrote:
> > > Hi,
> > >
> > > Please see reply in-line.
> > >
> > >> In ovirt-engine-3.1 I'm attempting to set up the base logical
> > >> networks
> > >> and have run into 2 major issues.
> > >
> > > Are you using cluster version 3.0 or 3.1?
> > >
> >
> > I have been using 3.1 as it's the default.  Is the difference just the
> > API updates?  All I could really find related to 3.0 vs 3.1
> > pertaining
> > to networking was this document
> > http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks
> >
> 
> As Itamar replied, there are a few more network features in 3.1 other than
> this one.
> 
> For a Host which is in a 3.1 cluster there should be a "Setup Networks"
> button which indeed enables the functionality described in that wiki.
> This is a new feature for 3.1 & up which allows you to do several network
> changes in an atomic manner, with an improved UI experience.
> 
> However, from the logs it looks like you're using the old commands to edit
> the networks on the Host, so if you have this button (you should) then you
> can try using it.
> 
> 
> 
> > >
> > > Unfortunately, oVirt supports setting only the default gateway of
> > > the Host
> > > (This is the field you saw in the management network).
> > >
> > > We could theoretically use initscripts' static routing files, but
> > > that is left for
> > > future development.
> > >
> >
> > So for now, is it then easier to just run all public interfaces
> > through the same subnet/gateway?  The main reason to run management
> > via 100Mbps and everything else 1Gbps was that our campus is out of
> > IPs so we're attempting to conserve gigabit IPs.
> 
> Yes, currently the only gateway you can specify is the default one which
> is set on the management network.

However, it is worth mentioning that VM networks should generally not
have an IP address (or gateway) of their own. At best, they serve as
layer-2-only entities. Putting the management network in one subnet and
VMs on a different one makes a lot of sense.

> 
> 
> 

> >
> >
> > So in the host interface eth5 I set the following via web interface
> >
> > Network: private1
> > Boot Protocol: Static
> > IP: 10.20.1.241
> > Subnet Mask: 255.0.0.0
> > Check: Save network configuration
> >
> > After the save the node's ifcfg-eth5 is touched (based on modified
> > date in ls -la) but this is all it contains
> > DEVICE=eth5
> > ONBOOT=yes
> > BOOTPROTO=none
> > HWADDR=00:1b:21:1d:33:f1
> > NM_CONTROLLED=no
> > MTU=9000
> >
> >
> > As far as I can tell the only setting from ovirt-engine that made it
> > to that file was the MTU setting defined when creating the logical
> > network for the cluster.
> >
> > Is my process somehow wrong or am I missing a step?  I've done this
> > with the node being in both "Up" status and "Maintenance", same
> > results.
> 
> No, it looks like a bug that should be taken care of.

And a serious one that hinders the usability of non-VM networks, and
which I consider an oVirt-3.1 release blocker:

 Bug 834281 - [vdsm][bridgeless] BOOTPROTO/IPADDR/NETMASK options are
 not set on interface

Thanks for reporting it.

Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up logical storage networks

2012-06-21 Thread Mike Kolesnik


> No, it looks like a bug that should be taken care of.
> You can see from this log line that these options are not passed:
> MainProcess|Thread-41320::INFO::2012-06-20
> 10:26:32,457::configNetwork::596::root::(addNetwork) Adding network
> private1 with vlan=None, bonding=None, nics=['eth5'],
> bondingOptions=None, mtu=9000, bridged=False, options={'STP': 'no'}
> 
> I tried this today but it didn't reproduce for me. As I can't look
> at it in more detail ATM, I'll try to see what's going on tomorrow.

OK so apparently it works if you're using a "VM network", but if your
network is not for VMs then it's indeed a bug. I have filed the bug
for VDSM:
https://bugzilla.redhat.com/show_bug.cgi?id=834205

As I mentioned, you can set these options manually on the ifcfg files
until the bug is fixed.
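
Using the values from earlier in this thread, the manual workaround is to
add the missing lines to /etc/sysconfig/network-scripts/ifcfg-eth5
yourself, something like (addresses taken from Trey's mail - adjust to
your own setup):

DEVICE=eth5
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.20.1.241
NETMASK=255.0.0.0
HWADDR=00:1b:21:1d:33:f1
NM_CONTROLLED=no
MTU=9000

then run "ifdown eth5 && ifup eth5" to apply it. Just keep in mind that,
as noted elsewhere in the thread, these lines will be removed again the
next time the network is edited through oVirt.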



Regards,
Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up logical storage networks

2012-06-20 Thread Mike Kolesnik
> Don't edit the ifcfg-eth4 / ifcfg-eth5 file; instead edit
> ifcfg-logicnetworkname. Set that to static with a gateway, and that
> won't get overwritten. However, be warned that Linux doesn't like having
> two default gateways. You might want to review
> http://www.centos.org/docs/2/rhl-rg-en-7.2/ch-networkscripts.html
> and use static routes.

If he were using a "VM network" then that is possible, since (currently) a VM
network is implemented using a bridge device whose config file would be named
ifcfg-logicnetworkname. However, if the network is not a "VM network" then there
is no bridge, and the device file to edit is the ifcfg-nic* file. Either way, if
he later edits that network definition through oVirt, these values will be
overwritten.
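
To make the distinction concrete, for a VM network named private1 the host
ends up with a bridge device plus an enslaved NIC, roughly like this
(illustrative files only, with values borrowed from this thread):

# /etc/sysconfig/network-scripts/ifcfg-private1 - the bridge; IP settings go here
DEVICE=private1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.20.1.241
NETMASK=255.0.0.0

# /etc/sysconfig/network-scripts/ifcfg-eth5 - the NIC; no IP, just enslaved
DEVICE=eth5
ONBOOT=yes
BRIDGE=private1

For a non-VM (bridgeless) network there is no ifcfg-private1 at all, and
the IP settings belong directly in ifcfg-eth5.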

> Thanks
> Robert

Regards, 
Mike 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up logical storage networks

2012-06-20 Thread Robert Middleswarth



On 06/20/2012 11:38 AM, Trey Dockendorf wrote:

Thanks for the response, see responses inline.

On Wed, Jun 20, 2012 at 2:54 AM, Mike Kolesnik wrote:

Hi,

Please see reply in-line.


In ovirt-engine-3.1 I'm attempting to set up the base logical networks
and have run into 2 major issues.

Are you using cluster version 3.0 or 3.1?


I have been using 3.1 as it's the default.  Is the difference just the
API updates?  All I could really find related to 3.0 vs 3.1 pertaining
to networking was this document
http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks


The first is that I'm only seeing a Gateway field for the management
interface.  When I went to create a network for VMs (on a separate
subnet) I did not see a place to specify gateway (see img
ovirt_network_missing_gateway.png).  Right now my management port is
on a 100Mbps network and the bridged devices live on a 1Gbps network
(net140 in cluster).  Is there a reason the gateway would be missing?
I've attached ovirt_networks.png that shows all the interfaces on my
host.

Unfortunately, oVirt supports setting only the default gateway of the Host
(This is the field you saw in the management network).

We could theoretically use initscripts' static routing files, but that is
left for future development.


So for now, is it then easier to just run all public interfaces
through the same subnet/gateway?  The main reason to run management
via 100Mbps and everything else 1Gbps was that our campus is out of
IPs so we're attempting to conserve gigabit IPs.


The second issue I'm having is creating a storage network.  I created
2 logical networks, private0 and private1.  I left the "VM Network"
unchecked on both, as my assumption was that it dictates whether they can be
added to VMs.  Since these are only for hosts to connect to the iSCSI
I didn't think that was necessary.  When I set the IP information
(private_network0.png) and select Ok, the save goes through, but when I
edit the interface again the information is gone and the file
ifcfg-eth4 does not have IP information.  This is what it looks like

DEVICE=eth4
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:1b:21:1d:33:f0
NM_CONTROLLED=no
MTU=9000
Don't edit the ifcfg-eth4 / ifcfg-eth5 file; instead edit
ifcfg-logicnetworkname.  Set that to static with a gateway, and that
won't get overwritten.  However, be warned that Linux doesn't like having two
default gateways.  You might want to review
http://www.centos.org/docs/2/rhl-rg-en-7.2/ch-networkscripts.html and
use static routes.
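
For example, a per-interface static routes file (the mechanism covered in
that document) lets traffic for a specific subnet take its own gateway
instead of the default one on ovirtmgmt. With made-up addresses, in
/etc/sysconfig/network-scripts/route-eth5:

# send the 10.20.0.0/16 storage subnet via the gigabit gateway
ADDRESS0=10.20.0.0
NETMASK0=255.255.0.0
GATEWAY0=10.20.1.1

Only traffic for that subnet uses this gateway; everything else keeps
following the host's default route.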


Thanks
Robert

I didn't quite understand what you did here..
What I think you meant is:
1. You edited the network on a NIC, and provided static boot protocol
   with the parameters (ip, netmask).
2. After that when you clicked OK then the configuration was sent to
   the Host, and in the "Network Interfaces" tab for the Host you could
   see the IP in the "Address" column. On the host the ifcfg script for
   this network had these fields set.
--- Assuming that no restart of Host or VDSM on Host was done ---
3. You edited the network again, didn't change anything, and clicked OK.
4. This time, the boot protocol info was gone from display & ifcfg file
   on the Host.

Is this correct?

Also do you by any chance have the log files of ovirt (engine.log)/vdsm
(vdsm.log) with the flow that you did?

I'll try to clarify the steps I took a little better, sorry if it was
unclear before.

1. Create logical network in Cluster that was NOT a "VM Network" (my
assumption of how to set up a storage network)
2. Edit NIC on host, set boot protocol to static and provide
IP/Netmask, and select the logical network created in #1, check "Save
network configuration"
3. After clicking OK the corresponding ifcfg file on the node was
modified, but the values for IP/Netmask were missing.  Also the values
did not appear in the network interface list, and were not shown when
going to that same interface and selecting "Add/Edit" again

That process did not involve a reboot of the host.

So in the host interface eth5 I set the following via web interface

Network: private1
Boot Protocol: Static
IP: 10.20.1.241
Subnet Mask: 255.0.0.0
Check: Save network configuration

After the save the node's ifcfg-eth5 is touched (based on modified
date in ls -la) but this is all it contains
DEVICE=eth5
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:1b:21:1d:33:f1
NM_CONTROLLED=no
MTU=9000


As far as I can tell the only setting from ovirt-engine that made it
to that file was the MTU setting defined when creating the logical
network for the cluster.

Is my process somehow wrong or am I missing a step?  I've done this
with the node being in both "Up" status and "Maintenance", same
results.

As a test I manually updated the IP/Netmask of ifcfg-eth4 and it shows
up in the web interface with the correct information; however, any
changes via the web interface will remove the IPADDR and NETMASK
lines.


I also attached image cluster_logical_networks.png that shows all
the logical networks on 

Re: [Users] Setting up logical storage networks

2012-06-20 Thread Mike Kolesnik
> Thanks for the response, see responses inline.

You're welcome, responses inline.

>
> On Wed, Jun 20, 2012 at 2:54 AM, Mike Kolesnik wrote:
> > Hi,
> >
> > Please see reply in-line.
> >
> >> In ovirt-engine-3.1 I'm attempting to set up the base logical
> >> networks
> >> and have run into 2 major issues.
> >
> > Are you using cluster version 3.0 or 3.1?
> >
>
> I have been using 3.1 as it's the default.  Is the difference just the
> API updates?  All I could really find related to 3.0 vs 3.1
> pertaining
> to networking was this document
> http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks
>

As Itamar replied, there are a few more network features in 3.1 other than
this one.

For a Host which is in a 3.1 cluster there should be a "Setup Networks"
button which indeed enables the functionality described in that wiki.
This is a new feature for 3.1 & up which allows you to do several network
changes in an atomic manner, with an improved UI experience.

However, from the logs it looks like you're using the old commands to edit
the networks on the Host, so if you have this button (you should) then you
can try using it.
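
Under the hood "Setup Networks" maps onto a single vdsm verb, and the
setupNetworks lines in the vdsm.log excerpts above show exactly that shape
of call. As a rough sketch, calling it directly over vdsm's XML-RPC
interface would look something like this (the host name is a placeholder
and the SSL/client certificate handling is elided, so treat it as
illustrative only):

import xmlrpclib

# vdsm listens on port 54321; the parameters mirror the logged call
vdsm = xmlrpclib.ServerProxy('https://ovirt-node.example.com:54321')

networks = {'stor0': {'bonding': 'bond0', 'bridged': 'false',
                      'mtu': '9000'}}
# note the key is 'options', not 'BONDING_OPTS' (see the bond-options
# bug discussed elsewhere in this thread)
bondings = {'bond0': {'nics': ['eth3', 'eth2'], 'options': 'mode=5'}}
options = {'connectivityCheck': 'true', 'connectivityTimeout': '6'}

print(vdsm.setupNetworks(networks, bondings, options))

All the requested changes are applied together, and with connectivityCheck
set, vdsm rolls them back if the engine cannot reach the host again within
the timeout.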



> >
> > Unfortunately, oVirt supports setting only the default gateway of
> > the Host
> > (This is the field you saw in the management network).
> >
> > We could theoretically use initscripts' static routing files, but
> > that is left for
> > future development.
> >
>
> So for now, is it then easier to just run all public interfaces
> through the same subnet/gateway?  The main reason to run management
> via 100Mbps and everything else 1Gbps was that our campus is out of
> IPs so we're attempting to conserve gigabit IPs.

Yes, currently the only gateway you can specify is the default one which
is set on the management network.



>
> I'll try to clarify the steps I took a little better, sorry if it was
> unclear before.
>
> 1. Create logical network in Cluster that was NOT a "VM Network" (my
> assumption of how to set up a storage network)
> 2. Edit NIC on host, set boot protocol to static and provide
> IP/Netmask, and select the logical network created in #1, check "Save
> network configuration"
> 3. After clicking OK the corresponding ifcfg file on the node was
> modified, but the values for IP/Netmask were missing.  Also the
> values
> did not appear in the network interface list, and were not shown when
> going to that same interface and selecting "Add/Edit" again
>
> That process did not involve a reboot of the host.
>
> So in the host interface eth5 I set the following via web interface
>
> Network: private1
> Boot Protocol: Static
> IP: 10.20.1.241
> Subnet Mask: 255.0.0.0
> Check: Save network configuration
>
> After the save the node's ifcfg-eth5 is touched (based on modified
> date in ls -la) but this is all it contains
> DEVICE=eth5
> ONBOOT=yes
> BOOTPROTO=none
> HWADDR=00:1b:21:1d:33:f1
> NM_CONTROLLED=no
> MTU=9000
>
>
> As far as I can tell the only setting from ovirt-engine that made it
> to that file was the MTU setting defined when creating the logical
> network for the cluster.
>
> Is my process somehow wrong or am I missing a step?  I've done this
> with the node being in both "Up" status and "Maintenance", same
> results.

No, it looks like a bug that should be taken care of.
You can see from this log line that these options are not passed:
MainProcess|Thread-41320::INFO::2012-06-20 
10:26:32,457::configNetwork::596::root::(addNetwork) Adding network private1 
with vlan=None, bonding=None, nics=['eth5'], bondingOptions=None, mtu=9000, 
bridged=False, options={'STP': 'no'}

I tried this today but it didn't reproduce for me. As I can't look
at it in more detail ATM, I'll try to see what's going on tomorrow.

In the meantime, you might want to try a newer VDSM RPM from:
http://koji.fedoraproject.org/koji/packageinfo?packageID=12944

>
> As a test I manually updated the IP/Netmask of ifcfg-eth4 and it
> shows
> up in the web interface with the correct information; however, any
> changes via the web interface will remove the IPADDR and NETMASK
> lines.
>

Yes, if you make changes manually then they will be discovered and
visible through the admin/REST clients. If you later edit the network
these changes get lost; otherwise they will stay there.
However, for these basic options it shouldn't be necessary to do it
like this - they should work through the admin interface.



> Attached logs from the host and engine.  Host - node_vdsm.txt and
> Engine - engine.txt.
>
> The only issues I see are two deprecation notices from vdsm.
>
> Thanks
> - Trey
>

Regards,
Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up logical storage networks

2012-06-20 Thread Itamar Heim

On 06/20/2012 06:38 PM, Trey Dockendorf wrote:

I have been using 3.1 as it's the default.  Is the difference just the
API updates?  All I could really find related to 3.0 vs 3.1 pertaining
to networking was this document
http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks


just to clarify on this, 3.1 has additional network features:
- port mirroring
- mtu setting (for jumbo frames)
- optional networks
- bridgeless networks
- new setup networks ui/api
- ability to hot plug a nic

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up logical storage networks

2012-06-20 Thread Trey Dockendorf
Thanks for the response, see responses inline.

On Wed, Jun 20, 2012 at 2:54 AM, Mike Kolesnik wrote:
> Hi,
>
> Please see reply in-line.
>
>> In ovirt-engine-3.1 I'm attempting to set up the base logical networks
>> and have run into 2 major issues.
>
> Are you using cluster version 3.0 or 3.1?
>

I have been using 3.1 as it's the default.  Is the difference just the
API updates?  All I could really find related to 3.0 vs 3.1 pertaining
to networking was this document
http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks

>>
>> The first is that I'm only seeing a Gateway field for the management
>> interface.  When I went to create a network for VMs (on a separate
>> subnet) I did not see a place to specify gateway (see img
>> ovirt_network_missing_gateway.png).  Right now my management port is
>> on a 100Mbps network and the bridged devices live on a 1Gbps network
>> (net140 in cluster).  Is there a reason the gateway would be missing?
>> I've attached ovirt_networks.png that shows all the interfaces on my
>> host.
>
> Unfortunately, oVirt supports setting only the default gateway of the Host
> (This is the field you saw in the management network).
>
> We could theoretically use initscripts' static routing files, but that is 
> left for
> future development.
>

So for now, is it then easier to just run all public interfaces
through the same subnet/gateway?  The main reason to run management
via 100Mbps and everything else 1Gbps was that our campus is out of
IPs so we're attempting to conserve gigabit IPs.

>>
>> The second issue I'm having is creating a storage network.  I created
>> 2 logical networks, private0 and private1.  I left the "VM Network"
>> unchecked on both, as my assumption was that it dictates whether they can be
>> added to VMs.  Since these are only for hosts to connect to the iSCSI
>> I didn't think that was necessary.  When I set the IP information
>> (private_network0.png) and select Ok, the save goes through, but when I
>> edit the interface again the information is gone and the file
>> ifcfg-eth4 does not have IP information.  This is what it looks like
>>
>> DEVICE=eth4
>> ONBOOT=yes
>> BOOTPROTO=none
>> HWADDR=00:1b:21:1d:33:f0
>> NM_CONTROLLED=no
>> MTU=9000
>
> I didn't quite understand what you did here..
> What I think you meant is:
> 1. You edited the network on a NIC, and provided static boot protocol
>   with the parameters (ip, netmask).
> 2. After that when you clicked OK then the configuration was sent to
>   the Host, and in the "Network Interfaces" tab for the Host you could
>   see the IP in the "Address" column. On the host the ifcfg script for
>   this network had these fields set.
> --- Assuming that no restart of Host or VDSM on Host was done ---
> 3. You edited the network again, didn't change anything, and clicked OK.
> 4. This time, the boot protocol info was gone from display & ifcfg file
>   on the Host.
>
> Is this correct?
>
> Also do you by any chance have the log files of ovirt (engine.log)/vdsm
> (vdsm.log) with the flow that you did?

I'll try to clarify the steps I took a little better, sorry if it was
unclear before.

1. Create logical network in Cluster that was NOT a "VM Network" (my
assumption of how to set up a storage network)
2. Edit NIC on host, set boot protocol to static and provide
IP/Netmask, and select the logical network created in #1, check "Save
network configuration"
3. After clicking OK the corresponding ifcfg file on the node was
modified, but the values for IP/Netmask were missing.  Also the values
did not appear in the network interface list, and were not shown when
going to that same interface and selecting "Add/Edit" again

That process did not involve a reboot of the host.

So in the host interface eth5 I set the following via web interface

Network: private1
Boot Protocol: Static
IP: 10.20.1.241
Subnet Mask: 255.0.0.0
Check: Save network configuration

After the save the node's ifcfg-eth5 is touched (based on modified
date in ls -la) but this is all it contains
DEVICE=eth5
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:1b:21:1d:33:f1
NM_CONTROLLED=no
MTU=9000


As far as I can tell the only setting from ovirt-engine that made it
to that file was the MTU setting defined when creating the logical
network for the cluster.

Is my process somehow wrong or am I missing a step?  I've done this
with the node being in both "Up" status and "Maintenance", same
results.

As a test I manually updated the IP/Netmask of ifcfg-eth4 and it shows
up in the web interface with the correct information; however, any
changes via the web interface will remove the IPADDR and NETMASK
lines.

>
>>
>> I also attached image cluster_logical_networks.png that shows all
>> the logical networks on this cluster.  So far my plan is to have a
>> single public interface for VM traffic, then two for storage traffic,
>> each going to a different switch.  This setup is just an initial test
>> but I'd hope to have it in production once I get some of these kinks
>> worked

Re: [Users] Setting up logical storage networks

2012-06-20 Thread Mike Kolesnik
Hi,

Please see reply in-line.

> In ovirt-engine-3.1 I'm attempting to set up the base logical networks
> and have run into 2 major issues.

Are you using cluster version 3.0 or 3.1?

> 
> The first is that I'm only seeing a Gateway field for the management
> interface.  When I went to create a network for VMs (on a separate
> subnet) I did not see a place to specify gateway (see img
> ovirt_network_missing_gateway.png).  Right now my management port is
> on a 100Mbps network and the bridged devices live on a 1Gbps network
> (net140 in cluster).  Is there a reason the gateway would be missing?
> I've attached ovirt_networks.png that shows all the interfaces on my
> host.

Unfortunately, oVirt supports setting only the default gateway of the Host
(This is the field you saw in the management network).

We could theoretically use initscripts' static routing files, but that is
left for future development.

> 
> The second issue I'm having is creating a storage network.  I created
> 2 logical networks, private0 and private1.  I left the "VM Network"
> unchecked on both, as my assumption was that it dictates whether they can be
> added to VMs.  Since these are only for hosts to connect to the iSCSI
> I didn't think that was necessary.  When I set the IP information
> (private_network0.png) and select Ok, the save goes through, but when I
> edit the interface again the information is gone and the file
> ifcfg-eth4 does not have IP information.  This is what it looks like
> 
> DEVICE=eth4
> ONBOOT=yes
> BOOTPROTO=none
> HWADDR=00:1b:21:1d:33:f0
> NM_CONTROLLED=no
> MTU=9000

I didn't quite understand what you did here..
What I think you meant is:
1. You edited the network on a NIC, and provided static boot protocol 
   with the parameters (ip, netmask).
2. After that when you clicked OK then the configuration was sent to 
   the Host, and in the "Network Interfaces" tab for the Host you could 
   see the IP in the "Address" column. On the host the ifcfg script for
   this network had these fields set.
--- Assuming that no restart of Host or VDSM on Host was done ---
3. You edited the network again, didn't change anything, and clicked OK.
4. This time, the boot protocol info was gone from display & ifcfg file 
   on the Host.

Is this correct?

Also do you by any chance have the log files of ovirt (engine.log)/vdsm
(vdsm.log) with the flow that you did?

> 
> I also attached image cluster_logical_networks.png that shows all
> the logical networks on this cluster.  So far my plan is to have a
> single public interface for VM traffic, then two for storage traffic,
> each going to a different switch.  This setup is just an initial test
> but I'd hope to have it in production once I get some of these kinks
> worked out.
> 
> Please let me know what information would be useful to debug this
> further.
> 
> Thanks
> - Trey

Regards,
Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users