[ovirt-users] Re: trunked ports

2018-07-27 Thread william . dossett
Just got back to this... the method I used worked well.

I used a 1Gb copper port in my mgmt VLAN to set up oVirt with Gluster.

Got it all up and running, then used 1 x 10Gb port to create my storage network.

I then created two logical networks, one tagged with my static VLAN and the 
other tagged with my DHCP VLAN, and dragged them both onto the second 10Gb port, 
which is connected to a Cisco FEX trunk port - it worked perfectly and I can deploy 
to either VLAN now.  I guess I will keep the 1Gb for management... I could most 
likely move it to the 10Gb VLANs, but I didn't want to risk losing my oVirt 
engine, as this setup seems to be working well for me, at least on the network 
side.
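
For reference, each tagged logical network shows up on the host as a VLAN
sub-interface on that NIC plus a bridge named after the network.  A quick way
to sanity-check it from the host (interface names and VLAN IDs below are only
examples, yours will differ):

    # list the VLAN sub-interfaces created on the trunked 10Gb NIC
    ip -d link show type vlan
    # list the bridges -- expect one named after each logical network
    ip link show type bridge
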
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5X7BD5CMQF5VPLE4JU64XM2J4R64OA54/


[ovirt-users] Re: trunked ports

2018-06-27 Thread Edward Haas
On Mon, Jun 25, 2018 at 4:35 PM, Michael Watters wrote:

> You should be able to use bonded interfaces with an IP on each VLAN
> interface for the oVirt hosts and the engine.  For example, here is the IP
> configuration for one of our VLANs.
>
> 10: bond3: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default qlen 1000
>     link/ether 00:1b:21:5c:80:39 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::21b:21ff:fe5c:8039/64 scope link
>        valid_lft forever preferred_lft forever
> 24: bond3.311@bond3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default qlen 1000
>     link/ether 00:1b:21:5c:80:39 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.111.201/24 brd 192.168.111.255 scope global bond3.311
>        valid_lft forever preferred_lft forever
>     inet6 fe80::21b:21ff:fe5c:8039/64 scope link
>        valid_lft forever preferred_lft forever
>
> bond3.311 configuration is managed in the
> /etc/sysconfig/network-scripts/ifcfg-bond3.311 file.
>
> IMO setting up bonded NICs with VLAN tagging is one area where oVirt falls
> short.  You essentially have to configure your networks twice: first using
> the /etc/sysconfig/network-scripts ifcfg files and then inside of the engine
> itself.
>

I am not familiar with this problem; a VLAN network over a bond is
supported. I even recall several fixes related to it in 4.2.


> VDSM may also need to be configured to use ifcfg persistence in the config
> file.
>
> cat files/vdsm/vdsm.conf
> [vars]
> ssl = true
> net_persistence = ifcfg
>
The ifcfg persistence mode is planned for deprecation in 4.3, and has not been
well tested/supported for some time now.
Do not use it unless you have a very (very) good reason to do so.

>
> [addresses]
> management_port = 54321
>
>
> Your switch ports also need to be configured to support 802.1q networking.
>
>
>
> On 06/23/2018 09:56 AM, william.doss...@gmail.com wrote:
>
> Hi,
>
>
>
> I set up oVirt a few years back…  now that HCI is real, I am revisiting
> it.  I have deployed with Gluster, and am now moving on to networking.
>
>
>
> I come from a VMware shop and normally we trunk all the network ports
> exposing all VLANs to the hosts and place VMs in Portgroups that are tagged
> with VLANs.
>
>
>
> I did manage to do this years back but I am struggling to get this to work
> today.  I had pretty limited hardware back then and I thought I installed
> using VLAN tagging and trunked ports, but I don’t see any option to do
> this using the glusterfs and hosted engine setup.
>
>
>
> Each host has a dual-port 10Gb NIC.  I use one port for storage, which is
> connected to my storage network, and one for ovirtmgmt.  (I need to add
> another of these for redundancy down the road, but no money for that at the
> moment)
>
>
>
> The hosts also have 4 x 1Gb ports. So in lieu of being able to configure
> vlan tagging to trunked ports on hosted engine deploy, I am considering
> cabling up a 1 Gb port on each in my management services VLAN and when it
> is all up and running create another logical network (or several of them as
> I think these equate to what is a vlan tagged port group in VMware) with
> the 10Gb NIC backing for VMs.
>
>
>
> Does that sound reasonable?  Or if anyone can point me to any docs that
> describe how to deploy to a specific VLAN with trunk ports, that would be
> nice as well as I won’t have to actually go to the office and run
> additional cables.
>
>
As mentioned by Michael, you can use VLAN networks and attach several of
them to the same bond/NIC in the Setup Networks window.
You also have the option to use a non-VLAN network and define the VLANs
inside the VM. The Linux bridge that connects the vnics to the bond ignores
tagging information; it just forwards frames in a flat manner, leaving it
to the vnic in the VM to strip the tag.
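
For example, inside the guest something like this would handle the tagging
itself (VLAN ID and address below are only an illustration):

    # the vnic is attached to the flat, untagged network; tag inside the guest
    ip link add link eth0 name eth0.311 type vlan id 311
    ip link set eth0.311 up
    ip addr add 192.168.111.10/24 dev eth0.311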


>
>
> Appreciate any advice
>
>
>
> Bill
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: trunked ports

2018-06-25 Thread Michael Watters
You should be able to use bonded interfaces with an IP on each VLAN
interface for the oVirt hosts and the engine.  For example, here is the
IP configuration for one of our VLANs.

10: bond3: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default qlen 1000
    link/ether 00:1b:21:5c:80:39 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fe5c:8039/64 scope link
       valid_lft forever preferred_lft forever
24: bond3.311@bond3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default qlen 1000
    link/ether 00:1b:21:5c:80:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.201/24 brd 192.168.111.255 scope global bond3.311
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:fe5c:8039/64 scope link
       valid_lft forever preferred_lft forever

bond3.311 configuration is managed in the
/etc/sysconfig/network-scripts/ifcfg-bond3.311 file.
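
For reference, a minimal ifcfg-bond3.311 matching the addressing above would
look roughly like this (illustrative only -- the real file may carry additional
options):

    DEVICE=bond3.311
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.111.201
    PREFIX=24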

IMO setting up bonded NICs with VLAN tagging is one area where oVirt
falls short.  You essentially have to configure your networks twice:
first using the /etc/sysconfig/network-scripts ifcfg files and then inside
of the engine itself.  VDSM may also need to be configured to use ifcfg
persistence in the config file.

cat files/vdsm/vdsm.conf 
[vars]
ssl = true
net_persistence = ifcfg

[addresses]
management_port = 54321

Your switch ports also need to be configured to support 802.1q networking.
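
A quick host-side check that the trunk is actually passing tagged frames
(interface and VLAN ID here are only examples):

    tcpdump -nei bond3 vlan 311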


On 06/23/2018 09:56 AM, william.doss...@gmail.com wrote:
>
> Hi,
>
>  
>
> I set up oVirt a few years back…  now that HCI is real, I am
> revisiting it.  I have deployed with Gluster, and am now moving on to
> networking.
>
>  
>
> I come from a VMware shop and normally we trunk all the network ports
> exposing all VLANs to the hosts and place VMs in Portgroups that are
> tagged with VLANs.
>
>  
>
> I did manage to do this years back but I am struggling to get this to
> work today.  I had pretty limited hardware back then and I thought I
> installed using vlan tagging and trunked ports, but I don’t see any 
> option to do this using the glusterfs and hosted engine setup.
>
>  
>
> Each host has a dual-port 10Gb NIC.  I use one port for storage, which is
> connected to my storage network, and one for ovirtmgmt.  (I need to add
> another of these for redundancy down the road, but no money for that
> at the moment)
>
>  
>
> The hosts also have 4 x 1Gb ports. So in lieu of being able to
> configure vlan tagging to trunked ports on hosted engine deploy, I am
> considering cabling up a 1 Gb port on each in my management services
> VLAN and when it is all up and running create another logical network
> (or several of them as I think these equate to what is a vlan tagged
> port group in VMware) with the 10Gb NIC backing for VMs.
>
>  
>
> Does that sound reasonable?  Or if anyone can point me to any docs
> that describe how to deploy to a specific VLAN with trunk ports, that
> would be nice as well as I won’t have to actually go to the office and
> run additional cables.
>
>  
>
> Appreciate any advice
>
>  
>
> Bill
>
>  
>
>
>

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4NVTFK2I5LOWCS4XLSCP5BEERS7NXXNR/