Hi Nicolas,

Yes, I would do a double test with both bonding and teaming and see if the agent 
simply doesn’t like teaming at all. 
You can obviously also change the agent log level to trace and see if that sheds 
more light on it.
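
For reference, the agent log level is controlled in /etc/cloudstack/agent/log4j-cloud.xml 
(set the relevant priority/threshold values to TRACE and restart the cloudstack-agent 
service). For the bonding side of the test, a minimal bond setup on CentOS 7 would look 
something like the sketch below – interface names, bond mode and the bridge are 
assumptions for your environment:

######## ifcfg-bond0 (sketch) ########
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=cloudbr0

######## ifcfg-eth0 (sketch) ########
DEVICE=eth0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no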

With regards to the naming convention, I know this is a contested issue – we do the 
same as you and change back to the legacy ethX naming convention to simplify 
our build scripts, but overall I would expect it to work with the new-world 
naming convention as well.
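
If you do want to go back to the legacy names, one common way (a sketch only – the 
grub.cfg path assumes a BIOS/GRUB2 install, EFI systems keep it elsewhere) is to 
disable predictable naming via kernel parameters and rename the ifcfg-* files to match:

grubby --update-kernel=ALL --args="net.ifnames=0 biosdevname=0"
grub2-mkconfig -o /boot/grub2/grub.cfg
# rename/adjust the ifcfg-* files to the ethX names, then reboot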

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 06/02/2018, 12:47, "Nicolas Bouige" <n.bou...@dimsi.fr> wrote:

    Dag,
    
    
    Okay, I got it – thanks a lot for the details and your help.
    As I'm stuck with the current nmcli-based configuration, I'm going to try without 
it on another host and see if I have more success.
    
    
    Do you know if anyone has successfully set up KVM networking with the new naming 
convention on CentOS 7 (ensX, enpX, etc.)?
    
    
    I renamed the NICs back to ethX but don't know if that was really 
necessary.
    
    
    Best regards,
    
    
    N.B
    ________________________________
    From: Dag Sonstebo <dag.sonst...@shapeblue.com>
    Sent: Tuesday, 6 February 2018 12:40:19
    To: users@cloudstack.apache.org
    Subject: Re: host KVM unable to find cloudbr0
    
    Hi Nicolas
    
    These two settings are mutually exclusive – you are controlling your 
networking with NetworkManager (NM) through nmcli. My personal preference is to 
leave NM out of the equation and do all configuration manually (or with 
Ansible, Chef or whatever tool you choose) – hence I mark the different 
interfaces with "NM_CONTROLLED=no" to stop NM ever trying to interfere if 
someone starts the NM service up.
    
    So – if you want to use nmcli then remove "NM_CONTROLLED=no" from your 
config files.
    
    As I said – this is a personal preference only though – you will probably 
manage to get it to work with NM, I just find it too intrusive.
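
    For reference, when I take NM out of the picture I do roughly the following – a 
sketch only, adjust for your environment:

    systemctl stop NetworkManager
    systemctl disable NetworkManager
    systemctl enable network
    systemctl start network

    ...and every ifcfg-* file gets NM_CONTROLLED=no, so NM leaves the interfaces alone 
even if someone starts the service again.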
    
    Regards,
    Dag Sonstebo
    Cloud Architect
    ShapeBlue
    
    On 06/02/2018, 11:15, "Nicolas Bouige" <n.bou...@dimsi.fr> wrote:
    
        Hi Dag,
    
    
        You are right, and I did do that – it was not clear enough in my first mail.
        I added the ethX interface to team-MGMT with this command:
    
    
        nmcli con add type ethernet con-name MGMT-port1 ifname eth0 master MGMT
    
    
        Here is the configuration:
    
        ############### MGMT-port1 ############
    
        NAME=MGMT-port1
    
        UUID=xxxx-xxxxx...etc
    
        DEVICE=eth0
    
        TEAM_MASTER=MGMT
    
        DEVICETYPE=TeamPort
    
    
    
        I just tried adding "NM_CONTROLLED=no" but it's worse – now I can't even 
contact the CloudStack management server :/
    
        And "ip a" tells me cloudbr0 is down...
    
    
        So, there is a real difference between:
    
        - creating the networking configuration with nmcli and then adding 
"NM_CONTROLLED=no"
    
        - creating the networking configuration manually and directly with 
"NM_CONTROLLED=no"
    
        Nicolas Bouige
        DIMSI
        cloud.dimsi.fr<http://www.cloud.dimsi.fr>
        4, avenue Laurent Cely
        Tour d’Asnière – 92600 Asnière sur Seine
        T/ +33 (0)6 28 98 53 40
    
    
        ________________________________
        From: Dag Sonstebo <dag.sonst...@shapeblue.com>
        Sent: Tuesday, 6 February 2018 11:56:46
        To: users@cloudstack.apache.org
        Subject: Re: host KVM unable to find cloudbr0
    
        Hi Nicolas,
    
        First of all – you learn something new every day – I didn’t realise 
there was a difference between a team and a bond, but there is:
        https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-comparison_of_network_teaming_to_bonding
        So with regards to this I can’t comment – I suggest you test with both, but 
looking through the comparison table above I would expect teaming to work just as well.
    
        I may be missing something – but to me it looks like your main problem 
is this:
    
        eth0/eth1 ---X---> teamed NIC (mgmt) -->  cloudbr0
    
        i.e. you have eth0 and eth1 – but they are not linked to the team in 
any way – I would expect to see a master/slave type configuration in your 
ifcfg-* files. The odd thing here is obviously that you can ping the host and 
speak to it in the first place – which would point to cloudbr0 somehow being 
online – hence my suspicion may be wrong.
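
        For illustration, this is roughly the kind of linkage I would expect to see in 
the ifcfg-* files – a sketch only, the exact keys differ between teaming and bonding:

        # ifcfg-eth0 as a team port
        DEVICE=eth0
        TEAM_MASTER=MGMT
        DEVICETYPE=TeamPort
        ONBOOT=yes

        # or, for a bond
        DEVICE=eth0
        MASTER=bond0
        SLAVE=yes
        ONBOOT=yes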
    
        With regards to nmcli – personally this has caused me too much trouble 
through the years – hence I never use it and just mark my interfaces as 
NM_CONTROLLED=no.
    
    
        Regards,
        Dag Sonstebo
        Cloud Architect
        ShapeBlue
    
        On 06/02/2018, 10:24, "Nicolas Bouige" <n.bou...@dimsi.fr> wrote:
    
            Hello Dag,
    
    
            Thanks for your help,
    
    
            Here is the information:
    
    
            ###### IP A RESULT #######
    
            [root@ASPRKVM06 network-scripts]# ip a
            1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
qlen 1
             link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
             inet 127.0.0.1/8 scope host lo
            valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
              valid_lft forever preferred_lft forever
            2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
pfifo_fast master MGMT state UP qlen 1000
            link/ether 00:1b:78:2b:3a:de brd ff:ff:ff:ff:ff:ff
            3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
pfifo_fast master TRUNK state UP qlen 1000
            link/ether 00:1b:78:2b:3a:df brd ff:ff:ff:ff:ff:ff
            4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master 
MGMT portid 002481adfe90 state UP qlen 1000
            link/ether 00:1b:78:2b:3a:de brd ff:ff:ff:ff:ff:ff
            5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master 
TRUNK portid 002481adfe94 state UP qlen 1000
            link/ether 00:1b:78:2b:3a:df brd ff:ff:ff:ff:ff:ff
            12: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc 
noqueue state DOWN qlen 1000
            link/ether 52:54:00:41:c3:2f brd ff:ff:ff:ff:ff:ff
            inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
            valid_lft forever preferred_lft forever
            13: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast 
master virbr0 state DOWN qlen 1000
            link/ether 52:54:00:41:c3:2f brd ff:ff:ff:ff:ff:ff
            20: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
noqueue state UNKNOWN qlen 1000
            link/ether 56:1e:58:2b:a4:95 brd ff:ff:ff:ff:ff:ff
              inet 169.254.0.1/16 scope global cloud0
                 valid_lft forever preferred_lft forever
            inet6 fe80::541e:58ff:fe2b:a495/64 scope link
            valid_lft forever preferred_lft forever
            39: TRUNK: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
master cloudbr1 state UP qlen 1000
               link/ether 00:1b:78:2b:3a:df brd ff:ff:ff:ff:ff:ff
               inet6 fe80::21b:78ff:fe2b:3adf/64 scope link
                valid_lft forever preferred_lft forever
            40: TRUNK103@TRUNK: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 
qdisc noqueue master cloudbr1.103 state UP qlen 1000
              link/ether 00:1b:78:2b:3a:df brd ff:ff:ff:ff:ff:ff
            41: cloudbr1.103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
noqueue state UP qlen 1000
               link/ether 00:1b:78:2b:3a:df brd ff:ff:ff:ff:ff:ff
               inet 172.16.3.216/24 brd 172.16.3.255 scope global cloudbr1.103
                 valid_lft forever preferred_lft forever
              inet6 fe80::21b:78ff:fe2b:3adf/64 scope link
                valid_lft forever preferred_lft forever
            42: cloudbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
noqueue state UP qlen 1000
               link/ether 00:1b:78:2b:3a:df brd ff:ff:ff:ff:ff:ff
               inet6 fe80::21b:78ff:fe2b:3adf/64 scope link
                 valid_lft forever preferred_lft forever
            45: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
noqueue state UP qlen 1000
               link/ether 00:1b:78:2b:3a:de brd ff:ff:ff:ff:ff:ff
              inet 172.16.22.216/24 brd 172.16.22.255 scope global cloudbr0
                valid_lft forever preferred_lft forever
            inet6 fe80::21b:78ff:fe2b:3ade/64 scope link
                valid_lft forever preferred_lft forever
            46: MGMT: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
master cloudbr0 state UP qlen 1000
              link/ether 00:1b:78:2b:3a:de brd ff:ff:ff:ff:ff:ff
               inet6 fe80::21b:78ff:fe2b:3ade/64 scope link
                  valid_lft forever preferred_lft forever
    
            ########## IFCFG-ETH0 ############
    
            TYPE=Ethernet
            PROXY_METHOD=none
            BROWSER_ONLY=no
            BOOTPROTO=static
            DEFROUTE=yes
            IPV4_FAILURE_FATAL=no
            NAME=eth0
            UUID=e5963b4d-e144-4ed0-a296-b16bd4cc2639
            DEVICE=eth0
            ONBOOT=yes
    
            ########### IFCFG-ETH2 ############
    
            TYPE=Ethernet
            PROXY_METHOD=none
            BROWSER_ONLY=no
            BOOTPROTO=static
            DEFROUTE=yes
            IPV4_FAILURE_FATAL=no
            NAME=eth2
            UUID=b980b62e-b344-4b47-8d25-7add6a28491a
            DEVICE=eth2
            ONBOOT=yes
    
            ########### IFCFG-team-MGMT ############
    
            DEVICE=MGMT
            PROXY_METHOD=none
            BROWSER_ONLY=no
            BOOTPROTO=static
            DEFROUTE=yes
            IPV4_FAILURE_FATAL=no
            NAME=team-MGMT
            UUID=4a09cf80-ab72-47e5-adb1-422c6fc86f9f
            ONBOOT=yes
            DEVICETYPE=Team
            BRIDGE=cloudbr0
    
            ######## IFCFG-cloudbr0 ###########
    
            DEVICE=cloudbr0
            STP=no
            BRIDGING_OPTS=priority=32768
            TYPE=Bridge
            PROXY_METHOD=none
            BROWSER_ONLY=no
            BOOTPROTO=static
            DEFROUTE=yes
            IPV4_FAILURE_FATAL=no
            NAME=cloudbr0
            UUID=90063d32-2e8c-4eac-8917-4b5c3d6d2fd7
            ONBOOT=yes
            IPADDR=172.16.22.216
            NETMASK=255.255.255.0
            GATEWAY=172.16.22.254
            DNS1=8.8.8.8
    
    
            ######## BRCTL SHOW #######
            bridge name     bridge id               STP enabled     interfaces
            cloud0          8000.000000000000       no
            cloudbr0        8000.001b782b3ade       no              MGMT
            cloudbr1        8000.001b782b3adf       no              TRUNK
            cloudbr1.103    8000.001b782b3adf       no              TRUNK103
            virbr0          8000.52540041c32f       yes             virbr0-nic
    
    
    
    
            For information, I used nmcli commands to configure the networking.
    
    
            nmcli connection add type team ifname MGMT
    
            nmcli con add type ethernet con-name MGMT-port1 ifname eth0 master 
MGMT
    
            nmcli con add type ethernet con-name MGMT-port2 ifname eth2 master 
MGMT
    
            nmcli conn add type bridge con-name cloudbr0 ifname cloudbr0
    
            All devices are up and connected.
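
            (For completeness: the team is attached to cloudbr0 only through 
BRIDGE=cloudbr0 in ifcfg-team-MGMT; if I understand nmcli correctly, something like 
the command below should achieve the same, but I have not tested it:)

            nmcli con mod team-MGMT connection.master cloudbr0 connection.slave-type bridge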
    
            Best regards,
            N.B
    
            ________________________________
            From: Dag Sonstebo <dag.sonst...@shapeblue.com>
            Sent: Tuesday, 6 February 2018 10:26
            To: users@cloudstack.apache.org
            Subject: Re: host KVM unable to find cloudbr0
    
            Hi Nicolas,
    
            OK I’m with you. Sounds like you have an underlying network issue 
on your KVM host.
    
            Can you post up an ifconfig / ip a from your KVM host?
            Can you also post up the contents of ifcfg-eth0 and ifcfg-eth1, as well 
as ifcfg-<team or bond0> and ifcfg-cloudbr0?
    
    
            Regards,
            Dag Sonstebo
            Cloud Architect
            ShapeBlue
    
            On 05/02/2018, 20:06, "Nicolas Bouige" <n.bou...@dimsi.fr> wrote:
    
                Hello Dag and Andrija,
    
    
                Thanks for your answer,
    
    
                @Andrija, we are using an advanced zone and yes, we have specified 
the traffic labels and the agent on the host has retrieved the information.
    
                @Dag, that's the documentation I followed – I just used a teamed NIC 
instead of a bond.
    
    
    
                Best regards,
    
                N.B
    
    
                ________________________________
                From: Dag Sonstebo <dag.sonst...@shapeblue.com>
                Sent: Monday, 5 February 2018 20:01:17
                To: users@cloudstack.apache.org
                Subject: Re: host KVM unable to find cloudbr0
    
                Hi Nicolas,
    
                Take a look at the following blog article – it’s a couple of 
years old but should still be valid:
    
                http://www.shapeblue.com/networking-kvm-for-cloudstack/
    
    
                Regards,
                Dag Sonstebo
                Cloud Architect
                ShapeBlue
    
                On 05/02/2018, 18:51, "Andrija Panic" <andrija.pa...@gmail.com> 
wrote:
    
                    Hi Nicolas,
    
                    what does your zone networking look like?
                    For every network you set up in the zone (are you using advanced 
zones with the VLAN isolation method?) you need to specify a "KVM traffic label" - 
this actually tells ACS which parent interface to look for...
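
                    For example (a sketch – the traffic type UUID is just a 
placeholder), the label can be set in the physical network section of the zone 
wizard, or via the API, e.g. with CloudMonkey:

                    update traffictype id=<traffic-type-uuid> kvmnetworklabel=cloudbr1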
    
                    Cheers
    
    
    
    
    
    
    
    
    
    
    
    
    
    
    
    
    
    
dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 

On 5 February 2018 at 18:12, Nicolas Bouige <n.bou...@dimsi.fr> wrote:
    
                    > To complete my previous mail:
                    >
                    >
                    > We are running KVM on CentOS 7.
                    >
                    >
                    > Here is the exact error message from the CloudStack GUI:
                    >
                    > Incorrect Network setup on agent. Reinitialize agent after 
network names are setup. Details: Can not find network: cloudbr0
                    >
                    > ________________________________
                    > From: Nicolas Bouige <n.bou...@dimsi.fr>
                    > Sent: Monday, 5 February 2018 18:02:19
                    > To: users@cloudstack.apache.org
                    > Subject: host KVM unable to find cloudbr0
                    >
                    > Hello all,
                    >
                    >
                    > Like a lot of people, we are trying to switch our hypervisor, 
and with it our CloudStack platform, from XenServer to KVM.
                    >
                    >
                    > As we don't have a lot of experience with the CloudStack/KVM 
combination, we are facing some issues, and one of them is about the network.
                    >
                    > In the official documentation we have to create two 
bridges called
                    > cloudbr0 and cloudbr1.
                    >
                    > That's what we did.
                    >
                    >
                    > eth0/eth1 --> teamed NIC (mgmt) -->  cloudbr0
                    >
                    > eth2/eth3 --> teamed NIC (trunk) --> cloudbr1
                    >
                    >
                    > We added a VLAN on the teamed NIC (trunk) with the ID of the 
storage network.
                    >
                    > --> teamed NIC (trunk) --> trunk103 (vlan 103) --> 
cloudbr1.103
                    >
                    >
                    > The configuration seems good – we can ping each host, the 
storage and the web.
                    >
                    > cloudbr0 is configured with an IP address and 
cloudbr1.103 as well.
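                    >
                    > (For reference, this chain can be built with nmcli along 
roughly these lines – a sketch only, not the exact commands we ran.)
                    >
                    > nmcli con add type bridge con-name cloudbr1.103 ifname cloudbr1.103
                    > nmcli con add type vlan con-name TRUNK103 ifname TRUNK103 dev TRUNK id 103 master cloudbr1.103 slave-type bridge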
                    >
                    >
                    > During zone configuration we added cloudbr0 for management 
traffic and cloudbr1 for storage/guest/public traffic.
                    >
                    >
                    > We are able to add the host, and the agent gets all the 
information needed:
                    >
                    > guest.network.device=cloudbr1
                    >
                    > workers=5
                    > private.network.device=cloudbr0
                    > port=8250
                    > resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
                    > pod=2
                    > zone=2
                    > hypervisor.type=kvm
                    > guid=6ce7dedb-0b21-31ed-b7f8-4141613c0946
                    > public.network.device=cloudbr1
                    > cluster=3
                    > local.storage.uuid=dbd798f9-b7ca-4022-943d-9dd2cd8b2bfa
                    > domr.scripts.dir=scripts/network/domr/kvm
                    > LibvirtComputingResource.id=0
                    > host=XXX.XXX.XXX.XXX
                    >
                    >
                    > The network cloud0 has been created automatically.
                    >
                    > For information, we have followed this ticket as well but 
nothing changed.
                    >
                    > https://issues.apache.org/jira/browse/CLOUDSTACK-8838
                    >
                    >
                    > I guess I misunderstood something during the network 
configuration, but I'm running out of ideas.
                    >
                    >
                    > Any help will be appreciated ;)
                    >
                    >
                    > Have a great day,
                    >
                    > Best regards,
                    >
                    >
                    > N.B
                    >
                    >
                    >
    
    
                    --
    
                    Andrija Panić
    
    
    
    
    
    
    
    
    
