+1 VXLAN works just fine in my testing; the only gotcha I ever hit, as Si
mentioned, is needing to set an IP address on the interface that carries the
VXLAN traffic.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

----- Original Message -----
> From: "Simon Weller" <swel...@ena.com.INVALID>
> To: "dev" <dev@cloudstack.apache.org>
> Sent: Tuesday, 23 October, 2018 12:51:17
> Subject: Re: VXLAN and KVM experiences

> We've also been using VXLAN on KVM for all of our isolated VPC guest networks
> for quite a long time now. As Andrija pointed out, make sure you increase the
> max_igmp_memberships param, and also put an IP address on each host's VXLAN
> parent interface, in the same subnet on all hosts that will share networking,
> or multicast won't work.
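> 
> A minimal sketch of those two steps on each host (the sysctl value,
> interface name and subnet below are examples, not anyone's actual config):
> 
> sysctl -w net.ipv4.igmp_max_memberships=200   # kernel default is only 20
> ip addr add 10.10.50.11/24 dev bond0.950      # same subnet on every host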
> 
> 
> - Si
> 
> 
> ________________________________
> From: Wido den Hollander <w...@widodh.nl>
> Sent: Tuesday, October 23, 2018 5:21 AM
> To: dev@cloudstack.apache.org
> Subject: Re: VXLAN and KVM experiences
> 
> 
> 
> On 10/23/18 11:21 AM, Andrija Panic wrote:
>> Hi Wido,
>>
>> I have "pioneered" this one in production for the last 3 years (and suffered
>> the nasty pain of silently dropped packets on kernel 3.X back in the days,
>> because I was unaware of the max_igmp_memberships kernel parameter, so I
>> updated the manual a long time ago).
>>
>> I never had any issues (besides the above nasty one...) and it works very well.
> 
> That's what I want to hear!
> 
>> To avoid the issue I described above, you should increase
>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships) - otherwise,
>> with more than 20 vxlan interfaces, some of them will stay in the DOWN state
>> and traffic will be dropped hard (with a proper message in agent.log) on
>> kernels > 4.0 (or dropped silently and randomly on kernel 3.X...) - also pay
>> attention to the MTU size - anyway, everything is in the manual (I updated
>> everything I thought was missing), so please check it.
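>>
>> For example, to make the setting survive reboots (200 is illustrative -
>> size it to the number of VXLAN networks you expect per host):
>>
>> echo "net.ipv4.igmp_max_memberships = 200" > /etc/sysctl.d/99-vxlan.conf
>> sysctl -p /etc/sysctl.d/99-vxlan.conf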
>>
> 
> Yes, the underlying networks will all have a 9000-byte MTU.
> 
>> Our example setup:
>>
>> We have e.g. bond0.950 as the main VLAN which will carry all the vxlan
>> "tunnels" - so this is defined as the KVM traffic label. In our case it
>> didn't make sense to use a bridge on top of this bond0.950 (as the traffic
>> label) - you can test it on your own - since that bridge is only used to
>> extract the child bond0.950 interface name; then, based on the vxlan ID, ACS
>> will provision vxlan...@bond0.xxx and join this new vxlan interface to a NEW
>> bridge it creates (and then of course the vNIC goes to this new bridge), so
>> the original bridge (to which bond0.xxx belonged) is not used for anything.
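>>
>> By hand, what ACS provisions looks roughly like this (a sketch for vxlan
>> ID 867 with the names from the sample below; the agent's actual script
>> may differ in details):
>>
>> ip link add vxlan867 type vxlan id 867 group 239.0.3.99 dev bond0.950
>> ip link add brvx-867 type bridge
>> ip link set vxlan867 master brvx-867
>> ip link set vxlan867 up
>> ip link set brvx-867 up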
>>
> 
> Clear, I indeed thought something like that would happen.
> 
>> Here is a sample of the above for vxlan 867, used for tenant isolation:
>>
>> root@hostname:~# brctl show brvx-867
>>
>> bridge name     bridge id               STP enabled     interfaces
>> brvx-867        8000.2215cfce99ce       no              vnet6
>>                                                         vxlan867
>>
>> root@hostname:~# ip -d link show vxlan867
>>
>> 297: vxlan867: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8142 qdisc noqueue
>> master brvx-867 state UNKNOWN mode DEFAULT group default qlen 1000
>>     link/ether 22:15:cf:ce:99:ce brd ff:ff:ff:ff:ff:ff promiscuity 1
>>     vxlan id 867 group 239.0.3.99 dev bond0.950 port 0 0 ttl 10 ageing 300
>>
>> root@ix1-c7-2:~# ifconfig bond0.950 | grep MTU
>>           UP BROADCAST RUNNING MULTICAST  MTU:8192  Metric:1
>>
>> So note how the vxlan interface has an MTU 50 bytes smaller than its
>> bond0.950 parent interface (which could affect traffic inside the VM) - so
>> jumbo frames are needed on the parent interface anyway (bond0.950 in the
>> example above, with a minimum MTU of 1550).
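>>
>> e.g. bumping the parent MTU (8192 as in the sample above) so the derived
>> vxlan interface still gives guests at least 1500:
>>
>> ip link set dev bond0.950 mtu 8192   # vxlan867 then comes up with MTU 8142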
>>
> 
> Yes, thanks! We will be using 1500 MTU inside the VMs, so all the
> networks underneath will be ~9k.
> 
>> Ping me if you need more details, happy to help.
>>
> 
> Awesome! We'll be doing a PoC rather soon. I'll come back with our
> experiences later.
> 
> Wido
> 
>> Cheers
>> Andrija
>>
>> On Tue, 23 Oct 2018 at 08:23, Wido den Hollander <w...@widodh.nl> wrote:
>>
>>> Hi,
>>>
>>> I just wanted to know if there are people out there using KVM with
>>> Advanced Networking and using VXLAN for different networks.
>>>
>>> Our main goal would be to spawn a VM and, based on the network its NIC is
>>> in, attach it to a different VXLAN bridge on the KVM host.
>>>
>>> It seems to me that this should work, but I just wanted to check and see
>>> if people have experience with it.
>>>
>>> Wido
>>>
>>
