Marcus, I understand that what you just said is possible and a better
solution - but you would have to make that kind of change on EVERY vxlan
interface/bridge, for every one of your guest networks - so not a problem
for existing bridges, but a problem for new bridges that are going to be
deployed whenever a new guest network is created, right?
 You need to change the MTU on the vnet, on brvx-xxx, on vxlan-xxx and
finally on ethX/cloudX..

Do you see my problem? For every new guest network on a single KVM host, you
need to adjust the MTU...
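
Just to illustrate (a rough sketch only - the names vxlan-1001, brvx-1001 and
vnet0 are just examples), for every new guest network that lands on a host,
someone or something would have to run roughly:

  ip link set dev vxlan-1001 mtu 1500
  ip link set dev brvx-1001 mtu 1500
  ip link set dev vnet0 mtu 1500    # and every other guest tap on that bridge

on top of the one-time bump of ethX/cloudX to 1550 or more.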

Or am I wrong?


On 26 October 2014 16:52, Marcus <shadow...@gmail.com> wrote:

> Just want to clarify: with our 9216 MTU we easily run 9000 on our VMs. You
> may want to set the MTU to 1550 or greater on your KVM host interface used
> for vxlan, and adjust the network accordingly. Most vxlan documentation for
> network equipment should mention the 50-byte overhead and how to adjust for
> it (if necessary).
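>
> For example (a rough sketch only - assuming eth0 is the host interface that
> carries the vxlan traffic, and that the physical switches already accept
> the larger frames):
>
>   ip link set dev eth0 mtu 1550   # 1500 inner MTU + 50 bytes vxlan overhead
>
> With the uplink at 1550 or more, the vxlan/bridge/vnet interfaces can stay
> at 1500 and the guests keep their default MTU.
>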
> On Oct 26, 2014 9:45 AM, "Marcus" <shadow...@gmail.com> wrote:
>
> > You should instead increase the MTU on the host interface to accommodate.
> > For example, we used jumbo frames with an MTU of 9216 for the host
> > interface. I think it wasn't mentioned because it's largely assumed in CS
> > documentation that the admin understands the network design they are
> > using, i.e. you'd run into the same issue even without cloudstack handling
> > orchestration if you were manually adding VMs to a vxlan-uplinked bridge.
> >
> > There is not much network documentation aside from the cloudstack
> > tunables. I agree though that there would be no harm in putting a note
> > somewhere reminding the admin of vxlan encapsulation requirements. You
> > can submit a patch for the docs to reviewboard.
> > On Oct 26, 2014 7:39 AM, "Andrija Panic" <andrija.pa...@gmail.com>
> > wrote:
> >
> >> Hi folks,
> >>
> >> I'm trying to figure out: why is there no documentation on the NEED to
> >> configure the MTU inside the VM when using vxlan as the guest isolation
> >> method?
> >>
> >>
> >> Right now, by default/design, the traffic/MTU path looks like this:
> >>
> >> eth0 inside the VM is 1500 bytes by default --> vnetY mtu 1450 --> virbrX
> >> mtu 1450 --> vxlan mtu 1450 --> ethX mtu 1500 --> physical network (in
> >> this case I use ethX as the traffic label instead of a bridge, so the
> >> vxlan interface is created on top of the ethX interface).
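> >>
> >> (The 50-byte vxlan overhead, as far as I can tell, is inner Ethernet
> >> header 14 + outer IPv4 20 + UDP 8 + vxlan header 8 = 50 bytes, so a
> >> 1500-byte packet from the VM needs 1550 on the wire, and a 1500-byte
> >> physical MTU leaves at most 1450 for the guest.)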
> >>
> >> Inside the VM I can get an IP address via DHCP and use ping, because
> >> those generate packets smaller than 1500 bytes.
> >> From within the VM, SSH/SCP login works, but SCP data transfer fails,
> >> yum update fails, etc.
> >>
> >> No other traffic from the VM to the outside works - no other
> >> connectivity at all - until I configure the MTU inside the VM to be less
> >> than 1500...
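> >>
> >> (As a rough sketch of that workaround - assuming eth0 is the interface
> >> inside the guest - it means running something like
> >>
> >>   ip link set dev eth0 mtu 1450
> >>
> >> inside every single VM, or baking it into every template.)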
> >>
> >> What is the recommended way to configure vxlan? The documentation only
> >> asks for supported kernel and iproute2 versions, says to use ethX or
> >> bridgeX as the traffic label and give it an IP - and that's it.
> >>
> >> There must be some clear decision on how to make this work:
> >>
> >> 1) either don't bother the client with configuring the MTU inside the
> >> VM/template, and make the MTU on the vxlan and vnet interfaces 1500
> >> bytes - but ask the administrator to increase the MTU to 1600 on the
> >> physical interface ethX or bridgeX
> >>
> >> 2) as is currently the case, use a 1450 MTU on vnet/vxlan, and leave the
> >> trouble of configuring the MTU for each VM/template to the user.
> >>
> >>
> >> Am I missing something here, perhaps?
> >> Is there any more complete documentation on this?
> >>
> >> Best
> >> --
> >>
> >> Andrija Panić
> >>
> >
>



-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------
