Hi

We need jumbo frames for containers since they need access to an
internal network which has jumbo frames enabled and is used solely for
NFS traffic.

I found that it does not seem to be possible to have jumbo frames enabled
inside a CT out of the box. Even if the CT internally configures its
interface to mtu 9000 and the host bridge is configured to mtu 9000,
everything falls back to mtu 1500 after starting the CT. Bridges
apparently use the smallest mtu of all their member ports, and the CT's
host-side interface always defaults to mtu 1500, no matter what the CT
uses internally and no matter what is configured on the bridge.
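
This is roughly how the fallback can be observed on the host (the names
vmbr1, 101 and veth101.0 are only placeholders for illustration):

  # bridge configured with mtu 9000 before the CT is started
  ip link show dev vmbr1 | grep mtu
  vzctl start 101
  # the CT's host-side interface comes up with mtu 1500 ...
  ip link show dev veth101.0 | grep mtu
  # ... and the bridge falls back to the smallest member mtu
  ip link show dev vmbr1 | grep mtu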

I found the solution to be a small change in the
script /usr/sbin/vznetaddbr. The attached patch changes vznetaddbr so
that it sets an mtu of 9000 when it configures a host interface. This
works fine in all situations, even for CTs and host bridges that
actually use mtu 1500, since the host interface never generates any
traffic itself (which could have the wrong mtu) but only forwards it.

Could this change be considered for future versions of vzctl? Or is
there another/better way to get jumbo frames for containers?

Cheers
Roman

--- vznetaddbr      2014-11-25 15:40:56.994982738 +0100
+++ vznetaddbr.new  2014-11-25 16:51:40.140253183 +0100
@@ -32,6 +32,7 @@
     echo "Adding interface $host_ifname to bridge $bridge on CT0 for CT$VEID"
     ip link set dev "$host_ifname" up
     ip addr add 0.0.0.0/0 dev "$host_ifname"
+    ip link set dev "$host_ifname" mtu 9000
     echo 1 >"/proc/sys/net/ipv4/conf/$host_ifname/proxy_arp"
     echo 1 >"/proc/sys/net/ipv4/conf/$host_ifname/forwarding"
     brctl addif "$bridge" "$host_ifname"
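
For completeness, this is roughly how jumbo frames can be verified after
restarting a CT with the patched script (CT ID 101, veth101.0, vmbr1 and
the NFS server address are only placeholders):

  # host side: veth and bridge should now both report mtu 9000
  ip link show dev veth101.0 | grep mtu
  ip link show dev vmbr1 | grep mtu
  # inside the CT: send an unfragmented 9000-byte frame to the NFS server
  # (8972 bytes of payload + 20 bytes IP header + 8 bytes ICMP header)
  vzctl exec 101 ping -M do -s 8972 -c 3 <nfs-server-ip>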
