From: <[email protected]<mailto:[email protected]>> on behalf of
Axton <[email protected]<mailto:[email protected]>>
Reply-To: OpenVZ users <[email protected]<mailto:[email protected]>>
Date: Sunday 28 February 2016 20:27
To: OpenVZ users <[email protected]<mailto:[email protected]>>
Subject: [Users] Virtuozzo7 beta - jumbo frames on veth
I need to configure some veth interfaces with jumbo frames. I can set up
everything properly on the host, where the interfaces all have MTU 9000:
2: enp0s20f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master team0
state UP mode DEFAULT qlen 1000
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
3: enp0s20f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master team0
state UP mode DEFAULT qlen 1000
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
4: enp0s20f2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master team0
state UP mode DEFAULT qlen 1000
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
5: enp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master team0
state UP mode DEFAULT qlen 1000
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
7: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
mode DEFAULT
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
46: team0.97@team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue
master vmbr97 state UP mode DEFAULT
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
mode DEFAULT
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
When I start a container with a veth interface on vmbr97, the bridge interface
falls back to mtu 1500:
[root@cluster-02 network-scripts]# prlctl start ha21t02dh.tech.abc.org
Starting the CT...
The CT has been successfully started.
[root@cluster-02 network-scripts]# ip link show vmbr97
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
mode DEFAULT
link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
brctl shows that this container is the only one on this network:
[root@cluster-02 network-scripts]# brctl show vmbr97
bridge name bridge id STP enabled interfaces
vmbr97 8000.0cc47a6b9555 no team0.97
veth42f2f0a5
The container is running CentOS 7. I have set up MTU 9000 inside the container:
CT-6598defa /# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HOSTNAME=ha21t02dh-c.tech.abc.org
NM_CONTROLLED=no
TYPE=Ethernet
MTU=9000
IPADDR=10.1.28.9
PREFIX=22
DEFROUTE=no
IPV6INIT=no
IPV6_AUTOCONF=no
DOMAIN="..."
DNS1=10.0.20.250
DNS2=10.0.20.252
I can manually fix the issue by setting the MTU to 9000 on the veth interface
after the container is started, as follows:
[root@cluster-01 ~]# ip l show vmbr97
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
mode DEFAULT
link/ether 0c:c4:7a:6b:92:56 brd ff:ff:ff:ff:ff:ff
[root@cluster-01 ~]# ip link set dev veth42346d2f mtu 9000
[root@cluster-01 ~]# ip l show vmbr97
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
mode DEFAULT
link/ether 0c:c4:7a:6b:92:56 brd ff:ff:ff:ff:ff:ff
The problem with having to do this each time a container is stopped/started
should be obvious.
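As a stopgap, the manual fix above can be wrapped in a small start script so the MTU is reapplied after every restart. The sketch below assumes nothing about Virtuozzo hooks; the function name is made up, and the bridge's attached ports are read from the standard Linux bridge sysfs directory:

```shell
# Illustrative helper (hypothetical name): start a CT, then reapply the
# jumbo MTU on every veth currently attached to the bridge, since the
# bridge MTU falls back to the lowest member MTU.
start_ct_jumbo() {
    ct="$1"; bridge="$2"; mtu="${3:-9000}"
    # SYSFS_NET is overridable for testing; defaults to the real sysfs path.
    sysfs="${SYSFS_NET:-/sys/class/net}"
    prlctl start "$ct"
    # The bridge's attached ports are listed under .../<bridge>/brif.
    for dev in "$sysfs/$bridge/brif"/*; do
        dev=$(basename "$dev")
        case "$dev" in
            veth*) ip link set dev "$dev" mtu "$mtu" ;;
        esac
    done
}
```

For example, `start_ct_jumbo ha21t02dh.tech.abc.org vmbr97 9000` would start the CT and then raise the MTU on its veth device, leaving the team0.97 uplink untouched.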
My question is this: how do I configure the host and guest so that the MTU
settings for jumbo frames are set up properly at container start time for the
veth interface on the host?
We don't have such a setting (MTU for veth) right now; libvzctl would need to
be patched for it.
Setup of the veth parameters is performed in
/usr/libexec/libvzctl/scripts/vz-netns_dev_add. One can modify that script (or
vz-functions, adding a call to a new function from vz-netns_dev_add) to
perform the manual step above. The disadvantage of patching in place is that
the changes will be lost after a libvzctl update, so a patch to libvzctl is
definitely the preferable way to fix this permanently.
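A minimal sketch of such a function (hypothetical name; the variables actually available inside vz-netns_dev_add may differ) could copy the bridge's current MTU onto the host-side veth device when it is attached:

```shell
# Hypothetical helper: propagate a bridge's MTU to a veth device so the
# bridge does not fall back to MTU 1500 when the veth is attached.
# Usage: vz_set_veth_mtu <veth-device> <bridge>
vz_set_veth_mtu() {
    veth="$1"
    bridge="$2"
    # SYSFS_NET is overridable for testing; defaults to the real sysfs path.
    sysfs="${SYSFS_NET:-/sys/class/net}"
    # Read the bridge's current MTU from sysfs.
    mtu=$(cat "$sysfs/$bridge/mtu")
    # Apply the same MTU to the host-side veth end.
    ip link set dev "$veth" mtu "$mtu"
}
```

Called as `vz_set_veth_mtu veth42f2f0a5 vmbr97`, this reproduces the manual `ip link set` step, but keyed off whatever MTU the bridge already carries rather than a hard-coded 9000.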
Thank you,
Dmitry.
Axton Grams
_______________________________________________
Users mailing list
[email protected]
https://lists.openvz.org/mailman/listinfo/users