Hello. 

I run a 4-node Proxmox cluster. I use GRE tunnels with OVS, roughly as
described here: http://docs.openvswitch.org/en/latest/howto/tunneling/

GRE encapsulates the packet (network layer) with a 4-byte GRE header plus
another 20-byte IP header, so you have 24 bytes of overhead compared to a
"classic" IP packet. In my case, it is my "classic" IP packets that get
encapsulated by GRE.

Normally, the MTU defaults to 1500 bytes, but since I use GRE, I have 2
possibilities:
1) increase the MTU to 1524 or more
2) decrease the MTU to 1476 or less

In the first case, I have to configure my physical network to use jumbo
frames, but then I no longer have to think about the MTU of my VMs (MTU 1500
by default).
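For illustration, the first case could look roughly like this on a host (a
minimal sketch; the interface name eno1 and bridge name vmbr1 are only
placeholders, not taken from my setup):

```
# /etc/network/interfaces (excerpt, hypothetical names)
auto eno1
iface eno1 inet manual
        mtu 1524

auto vmbr1
iface vmbr1 inet manual
        mtu 1524
```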

In the second case, I always have to set the MTU to 1476 or less inside my VMs.

I chose the first case and it works perfectly with VMs (the ping payload is
limited to 1472 bytes, which is 1500 - IPv4 (20 bytes) - ICMP (8 bytes) =
1472. I don't have a VLAN on this interface).
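That limit can be double-checked by computing the payload and pinging with
the "don't fragment" bit set (the peer address 10.0.0.2 below is a
placeholder):

```shell
# Largest ICMP payload that fits in a 1500-byte MTU with no GRE on this hop:
# 1500 - 20 (IPv4 header) - 8 (ICMP header) = 1472
payload=$((1500 - 20 - 8))
echo "$payload"

# Check the path with DF set (10.0.0.2 is a placeholder peer):
# ping -M do -c 3 -s "$payload" 10.0.0.2
```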

But when I use an LXC container, this is the output of ip link:
---------------------------------------------------------------------------------------
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
78: eth0@if79: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f2:55:a3:98:c2:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0
---------------------------------------------------------------------------------------
(I don't fully understand the MTU of 1462 on gretap0; it appears to be
1500 - IPv4 (20 bytes) - GRE (4 bytes) - Ethernet (14 bytes) = 1462, since
gretap encapsulates whole Ethernet frames.)

The LXC container gets the GRE interfaces (I think because it runs on the
same kernel as the host), and the default MTU of GRE is set to 1476 while the
default MTU of eth0 is set to 1500. As a consequence, 2 LXC containers on 2
hosts linked by GRE can't communicate properly:
_ I can ping the containers from each other, but TCP traffic (ssh, for
example) doesn't work (the ping payload is limited to 1444 bytes, which is
1500 - IPv4 (20 bytes) - GRE (4 bytes) - 802.1q VLAN (4 bytes) - IPv4 (20
bytes) - ICMP (8 bytes) = 1444 bytes)
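A minimal sketch of the same arithmetic for the container path (assuming GRE
plus an 802.1q tag on the host side, as measured above):

```shell
# 1500 - outer IPv4 (20) - GRE (4) - 802.1q (4) - inner IPv4 (20) - ICMP (8)
payload=$((1500 - 20 - 4 - 4 - 20 - 8))
echo "$payload"   # prints 1444
```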

I have to manually decrease the MTU of the container to 1476 or less to use
applications based on TCP.

ip link set eth0 mtu 1476 # ephemeral method 

or 

add mtu=1476 to the network interface line (netX) in the LXC container's
configuration file /etc/pve/lxc/ID.conf (persistent method)
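For example, a netX line in /etc/pve/lxc/ID.conf could look like this (a
sketch only; every value except mtu=1476 is a placeholder):

```
net0: name=eth0,bridge=vmbr1,hwaddr=F2:55:A3:98:C2:31,type=veth,mtu=1476
```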

It would be great if LXC containers could have the same behaviour as the VMs.

Best regards. 

Jean-Mathieu 

_______________________________________________
pve-user mailing list
[email protected]
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
