On Tue, Jul 24, 2018 at 8:26 AM, Miguel Angel Ajo Pelayo <majop...@redhat.com> wrote:
> On 24 July 2018 at 17:20:59, Han Zhou (zhou...@gmail.com) wrote:
>
> On Thu, Jul 12, 2018 at 7:03 AM, Miguel Angel Ajo Pelayo
> <majop...@redhat.com> wrote:
>
>> I believe we need to emit ICMP "need to frag" messages to have proper
>> support for different MTUs (on router sides). I wonder how it works the
>> other way around (when the external net is 1500 and the internal net is
>> 1500 minus the Geneve overhead).
>
> I think this is expected, since the GW chassis forwards packets without
> going through the IP stack.
> One solution might be using a network namespace on the GW node as an
> intermediate hop, so that the IP stack on the GW handles the
> fragmentation (or replies with ICMP when DF is set). Of course this adds
> some latency and also increases the complexity of the deployment, so I'd
> rather tune the MTU properly to avoid the problem. But if east-west
> performance is more important and HV <-> HV jumbo frames are supported,
> then the namespace trick is probably worth it just to make external
> traffic work regardless of the internal MTU settings. Does this make
> sense?
>
> I believe we should avoid that path at all costs; it's the way the
> neutron reference implementation was built and it's slower. It also has
> a lot of complexity.
>
> Sometimes the MTU will just be mismatched: the internal network/LS has a
> bigger MTU to increase performance, but the external network is on the
> standard 1500. In some cases this could be circumvented by having a leg
> of the external router with a big MTU just for OVN, but if we look at
> how people use OpenStack, for example, that would probably render most
> of the deployments incompatible with OVN.
>
> For example, customers tend to have several provider networks + external
> networks: legacy networks, different providers, etc.
>
>> Is there any way to match packet_size > X on a flow?
>>
>> How could we implement this?
>
> I didn't find anything for matching packet_size in ovs-fields.7.
> Even if we could do this in OVN (e.g. through a controller action in the
> slow path), I wonder whether it is really better than relying on the IP
> stack. Maybe blp or someone else could shed some light on this :)
>
> I think that would be undesirable also.
>
> I wonder how it works now, when the external network is generally on
> 1500 MTU while Geneve has a lower MTU.

Do you mean, for example: the VM has MTU 1400, while the external network
and eth0 (the tunnel physical interface) of the HVs and GWs are all on
1500 MTU? Why would there be a problem in this case? Or did I
misunderstand?

Thanks,
Han

>> On Wed, Jul 11, 2018 at 1:01 PM Daniel Alvarez Sanchez
>> <dalva...@redhat.com> wrote:
>>
>>> On Wed, Jul 11, 2018 at 12:55 PM Daniel Alvarez Sanchez
>>> <dalva...@redhat.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> Miguel Angel Ajo and I have been trying to set up jumbo frames in
>>>> OpenStack using OVN as a backend.
>>>>
>>>> The external network has an MTU of 1900 while we have created two
>>>> tenant networks (Logical Switches) with an MTU of 8942.
>>>
>>> s/1900/1500
>>>
>>>> When pinging from one instance in one of the networks to the other
>>>> instance on the other network, the routing takes place locally and
>>>> everything is fine. We can ping with -s 3000, and with tcpdump we
>>>> verify that the packets are not fragmented at all.
>>>>
>>>> However, when trying to reach the external network, we see that no
>>>> attempt is made to fragment the packets and the traffic doesn't go
>>>> through.
>>>>
>>>> In the ML2/OVS case (the reference implementation for OpenStack
>>>> networking), this works, as we see the following when attempting to
>>>> reach a network with a lower MTU:
>>>
>>> Just to clarify, in the reference implementation (ML2/OVS) the
>>> routing takes place with iptables rules, so we assume that it's the
>>> kernel processing those ICMP packets.
>>>> 10:38:03.807695 IP 192.168.20.14 > dell-virt-lab-01.mgmt.com: ICMP
>>>> echo request, id 30977, seq 0, length 3008
>>>>
>>>> 10:38:03.807723 IP overcloud-controller-0 > 192.168.20.14: ICMP
>>>> dell-virt-lab-01.mgmt.com unreachable - need to frag (mtu 1500),
>>>> length 556
>>>>
>>>> As you can see, the router (overcloud-controller-0) responds to the
>>>> instance with an ICMP "need to frag", and after this the subsequent
>>>> packets go out fragmented (while the replies are not):
>>>>
>>>> 10:38:34.630437 IP 192.168.20.14 > dell-virt-lab-01.mgmt.com: ICMP
>>>> echo request, id 31233, seq 0, length 1480
>>>>
>>>> 10:38:34.630458 IP 192.168.20.14 > dell-virt-lab-01.mgmt.com: icmp
>>>>
>>>> 10:38:34.630462 IP 192.168.20.14 > dell-virt-lab-01.mgmt.com: icmp
>>>>
>>>> 10:38:34.631334 IP dell-virt-lab-01.mgmt.com > 192.168.20.14: ICMP
>>>> echo reply, id 31233, seq 0, length 3008
>>>>
>>>> Are we missing some configuration, or do we lack support for this in
>>>> OVN?
>>>>
>>>> Any pointers are highly appreciated :)
>>>>
>>>> Thanks a lot.
>>>>
>>>> Daniel Alvarez
>>>
>>> _______________________________________________
>>> discuss mailing list
>>> disc...@openvswitch.org
>>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
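The "need to frag (mtu 1500)" line in the tcpdump above is an ICMP Destination Unreachable message, type 3 code 4, which carries the next-hop MTU back to the sender (RFC 792 / RFC 1191 Path MTU Discovery). As a minimal, hand-rolled sketch of what the kernel router puts on the wire (stdlib only; illustrative, not OVN or kernel code, and the `icmp_frag_needed` helper name is hypothetical):

```python
import struct

def checksum(data: bytes) -> int:
    # RFC 1071 Internet checksum: one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_frag_needed(next_hop_mtu: int, original: bytes) -> bytes:
    # Type 3 (Destination Unreachable), code 4 (Fragmentation Needed
    # and DF set).  'original' is the offending packet's IP header plus
    # its first 8 payload bytes, per RFC 792.
    header = struct.pack("!BBHHH", 3, 4, 0, 0, next_hop_mtu)
    csum = checksum(header + original)
    return struct.pack("!BBHHH", 3, 4, csum, 0, next_hop_mtu) + original

# Dummy 28-byte "original packet" stand-in (20-byte IPv4 header + 8 bytes).
msg = icmp_frag_needed(1500, b"\x45\x00" + b"\x00" * 26)
print(msg[0], msg[1], struct.unpack("!H", msg[6:8])[0])  # prints: 3 4 1500
```

A receiver can re-verify integrity the usual way: summing the whole ICMP message with the checksum field included must fold to zero, i.e. `checksum(msg) == 0`.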
_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
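On the MTU figures in the thread: the 8942-byte logical-switch MTU is consistent with a 9000-byte physical MTU minus a 58-byte Geneve overhead (20-byte outer IPv4 header + 8-byte outer UDP header + 30 bytes budgeted for the Geneve header with options, neutron's default `max_header_size`). That breakdown is an assumption based on neutron defaults, not something stated in the thread; a quick sketch of the arithmetic:

```python
# Hedged sketch: overhead figures assume IPv4 transport and neutron's
# default Geneve max_header_size of 30 bytes; actual overhead depends
# on the deployment's configuration.
IP_HEADER = 20       # outer IPv4 header
UDP_HEADER = 8       # outer UDP header
GENEVE_HEADER = 30   # Geneve header incl. options (neutron default budget)

def tenant_mtu(physical_mtu: int) -> int:
    """Largest MTU a Geneve-encapsulated tenant network can use."""
    return physical_mtu - (IP_HEADER + UDP_HEADER + GENEVE_HEADER)

print(tenant_mtu(9000))  # prints: 8942  (the logical-switch MTU above)
print(tenant_mtu(1500))  # prints: 1442
```

This is also why the east-west path works in the report above (both endpoints share the 8942 MTU) while the north-south path to a 1500-byte external network needs either fragmentation, an ICMP error, or matched MTUs.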