Re: [vpp-dev] Multipoint GRE support
Hi Paul,

Question 1: By physical addresses, do you mean the routable public IPs that form the tunnel src and tunnel dst addresses? In my use case, GRE traffic originates from 5G mobiles and is received by the N3IWF (Wi-Fi gateway), as shown in the topology below. The GRE traffic generated by the UE is carried over IPsec (the GRE packets have an ESP encap). In production there can be several thousand 5G UEs generating GRE-over-IPsec traffic, all terminating on a single multipoint (p2mp) GRE interface on the N3IWF gateway, as shown below.

Question 2: Since the GRE is always encapsulated in an IPsec tunnel, do we still need to map the GRE tunnel addresses to physical addresses? I had assumed we would not need GRE physical addresses in our use case, since IPsec is always the outermost header and would be used for routing.

Topo:

UE --(GRE-over-IPsec traffic)-- N3IWF

On Wed, Dec 23, 2020 at 10:12 AM Paul Vinciguerra <pvi...@vinciconsulting.com> wrote:
> Hi Vijay,
>
> How are you planning to map the tunnel addresses to the physical addresses?
>
> On Tue, Dec 22, 2020 at 9:04 PM Vijay Kumar wrote:
>> Hi Paul,
>>
>> Thanks for the information.
>>
>> Your script is using the NHRP protocol.
>> Is NHRP mandatory to support mGRE?
>>
>> Regards
>>
>> On Wed, 23 Dec 2020, 01:24 Paul Vinciguerra wrote:
>>> Hi Vijay.
>>>
>>> Does this help any?
>>> https://github.com/vpp-dev/vpp/blob/master/test/test_gre.py#L998
>>>
>>> On Tue, Dec 22, 2020 at 12:47 PM Vijay Kumar wrote:
>>>> Hi,
>>>>
>>>> Can someone help me understand whether multipoint GRE (one GRE interface
>>>> that can communicate with multiple peers) is supported in the fd.io GRE
>>>> plugin? If yes, could you please share with me an example config for
>>>> multipoint GRE?
>>>> In the fd.io wiki pages, I am only seeing *p2mp* (point-to-multipoint)
>>>> configuration for IP-in-IP but not for GRE. Please share an example
>>>> config that I can use to test.
>>>>
>>>> Regards
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18421): https://lists.fd.io/g/vpp-dev/message/18421
Mute This Topic: https://lists.fd.io/mt/79154511/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
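[Editor's note] For readers landing on this thread: the test referenced above (test_gre.py#L998) exercises multipoint GRE together with VPP's TEIB (Tunnel Endpoint Information Base), which replaces a dynamic NHRP exchange with static peer-to-underlay mappings. A minimal CLI sketch of the same idea is below. All addresses are made up, and the exact keyword syntax is an assumption that may vary between VPP releases — verify with `create gre tunnel ?` and `teib ?` on your build:

```
comment { one multipoint GRE interface serving many peers }
create gre tunnel src 192.0.2.1 multipoint
set interface state gre0 up
set interface ip address gre0 10.10.10.1/24

comment { TEIB entries map each overlay peer to its underlay address }
teib entry gre0 peer 10.10.10.2 nh 192.0.2.2
teib entry gre0 peer 10.10.10.3 nh 192.0.2.3

comment { routes to the peers resolve via the TEIB entries }
ip route add 10.10.10.2/32 via gre0
ip route add 10.10.10.3/32 via gre0
```

In this model NHRP itself is not mandatory; the TEIB entries are the static equivalent of what NHRP would learn dynamically.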
RES: [vpp-dev] VPP crashes when route is not reachable
Hello Paul,

No, I haven't. Actually, I installed VPP from the package manager. Do you think building VPP from source is a better way to deploy it?

Best Regards,
Marcos

From: vpp-dev@lists.fd.io On behalf of Paul Vinciguerra
Sent: Wednesday, December 23, 2020 13:33
To: Marcos - Mgiga
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP crashes when route is not reachable

Hi Marcos,

Have you edited vpp.service and uncommented the LimitCORE line?
https://fdio-vpp.readthedocs.io/en/latest/troubleshooting/reportingissues/reportingissues.html

On Dec 23, 2020, at 8:09 AM, Marcos - Mgiga <mar...@mgiga.com.br> wrote:

> Hi there,
>
> I'm trying to set up the attached topology (VPPTOPOLOGY.png) in order to
> work with VPP as a deterministic CGN gateway. VPP is intended to replace
> another CGN software, so the link between VPP and the BGNRouter is not
> working yet.
>
> So I first linked VPP to the Layer 2 switch and I was able to ping the
> border router from VPP and vice versa.
>
> When I add a static route on the border router pointing to some of the
> networks specified in the "VPP ROUTING TABLE" on the drawing and start a
> ping, VPP immediately crashes.
>
> I attached all the VPP config files and an output from "journalctl -u vpp"
> at the moment of the crash, called error vpp.txt.
>
> Hope that helps; I wasn't able to get a trace since VPP stops working
> when that happens.
>
> Best Regards
>
> Marcos
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18420): https://lists.fd.io/g/vpp-dev/message/18420
Mute This Topic: https://lists.fd.io/mt/79185744/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
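[Editor's note] Building from source is not required to get a crash artifact; what matters for the LimitCORE suggestion above is that the service is allowed to write core files. A sketch using a systemd drop-in rather than editing the packaged vpp.service directly (the drop-in path and binary path are typical package-install defaults, not verified against any particular distro):

```shell
# Allow the vpp service to write core files via a systemd drop-in,
# so the packaged unit file stays untouched across upgrades.
sudo mkdir -p /etc/systemd/system/vpp.service.d
printf '[Service]\nLimitCORE=infinity\n' | \
  sudo tee /etc/systemd/system/vpp.service.d/coredump.conf
sudo systemctl daemon-reload
sudo systemctl restart vpp

# After the next crash, open the core file in gdb to get a backtrace
# (where the core lands depends on kernel.core_pattern / coredumpctl):
# gdb /usr/bin/vpp /path/to/core -ex 'bt'
```

With a backtrace in hand, the crash report is much more actionable for the list than the journalctl output alone.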
[vpp-dev] AMD and NAT44ed stateful benchmarks in updated CSIT-2009 Report
Hi All,

Quick update re recent major additions to the CSIT-2009 report available at https://docs.fd.io/csit/rls2009/report/ :

1. Benchmark results for the AMD 2n-zn2 testbed (EPYC Zen2 7532) with VPP and DPDK performance data:
   - Testbed description in [12]
   - Sample throughput graphs in [13]
   - Sample latency graphs in [14]
   - Sample benchmark tables in [15]
   - Sample VPP operational data ("show runtime") in [16]

2. New NAT44 Endpoint Dependent stateful benchmarks:
   - Methodology for Connections-Per-Second (CPS) and Packets-Per-Second (PPS) for UDP and TCP/IP [17]
   - Sample benchmark graphs: UDP CPS [18], TCP CPS [19], UDP PPS [20], TCP PPS [21]

Happy browsing! :)

Cheers,
Maciek

[12] https://docs.fd.io/csit/rls2009/report/introduction/physical_testbeds.html#node-amd-epyc-zen2-2n-zn2
[13] https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/packet_throughput_graphs/ip4-2n-zn2-xxv710.html
[14] https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/packet_latency/ip4-2n-zn2-xxv710.html
[15] https://docs.fd.io/csit/rls2009/report/detailed_test_results/vpp_performance_results_2n_zn2/ip4_xxv710.html
[16] https://docs.fd.io/csit/rls2009/report/test_operational_data/vpp_performance_operational_data_2n_zn2/ip4_xxv710.html
[17] https://docs.fd.io/csit/rls2009/report/introduction/methodology_nat44.html#nat44-endpoint-dependent
[18] https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/packet_throughput_graphs/nat44-ed-udp-cps.html
[19] https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/packet_throughput_graphs/nat44-ed-tcp-cps.html
[20] https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/packet_throughput_graphs/nat44-ed-udp-pps.html
[21] https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/packet_throughput_graphs/nat44-ed-tcp-pps.html

> On 14 Oct 2020, at 21:58, Maciek Konstantynowicz (mkonstan) wrote:
>
> Hi All,
>
> FD.io CSIT-2009 report is available on the FD.io docs site:
>
>    https://docs.fd.io/csit/rls2009/report/
>
> Great thanks to all
> contributors in CSIT and VPP communities!
>
> Below are a summary and pointers to specific sections in the report.
> All comments welcome, best by email to csit-...@lists.fd.io.
>
> Cheers,
> -Maciek
>
>
> CSIT-2009 Release Summary
> -------------------------
>
> NEW TESTS
>
> - A new category of tests using TRex ASTF stateful APIs and traffic
>   profiles with up to 16M UDP and TCP/IP sessions. Initial stateful
>   tests include VPP NAT44 Endpoint Dependent (NAT44ed)
>   connections-per-second and packets-per-second throughput (with
>   controlled packet size). (Note: report test runs are still to be
>   executed in their fullness; expect them to appear in maintenance
>   report versions next week and the week after next. Maintenance
>   reports are published on a weekly basis if there are changes.)
>
> - Refactored existing NAT44 Deterministic (NAT44det) throughput tests
>   and added higher session scale, up to 16M UDP sessions. These
>   continue to use TRex STL stateless APIs and traffic profiles.
>
> - Added NAT44ed uni-directional UDP throughput tests using TRex STL
>   stateless APIs and traffic profiles, as a way to verify stateful
>   test performance.
>
> - IPsec async mode VPP performance tests, with HW crypto only for now,
>   meaning Xeon Haswell testbeds only.
>
> - Full suite of tests now running on Mellanox ConnectX5-2p100GE NICs in
>   2n-clx (Intel Xeon Cascadelake) testbeds using the VPP native rdma
>   driver. For the first time one can see linear multi-core speedup into
>   the 72 Mpps region (L2 on 2 cores, IPv4 on 4 cores); in some cases
>   the NIC is the limit again (as it has been to date in CSIT labs with
>   FVL 2p25GE NICs).
>
> BENCHMARKING
>
> - AMD 2n-zn2 testbed onboarded with EPYC 7532 32-Core Processor. The
>   full set of CSIT-2009 results is to be included in one of the
>   upcoming maintenance reports, following completion of the calibrating
>   dry runs that are currently ongoing.
>
> - Optimization and calibration of TRex STL and ASTF multi-core
>   configurations, with a small impact on test results as captured in
>   current vs.
>   previous release performance comparisons.
>
>
> Pointers to CSIT-2009 Report sections
> -------------------------------------
>
> 1. FD.io CSIT test methodology [1]
> 2. VPP release notes [2]
> 3. VPP 64B/IMIX throughput graphs [3]
> 4. VPP throughput speedup multi-core [4]
> 5. VPP latency under load [5]
> 6. VPP comparisons v20.09 vs. v20.05 [6]
> 7. VPP performance all pkt sizes & NICs [7]
> 8. DPDK 20.08 apps release notes [8]
> 9. DPDK 64B throughput graphs [9]
> 10. DPDK latency under load [10]
> 11. DPDK comparisons 20.08 vs. 20.02 [11]
>
> Functional device tests (VPP_Device) are also included in the report.
>
> [1] https://docs.fd.io/csit/rls2009/report/introduction/methodology.html
> [2] https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/csit_release_notes.html
> [3]
Re: [vpp-dev] VPP crashes when route is not reachable
Hi Marcos,

Have you edited vpp.service and uncommented the LimitCORE line?
https://fdio-vpp.readthedocs.io/en/latest/troubleshooting/reportingissues/reportingissues.html

> On Dec 23, 2020, at 8:09 AM, Marcos - Mgiga wrote:
>
> Hi there,
>
> I'm trying to set up the attached topology (VPPTOPOLOGY.png) in order to
> work with VPP as a deterministic CGN gateway. VPP is intended to replace
> another CGN software, so the link between VPP and the BGNRouter is not
> working yet.
>
> So I first linked VPP to the Layer 2 switch and I was able to ping the
> border router from VPP and vice versa.
>
> When I add a static route on the border router pointing to some of the
> networks specified in the "VPP ROUTING TABLE" on the drawing and start a
> ping, VPP immediately crashes.
>
> I attached all the VPP config files and an output from "journalctl -u vpp"
> at the moment of the crash, called error vpp.txt.
>
> Hope that helps; I wasn't able to get a trace since VPP stops working
> when that happens.
>
> Best Regards
>
> Marcos
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18418): https://lists.fd.io/g/vpp-dev/message/18418
Mute This Topic: https://lists.fd.io/mt/79179463/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-