[vpp-dev] Active open connection on multithread #vnet #vpp

2018-05-09 Thread ductm18
Hi,

I'm trying to use the proxy app (on vnet/session-apps).
The code works fine if I run vpp with 1 thread (just master). With more than 1 
thread, the code fails because of some assertions such as:

- Function transport_alloc_local_port - vnet/session/transport.c:

  237   /* Only support active opens from thread 0 */
  238   ASSERT (vlib_get_thread_index () == 0);

- Function tcp_half_open_connection_new - vnet/tcp/tcp.c:

  144   tcp_connection_t *tc = 0;
  145   ASSERT (vlib_get_thread_index () == 0);
  146   pool_get (tm->half_open_connections, tc);

- ...
Removing all those assertions makes the code work, but I'm not sure this is a 
good way to solve it.

*Is there any particular reason for only supporting active opens on the master 
thread?*
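
(One alternative to deleting the ASSERTs - sketched below with assumed helper
names, not actual proxy code - would be to hand the connect over to thread 0
via an RPC to the main thread:)

  /* Hedged sketch, not code from the proxy app: a worker hands the active
   * open over to thread 0 instead of opening it locally.  proxy_do_connect()
   * stands in for whatever actually calls vnet_connect() in the proxy, and
   * the vl_api_rpc_call_main_thread() usage is assumed from memory. */
  #include <vlib/vlib.h>
  #include <vlibmemory/api.h>

  typedef struct
  {
    u32 proxy_session_index;        /* which proxy session wants the open */
  } proxy_connect_rpc_args_t;

  static void
  proxy_do_connect (u32 proxy_session_index)
  {
    /* Hypothetical: whatever the proxy does today to call vnet_connect()
     * for this proxy session. */
    (void) proxy_session_index;
  }

  static void
  proxy_connect_rpc_cb (void *rpc_args)
  {
    proxy_connect_rpc_args_t *a = rpc_args;
    /* Runs on thread 0, so transport_alloc_local_port() and
     * tcp_half_open_connection_new() keep their assumptions. */
    proxy_do_connect (a->proxy_session_index);
  }

  static void
  proxy_request_connect (u32 proxy_session_index)
  {
    proxy_connect_rpc_args_t a = { .proxy_session_index = proxy_session_index };

    if (vlib_get_thread_index () == 0)
      proxy_do_connect (proxy_session_index);
    else
      /* The RPC machinery copies the argument blob to the main thread. */
      vl_api_rpc_call_main_thread (proxy_connect_rpc_cb, (u8 *) &a, sizeof (a));
  }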

Another thing: in the proxy app, I think these lines of code (function 
delete_proxy_session - vnet/session-apps/proxy.c) should be:

  72   if (ps->vpp_active_open_handle != ~0)
  73     active_open_session = session_get_from_handle
  74       (ps->vpp_active_open_handle);
  75   else
  76     active_open_session = 0;

instead of

  72   if (ps->vpp_server_handle != ~0)
  73     active_open_session = session_get_from_handle
  74       (ps->vpp_server_handle);
  75   else
  76     active_open_session = 0;
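
(For context, a hedged sketch of how the corrected branch would sit inside
delete_proxy_session(); only the two handle fields and session_get_from_handle()
come from the lines above - the session type name and the rest of the body are
assumed:)

  /* Hedged context sketch, not the actual proxy.c body. */
  static void
  delete_proxy_session_sketch (proxy_session_t * ps)
  {
    stream_session_t *server_session = 0;        /* session type name assumed */
    stream_session_t *active_open_session = 0;

    if (ps->vpp_server_handle != ~0)
      server_session = session_get_from_handle (ps->vpp_server_handle);

    /* The suggested fix: look up the active-open side through its own
     * handle rather than reusing the server handle. */
    if (ps->vpp_active_open_handle != ~0)
      active_open_session = session_get_from_handle (ps->vpp_active_open_handle);

    /* ... disconnect/clean up both sessions and free ps ... */
    (void) server_session;
    (void) active_open_session;
  }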

Thanks,
DucTM


Re: [vpp-dev] VLAN to VLAN

2018-05-09 Thread carlito nueno
Forgot to mention: I upgraded to vpp v18.04-rc2~26-gac2b736~b45.

Current setup:
GigabitEthernet0/14/0.1, Idx 9, ip 192.168.0.0/24, vlan 1
GigabitEthernet0/14/0.2, Idx 12, ip 192.168.2.0/24, vlan 2

I don't want devices on vlan1 and vlan2 to communicate with each other,
so I tried to use macip ACLs via VAT:

vat# macip_acl_add ipv4 deny ip 192.168.2.0/24
vat# macip_acl_interface_add_del sw_if_index 9 add acl 0

But now devices under 192.168.0.0/24 can't communicate with each other.

Thanks


Re: [vpp-dev] tx-error on create vhost

2018-05-09 Thread steven luong
Aris,

There is not enough information here. What is VPP's vhost-user interface 
connected to? A VM launched by QEMU or a docker container running VPP with DPDK 
virtio_net driver? What do you see in the output from show vhost?

Steven

On 5/9/18, 1:13 PM, "vpp-dev@lists.fd.io on behalf of 
arisleivad...@sce.carleton.ca"  wrote:

Hi,

I am trying to create a vhost and bridge it to a TenGigabitEthernet DPDK
enabled interface.

Even though the vhost is being created successfully I am getting a
tx-error as follows:

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet6/0/0           1         up       rx packets              2231
                                                     rx bytes              174297
                                                     drops                   2231
TenGigabitEthernet6/0/1           2         up       rx packets              2225
                                                     rx bytes              173637
                                                     drops                   2225
VirtualEthernet0/0/0              3         up       tx-error                2231
VirtualEthernet0/0/1              4         up       tx-error                2225

If I do a show errors, it reports that the VirtualEthernet-output
interface is down:

vpp# show errors
   CountNode  Reason
  4715l2-output   L2 output packets
  4715l2-learnL2 learn packets
 2l2-learnL2 learn misses
  4715l2-inputL2 input packets
  4715l2-floodL2 flood packets
  2360   VirtualEthernet0/0/0-output  interface is down
  2355   VirtualEthernet0/0/1-output  interface is down


Is there a way that this can be resolved?

I am using version vpp v18.01.1-release and dpdk version 17.05.1 on redhat
7.3

Thanks,
Aris













Re: [vpp-dev] VLAN to VLAN

2018-05-09 Thread carlito nueno
First question:
I tried “make test TEST=acl_plugin_macip”, but I got this error:

Using /vpp/build-root/python/virtualenv/lib/python2.7/site-packages
Finished processing dependencies for vpp-papi==1.4
make -C ext
make[1]: Entering directory '/vpp/test/ext'
make[1]: *** No rule to make target '/vpp/vpp-api/vapi/.libs/libvapiclient.so', 
needed by '/vapi_test/vapi_c_test'.  Stop.
make[1]: Leaving directory '/vpp/test/ext'
Makefile:129: recipe for target 'ext' failed
make: *** [ext] Error 2

ubuntu 16.04
python2.7
downloaded vpp src to /vpp
export VPP_PYTHON_PREFIX=/vpp/build-root/python
export WS_ROOT=/vpp
Second question:
When using govpp to load ACLs, how do I maintain persistence when vpp is
restarted? Does the go app need to be re-run?

Thanks


Re: [vpp-dev] NOTIFICATION: FD.io Maintenance

2018-05-09 Thread Vanessa Valderrama
OpenGrok maintenance is complete


On 05/09/2018 03:01 PM, Vanessa Valderrama wrote:
>
> OpenGrok maintenance starting shortly
>
>
> On 05/09/2018 02:45 PM, Vanessa Valderrama wrote:
>>
>> JIRA maintenance will be starting shortly
>>
>>
>> On 05/08/2018 07:14 PM, Vanessa Valderrama wrote:
>>>
>>> Nexus maintenance is complete for tonight.  We'll continue cleaning
>>> out repositories tomorrow but it should be transparent.
>>>
>>> Thank you,
>>>
>>> Vanessa
>>>
>>>
>>> On 05/08/2018 11:28 AM, Vanessa Valderrama wrote:

 Jenkins is in shutdown mode to prepare for the Nexus maintenance


 On 05/08/2018 09:18 AM, Vanessa Valderrama wrote:
>
> Sonar maintenance has been rescheduled to 2018-05-17
>
> Thank you,
> Vanessa
>
> On 05/07/2018 12:10 PM, Vanessa Valderrama wrote:
>>
>> *Reminder of upcoming maintenance*
>>
>>
>> On 04/26/2018 10:02 AM, Vanessa Valderrama wrote:
>>>
>>> Please let me know if the following maintenance schedule
>>> conflicts with your projects.
>>>
>>> *Nexus 2018-05-08 - **1700 UTC*
>>>
>>> The Nexus maintenance will require a restart of Nexus to bump up
>>> the JVM and run scheduled tasks to clean up the VPP CentOS
>>> repositories.  The size of the repositories, specifically master
>>> is causing intermittent timeouts and slowness.  There is a risk of
>>> crashing the JVM when we run the scheduled tasks.
>>>
>>> *JIRA & OpenGrok AWS Migration 2018-05-09 - **2100 UTC to 0100 UTC*
>>>
>>> Services will be unavailable for approximately 2-3 hours
>>> depending on data transfer.
>>>
>>> *Gerrit 2018-05-15 - 2100 UTC to 0100 UTC*
>>>
>>> Gerrit will be unavailable for approximately 2-3 hours depending
>>> on data transfer.  Jenkins will be placed in shutdown mode 1
>>> hour prior to the start of maintenance.
>>>
>>> *Sonar 2018-05-16 - **2100 UTC to 0100 UTC*
>>>
>>> Sonar will be unavailable for approximately 2-3 hours depending
>>> on data transfer.
>>>
>>
>

>>>
>>
>



Re: [vpp-dev] NOTIFICATION: FD.io Maintenance

2018-05-09 Thread Vanessa Valderrama
JIRA maintenance is complete


On 05/09/2018 02:45 PM, Vanessa Valderrama wrote:
>
> JIRA maintenance will be starting shortly
>
>
> On 05/08/2018 07:14 PM, Vanessa Valderrama wrote:
>>
>> Nexus maintenance is complete for tonight.  We'll continue cleaning
>> out repositories tomorrow but it should be transparent.
>>
>> Thank you,
>>
>> Vanessa
>>
>>
>> On 05/08/2018 11:28 AM, Vanessa Valderrama wrote:
>>>
>>> Jenkins is in shutdown mode to prepare for the Nexus maintenance
>>>
>>>
>>> On 05/08/2018 09:18 AM, Vanessa Valderrama wrote:

 Sonar maintenance has been rescheduled to 2018-05-17

 Thank you,
 Vanessa

 On 05/07/2018 12:10 PM, Vanessa Valderrama wrote:
>
> *Reminder of upcoming maintenance*
>
>
> On 04/26/2018 10:02 AM, Vanessa Valderrama wrote:
>>
>> Please let me know if the following maintenance schedule
>> conflicts with your projects.
>>
>> *Nexus 2018-05-08 - **1700 UTC*
>>
>> The Nexus maintenance will require a restart of Nexus to bump up
>> the JVM and run scheduled tasks to clean up the VPP CentOS
>> repositories.  The size of the repositories, specifically master
>> is causing intermittent timeouts and slowness.  There is a risk of
>> crashing the JVM when we run the scheduled tasks.
>>
>> *JIRA & OpenGrok AWS Migration 2018-05-09 - **2100 UTC to 0100 UTC*
>>
>> Services will be unavailable for approximately 2-3 hours
>> depending on data transfer.
>>
>> *Gerrit 2018-05-15 - 2100 UTC to 0100 UTC*
>>
>> Gerrit will be unavailable for approximately 2-3 hours depending
>> on data transfer.  Jenkins will be placed in shutdown mode 1 hour
>> prior to the start of maintenance.
>>
>> *Sonar 2018-05-16 - **2100 UTC to 0100 UTC*
>>
>> Sonar will be unavailable for approximately 2-3 hours depending
>> on data transfer.
>>
>

>>>
>>
>



[vpp-dev] tx-error on create vhost

2018-05-09 Thread arisleivadeas
Hi,

I am trying to create a vhost and bridge it to a TenGigabitEthernet DPDK
enabled interface.

Even though the vhost is being created successfully I am getting a
tx-error as follows:

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet6/0/0           1         up       rx packets              2231
                                                     rx bytes              174297
                                                     drops                   2231
TenGigabitEthernet6/0/1           2         up       rx packets              2225
                                                     rx bytes              173637
                                                     drops                   2225
VirtualEthernet0/0/0              3         up       tx-error                2231
VirtualEthernet0/0/1              4         up       tx-error                2225

If I do a show errors, it reports that the VirtualEthernet-output
interface is down:

vpp# show errors
   CountNode  Reason
  4715l2-output   L2 output packets
  4715l2-learnL2 learn packets
 2l2-learnL2 learn misses
  4715l2-inputL2 input packets
  4715l2-floodL2 flood packets
  2360   VirtualEthernet0/0/0-output  interface is down
  2355   VirtualEthernet0/0/1-output  interface is down


Is there a way that this can be resolved?

I am using version vpp v18.01.1-release and dpdk version 17.05.1 on redhat
7.3

Thanks,
Aris








Re: [vpp-dev] NOTIFICATION: FD.io Maintenance

2018-05-09 Thread Vanessa Valderrama
JIRA maintenance will be starting shortly


On 05/08/2018 07:14 PM, Vanessa Valderrama wrote:
>
> Nexus maintenance is complete for tonight.  We'll continue cleaning
> out repositories tomorrow but it should be transparent.
>
> Thank you,
>
> Vanessa
>
>
> On 05/08/2018 11:28 AM, Vanessa Valderrama wrote:
>>
>> Jenkins is in shutdown mode to prepare for the Nexus maintenance
>>
>>
>> On 05/08/2018 09:18 AM, Vanessa Valderrama wrote:
>>>
>>> Sonar maintenance has been rescheduled to 2018-05-17
>>>
>>> Thank you,
>>> Vanessa
>>>
>>> On 05/07/2018 12:10 PM, Vanessa Valderrama wrote:

 *Reminder of upcoming maintenance*


 On 04/26/2018 10:02 AM, Vanessa Valderrama wrote:
>
> Please let me know if the following maintenance schedule conflicts
> with your projects.
>
> *Nexus 2018-05-08 - **1700 UTC*
>
> The Nexus maintenance will require a restart of Nexus to bump up
> the JVM and run scheduled tasks to clean up the VPP CentOS
> repositories.  The size of the repositories, specifically master
> is causing intermittent timeouts and slowness.  There is a risk of
> crashing the JVM when we run the scheduled tasks.
>
> *JIRA & OpenGrok AWS Migration 2018-05-09 - **2100 UTC to 0100 UTC*
>
> Services will be unavailable for approximately 2-3 hours depending
> on data transfer.
>
> *Gerrit 2018-05-15 - 2100 UTC to 0100 UTC*
>
> Gerrit will be unavailable for approximately 2-3 hours depending
> on data transfer.  Jenkins will be placed in shutdown mode 1 hour
> prior to the start of maintenance.
>
> *Sonar 2018-05-16 - **2100 UTC to 0100 UTC*
>
> Sonar will be unavailable for approximately 2-3 hours depending on
> data transfer.
>

>>>
>>
>



Re: [vpp-dev] MTU in vpp

2018-05-09 Thread Ole Troan
Hi Akshaya,

> 
> Does your patch have a way to set a per-protocol MTU (ip4/ip6/mpls) on a s/w 
> interface? We are looking for a way to set a per-protocol MTU and currently we 
> couldn't find any option for this.

No, but that’s simple to add if there’s a need. 

Cheers,
Ole


> 
> -- 
> Regards,
> Akshaya N 
> 
> -Original Message-
> From: Ole Troan 
> To: Nitin Saxena 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] MTU in vpp
> Date: Wed, 9 May 2018 10:18:02 +0200
> 
> > If the dpdk plugin is used, which is the default case in VPP, the MTU of each 
> > interface is set by retrieving MTU information from the underlying DPDK PMD 
> > API rte_eth_dev_info_get(). So if the device supports a max MTU of, say, 9216, 
> > then that will be configured.
> > 
> > I don't think setting the MTU via the vpp cli or startup.config is supported, 
> > although it should be easy to implement by adding a dpdk plugin cli that 
> > calls rte_eth_dev_set_mtu().
> 
> I have a patch in the works that lets you set the MTU on software interfaces. 
> It should be there within a few days.
> 
> Best regards,
> Ole
> 
> 
> 
> 


Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-09 Thread Florin Coras
Hi Luca, 

We don’t yet support PMTU discovery in the stack, so tcp uses a fixed MTU of 1460; 
unless you changed that, we shouldn’t generate jumbo packets. If we do, I’ll have 
to take a look at it :)

If you already had your transport protocol, using memif is the natural way to 
go. Using the session layer makes sense only if you can implement your 
transport within vpp in a way that leverages vectorization or if it can 
leverage the existing transports (see for instance the TLS implementation).

Until today [1] the stack did allow for excessive batching (generation of 
multiple frames in one dispatch loop) but we’re now restricting that to one. 
This is still far from proper pacing which is on our todo list. 

Florin

[1] https://gerrit.fd.io/r/#/c/12439/ 


> On May 9, 2018, at 4:21 AM, Luca Muscariello (lumuscar)  
> wrote:
> 
> Florin,
>
> Thanks for the slide deck, I’ll check it soon.
>
> BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
> little
> advantage wrt the Linux TCP stack which was using 1500B by default.
>
> By manually setting DPDK MTU to 1500B the goodput goes down to 8.5Gbps which 
> compares
> to 4.5Gbps for Linux w/o TSO. Also congestion window adaptation is not the 
> same.
>
> BTW, for what we’re doing it is difficult to reuse the VPP session layer as 
> it is.
> Our transport stack uses a different kind of namespace and mux/demux is also 
> different.
>
> We are using memif as underlying driver which does not seem to be a
> bottleneck as we can also control batching there. Also, we have our own
> shared memory downstream memif inside VPP through a plugin.
>
> What we observed is that delay-based congestion control does not like
> much VPP batching (batching in general) and we are using DBCG.
>
> Linux TSO has the same problem but has TCP pacing to limit bad effects of 
> bursts
> on RTT/losses and flow control laws.
>
> I guess you’re aware of these issues already.
>
> Luca
>
>
> From: Florin Coras 
> Date: Monday 7 May 2018 at 22:23
> To: Luca Muscariello 
> Cc: Luca Muscariello , "vpp-dev@lists.fd.io" 
> 
> Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.
>
> Yes, the whole host stack uses shared memory segments and fifos that the 
> session layer manages. For a brief description of the session layer see [1, 
> 2]. Apart from that, unfortunately, we don’t have any other dev 
> documentation. src/vnet/session/segment_manager.[ch] has some good examples 
> of how to allocate segments and fifos. Under application_interface.h check 
> app_[send|recv]_[stream|dgram]_raw for examples on how to read/write to the 
> fifos.
>
> Now, regarding writing to the fifos: they are lock free but size 
> increments are atomic since the assumption is that we’ll always have one 
> reader and one writer. Still, batching helps. VCL doesn’t do it but iperf 
> probably does it. 
>
> Hope this helps, 
> Florin
>
> [1] https://wiki.fd.io/view/VPP/HostStack/SessionLayerArchitecture 
> 
> [2] https://wiki.fd.io/images/1/15/Vpp-hoststack-kc-eu-18.pdf 
> 
> 
> 
>> On May 7, 2018, at 11:35 AM, Luca Muscariello (lumuscar) > > wrote:
>>
>> Florin,
>>
>> So the TCP stack does not connect to VPP using memif.
>> I’ll check the shared memory you mentioned.
>>
>> For our transport stack we’re using memif. Nothing to 
>> do with TCP though.
>>
>> Iperf3 to VPP there must be copies anyway. 
>> There must be some batching with timing though 
>> while doing these copies.
>>
>> Is there any doc of svm_fifo usage?
>>
>> Thanks
>> Luca 
>> 
>> On 7 May 2018, at 20:00, Florin Coras > > wrote:
>> 
>>> Hi Luca,
>>>
>>> I guess, as you did, that it’s vectorization. VPP is really good at pushing 
>>> packets whereas Linux is good at using all hw optimizations. 
>>>
>>> The stack uses its own shared memory mechanisms (check svm_fifo_t) but 
>>> given that you did the testing with iperf3, I suspect the edge is not 
>>> there. That is, I guess they’re not abusing syscalls with lots of small 
>>> writes. Moreover, the fifos are not zero-copy, apps do have to write to the 
>>> fifo and vpp has to packetize that data. 
>>>
>>> Florin
>>> 
>>> 
 On May 7, 2018, at 10:29 AM, Luca Muscariello (lumuscar) 
 > wrote:

 Hi Florin 

 Thanks for the info.

 So, how do you explain VPP TCP stack beats Linux
 implementation by doubling the goodput?
 Does it come from vectorization? 
 Any special memif optimization underneath?

 Luca 
 
 On 7 May 2018, at 18:17, Florin Coras 

Re: [vpp-dev] MTU in vpp

2018-05-09 Thread Akshaya Nadahalli
Hi Ole,

Does your patch have a way to set a per-protocol MTU (ip4/ip6/mpls) on a s/w
interface? We are looking for a way to set a per-protocol MTU and
currently we couldn't find any option for this.

-- 
Regards,
Akshaya N 

-Original Message-
From: Ole Troan 
To: Nitin Saxena 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] MTU in vpp
Date: Wed, 9 May 2018 10:18:02 +0200


> If the dpdk plugin is used, which is the default case in VPP, the MTU of each 
> interface is set by retrieving MTU information from the underlying DPDK PMD 
> API rte_eth_dev_info_get(). So if the device supports a max MTU of, say, 9216, 
> then that will be configured.
> 
> I don't think setting the MTU via the vpp cli or startup.config is supported, 
> although it should be easy to implement by adding a dpdk plugin cli that 
> calls rte_eth_dev_set_mtu().

I have a patch in the works that lets you set the MTU on software interfaces. 
It should be there within a few days.

Best regards,
Ole






[vpp-dev] when is make test-all run?

2018-05-09 Thread Brian Brooks
Patch?
Nightly?
Release?




Re: [vpp-dev] vlib_main() taking maximum cycle in v1804 release

2018-05-09 Thread Brian Brooks
I think this is due to perf aggregating results from all CPUs, and that 
vlib_main hotspot is from the main (non-worker) thread.

From: vpp-dev@lists.fd.io  On Behalf Of Saxena, Nitin
Sent: Thursday, April 26, 2018 1:46 PM
To: vpp-dev 
Subject: [vpp-dev] vlib_main() taking maximum cycle in v1804 release

Hi,

While running L3 Fwd test case with VPP v1804 release (released yesterday), 
perf tool shows vlib_main() taking maximum cycles.
This has been observed on Aarch64 (Both on Marvell’s Macchiatobin and Cavium’s 
ThunderX products) and Broadwell machine as well.

Perf top from Broadwell machine
=
 29.72%  libvlib.so.0.0.0        [.] vlib_main
 17.33%  dpdk_plugin.so          [.] i40e_xmit_pkts
  7.35%  dpdk_plugin.so          [.] i40e_recv_scattered_pkts_vec
  7.21%  libvlibmemory.so.0.0.0  [.] memclnt_queue_callback
  6.02%  libvnet.so.0.0.0        [.] ip4_rewrite_avx2
  5.96%  libvnet.so.0.0.0        [.] ip4_lookup_avx2
  5.69%  libvnet.so.0.0.0        [.] ip4_input_no_checksum_avx2
  5.49%  dpdk_plugin.so          [.] dpdk_input_avx2
  3.40%  dpdk_plugin.so          [.] dpdk_interface_tx_avx2
  1.63%  libvnet.so.0.0.0        [.] vnet_interface_output_node_avx2

Is this the expected behaviour? I haven’t observed vlib_main() taking max perf 
cycles in VPP v1801.

Thanks,
Nitin





[vpp-dev] FD.io CSIT rls1804 report published

2018-05-09 Thread Maciek Konstantynowicz (mkonstan)
Hi All,

The CSIT rls1804 report is now published on FD.io docs site:

  html [1] and pdf [2] versions.

Great thanks to all of the CSIT and FD.io community who contributed and
made the CSIT rls1804 happen !

Points of note in rls1804 report:

  1. Release notes
- Performance tests incl. throughput changes: VPP [3] and DPDK [4].
- Functional tests on Ubuntu16.04, Centos7.4 [5].
  2. Graphs: Throughput [6], Multi-core speedup [7], Latency [8].
  3. Detailed test results [9].
  4. All tests config data [10].
  5. Performance tests efficiency telemetry data [11]

Welcome all feedback, best by email to csit-...@lists.fd.io.

Cheers,
-Maciek
On behalf of FD.io CSIT project.

[1] https://docs.fd.io/csit/rls1804/report/
[2] https://docs.fd.io/csit/rls1804/report/_static/archive/csit_rls1804.pdf
[3] 
https://docs.fd.io/csit/rls1804/report/vpp_performance_tests/csit_release_notes.html
[4] 
https://docs.fd.io/csit/rls1804/report/dpdk_performance_tests/csit_release_notes.html
[5] 
https://docs.fd.io/csit/rls1804/report/vpp_functional_tests/csit_release_notes.html
[6] 
https://docs.fd.io/csit/rls1804/report/vpp_performance_tests/packet_throughput_graphs/index.html
[7] 
https://docs.fd.io/csit/rls1804/report/vpp_performance_tests/throughput_speedup_multi_core/index.html
[8] 
https://docs.fd.io/csit/rls1804/report/vpp_performance_tests/packet_latency_graphs/index.html
[9] https://docs.fd.io/csit/rls1804/report/detailed_test_results/index.html
[10] https://docs.fd.io/csit/rls1804/report/test_configuration/index.html
[11] https://docs.fd.io/csit/rls1804/report/test_operational_data/index.html

> On 6 May 2018, at 18:58, Maciek Konstantynowicz (mkonstan) 
>  wrote:
> 
> CSIT rls1804 draft report is out: 
> 
>  https://docs.fd.io/csit/rls1804/report/index.html
> 
> Pls give it a scan and send feedback to csit-...@lists.fd.io.
> 
> We’re still waiting for few tests to finish: vpp-ligato, more NDR/PDR
> performance tests. Expecting these to be done in the next couple of
> days.. Will send announcement email once final version posted.
> 
> -Maciek
> 





Re: [vpp-dev] VPP Scalability Problem

2018-05-09 Thread Andrew Yourtchenko
Dear Rubina,

During development I did some ad-hoc testing for up to 4 cores (since that
was the h/w I have), and it seemed reasonable - but obviously there is
something that I did not spot during my tests.

It's hard to say what is going on without looking in more detail.

--a

On 5/9/18, Rubina Bianchi  wrote:
> Thanks for your prompt response.
> If possible, I would like to know the maximum throughput you get from vpp
> in stateful mode (permit+reflect acl rule, session table).
>
> Does vpp scale? I think the session table is the bottleneck in the
> stateful, multi-threaded scenario - what's your opinion?
>
> Sent from Outlook
> 
> From: Andrew Yourtchenko 
> Sent: Wednesday, May 9, 2018 2:47 PM
> To: Rubina Bianchi
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP Scalability Problem
>
> Dear Rubina,
>
> You could take a look at “perf top” to see what could be going on. If you
> need help, let me know, I would be happy to look at it together.
>
> Also, now as part of the work for 18.07 I am testing a couple of different
> approaches to change the processing for more performance, would you be
> interested to give it a try ?
>
> --a
>
> On 9 May 2018, at 13:19, Rubina Bianchi
> > wrote:
>
> Dear VPP Folks,
> I have a problem with vpp scalability in stateful mode (permit+reflect acl
> rules). I installed vpp on an HPE ProLiant DL380 Gen9 server in order to
> check vpp performance and throughput scalability against the number of vpp
> worker threads.
> Our appliance has 2 CPUs, each with 22 cores (44 cores in hyperthreaded
> mode). Our scenario uses 6 interfaces; each pair of interfaces is bridged,
> so we have 3 interface pairs. The startup config file and other configs are
> attached. With an increasing number of worker threads I expected to see
> more throughput, but that did not happen. For example, with 18 workers
> (current config) I expected to see 45 Gbps of throughput (rx) on our DUT
> with 6 interfaces (because the best-case throughput in the sfr scenario is
> almost 15 Gbps per interface pair), but the maximum throughput observed is
> approximately 24 Gbps. Beyond that, increasing the number of worker threads
> not only had no effect on throughput but in some cases decreased overall
> throughput catastrophically. Why?
>
>
>
> Sent from Outlook
> 
> 
> 
> 
> 
>




Re: [vpp-dev] VPP Scalability Problem

2018-05-09 Thread Rubina Bianchi
Thanks for your prompt response.
If possible, I would like to know the maximum throughput you get from vpp in
stateful mode (permit+reflect acl rule, session table).

Does vpp scale? I think the session table is the bottleneck in the stateful,
multi-threaded scenario - what's your opinion?

Sent from Outlook

From: Andrew Yourtchenko 
Sent: Wednesday, May 9, 2018 2:47 PM
To: Rubina Bianchi
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP Scalability Problem

Dear Rubina,

You could take a look at “perf top” to see what could be going on. If you need 
help, let me know, I would be happy to look at it together.

Also, now as part of the work for 18.07 I am testing a couple of different 
approaches to change the processing for more performance, would you be 
interested to give it a try ?

--a

On 9 May 2018, at 13:19, Rubina Bianchi 
> wrote:

Dear VPP Folks,
I have a problem with vpp scalability in stateful mode (permit+reflect acl
rules). I installed vpp on an HPE ProLiant DL380 Gen9 server in order to check
vpp performance and throughput scalability against the number of vpp worker
threads.
Our appliance has 2 CPUs, each with 22 cores (44 cores in hyperthreaded mode).
Our scenario uses 6 interfaces; each pair of interfaces is bridged, so we have
3 interface pairs. The startup config file and other configs are attached. With
an increasing number of worker threads I expected to see more throughput, but
that did not happen. For example, with 18 workers (current config) I expected
to see 45 Gbps of throughput (rx) on our DUT with 6 interfaces (because the
best-case throughput in the sfr scenario is almost 15 Gbps per interface pair),
but the maximum throughput observed is approximately 24 Gbps. Beyond that,
increasing the number of worker threads not only had no effect on throughput
but in some cases decreased overall throughput catastrophically. Why?



Sent from Outlook







Re: [vpp-dev] Fragmented IP and ACL

2018-05-09 Thread Andrew Yourtchenko
Dear Khers,

On 5/8/18, khers  wrote:
> Dear Andrew
>
> I'd like to write this test as a testcase; I will work on that in my spare
> time.
> I like your solution of a separate code path, but I think defragmentation
> could solve the problem, while reassembly may have overhead.

I was having in mind doing something similar to
https://wiki.fd.io/view/VPP/NAT#NAT_plugin_virtual_fragmentation_reassembly

> In defragmentation, information about the fragmented packets is kept and all
> of the packets are buffered until we decide to accept or deny them, but in
> this solution the fragments are not actually reassembled. Look at
> nf_defrag_ipv4.c in net/ipv4/netfilter in the kernel source tree.
>
> Another issue I found today and forgot to mention in my last mail:
> in the multi_acl_match_get_applied_ace_index function (acl/public_inlines.h)
> we check the port range if need_portrange_check is set, but that is not
> needed for fragmented packets. So I suggest changing the following line
>
> if (PREDICT_FALSE(result_val->need_portrange_check)) {
>
> to
>
> if (PREDICT_FALSE(result_val->need_portrange_check &&
>                   !match->pkt.is_nonfirst_fragment)) {

The need_portrange_check in the result can be possibly set to true
only when the rule is full 5-tuple, and we should not be able to even
hit in the bihash the 5-tuple rule on a non-first fragment since that
flag is part of the hash key. Thus, that check would be redundant.

--a
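
(A toy illustration of that point - these are not the acl-plugin definitions,
just a sketch of why the hash key itself keeps non-initial fragments away from
5-tuple entries:)

  /* Toy sketch, not the acl plugin's real structures: because the
   * non-first-fragment flag is part of the bihash key, a lookup key built
   * from a non-initial fragment can only ever hit 3-tuple entries, and
   * those never set need_portrange_check. */
  typedef struct
  {
    u32 src_addr, dst_addr;        /* simplified 3-tuple part */
    u16 sport, dport;              /* zero for non-first fragments */
    u8 proto;
    u8 is_nonfirst_fragment;       /* part of the hash key */
  } toy_acl_hash_key_t;

  /* A full 5-tuple rule is installed with is_nonfirst_fragment = 0, so it
   * occupies a different key space than any non-initial fragment lookup,
   * which always carries is_nonfirst_fragment = 1. */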

>
>
> Khers
>
> On Tue, May 8, 2018 at 2:50 PM, Andrew Yourtchenko 
> wrote:
>
>> Yeah back in the day the fragment reassembly code was not there yet, so
>> there is a choice either to drop all the fragments on the floor, or rely
>> on
>> the receiving TCP stack to drop the non-initial fragments, like IOS did.
>> There is a knob that allows you to choose the behavior between the two by
>> flipping the value of l4_match_nonfirst_fragment.
>>
>> Though we should not create a session for the non-initial fragments,
>> since
>> there is no full 5-tuple, do you think you might put together a make test
>> testcase so we can ensure it is behaving properly ?
>>
>> I am now working on porting the TupleMerge approach that Valerio Bruschi
>> did. There the fragments will be going into a separate code path, and it
>> might be possible to add an option for reassembly then.
>>
>> --a
>>
>> On 8 May 2018, at 11:02, emma sdi  wrote:
>>
>> Dear vpp folks
>>
>> I have a simple topology and a permit+reflect rule for udp on
>> destination port 1000 as pasted in this link.
>> 
>> I send a big file from 172.20.1.2 to 172.20.1.1 port 1001 with
>> nc and I receive some packets (non first fragment) in second
>> client (172.20.1.1).
>>
>> Following are the commands I used in this scenario.
>> Client 172.20.1.2 >  cat /dev/sda | nc -u 172.20.1.1 1001
>>
>> Client 172.20.1.1> tcpdump -nn -i eth1
>> 01:13:38.164466 IP 172.20.1.2 > 172.20.1.1: ip-proto-17
>> 01:13:38.164467 IP 172.20.1.2 > 172.20.1.1: ip-proto-17
>> 01:13:38.164468 IP 172.20.1.2 > 172.20.1.1: ip-proto-17
>> 01:13:38.164469 IP 172.20.1.2 > 172.20.1.1: ip-proto-17
>>
>> Output of 'show trace' is stored in this link
>> . The first packet matched
>> acl 1 and was dropped, but the second fragment of that packet matched
>> acl 0 and a session was created for it. So I dug deeper into the
>> source code and found this block in the hash_acl_add function:
>>
>>   if (am->l4_match_nonfirst_fragment) {
>>     /* add the second rule which matches the noninitial fragments with
>>        the respective mask */
>>     make_mask_and_match_from_rule(&mask, &a->rules[i], &ace_info, 1);
>>     ace_info.mask_type_index = assign_mask_type_index(am, &mask);
>>     ace_info.match.pkt.mask_type_index_lsb = ace_info.mask_type_index;
>>     DBG("ACE: %d (non-initial frags) mask_type_index: %d", i,
>>         ace_info.mask_type_index);
>>     /* Ensure a given index is set in the mask type index bitmap for
>>        this ACL */
>>     ha->mask_type_index_bitmap =
>>       clib_bitmap_set(ha->mask_type_index_bitmap,
>>                       ace_info.mask_type_index, 1);
>>     vec_add1(ha->rules, ace_info);
>>   }
>>
>> We make a 3-tuple rule for non-first-fragment packets; this code solves
>> the IP fragment problem in a simple but inaccurate way. I think we
>> need a buffer for fragments like netfilter conntrack.
>>
>> Regards,
>> Khers
>>
>>
>>
>>
>> 
>>
>>
>


Re: [vpp-dev] VPP Scalability Problem

2018-05-09 Thread Andrew Yourtchenko
Dear Rubina,

You could take a look at “perf top” to see what could be going on. If you need 
help, let me know, I would be happy to look at it together.

Also, now as part of the work for 18.07 I am testing a couple of different 
approaches to change the processing for more performance, would you be 
interested to give it a try ?

--a

> On 9 May 2018, at 13:19, Rubina Bianchi  wrote:
> 
> Dear VPP Folks,
> I have a problem with vpp scalability in stateful mode (permit+reflect acl 
> rules). I installed vpp on an HPE ProLiant DL380 Gen9 server in order to check 
> vpp performance and throughput scalability against the number of vpp worker 
> threads.
> Our appliance has 2 CPUs, each with 22 cores (44 cores in hyperthreaded 
> mode). Our scenario uses 6 interfaces; each pair of interfaces is bridged, so 
> we have 3 interface pairs. The startup config file and other configs are 
> attached. With an increasing number of worker threads I expected to see more 
> throughput, but that did not happen. For example, with 18 workers (current 
> config) I expected to see 45 Gbps of throughput (rx) on our DUT with 6 
> interfaces (because the best-case throughput in the sfr scenario is almost 
> 15 Gbps per interface pair), but the maximum throughput observed is 
> approximately 24 Gbps. Beyond that, increasing the number of worker threads 
> not only had no effect on throughput but in some cases decreased overall 
> throughput catastrophically. Why?
> 
> 
> 
> Sent from Outlook
> 
> 
> 
> 
> 


Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-09 Thread Luca Muscariello (lumuscar)
Florin,

Thanks for the slide deck, I’ll check it soon.

BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
little
advantage wrt the Linux TCP stack which was using 1500B by default.

By manually setting DPDK MTU to 1500B the goodput goes down to 8.5Gbps which 
compares
to 4.5Gbps for Linux w/o TSO. Also congestion window adaptation is not the same.

BTW, for what we’re doing it is difficult to reuse the VPP session layer as it 
is.
Our transport stack uses a different kind of namespace and mux/demux is also 
different.

We are using memif as underlying driver which does not seem to be a
bottleneck as we can also control batching there. Also, we have our own
shared memory downstream memif inside VPP through a plugin.

What we observed is that delay-based congestion control does not like
much VPP batching (batching in general) and we are using DBCG.

Linux TSO has the same problem but has TCP pacing to limit bad effects of bursts
on RTT/losses and flow control laws.

I guess you’re aware of these issues already.

Luca


From: Florin Coras 
Date: Monday 7 May 2018 at 22:23
To: Luca Muscariello 
Cc: Luca Muscariello , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Yes, the whole host stack uses shared memory segments and fifos that the 
session layer manages. For a brief description of the session layer see [1, 2]. 
Apart from that, unfortunately, we don’t have any other dev documentation. 
src/vnet/session/segment_manager.[ch] has some good examples of how to allocate 
segments and fifos. Under application_interface.h check 
app_[send|recv]_[stream|dgram]_raw for examples on how to read/write to the 
fifos.

Now, regarding writing to the fifos: they are lock free but size 
increments are atomic since the assumption is that we’ll always have one reader 
and one writer. Still, batching helps. VCL doesn’t do it but iperf probably 
does it.

Hope this helps,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/SessionLayerArchitecture
[2] https://wiki.fd.io/images/1/15/Vpp-hoststack-kc-eu-18.pdf
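
(A generic illustration of the one-reader/one-writer point above - this is a
toy SPSC byte fifo, not VPP's svm_fifo; it just shows why only the cursor
updates need to be atomic and why batching larger writes helps:)

  /* Toy single-producer/single-consumer byte FIFO; illustrative only, not
   * VPP's svm_fifo.  With exactly one reader and one writer, only the
   * head/tail updates need to be atomic - no locks are required. */
  #include <stdatomic.h>
  #include <stdint.h>

  typedef struct
  {
    uint8_t data[4096];
    _Atomic uint32_t head;   /* written only by the consumer */
    _Atomic uint32_t tail;   /* written only by the producer */
  } toy_fifo_t;

  static uint32_t
  toy_fifo_enqueue (toy_fifo_t *f, const uint8_t *buf, uint32_t len)
  {
    uint32_t head = atomic_load_explicit (&f->head, memory_order_acquire);
    uint32_t tail = atomic_load_explicit (&f->tail, memory_order_relaxed);
    uint32_t free_space = sizeof (f->data) - (tail - head);
    uint32_t n = len < free_space ? len : free_space;

    for (uint32_t i = 0; i < n; i++)
      f->data[(tail + i) % sizeof (f->data)] = buf[i];

    /* Publish the new tail only after the payload is in place. */
    atomic_store_explicit (&f->tail, tail + n, memory_order_release);
    return n;   /* batching many bytes per call amortizes this update */
  }

  static uint32_t
  toy_fifo_dequeue (toy_fifo_t *f, uint8_t *buf, uint32_t len)
  {
    uint32_t tail = atomic_load_explicit (&f->tail, memory_order_acquire);
    uint32_t head = atomic_load_explicit (&f->head, memory_order_relaxed);
    uint32_t used = tail - head;
    uint32_t n = len < used ? len : used;

    for (uint32_t i = 0; i < n; i++)
      buf[i] = f->data[(head + i) % sizeof (f->data)];

    atomic_store_explicit (&f->head, head + n, memory_order_release);
    return n;
  }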


On May 7, 2018, at 11:35 AM, Luca Muscariello (lumuscar) 
> wrote:

Florin,

So the TCP stack does not connect to VPP using memif.
I’ll check the shared memory you mentioned.

For our transport stack we’re using memif. Nothing to
do with TCP though.

Iperf3 to VPP there must be copies anyway.
There must be some batching with timing though
while doing these copies.

Is there any doc of svm_fifo usage?

Thanks
Luca

On 7 May 2018, at 20:00, Florin Coras 
> wrote:
Hi Luca,

I guess, as you did, that it’s vectorization. VPP is really good at pushing 
packets whereas Linux is good at using all hw optimizations.

The stack uses its own shared memory mechanisms (check svm_fifo_t) but given 
that you did the testing with iperf3, I suspect the edge is not there. That is, 
I guess they’re not abusing syscalls with lots of small writes. Moreover, the 
fifos are not zero-copy, apps do have to write to the fifo and vpp has to 
packetize that data.

Florin


On May 7, 2018, at 10:29 AM, Luca Muscariello (lumuscar) 
> wrote:

Hi Florin

Thanks for the info.

So, how do you explain the VPP TCP stack beating the Linux
implementation by doubling the goodput?
Does it come from vectorization?
Any special memif optimization underneath?

Luca

On 7 May 2018, at 18:17, Florin Coras 
> wrote:
Hi Luca,

We don’t yet support TSO because it requires support within all of vpp (think 
tunnels). Still, it’s on our list.

As for crypto offload, we do have support for IPSec offload with QAT cards and 
we’re now working with Ping and Ray from Intel on accelerating the TLS OpenSSL 
engine also with QAT cards.

Regards,
Florin


On May 7, 2018, at 7:53 AM, Luca Muscariello 
> wrote:

Hi,

A few questions about the TCP stack and HW offloading.
Below is the experiment under test.

  +--------------+              +--------------+              +--------------+
  | Iperf3 (TCP) |  DPDK-10GE   |              |  DPDK-10GE   | (TCP) Iperf3 |
  |  LXC  | VPP  +--------------+ Nexus Switch +--------------+  VPP  | LXC  |
  +--------------+              +--------------+              +--------------+


Using the Linux kernel w/ or w/o TSO I get an iperf3 goodput of 9.5Gbps or 
4.5Gbps respectively.
Using the VPP TCP stack I get 9.2Gbps, i.e. roughly the max goodput of Linux w/ TSO.

Is there any TSO implementation already in VPP one can take advantage of?

Side question. Is there any crypto offloading service available in VPP?
Essentially for the computation of RSA-1024/2048 and ECDSA 192/256 signatures.

Thanks
Luca







[vpp-dev] Regarding Frames

2018-05-09 Thread Prashant Upadhyaya
Hi,

I am looking for some white paper or some slides which explain deeply
how the data structures related to the frames are organized in fd.io.

 Specifically, I need to create a mind map of the following data
structures in the code –

vlib_frame_t
vlib_next_frame_t
vlib_pending_frame_t

Then the presence of next_frames and pending_frames in
vlib_node_main_t, and the presence of next_frame_index in
vlib_node_runtime_t.

Some graphical description of how the frames are organized, which one
pointing to what from the above and so forth.

This will help in critically understanding the lifecycle of a frame (I
do understand in general in terms of a plugin, but want to see the
organization slightly more deeply). I believe I can figure all this
out if I haggle enough with the code (and which I would have to do
eventually if I want to get far and have taken a pass at it), but if
someone has done this hard work and created some kind of a friendly
document around this, I would lap it up before making another pass at
the code.
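
(Not a substitute for such a document, but for orientation, below is the
standard node dispatch skeleton where most of those structures meet a plugin -
the vlib_get_next_frame()/vlib_put_next_frame() pattern; the node itself is a
made-up pass-through, not code from any existing plugin:)

  #include <vlib/vlib.h>

  static uword
  my_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
              vlib_frame_t * frame)
  {
    u32 n_left_from, *from, next_index;

    from = vlib_frame_vector_args (frame);   /* buffer indices in this frame */
    n_left_from = frame->n_vectors;
    next_index = node->cached_next_index;

    while (n_left_from > 0)
      {
        u32 n_left_to_next, *to_next;

        /* vlib_get_next_frame() finds/allocates the vlib_next_frame_t slot
         * for next_index in this node's runtime and hands back space in the
         * corresponding vlib_frame_t. */
        vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

        while (n_left_from > 0 && n_left_to_next > 0)
          {
            to_next[0] = from[0];        /* pass the buffer through unchanged */
            to_next += 1; from += 1;
            n_left_to_next -= 1; n_left_from -= 1;
          }

        /* vlib_put_next_frame() queues the (possibly partially filled) frame
         * as a vlib_pending_frame_t so the next node sees it in this
         * dispatch cycle. */
        vlib_put_next_frame (vm, node, next_index, n_left_to_next);
      }
    return frame->n_vectors;
  }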

Regards
-Prashant




Re: [vpp-dev] MTU in vpp

2018-05-09 Thread Ole Troan
> If the dpdk plugin is used, which is the default case in VPP, the MTU of each 
> interface is set by retrieving MTU information from the underlying DPDK PMD 
> API rte_eth_dev_info_get(). So if the device supports a max MTU of, say, 9216, 
> then that will be configured.
> 
> I don't think setting the MTU via the vpp cli or startup.config is supported, 
> although it should be easy to implement by adding a dpdk plugin cli that 
> calls rte_eth_dev_set_mtu().

I have a patch in the works that lets you set the MTU on software interfaces. 
It should be there within a few days.

Best regards,
Ole
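
(For reference, a minimal sketch of the DPDK call mentioned in the quote above;
rte_eth_dev_set_mtu() is the real API, while the wrapper name and error handling
are illustrative only, not VPP code:)

  /* Illustrative wrapper around the DPDK API mentioned above; not VPP code. */
  #include <rte_ethdev.h>
  #include <rte_errno.h>
  #include <stdio.h>

  static int
  set_port_mtu (uint16_t port_id, uint16_t mtu)
  {
    /* Many PMDs require the port to be stopped before the MTU can change;
     * callers would normally stop/reconfigure/restart around this. */
    int rv = rte_eth_dev_set_mtu (port_id, mtu);
    if (rv < 0)
      fprintf (stderr, "port %u: setting MTU %u failed: %s\n",
               port_id, mtu, rte_strerror (-rv));
    return rv;
  }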







Re: [vpp-dev] NAT output-feature

2018-05-09 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco)
Hi,

The only difference is that nat44-in2out-output sends packets to
interface-output instead of ip4-lookup.
Instead of "set interface nat44 in GigabitEthernet0/8/0 out
GigabitEthernet0/a/0", use "set interface nat44 out GigabitEthernet0/a/0
output-feature", and if you need hairpinning use "set interface nat44 in
GigabitEthernet0/8/0" too.

Regards,
Matus

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Matthew Smith
Sent: Tuesday, May 8, 2018 6:57 PM
To: vpp-dev 
Subject: [vpp-dev] NAT output-feature


Hi,

The NAT plugin CLI command to configure an interface to participate in NAT has 
a flag “output-feature” that affects how outbound (“in2out”) processing is 
done. Can the output-feature option be used in any situation where the standard 
in2out processing can be used? Are there any limitations on what use cases can 
be supported with output-feature enabled (or with it not enabled)?

My reason for asking: I tried to configure an IPsec tunnel to be terminated on 
a NAT inside interface. When ESP packets arrive, the source address gets 
rewritten to the NAT pool address by NAT44 in2out (slow path). After a FIB 
lookup determines that the destination address is local, the packet is dropped 
because the source and destination addresses are both local, so it looks like 
the source address is spoofed.

I created a patch that avoids this issue with the standard in2out. Then I 
noticed the output feature version of in2out and wondered if that might be 
better to use in this case. I’m trying to figure out if I would lose anything 
(e.g. interoperability with some feature, throughput) by handling in2out 
traffic as an output feature.

Thanks!
-Matt




