Re: [vpp-dev] BondedInterface and memif interface for raw packet #memif #BondedInterface #plugin

2021-07-15 Thread Matthew Smith via lists.fd.io
On Thu, Jul 15, 2021 at 2:12 PM RaviKiran Veldanda wrote:

> Hi Experts,
> We implemented a plugin; this plugin reads packets from a physical interface
> and sends them to a memif.
> The other end of the memif is our application, which takes the necessary action.
> The packet flow would be
>
> Device-Input --> OUR Plugin --> Memif --> Our application. (The plugin filters
> the packets and sends them to memif or bond-input.)
> or
> Device-Input --> OUR Plugin --> bond-input --> VPP processing.
>
> Our plugin works with only a physical interface. Now we are trying to
> make it work with a bonded interface, and it's not working.
> If we disable bond-input for that physical interface and only enable our
> plugin, it works fine. But we need BondedInterface support for LACP,
> so we can't just avoid the bonded interface.
>

Hi Ravi,

What constitutes "not working"? Packets are not reaching your plugin node?
Packets are not reaching your application across the memif? Packets handed
by your plugin node to bond-input are not forwarded as expected? VPP
crashes? Something else?



> Our feature arc is
> VNET_FEATURE_INIT (ipgw_ent, static) =
> {
>   .arc_name = "device-input",
>   .node_name = "ravi_plugin",
>   .runs_before = VNET_FEATURES ("ethernet-input"),
> };
>
> I tried placing my plugin on different arcs, but it's not working. Can you
> please suggest how to make our plugin work for logical interfaces?
> For example, if we want to make it work for BondedInterface.1100, which arc
> should we choose? We need the raw packets in our application.
> Any pointers would be a very big help.
>
>
It's hard to say what the problem is without more information. It would be
helpful to see the output of a vppctl packet trace for one of the packets that
is being handled incorrectly. Hopefully your plugin node is capable of
adding trace data.
The output of 'vppctl show runtime' might also be helpful.
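If you haven't captured one yet, something along these lines should do it
(substitute your actual input node for dpdk-input):

    vppctl clear trace
    vppctl trace add dpdk-input 50
    (send a few packets)
    vppctl show trace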

Some guesses of things that might be related to the problem:

   - If you're using LACP, some inband signaling is required between your
   physical interfaces and the devices they're attached to. Is your plugin
   causing LACP packets to be dropped?
   - bond-input is itself a node on the device-input arc. With your
   declaration containing '.runs_before = VNET_FEATURES ("ethernet-input"),',
   I don't think you are guaranteed that your plugin node will process a
   packet before bond-input does. Maybe changing to '.runs_before =
   VNET_FEATURES ("bond-input"),' will help (see the sketch after this list).
   - How is your plugin node handing packets off to bond-input? Are you
   calling vnet_feature_next() or are you explicitly handing off to
   bond-input? bond-input calls vnet_feature_next() to figure out the next
   node it should hand a packet to. If your plugin node did not call
   vnet_feature_next() and instead just handed the packet directly to
   bond-input, that might cause issues when bond-input calls
   vnet_feature_next().
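For concreteness, a minimal sketch of the last two suggestions combined (the
registration is adapted from your snippet; the node fragment is illustrative,
not your actual code):

    /* Run before bond-input so the plugin sees packets first. */
    VNET_FEATURE_INIT (ipgw_ent, static) =
    {
      .arc_name = "device-input",
      .node_name = "ravi_plugin",
      .runs_before = VNET_FEATURES ("bond-input"),
    };

    /* In the node function, for packets the plugin does not divert to the
       memif, stay on the feature arc instead of hard-coding the next node: */
    vnet_feature_next (&next0, b0); /* arc picks bond-input, etc. */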

-Matt

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19795): https://lists.fd.io/g/vpp-dev/message/19795
Mute This Topic: https://lists.fd.io/mt/84233170/21656
Mute #plugin:https://lists.fd.io/g/vpp-dev/mutehashtag/plugin
Mute #memif:https://lists.fd.io/g/vpp-dev/mutehashtag/memif
Mute #bondedinterface:https://lists.fd.io/g/vpp-dev/mutehashtag/bondedinterface
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] BondedInterface and memif interface for raw packet #memif #BondedInterface #plugin

2021-07-15 Thread RaviKiran Veldanda
[Edited Message Follows]

Hi Experts,
We implemented a plugin; this plugin reads packets from a physical interface
and sends them to a memif.
We implemented this approach to get raw packets to our application.
The other end of the memif is our application, which takes the necessary action.
The packet flow would be

Device-Input --> OUR Plugin --> Memif --> Our application. (The plugin filters
the packets and sends them to memif or bond-input.)
or
Device-Input --> OUR Plugin --> bond-input --> VPP processing.

Our plugin works with only a physical interface. Now we are trying to make
it work with a bonded interface, and it's not working.
If we disable bond-input for that physical interface and only enable our
plugin, it works fine. But we need BondedInterface support for LACP, so we
can't just avoid the bonded interface.
Our feature arc is
VNET_FEATURE_INIT (ipgw_ent, static) =
{
  .arc_name = "device-input",
  .node_name = "ravi_plugin",
  .runs_before = VNET_FEATURES ("ethernet-input"),
};

I tried placing my plugin on different arcs, but it's not working. Can you
please suggest how to make our plugin work for logical interfaces?
For example, if we want to make it work for BondedInterface.1100, which arc
should we choose? We need the raw packets in our application.
Any mechanism that provides raw packet reading from VPP would also work.
Any pointers would be a very big help.

//Ravi.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19793): https://lists.fd.io/g/vpp-dev/message/19793
Mute This Topic: https://lists.fd.io/mt/84233170/21656
Mute #plugin:https://lists.fd.io/g/vpp-dev/mutehashtag/plugin
Mute #memif:https://lists.fd.io/g/vpp-dev/mutehashtag/memif
Mute #bondedinterface:https://lists.fd.io/g/vpp-dev/mutehashtag/bondedinterface
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] HQOS in latest VPP

2021-07-15 Thread Damjan Marion via lists.fd.io


> On 15.07.2021., at 18:16, satish amara  wrote:
> 
> Hi,
> It looks like Hierarchical Queuing (HQoS) support is missing in the latest
> VPP release. The last release I see it in is 20.01. Are there any plans to
> support it again in a future VPP release? I don't see the config commands or
> the hqos code in the latest VPP release.

No, currently no plans AFAIK.

— 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19794): https://lists.fd.io/g/vpp-dev/message/19794
Mute This Topic: https://lists.fd.io/mt/84229266/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Buffer chains and pre-data area

2021-07-15 Thread Damjan Marion via lists.fd.io


> On 15.07.2021., at 18:53, jerome.bay...@student.uliege.be wrote:
> 
> Dear vpp-dev,
> 
> I'm trying to do some IPv6 in IPv6 encapsulation with no tunnel configuration.
> 
> The objective is to encapsulate the received packet in another IPv6 packet
> that will also "contain" a Hop-by-hop extension header. In summary, the
> structure of the final packet will look like this: Outer-IP6-Header ->
> Hop-by-hop-extension-header -> Original packet.
> 
> The main concern here is that the size of the outer IP6 header plus the size
> of the extension header sometimes exceeds 128 bytes. When that happens, I
> cannot write my data inside the buffer pre-data area because it has a size of
> 128 bytes. I already asked for solutions previously and was advised to either
> increase the size of the pre-data area by recompiling VPP or to create a new
> buffer for my data and then chain it to the original one. I was able to
> create a buffer chain that seemed to work perfectly fine.
> 
> However, when I tried to perform some performance tests I was quite
> disappointed by the results: the buffer allocation for each packet is not
> efficient at all. My question is then: is there any way to increase the
> performance? To allocate buffers, I use the function "vlib_buffer_alloc"
> defined in "buffer_funcs.h", but is it the right function to use?

I’m quite sure vlib_buffer_alloc() can allocate buffers very fast. Hopefully
you are not calling that function for one buffer at a time...

> 
> In my case, the best option would be to have more space available in the
> buffer's pre-data area, but VPP does not seem to be built in a way that
> allows easy modification of the "PRE_DATA_SIZE" value. Am I right, or is
> there any "clean" method to change this value?

PRE_DATA_SIZE is a compile-time constant for a good reason: making it
configurable would decrease the performance of almost every dataplane
component.

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19792): https://lists.fd.io/g/vpp-dev/message/19792
Mute This Topic: https://lists.fd.io/mt/84230132/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Buffer chains and pre-data area

2021-07-15 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Jerome,

> However, when I tried to perform some performance tests I was quite
> disappointed by the results: the buffer allocation for each packet is not
> efficient at all. My question is then: is there any way to increase the
> performance? To allocate buffers, I use the function "vlib_buffer_alloc"
> defined in "buffer_funcs.h", but is it the right function to use?

Do you allocate buffers in batches? Let's say you want to encapsulate a batch
of packets; instead of doing:

  while (n_left)
    u32 bi
    vlib_buffer_t *b
    vlib_buffer_alloc (vm, &bi, 1)
    b = vlib_get_buffer (vm, bi)
    add b to the chain
    ...

You should do something like (allocation error checking etc. is left as an 
exercise):

  u32 bi[VLIB_FRAME_SIZE]
  vlib_buffer_t *bufs[VLIB_FRAME_SIZE]
  vlib_buffer_alloc (vm, bi, n_left)
  vlib_get_buffers (vm, bi, bufs, n_left)

  while (n_left)
    add bufs[i] to the chain
    ...
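A slightly more complete sketch of the batched path, with the error check
filled in (illustrative only, untested):

  u32 bi[VLIB_FRAME_SIZE];
  vlib_buffer_t *bufs[VLIB_FRAME_SIZE];
  u32 n_alloc = vlib_buffer_alloc (vm, bi, n_left);
  if (PREDICT_FALSE (n_alloc < n_left))
    {
      /* partial allocation: free what we did get and bail out */
      vlib_buffer_free (vm, bi, n_alloc);
      return 0;
    }
  vlib_get_buffers (vm, bi, bufs, n_left);
  /* ...then walk bufs[] and chain one buffer to each packet... */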

> In my case, the best option would be to have more space available in the
> buffer's pre-data area but VPP does not seem to be built in a way that
> allows easy modifications of the "PRE_DATA_SIZE" value. Am I right or is
> there any "clean" method to change this value ?

It is defined as a CMake variable and can be customized through e.g. cmake-gui.
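For example, on an already-configured build tree (paths assume a default
'make build-release'; the variable name may differ in your VPP version):

  cmake -DPRE_DATA_SIZE=256 build-root/build-vpp-native/vpp
  # then re-run your usual build, e.g. make build-release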

ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19791): https://lists.fd.io/g/vpp-dev/message/19791
Mute This Topic: https://lists.fd.io/mt/84230132/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Buffer chains and pre-data area

2021-07-15 Thread jerome . bayaux
Dear vpp-dev, 

I'm trying to do some IPv6 in IPv6 encapsulation with no tunnel configuration. 

The objective is to encapsulate the received packet in another IPv6 packet
that will also "contain" a Hop-by-hop extension header. In summary, the
structure of the final packet will look like this: Outer-IP6-Header ->
Hop-by-hop-extension-header -> Original packet.

The main concern here is that the size of the outer IP6 header plus the size
of the extension header sometimes exceeds 128 bytes. When that happens, I
cannot write my data inside the buffer pre-data area because it has a size of
128 bytes. I already asked for solutions previously and was advised to either
increase the size of the pre-data area by recompiling VPP or to create a new
buffer for my data and then chain it to the original one. I was able to create
a buffer chain that seemed to work perfectly fine.

However, when I tried to perform some performance tests I was quite
disappointed by the results: the buffer allocation for each packet is not
efficient at all. My question is then: is there any way to increase the
performance? To allocate buffers, I use the function "vlib_buffer_alloc"
defined in "buffer_funcs.h", but is it the right function to use?

In my case, the best option would be to have more space available in the
buffer's pre-data area, but VPP does not seem to be built in a way that allows
easy modification of the "PRE_DATA_SIZE" value. Am I right, or is there any
"clean" method to change this value?

Thank you all already for your help, 

Jérôme 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19790): https://lists.fd.io/g/vpp-dev/message/19790
Mute This Topic: https://lists.fd.io/mt/84230132/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] MPLS protection

2021-07-15 Thread Gudimetla, Leela Sankar via lists.fd.io
Thanks Neale for responding.
“HA LSP” – I was referring to a primary LSP (Label Switched Path) and a backup
LSP. Two or more label encaps can be configured as a group for each label-level
in the packet header. In that configuration group, one label refers to the
primary next-hop and the others refer to backup next-hops. The backup label is
used if the primary next-hop fails for some reason. This protection-group can
be extended to all the label-levels in the label-stack in the packet header.

Hope this is clear (at a high level, it is about label protection, and about
supporting that protection hierarchically for all the label-levels in the
label stack in the packet header).

Regards,
Leela sankar

From: Neale Ranns 
Date: Thursday, July 15, 2021 at 1:01 AM
To: Gudimetla, Leela Sankar , vpp-dev@lists.fd.io 

Subject: [**EXTERNAL**] Re: MPLS protection
Hi Leela,

There’s no FRR. I don’t know what a HA LSP would be.
Here’s the docs on what fast convergence support there is:
  
https://github.com/FDio/vpp/blob/master/docs/gettingstarted/developers/fib20/fastconvergence.rst

/neale


From: vpp-dev@lists.fd.io  on behalf of Gudimetla, Leela 
Sankar via lists.fd.io 
Date: Wednesday, 14 July 2021 at 20:57
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] MPLS protection
Hello,

I am trying out MPLS configurations to test which features are currently
supported on stable 1908.

I don’t see protection for LSPs, like HA or FRR support, explicitly. So I am
wondering whether it is supported yet, or whether I may be missing something.

Can someone please share (documentation, code) which protection mechanisms
are supported for MPLS?

Thanks,
Leela sankar

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19789): https://lists.fd.io/g/vpp-dev/message/19789
Mute This Topic: https://lists.fd.io/mt/84209148/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [EXTERNAL] [vpp-dev] Multi-threading locks and synchronization

2021-07-15 Thread satish amara
Thank you for the guidance on this topic. Is there good documentation on API usage?
VPPINFRA (Infrastructure) — Vector Packet Processor 01 documentation
(fdio-vpp.readthedocs.io)

This info is too little to understand all the semantics.

On Tue, Jul 13, 2021 at 3:39 PM Damjan Marion  wrote:

>
>
> > On 13.07.2021., at 18:41, satish amara  wrote:
> >
> > Sync is needed. It's a question about the design of packet flow in  VPP.
> Locks can be avoided if the packets in a flow are processed by the same
> thread.
>
> You can use the handoff to make sure all packets belonging to specific
> flow or session end up on the same thread.
>
> You can use bihash to store both the thread_index and the per-thread
> flow/session index in the hash result. Bihash has per-bucket locks, so it is
> safe to use a single hash table from different workers.
>
> After lookup you can simply compare the lookup result thread_index with the
> current thread index. If they are different, you simply hand the packet off
> to the other thread; if they are the same, you continue processing packets
> on the same thread.
>
> After that you can build all your data structures as per-thread and avoid
> locking or atomics.
>
> —
> Damjan
>
>
>
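As a rough illustration of the pattern Damjan describes (the table, the key
packing and the next-node names are assumptions, not from the original mail):

  clib_bihash_kv_8_8_t kv, res;
  kv.key = flow_hash;                          /* e.g. a 5-tuple hash */
  if (!clib_bihash_search_8_8 (&sm->sessions, &kv, &res))
    {
      u32 owner_thread = res.value >> 32;      /* packed at insert time */
      u32 session_index = (u32) res.value;
      if (owner_thread != vm->thread_index)
        next0 = MY_NEXT_HANDOFF;               /* enqueue to owner thread */
      else
        next0 = process_locally (b0, session_index); /* per-thread state */
    }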

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19788): https://lists.fd.io/g/vpp-dev/message/19788
Mute This Topic: https://lists.fd.io/mt/84186832/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] HQOS in latest VPP

2021-07-15 Thread satish amara
Hi,
   It looks like Hierarchical Queuing (HQoS) support is missing in the
latest VPP release. The last release I see it in is 20.01. Are there any
plans to support it again in a future VPP release? I don't see the config
commands or the hqos code in the latest VPP release.

Thanks,
Satish K Amara

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19787): https://lists.fd.io/g/vpp-dev/message/19787
Mute This Topic: https://lists.fd.io/mt/84229266/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on different Linux Platforms

2021-07-15 Thread Damjan Marion via lists.fd.io


> On 15.07.2021., at 16:51, satish amara  wrote:
> 
> [Edited Message Follows]
> 
> Thanks, I am trying to understand VPP's dependencies on the OS kernel/Linux
> flavors (CentOS, Red Hat, Ubuntu), and what Linux kernel versions it can
> run on.

On pretty much any OS kernel/Linux flavour that has one of the kernel versions
described in my previous e-mail.

> Downloading and Installing VPP — The Vector Packet Processor 20.01 
> documentation (fd.io),  
> The above link talks about version 7 of CentOS. There is no documentation
> about Ubuntu and other Linux flavors.

Doc is outdated….

> If there are dependencies on Linux Kernel what are they?

Can you give me one or a few examples of such a dependency, just to understand
your question?

> Can I compile the VPP code on CentOS and run it on Red Hat?

Yes, you likely can. We recently decided to remove CentOS from our CI/CD due to
a lack of interested parties to maintain CentOS support, so things may be
broken.


> Do I need to compile and run code on the same flavour of OS? 

In theory no, but your life will likely be easier if you do so….

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19786): https://lists.fd.io/g/vpp-dev/message/19786
Mute This Topic: https://lists.fd.io/mt/84184116/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on different Linux Platforms

2021-07-15 Thread satish amara
[Edited Message Follows]

Thanks, I am trying to understand VPP's dependencies on the OS kernel/Linux
flavors (CentOS, Red Hat, Ubuntu), and what Linux kernel versions it can run
on.
Downloading and Installing VPP — The Vector Packet Processor 20.01
documentation (fd.io), (
https://fd.io/docs/vpp/master/gettingstarted/installing/index.html )
The above link talks about version 7 of CentOS; there is no documentation
about Ubuntu and other Linux flavors.
If there are dependencies on the Linux kernel, what are they?
Can I compile the VPP code on CentOS and run it on Red Hat? Do I need to
compile and run the code on the same flavour of OS?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19785): https://lists.fd.io/g/vpp-dev/message/19785
Mute This Topic: https://lists.fd.io/mt/84184116/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] FD.io CSIT-2106 Release Report is published

2021-07-15 Thread Maciek Konstantynowicz (mkonstan) via lists.fd.io
Caveat note :)

A number of the latency graphs in the report show excessive values, O(sec),
across frame sizes, without and with load, for both VPP and DPDK.

A few of us in CSIT are working on the resolution; stay tuned.
For now, please ignore those numbers.

Cheers,
Maciek

> On 15 Jul 2021, at 13:03, Maciek Konstantynowicz (mkonstan) wrote:
> 
> Hi All,
> 
> FD.io CSIT-2106 report is now available on FD.io docs site:
> 
>   https://docs.fd.io/csit/rls2106/report/
> 
> Another successful release! Many thanks to all contributors in CSIT and
> VPP communities for making it happen.
> 
> See below for CSIT-2106 release summary and pointers to specific
> sections in the report.
> 
> All comments are welcome, best by email to csit-...@lists.fd.io.
> 
> Cheers,
> Maciek (CSIT PTL)
> 
> 
> CSIT-2106 Release Summary
> -
> 
> BENCHMARK TESTS
> 
> - AF_XDP: Added af_xdp driver support for all test cases. Test results
>  will be added in a subsequent CSIT-2106 report.
> 
> - GTPU tunnel: Added GTPU HW Offload IPv4 routing tests.
> 
> - Intel Xeon Ice Lake: Added initial test data for these platforms.
>  Current CSIT-2106 report data for Intel Xeon Ice Lake comes from an
>  external source (Intel labs running CSIT code on
>  "8360Y D Stepping" and "6338N" processors).
> 
> - CSIT in AWS environment: Added CSIT support for the AWS c5n instance
>  environment. Test results will be added in a subsequent CSIT-2106
>  report.
> 
> - MLRsearch improvements: Added support for multiple packet throughput
>  rates in a single search; each rate is associated with a distinct
>  Packet Loss Ratio (PLR) criterion. Implemented a number of
>  optimizations improving rate discovery efficiency.
> 
> TEST FRAMEWORK
> 
> - Telemetry retouch: Refactored telemetry retrieval from DUTs and SUTs.
>  Included VPP perfmon plugin telemetry with all perfmon bundles
>  available in VPP release.
> 
> - Upgrade to Ubuntu 20.04 LTS: Re-installed base operating system to
>  Ubuntu 20.04.2 LTS. The upgrade also included the baseline Docker
>  containers used for spawning topologies.
> 
> Pointers to CSIT-2106 Report sections
> -
> 
> 1. FD.io CSIT test methodology  [1]
> 2. VPP release notes[2]
> 3. VPP 64B/IMIX throughput graphs   [3]
> 4. VPP throughput speedup multi-core[4]
> 5. VPP latency under load   [5]
> 6. VPP comparisons v21.06 vs. v21.01[6]
> 7. VPP performance all pkt sizes & NICs [7]
> 8. DPDK 20.08 apps release notes[8]
> 9. DPDK 64B throughput graphs   [9]
> 10. DPDK latency under load [10]
> 11. DPDK comparisons 21.02 vs. 20.11[11]
> 
> Functional device tests (VPP_Device) are also included in the report.
> 
> [1] https://docs.fd.io/csit/rls2106/report/introduction/methodology.html
> [2] 
> https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/csit_release_notes.html
> [3] 
> https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/packet_throughput_graphs/index.html
> [4] 
> https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/throughput_speedup_multi_core/index.html
> [5] 
> https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/packet_latency/index.html
> [6] 
> https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/comparisons/current_vs_previous_release.html
> [7] 
> https://docs.fd.io/csit/rls2106/report/detailed_test_results/vpp_performance_results/index.html
> [8] 
> https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/csit_release_notes.html
> [9] 
> https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/packet_throughput_graphs/index.html
> [10] 
> https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/packet_latency/index.html
> [11] 
> https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/comparisons/current_vs_previous_release.html


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19784): https://lists.fd.io/g/vpp-dev/message/19784
Mute This Topic: https://lists.fd.io/mt/84223353/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] FD.io CSIT-2106 Release Report is published

2021-07-15 Thread Maciek Konstantynowicz (mkonstan) via lists.fd.io
Hi All,

FD.io CSIT-2106 report is now available on FD.io docs site:

   https://docs.fd.io/csit/rls2106/report/

Another successful release! Many thanks to all contributors in CSIT and
VPP communities for making it happen.

See below for CSIT-2106 release summary and pointers to specific
sections in the report.

All comments are welcome, best by email to csit-...@lists.fd.io.

Cheers,
Maciek (CSIT PTL)


CSIT-2106 Release Summary
-

BENCHMARK TESTS

- AF_XDP: Added af_xdp driver support for all test cases. Test results
  will be added in a subsequent CSIT-2106 report.

- GTPU tunnel: Added GTPU HW Offload IPv4 routing tests.

- Intel Xeon Ice Lake: Added initial test data for these platforms.
  Current CSIT-2106 report data for Intel Xeon Ice Lake comes from an
  external source (Intel labs running CSIT code on
  "8360Y D Stepping" and "6338N" processors).

- CSIT in AWS environment: Added CSIT support for the AWS c5n instance
  environment. Test results will be added in a subsequent CSIT-2106
  report.

- MLRsearch improvements: Added support for multiple packet throughput
  rates in a single search; each rate is associated with a distinct
  Packet Loss Ratio (PLR) criterion. Implemented a number of
  optimizations improving rate discovery efficiency.

TEST FRAMEWORK

- Telemetry retouch: Refactored telemetry retrieval from DUTs and SUTs.
  Included VPP perfmon plugin telemetry with all perfmon bundles
  available in VPP release.

- Upgrade to Ubuntu 20.04 LTS: Re-installed base operating system to
  Ubuntu 20.04.2 LTS. The upgrade also included the baseline Docker
  containers used for spawning topologies.

Pointers to CSIT-2106 Report sections
-

1. FD.io CSIT test methodology  [1]
2. VPP release notes[2]
3. VPP 64B/IMIX throughput graphs   [3]
4. VPP throughput speedup multi-core[4]
5. VPP latency under load   [5]
6. VPP comparisons v21.06 vs. v21.01[6]
7. VPP performance all pkt sizes & NICs [7]
8. DPDK 20.08 apps release notes[8]
9. DPDK 64B throughput graphs   [9]
10. DPDK latency under load [10]
11. DPDK comparisons 21.02 vs. 20.11[11]

Functional device tests (VPP_Device) are also included in the report.

[1] https://docs.fd.io/csit/rls2106/report/introduction/methodology.html
[2] 
https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/csit_release_notes.html
[3] 
https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/packet_throughput_graphs/index.html
[4] 
https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/throughput_speedup_multi_core/index.html
[5] 
https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/packet_latency/index.html
[6] 
https://docs.fd.io/csit/rls2106/report/vpp_performance_tests/comparisons/current_vs_previous_release.html
[7] 
https://docs.fd.io/csit/rls2106/report/detailed_test_results/vpp_performance_results/index.html
[8] 
https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/csit_release_notes.html
[9] 
https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/packet_throughput_graphs/index.html
[10] 
https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/packet_latency/index.html
[11] 
https://docs.fd.io/csit/rls2106/report/dpdk_performance_tests/comparisons/current_vs_previous_release.html
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19783): https://lists.fd.io/g/vpp-dev/message/19783
Mute This Topic: https://lists.fd.io/mt/84223353/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Need help on IPSEC tunnel

2021-07-15 Thread Neale Ranns

Hi Nikhil,

Reaching the ip4-not-enabled node means your tunnel is not ip4 enabled. Give it
an IP address, or make it unnumbered to an interface that has an address.
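For example (ipip0 is taken from your trace below; the address and the donor
interface are illustrative):

  vpp# set interface ip address ipip0 192.0.2.1/32
or
  vpp# set interface unnumbered ipip0 use GigabitEthernet0/8/0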

/neale

From: vpp-dev@lists.fd.io  on behalf of nikhil subhedar 
via lists.fd.io 
Date: Tuesday, 13 July 2021 at 18:53
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] Need help on IPSEC tunnel
Greetings of the day...
I am facing a problem in the ip-node lookup. Here is the sequence:

1) The ESP packet is decrypted at esp4-decrypt-tun.
2) The packet reaches ip4-input-no-checksum, which is essentially ip4-lookup.
3) From ip4-input-no-checksum it should ideally reach ip4-lookup.

But in my case, from ip4-input-no-checksum it goes to ip4-not-enabled.

Can you please help me in this regard?

Thanks in advance.
Nikhil

Here is the trace.
  IPSEC_ESP: 20.20.147.217 -> 20.20.147.220
tos 0x00, ttl 64, length 120, checksum 0xea76 dscp CS0 ecn NON_ECN
fragment id 0x, flags DONT_FRAGMENT
00:03:11:387399: ip4-local
IPSEC_ESP: 20.20.147.217 -> 20.20.147.220
  tos 0x00, ttl 64, length 120, checksum 0xea76 dscp CS0 ecn NON_ECN
  fragment id 0x, flags DONT_FRAGMENT
00:03:11:387402: ipsec4-tun-input
  IPSec: remote:20.20.147.217 spi:12347 (0x303b) seq 1 sa 1
00:03:11:387413: esp4-decrypt-tun
  esp: crypto aes-cbc-256 integrity sha1-96 pkt-seq 1 sa-seq 1 sa-seq-hi 0
00:03:11:387471: ip4-input-no-checksum
  TCP: 10.10.10.10 -> 30.30.30.30
tos 0x00, ttl 64, length 60, checksum 0x4eb7 dscp CS0 ecn NON_ECN
fragment id 0x9bb5, flags DONT_FRAGMENT
  TCP: 3268 -> 1234
seq. 0xe5076078 ack 0x
flags 0x02 SYN, tcp header: 40 bytes
window 64240, checksum 0xca58
00:03:11:387473: ip4-not-enabled
TCP: 10.10.10.10 -> 30.30.30.30
  tos 0x00, ttl 64, length 60, checksum 0x4eb7 dscp CS0 ecn NON_ECN
  fragment id 0x9bb5, flags DONT_FRAGMENT
TCP: 3268 -> 1234
  seq. 0xe5076078 ack 0x
  flags 0x02 SYN, tcp header: 40 bytes
  window 64240, checksum 0xca58
00:03:11:387479: error-drop
  rx:ipip0
00:03:11:387481: drop
  ip4-local: unknown ip protocol



--
Regards,
Nikhil

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19782): https://lists.fd.io/g/vpp-dev/message/19782
Mute This Topic: https://lists.fd.io/mt/84182906/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] DPDK compilation under VPP 2101

2021-07-15 Thread Benoit Ganne (bganne) via lists.fd.io
>  I want to include additional DPDK patches for our use case. So whenever I
> compile VPP under a non-root user, the DPDK compilation fails as it tries
> to fetch the meson-related tarball and pip3 packages, for which it does not
> have the required permissions.
> > Any suggestions regarding this would be helpful?

You can put patches under build/external/patches/dpdk_<version>, e.g.
https://git.fd.io/vpp/tree/build/external/patches/dpdk_20.11
Those will be applied before configure & build.
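For example (the patch file name is illustrative):

  mkdir -p build/external/patches/dpdk_20.11
  cp 0001-my-dpdk-change.patch build/external/patches/dpdk_20.11/
  make build-release   # the patch is applied before DPDK is configured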

Best
ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19781): https://lists.fd.io/g/vpp-dev/message/19781
Mute This Topic: https://lists.fd.io/mt/84158427/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] ACL IPV6 rule addition using the "set acl_plugin acl" command from "vppctl" #vppctl #acl #acl_plugin #ipv6

2021-07-15 Thread Andrew Yourtchenko
Oh cool, thanks, Neale! :-)

This makes much more sense! I was staring at the code late yesterday evening
and questioning what I was missing… :)

--a

> On 15 Jul 2021, at 10:20, Neale Ranns  wrote:
> 
> 
>  
> Evidently a typo. Here you go:
>   https://gerrit.fd.io/r/c/vpp/+/33142
>  
> /neale
>  
> From: vpp-dev@lists.fd.io  on behalf of Andrew 
> Yourtchenko via lists.fd.io 
> Date: Wednesday, 14 July 2021 at 23:53
> To: RaviKiran Veldanda , Jakub Grajciar 
> 
> Cc: vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] ACL IPV6 rule addition using the "set acl_plugin acl" 
> command from "vppctl" #vppctl #acl #acl_plugin #ipv6
> 
> Ravi,
> 
> appears that the commit 2f8cd914514fe54f91974c6d465d4769dfac8de8 has
> hardcoded the IP address family in the CLI handler to IPv4:
> 
> 40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2873)   else if (unformat (line_input, "src %U/%d",
> bf883bb086 src/plugins/acl/acl.c (Neale Ranns    2020-04-23 16:01:20 +0000 2874)                      unformat_ip46_address, &src, IP46_TYPE_ANY,
> bf883bb086 src/plugins/acl/acl.c (Neale Ranns    2020-04-23 16:01:20 +0000 2875)                      &src_prefix_length))
> 40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2876)     {
> 40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2877)       vec_validate_acl_rules (rules, rule_idx);
> 2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2878)       ip_address_encode (&src, IP46_TYPE_ANY,
> 2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2879)                          &rules[rule_idx].src_prefix.address);
> 2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2880)       rules[rule_idx].src_prefix.address.af = ADDRESS_IP4;
> 2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2881)       rules[rule_idx].src_prefix.len = src_prefix_length;
> 40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2882)     }
> 40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2883)   else if (unformat (line_input, "dst %U/%d",
> bf883bb086 src/plugins/acl/acl.c (Neale Ranns    2020-04-23 16:01:20 +0000 2884)                      unformat_ip46_address, &dst, IP46_TYPE_ANY,
> 
> 
> I am including the commit author for the clarification on how that
> code is supposed to work for the IPv6 case.
> 
> A workaround is to use the "binary-api" command, which goes through the VAT
> code path and will work for you:
> 
> vpp# binary-api acl_add_replace -1 permit src 2001:db8::1/128
> vl_api_acl_add_replace_reply_t_handler:72: ACL index: 0
> vpp# show acl acl
> acl-index 0 count 1 tag {}
>   0: ipv6 permit src 2001:db8::1/128 dst ::/0 proto 0 sport
> 0-65535 dport 0-65535
> vpp#
> 
> --a
> 
> 
> On 7/14/21, RaviKiran Veldanda  wrote:
> > Hi Experts,
> > We were trying to create some ACL rules for IPv6 addresses,
> > *"set acl-plugin acl permit src 2001:5b0::1150::0/64 " in vppctl.
> > * "set acl-plugin acl permit ipv6 src 2001:5b0::1150::0/64 " in vppctl.
> > These give an ACL index, but when I check "show acl-plugin acl" it's not
> > giving any information.
> >
> > vpp# set acl-plugin acl ipv6 permit src 2001:5b0::1150::0/64
> > ACL index:1
> > vpp# show acl-plugin acl
> > acl-index 0 count 1 tag {cli}
> > 0: ipv4 permit src 172.25.169.0/24 dst 0.0.0.0/0 proto 0 sport 0-65535 dport
> > 0-65535
> > acl-index 1 count 0 tag {cli}
> > vpp#
> > We are using the VPP 20.05 stable version. We were not able to set the ACL
> > for IPv6.
> > We are not seeing any error message in the logs.
> > We were able to set an ACL for IPv4 without any issue.
> > We tried the same thing from vpp_api_test, and still could not set an IPv6
> > rule.
> > Can you please provide some pointers for creating an ACL rule for IPv6.
> > Thanks for your help.
> >
> > //Ravi
> >

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19780): https://lists.fd.io/g/vpp-dev/message/19780
Mute This Topic: https://lists.fd.io/mt/84212274/21656
Mute #acl_plugin:https://lists.fd.io/g/vpp-dev/mutehashtag/acl_plugin
Mute #ipv6:https://lists.fd.io/g/vpp-dev/mutehashtag/ipv6
Mute #vppctl:https://lists.fd.io/g/vpp-dev/mutehashtag/vppctl
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] ACL IPV6 rule addition using the "set acl_plugin acl" command from "vppctl" #vppctl #acl #acl_plugin #ipv6

2021-07-15 Thread Neale Ranns

Evidently a typo. Here you go:
  https://gerrit.fd.io/r/c/vpp/+/33142

/neale

From: vpp-dev@lists.fd.io  on behalf of Andrew Yourtchenko 
via lists.fd.io 
Date: Wednesday, 14 July 2021 at 23:53
To: RaviKiran Veldanda , Jakub Grajciar 

Cc: vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] ACL IPV6 rule addition using the "set acl_plugin acl" 
command from "vppctl" #vppctl #acl #acl_plugin #ipv6
Ravi,

appears that the commit 2f8cd914514fe54f91974c6d465d4769dfac8de8 has
hardcoded the IP address family in the CLI handler to IPv4:

40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2873)   else if (unformat (line_input, "src %U/%d",
bf883bb086 src/plugins/acl/acl.c (Neale Ranns    2020-04-23 16:01:20 +0000 2874)                      unformat_ip46_address, &src, IP46_TYPE_ANY,
bf883bb086 src/plugins/acl/acl.c (Neale Ranns    2020-04-23 16:01:20 +0000 2875)                      &src_prefix_length))
40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2876)     {
40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2877)       vec_validate_acl_rules (rules, rule_idx);
2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2878)       ip_address_encode (&src, IP46_TYPE_ANY,
2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2879)                          &rules[rule_idx].src_prefix.address);
2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2880)       rules[rule_idx].src_prefix.address.af = ADDRESS_IP4;
2f8cd91451 src/plugins/acl/acl.c (Jakub Grajciar 2020-03-27 06:55:06 +0100 2881)       rules[rule_idx].src_prefix.len = src_prefix_length;
40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2882)     }
40490db79b src/plugins/acl/acl.c (Neale Ranns    2020-03-24 15:09:41 +0000 2883)   else if (unformat (line_input, "dst %U/%d",
bf883bb086 src/plugins/acl/acl.c (Neale Ranns    2020-04-23 16:01:20 +0000 2884)                      unformat_ip46_address, &dst, IP46_TYPE_ANY,


I am including the commit author for clarification on how that code is
supposed to work for the IPv6 case.

A workaround is to use the "binary-api" command, which goes through the VAT
code path and will work for you:

vpp# binary-api acl_add_replace -1 permit src 2001:db8::1/128
vl_api_acl_add_replace_reply_t_handler:72: ACL index: 0
vpp# show acl acl
acl-index 0 count 1 tag {}
  0: ipv6 permit src 2001:db8::1/128 dst ::/0 proto 0 sport
0-65535 dport 0-65535
vpp#

--a


On 7/14/21, RaviKiran Veldanda  wrote:
> Hi Experts,
> We were trying to create some ACL rules for IPv6 addresses,
> *"set acl-plugin acl permit src 2001:5b0::1150::0/64 " in vppctl.
> * "set acl-plugin acl permit ipv6 src 2001:5b0::1150::0/64 " in vppctl.
> These give an ACL index, but when I check "show acl-plugin acl" it's not
> giving any information.
>
> vpp# set acl-plugin acl ipv6 permit src 2001:5b0::1150::0/64
> ACL index:1
> vpp# show acl-plugin acl
> acl-index 0 count 1 tag {cli}
> 0: ipv4 permit src 172.25.169.0/24 dst 0.0.0.0/0 proto 0 sport 0-65535 dport
> 0-65535
> acl-index 1 count 0 tag {cli}
> vpp#
> We are using the VPP 20.05 stable version. We were not able to set the ACL
> for IPv6.
> We are not seeing any error message in the logs.
> We were able to set an ACL for IPv4 without any issue.
> We tried the same thing from vpp_api_test, and still could not set an IPv6
> rule.
> Can you please provide some pointers for creating an ACL rule for IPv6.
> Thanks for your help.
>
> //Ravi
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19779): https://lists.fd.io/g/vpp-dev/message/19779
Mute This Topic: https://lists.fd.io/mt/84212274/21656
Mute #acl_plugin:https://lists.fd.io/g/vpp-dev/mutehashtag/acl_plugin
Mute #ipv6:https://lists.fd.io/g/vpp-dev/mutehashtag/ipv6
Mute #vppctl:https://lists.fd.io/g/vpp-dev/mutehashtag/vppctl
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] High packet drop under high binary-api call rate #binapi #vpp #vpp-dev #vapi

2021-07-15 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Govind,

> 1. Would there be any suggestions that can achieve a lower packet drop
> rate under a high binary-API call rate?

The strategy has been to change the API functions so that they are thread-safe
on an ad-hoc basis; see ip_route_add_del for example.

> 2. Are there any plan in future vpp release that can improve the "locking"
> of worker thread for non-thread-safe binary API call?

Not that I know of. What is the use case here? This usually concerns a small
set of APIs that can be updated to be thread-safe.

Best
ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19778): https://lists.fd.io/g/vpp-dev/message/19778
Mute This Topic: https://lists.fd.io/mt/84196118/21656
Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Mute #binapi:https://lists.fd.io/g/vpp-dev/mutehashtag/binapi
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] MPLS protection

2021-07-15 Thread Neale Ranns
Hi Leela,

There’s no FRR. I don’t know what a HA LSP would be.
Here’s the docs on what fast convergence support there is:
  
https://github.com/FDio/vpp/blob/master/docs/gettingstarted/developers/fib20/fastconvergence.rst

/neale


From: vpp-dev@lists.fd.io  on behalf of Gudimetla, Leela 
Sankar via lists.fd.io 
Date: Wednesday, 14 July 2021 at 20:57
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] MPLS protection
Hello,

I am trying out MPLS configurations to test which features are currently
supported on stable 1908.

I don’t see protection for LSPs, like HA or FRR support, explicitly. So I am
wondering whether it is supported yet, or whether I may be missing something.

Can someone please share (documentation, code) which protection mechanisms
are supported for MPLS?

Thanks,
Leela sankar

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19777): https://lists.fd.io/g/vpp-dev/message/19777
Mute This Topic: https://lists.fd.io/mt/84209148/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Prefetches improvement for VPP Arm generic image

2021-07-15 Thread Benoit Ganne (bganne) via lists.fd.io
Looks good to me, thanks!

ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion
> via lists.fd.io
> Sent: mercredi 14 juillet 2021 19:48
> To: vpp-dev 
> Cc: Honnappa Nagarahalli ; Govindarajan
> Mohandoss ; Tianyu Li ;
> Nitin Saxena ; Lijian Zhang 
> Subject: Re: [vpp-dev] Prefetches improvement for VPP Arm generic image
> 
> 
> I spent a bit of time to look at this and come up with some reasonable
> solution.
> 
> First, the 128-byte cacheline is not dead; the recently announced Marvell
> Octeon 10 has a 128-byte cacheline.
> 
> In the current code, the cacheline size defines both the amount of data a
> prefetch instruction prefetches and the alignment of data in data structures
> needed to avoid false sharing.
> 
> So I think ideally we should have following:
> 
> - on x86:
>   - number of bytes prefetch instruction prefetches set to 64
>   - data structures should be aligned to 64 bytes
>   - due to the fact that there is an adjacent-cacheline prefetcher on x86,
>     it may be worth investigating if aligning to 128 brings some value
> 
> - on AArch64
>   - number of bytes prefetch instruction prefetches set to 64 or 128,
> based on multiarch variant running
>   - data structures should be aligned to 128 bytes as that value prevents
> false sharing for both 64 and 128 byte cacheline systems
> 
> The main problem is the abuse of the CLIB_PREFETCH() macro in our codebase.
> The original idea was good: somebody wanted to provide a macro which
> transparently emits 1-4 prefetch instructions based on data size,
> recognising that there may be systems with different cacheline sizes.
> 
> Like:
>   CLIB_PREFETCH (p, sizeof (ip6_header_t), LOAD);
> 
> But reality is, most of the time we have:
>   CLIB_PREFETCH (p, CLIB_CACHE_LINE_BYTES, LOAD);
> 
> Where it is assumed that the cacheline size is 64, which just wastes
> resources on systems with a 128-byte cacheline.
> 
> Also, most of places in our codebase are perfectly fine with whatever
> cacheline size is, so I’m thinking about following:
> 
> 1. set CLIB_CACHE_LINE_BYTES to 64 on x86 and 128 on ARM, that will make
> sure false sharing is not happening
> 
> 2. introduce CLIB_CACHE_PREFETCH_BYTES which can be set to different value
> for different multiarch variants (64 for N1, 128 for ThunderX2)
> 
> 3. modify CLIB_PREFETCH macro to use CLIB_CACHE_PREFETCH_BYTES to emit
> proper number of prefetch instructions for cases where data size is
> specified
> 
> 4. take the stub and replace all `CLIB_PREFETCH (p, CLIB_CACHE_LINE_BYTES,
> LOAD);` with `clib_prefetch_load (p);`.
>There may be exceptions but those lines typically mean: "i want to
> prefetch few (<=64) bytes at this address and i really don’t care what the
> cache line size is”.
> 
> 5. analyse remaining few cases where CLIB_PREFETCH() is used with size
> specified by CLIB_CACHE_LINE_BYTES.
> 
> Thoughts?
> 
> —
> Damjan
> 
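As a rough sketch of what steps 2-4 could look like (the macro bodies here
are assumptions for illustration, not the actual patch):

  /* step 2: per-multiarch-variant prefetch granularity */
  #ifndef CLIB_CACHE_PREFETCH_BYTES
  #define CLIB_CACHE_PREFETCH_BYTES 64
  #endif

  /* step 3: emit enough prefetches to cover 'size' bytes; LOAD maps to
     rw=0 and STORE to rw=1 for __builtin_prefetch */
  #define _PREFETCH_RW_LOAD 0
  #define _PREFETCH_RW_STORE 1
  #define CLIB_PREFETCH(addr, size, type)                                    \
    do                                                                       \
      {                                                                      \
        int _i;                                                              \
        for (_i = 0; _i < (int) (size); _i += CLIB_CACHE_PREFETCH_BYTES)     \
          __builtin_prefetch ((char *) (addr) + _i, _PREFETCH_RW_##type, 3); \
      }                                                                      \
    while (0)

  /* step 4: the common "prefetch a few bytes here" case */
  static inline void
  clib_prefetch_load (void *p)
  {
    __builtin_prefetch (p, 0 /* read */, 3 /* keep in all cache levels */);
  }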
> > On 06.07.2021., at 03:48, Lijian Zhang  wrote:
> >
> > Thanks Damjan for your comments. Some replies in lines.
> > 
> > Hi Lijian,
> >
> > It will be good to know if 128 byte cacheline is something ARM platforms
> will be using in the future or it is just historical.
> > [Lijian] Existing ThunderX1 and OcteonTX2 CPUs are 128 byte cache-line.
> To my knowledge, there may be more CPUs with 128 byte cache-line in the
> future.
> >
> > Cacheline size problem is not just about prefetching, even bigger issue
> is false sharing, so we need to address both.
> > [Lijian] Yes, there may be false-sharing issue when running VPP image
> with 64B definition on 128B cache-line CPUs. We will do some scalability
> testing with that case, and check the multi-core performance.
> >
> > Probably best solution is to have 2 VPP images, one for 128 and one for
> 64 byte cacheline size.
> > [Lijian] For native built image, that’s fine. But I’m not sure if it’s
> possible for cloud binaries installed via “apt-get install”.
> >
> > Going across the whole codebase and replacing prefetch macros is
> something we should definitely avoid.
> > [Lijian] I got your concerns on large scope replacement. My concern is
> when CLIB_PREFETCH() is used to prefetch packet content into cache as
> below example, cache-line (CLIB_CACHE_LINE_BYTES) seems to be assumed as
> 64 bytes always.
> > CLIB_PREFETCH (p2->data, 3 * CLIB_CACHE_LINE_BYTES, LOAD);
> >
> > —
> > Damjan
> >
> >
> > On 05.07.2021., at 07:28, Lijian Zhang  wrote:
> >
> > Hi Damjan,
> > I committed several patches to address some issues around cache-line
> definitions in VPP.
> >
> > Patch [1.1] is to resolve the build error [2] on 64Byte cache line Arm
> CPUs, e.g., ThunderX2, NeoverseN1, caused by the commit
> (https://gerrit.fd.io/r/c/vpp/+/32996, build: remove unused files and
> sections).
> > It also supports building Arm generic image (with command of “make
> build-release”) with 128Byte cache line definition, and building native
> image with 64Byte cache line definition on some Arm CPUs, e.g., ThunderX2,
> NeoverseN1 (with command of “make build-release 

Re: [EXT] Re: [vpp-dev] Prefetches improvement for VPP Arm generic image

2021-07-15 Thread Nitin Saxena
Hi Damjan,

+1. This proposal completely aligns with our thought process, i.e.:

-  Data structure alignment to a 128B cache line, and a separate macro for
prefetching which can be 64B or 128B depending on the underlying SoC.

It also allows us to have a single AArch64 binary in distros for any cloud
deployment. We can provide feedback on the changes for any performance
impact on our SoC.

Thanks,
Nitin

> -Original Message-
> From: Damjan Marion 
> Sent: Wednesday, July 14, 2021 11:18 PM
> To: vpp-dev 
> Cc: Honnappa Nagarahalli ; Govindarajan
> Mohandoss ; Tianyu Li
> ; Nitin Saxena ; Lijian Zhang
> 
> Subject: [EXT] Re: [vpp-dev] Prefetches improvement for VPP Arm generic
> image
> 
> External Email
> 
> --
> 
> I spent a bit of time to look at this and come up with some reasonable
> solution.
> 
> First, the 128-byte cacheline is not dead; the recently announced Marvell
> Octeon 10 has a 128-byte cacheline.
> 
> In the current code, the cacheline size defines both the amount of data a
> prefetch instruction prefetches and the alignment of data in data structures
> needed to avoid false sharing.
> 
> So I think ideally we should have following:
> 
> - on x86:
>   - number of bytes prefetch instruction prefetches set to 64
>   - data structures should be aligned to 64 bytes
>   - due to the fact that there is an adjacent-cacheline prefetcher on x86,
>     it may be worth investigating if aligning to 128 brings some value
> 
> - on AArch64
>   - number of bytes prefetch instruction prefetches set to 64 or 128, based on
> multiarch variant running
>   - data structures should be aligned to 128 bytes as that value prevents 
> false
> sharing for both 64 and 128 byte cacheline systems
> 
> The main problem is the abuse of the CLIB_PREFETCH() macro in our codebase.
> The original idea was good: somebody wanted to provide a macro which
> transparently emits 1-4 prefetch instructions based on data size, recognising
> that there may be systems with different cacheline sizes.
> 
> Like:
>   CLIB_PREFETCH (p, sizeof (ip6_header_t), LOAD);
> 
> But reality is, most of the time we have:
>   CLIB_PREFETCH (p, CLIB_CACHE_LINE_BYTES, LOAD);
> 
> Where it is assumed that the cacheline size is 64, which just wastes resources
> on systems with a 128-byte cacheline.
> 
> Also, most of places in our codebase are perfectly fine with whatever
> cacheline size is, so I’m thinking about following:
> 
> 1. set CLIB_CACHE_LINE_BYTES to 64 on x86 and 128 on ARM, that will make
> sure false sharing is not happening
> 
> 2. introduce CLIB_CACHE_PREFETCH_BYTES which can be set to different
> value for different multiarch variants (64 for N1, 128 for ThunderX2)
> 
> 3. modify CLIB_PREFETCH macro to use CLIB_CACHE_PREFETCH_BYTES to
> emit proper number of prefetch instructions for cases where data size is
> specified
> 
> 4. take the stub and replace all `CLIB_PREFETCH (p, CLIB_CACHE_LINE_BYTES,
> LOAD);` with `clib_prefetch_load (p);`.
>There may be exceptions but those lines typically mean: "i want to prefetch
> few (<=64) bytes at this address and i really don’t care what the cache line
> size is”.
> 
> 5. analyse remaining few cases where CLIB_PREFETCH() is used with size
> specified by CLIB_CACHE_LINE_BYTES.
> 
> Thoughts?
> 
> —
> Damjan
> 
> > On 06.07.2021., at 03:48, Lijian Zhang  wrote:
> >
> > Thanks Damjan for your comments. Some replies in lines.
> > 
> > Hi Lijian,
> >
> > It will be good to know if 128 byte cacheline is something ARM platforms
> will be using in the future or it is just historical.
> > [Lijian] Existing ThunderX1 and OcteonTX2 CPUs are 128 byte cache-line. To
> my knowledge, there may be more CPUs with 128 byte cache-line in the
> future.
> >
> > Cacheline size problem is not just about prefetching, even bigger issue is
> false sharing, so we need to address both.
> > [Lijian] Yes, there may be false-sharing issue when running VPP image with
> 64B definition on 128B cache-line CPUs. We will do some scalability testing
> with that case, and check the multi-core performance.
> >
> > Probably best solution is to have 2 VPP images, one for 128 and one for 64
> byte cacheline size.
> > [Lijian] For native built image, that’s fine. But I’m not sure if it’s 
> > possible for
> cloud binaries installed via “apt-get install”.
> >
> > Going across the whole codebase and replacing prefetch macros is
> something we should definitely avoid.
> > [Lijian] I got your concerns on large scope replacement. My concern is
> when CLIB_PREFETCH() is used to prefetch packet content into cache as
> below example, cache-line (CLIB_CACHE_LINE_BYTES) seems to be assumed
> as 64 bytes always.
> > CLIB_PREFETCH (p2->data, 3 * CLIB_CACHE_LINE_BYTES, LOAD);
> >
> > —
> > Damjan
> >
> >
> > On 05.07.2021., at 07:28, Lijian Zhang  wrote:
> >
> > Hi Damjan,
> > I committed several patches to address some issues around cache-line
> definitions in VPP.
> >
> > Patch [1.1] is to resolve the build error [2] on 64Byte cache