Re: [vpp-dev] Ipv6 neighbor not getting discovered

2019-12-03 Thread Ole Troan
> Looks right, Neale thinks it looks right.
>  
> Simple question: who’s going to push the patch? I’ll do it if nobody else 
> wants to...

Button pressed.

Best regards,
Ole


>  
> Thanks... Dave
>  
> From: vpp-dev@lists.fd.io  On Behalf Of Jon Loeliger via 
> Lists.Fd.Io
> Sent: Tuesday, December 3, 2019 12:50 PM
> To: Rajith PR 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Ipv6 neighbor not getting discovered
>  
> On Tue, Dec 3, 2019 at 10:09 AM Rajith PR  wrote:
> Hello Team,
>  
> During integration of our software with VPP 19.08, we have found that the IPv6
> neighbor does not get discovered on the first sw_if_index on which IPv6 is
> enabled.
>  
> Do you have a small test case available?
>  
> Based on our understanding, "0" is a valid adjacency index. After changing 
> the code as below the problem seems to have been solved.
>   else
>     {
>       adj_index0 = radv_info->mcast_adj_index;
>       if (adj_index0 == ADJ_INDEX_INVALID)
>         error0 = ICMP6_ERROR_DST_LOOKUP_MISS;
>       else
>         {
>           next0 =
>             is_dropped ? next0 :
>             ICMP6_ROUTER_SOLICITATION_NEXT_REPLY_RW;
>           vnet_buffer (p0)->ip.adj_index[VLIB_TX] = adj_index0;
>         }
>     }
> Is this fix correct?
> I think this is correct.
> If yes, can this be fixed in the master branch please.
> Thanks,
> Rajith
>  
> HTH,
> jdl
>  



Re: [vpp-dev] VPP / tcp_echo performance

2019-12-03 Thread Florin Coras
Hi Dom, 

I’ve never tried to run the stack in a VM, so not sure about the expected 
performance, but here are a couple of comments:
- What fifo sizes are you using? Are they at least 4 MB (see [1] for VCL
configuration)?
- I don’t think you need to configure more than 16k buffers/numa. 

Additionally, to get more information on the issue:
- What does “show session verbose 2” report? Check the stats section for
retransmit counts (tr - timer retransmit, fr - fast retransmit), which, if
non-zero, indicate that packets are being lost.
- Check interface rx/tx error counts with “show int”. 
- Typically, for improved performance, you should write more than 1.4kB per 
call. But the fact that your average is less than 1.4kB suggests that you often 
find the fifo full or close to full. So probably the issue is not your sender 
app. 

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf
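
As a concrete illustration of the last point, below is a minimal sketch of a
sender loop that pushes large chunks and tolerates a full fifo. It assumes the
app_session_t handle and the app_send_stream() helper used by the VPP echo/VCL
sample applications, and it is not the actual tcp_echo code:

  #include <vppinfra/clib.h>
  #include <vnet/session/application_interface.h>

  /* Illustrative only: enqueue large chunks into the session tx fifo and
     tolerate partial writes, instead of looping on fixed 1400-byte sends. */
  static void
  send_all (app_session_t * s, u8 * data, u64 total_len)
  {
    u64 done = 0;

    while (done < total_len)
      {
        u32 chunk = clib_min (64 << 10, total_len - done);
        /* app_send_stream returns the number of bytes actually enqueued;
           0 or a negative value means the tx fifo is full, which is the
           likely cause of the ~130-byte average reported below. */
        int rv = app_send_stream (s, data + done, chunk, 1 /* noblock */);

        if (rv <= 0)
          continue;              /* fifo full: ideally wait for a tx event */
        done += rv;
      }
  }

If the average accepted size stays far below the chunk size even with large
fifos, the bottleneck is on the transport/receiver side rather than in the
sender loop itself.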

> On Dec 3, 2019, at 11:40 AM, dch...@akouto.com wrote:
> 
> Hi all,
> 
> I've been running some performance tests and not quite getting the results I 
> was hoping for, and have a couple of related questions I was hoping someone 
> could provide some tips with. For context, here's a summary of the results of 
> TCP tests I've run on two VMs (CentOS 7 OpenStack instances, host-1 is the 
> client and host-2 is the server):
> * Running iperf3 natively before the interfaces are assigned to DPDK/VPP: 10
> Gbps TCP throughput
> * Running iperf3 with VCL/HostStack: 3.5 Gbps TCP throughput
> * Running a modified version of the tcp_echo application (similar results with
> socket and svm api): 610 Mbps throughput
> 
> Things I've tried to improve performance:
> * Anything I could apply from
> https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)
> * Added tcp { cc-algo cubic } to VPP startup config
> * Using isolcpu and VPP startup config options, allocated first 2, then 4 and
> finally 6 of the 8 available cores to VPP main & worker threads
> * In VPP startup config set "buffers-per-numa 65536" and "default data-size
> 4096"
> * Updated grub boot options to include hugepagesz=1GB hugepages=64
> default_hugepagesz=1GB
> My goal is to achieve at least the same throughput using VPP as I get when I 
> run iperf3 natively on the same network interfaces (in this case 10 Gbps).
>  
> A couple of related questions:
> * Given the items above, do any VPP or kernel configuration items jump out that
> I may have missed that could justify the difference in native vs VPP
> performance or help get the two a bit closer?
> * In the modified tcp_echo application, n_sent = app_send_stream(...) is called
> in a loop always using the same length (1400 bytes) in my test version. The
> return value n_sent indicates that the average bytes sent is only around 130
> bytes per call after some run time. Are there any parameters or options that
> might improve this?
> Any tips or pointers to documentation that might shed some light would be 
> hugely appreciated!
>  
> Regards,
> Dom
>  



Re: [vpp-dev] Ipv6 neighbor not getting discovered

2019-12-03 Thread Dave Barach via Lists.Fd.Io
Looks right, Neale thinks it looks right.

Simple question: who’s going to push the patch? I’ll do it if nobody else wants 
to...

Thanks... Dave

From: vpp-dev@lists.fd.io  On Behalf Of Jon Loeliger via 
Lists.Fd.Io
Sent: Tuesday, December 3, 2019 12:50 PM
To: Rajith PR 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Ipv6 neighbor not getting discovered

On Tue, Dec 3, 2019 at 10:09 AM Rajith PR <raj...@rtbrick.com> wrote:
Hello Team,

During integration of our software with VPP 19.08, we have found that the IPv6
neighbor does not get discovered on the first sw_if_index on which IPv6 is enabled.

Do you have a small test case available?


Based on our understanding, "0" is a valid adjacency index. After changing the 
code as below the problem seems to have been solved.

  else
    {
      adj_index0 = radv_info->mcast_adj_index;
      if (adj_index0 == ADJ_INDEX_INVALID)
        error0 = ICMP6_ERROR_DST_LOOKUP_MISS;
      else
        {
          next0 =
            is_dropped ? next0 :
            ICMP6_ROUTER_SOLICITATION_NEXT_REPLY_RW;
          vnet_buffer (p0)->ip.adj_index[VLIB_TX] = adj_index0;
        }
    }

Is this fix correct?
I think this is correct.

If yes, can this be fixed in the master branch please.

Thanks,

Rajith

HTH,
jdl



Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Jim Thompson via Lists.Fd.Io


> On Dec 3, 2019, at 12:56 PM, Ole Troan  wrote:
> 
> If you don't want that, wouldn't you just build something with a Trident 4? 
> ;-)

Or Tofino, if you want to go that direction. Even then, the amount of 
packet-processing (especially the edge/exception conditions) can overwhelm a 
hardware-based approach.
I’ve recently seen architectures where a VPP-based solution is the “slow path”, 
taking the exception traffic that a Tofino-based forwarder can’t handle.

But VPP is an open source project, and DPDK is also an open source project.
Similar technologies but, at least to us, a very different style of
interaction. I’m here to suggest that “efficient use of $PROJECT” can also be
measured by how patches are accommodated.
We've had trouble getting bugs fixed in DPDK drivers even where we provided a
pointer to the exact solution and/or code for the fix.

Specifics, if you like:

	• ixgbe x552 SFP+ link delay — We had to push multiple times and go as
far as to pester individuals (itself a violation of the ‘rules’, as contributors
are only supposed to email d...@dpdk.org, based on the
comment “Please avoid private emails” at the top of the MAINTAINERS file).

• i40e did not advertise support for scatter/gather on a PF, but did on 
a VF.  This was the quickest resolution: 11 days after submitting it, someone 
emailed and said “OK”,  so we enabled it.
https://bugs.dpdk.org/show_bug.cgi?id=92

• A month or two ago we submitted a report that the X557 1Gb copper 
ports on the C3000 were advertising that they support higher speeds when they 
do not.  We haven’t heard a word back about it.
Perhaps this is similar to what Damjan referenced when he suggested 
that the ixgbe driver seems largely unmaintained.

To be fair, these are all Intel, not Mellanox, but my point is, in complete
contrast to our experience with DPDK: VPP has been entirely responsive
to submitted patches for 3 years running. We don't get everything accepted
(nor are we asking to), but we do have 160 commits which have been upstreamed.
This is dwarfed by Cisco, and Intel has about 100 more, but the point is not
how much we've contributed, but the relative ease of contributing upstream
to DPDK vs. VPP.

Jim


Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Jerome Tollet via Lists.Fd.Io
Thomas,
I am afraid you may be missing the point. VPP is a framework where plugins are 
first class citizens. If a plugin requires leveraging offload (inline or 
lookaside), it is more than welcome to do it.
There are multiple examples including hw crypto accelerators 
(https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).
 
Jerome

On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon »  wrote:

03/12/2019 13:12, Damjan Marion:
> > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > 03/12/2019 00:26, Damjan Marion:
> >> 
> >> Hi Thomas!
> >> 
> >> Inline...
> >> 
>  On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> >>> 
> >>> Hi all,
> >>> 
> >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> >>> Are there some benchmarks about the cost of converting, from one 
format
> >>> to the other one, during Rx/Tx operations?
> >> 
> >> We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
driver performance and we are seeing significantly better performance with 
native AVF.
> >> If you take a look at [1] you will see that the DPDK i40e driver provides 
18.62 Mpps and exactly the same test with the native AVF driver is giving us 
around 24.86 Mpps.
[...]
> >> 
> >>> So why not improving DPDK integration in VPP to make it faster?
> >> 
> >> Yes, if we can get the freedom to use the parts of DPDK we want instead of 
being forced to adopt the whole DPDK ecosystem.
> >> For example, you cannot use dpdk drivers without using EAL, mempool, 
rte_mbuf... rte_eal_init is a monster which I have been hoping would disappear 
for a long time...
> > 
> > You could help to improve these parts of DPDK,
> > instead of spending time to try implementing few drivers.
> > Then VPP would benefit from a rich driver ecosystem.
> 
> Thank you for letting me know what could be better use of my time.

"You" was referring to VPP developers.
I think some other Cisco developers are also contributing to VPP.

> At the moment we have good coverage of native drivers, and there is still an 
option for people to use dpdk. It is now mainly up to driver vendors to 
decide if they are happy with the performance they will get from the dpdk pmd 
or they want better...

Yes, it is possible to use DPDK in VPP with degraded performance.
If a user wants the best performance with VPP and a real NIC,
a new driver must be implemented for VPP only.

Anyway real performance benefits are in hardware device offloads
which will be hard to implement in VPP native drivers.
Support (investment) would be needed from vendors to make it happen.
About offloads, VPP is not using crypto or compression drivers
that DPDK provides (plus regex coming).

VPP is CPU-based packet processing software.
If users want to leverage hardware device offloads,
truly DPDK-based software is required.
If I understand your replies correctly, such software cannot be VPP.






Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Ole Troan
Interesting discussion.

> Yes, it is possible to use DPDK in VPP with degraded performance.
> If a user wants the best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
> 
> Anyway real performance benefits are in hardware device offloads
> which will be hard to implement in VPP native drivers.
> Support (investment) would be needed from vendors to make it happen.
> About offloads, VPP is not using crypto or compression drivers
> that DPDK provides (plus regex coming).
> 
> VPP is CPU-based packet processing software.
> If users want to leverage hardware device offloads,
> truly DPDK-based software is required.
> If I understand your replies correctly, such software cannot be VPP.

I don't think we are opposed in principle to having features run on the NIC as
such.
VPP is a framework for building forwarding applications.
That often implies doing lots of funky stuff with packets.

And more often than we'd like, hardware offload creates problems.
Be it architecturally with layer violations like GSO/GRO.
Correct handling of IPv6 extension headers, fragments.
Dealing with various encaps and tunnels.
Or doing symmetric RSS on two sides of a NAT.
Or even other transport protocols than TCP/UDP.

And it's not like there is any consistency across NICs:
http://doc.dpdk.org/guides/nics/overview.html#id1

We don't want VPP to only support DPDK drivers.
It's of course a tradeoff, and it's not like we don't have experience with 
hardware to do packet forwarding.
At least in my own experience, as soon as you want to have features running in
hardware, you lose a lot of flexibility and agility.
That's just the name of that game. Living under the yoke of hardware 
limitations is something I've tried to escape for 20 years.
I'm probably not alone, and that's why you are seeing some pushback...

VPP performance for the core features is largely bound by PCI-e bandwidth (with 
all caveats coming with that statement obviously).
It's not like that type of platform is going to grow a terabit switching fabric 
any time soon...
Software forwarding lags by probably a decade and an order of magnitude. That it
makes up for in agility, flexibility, scale...
If you don't want that, wouldn't you just build something with a Trident 4? ;-)

Best regards,
Ole





[vpp-dev] VPP / tcp_echo performance

2019-12-03 Thread dchons
Hi all,

I've been running some performance tests and not quite getting the results I 
was hoping for, and have a couple of related questions I was hoping someone 
could provide some tips with. For context, here's a summary of the results of 
TCP tests I've run on two VMs (CentOS 7 OpenStack instances, host-1 is the 
client and host-2 is the server):

* Running iperf3 natively before the interfaces are assigned to DPDK/VPP: *10 
Gbps TCP throughput*
* Running iperf3 with VCL/HostStack: *3.5 Gbps TCP throughput*
* Running a modified version of the *tcp_echo* application (similar results 
with socket and svm api): *610 Mbps throughput*

Things I've tried to improve performance:

* Anything I could apply from 
https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)
* Added tcp { cc-algo cubic } to VPP startup config
* Using isolcpu and VPP startup config options, allocated first 2, then 4 and 
finally 6 of the 8 available cores to VPP main & worker threads
* In VPP startup config set "buffers-per-numa 65536" and "default data-size 
4096"
* Updated grub boot options to include hugepagesz=1GB hugepages=64 
default_hugepagesz=1GB

My goal is to achieve at least the same throughput using VPP as I get when I 
run iperf3 natively on the same network interfaces (in this case 10 Gbps).

A couple of related questions:

* Given the items above, do any VPP or kernel configuration items jump out that 
I may have missed that could justify the difference in native vs VPP 
performance or help get the two a bit closer?
* In the modified tcp_echo application, *n_sent = app_send_stream(...)* is 
called in a loop always using the same length (1400 bytes) in my test version. 
The return value *n_sent* indicates that the average bytes sent is only around 
130 bytes per call after some run time. Are there any parameters or options 
that might improve this?

Any tips or pointers to documentation that might shed some light would be 
hugely appreciated!

Regards,
Dom


Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Damjan Marion via Lists.Fd.Io


> On 3 Dec 2019, at 17:06, Thomas Monjalon  wrote:
> 
> 03/12/2019 13:12, Damjan Marion:
>>> On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
>>> 03/12/2019 00:26, Damjan Marion:
 
 Hi Thomas!
 
 Inline...
 
>> On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> 
> Hi all,
> 
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format
> to the other one, during Rx/Tx operations?
 
 We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
 driver performance and we are seeing significantly better performance with 
 native AVF.
 If you take a look at [1] you will see that the DPDK i40e driver provides
 18.62 Mpps and exactly the same test with the native AVF driver is giving us
 around 24.86 Mpps.
> [...]
 
> So why not improving DPDK integration in VPP to make it faster?
 
 Yes, if we can get the freedom to use the parts of DPDK we want instead of being
 forced to adopt the whole DPDK ecosystem.
 For example, you cannot use dpdk drivers without using EAL, mempool,
 rte_mbuf... rte_eal_init is a monster which I have been hoping would
 disappear for a long time...
>>> 
>>> You could help to improve these parts of DPDK,
>>> instead of spending time to try implementing few drivers.
>>> Then VPP would benefit from a rich driver ecosystem.
>> 
>> Thank you for letting me know what could be better use of my time.
> 
> "You" was referring to VPP developers.
> I think some other Cisco developers are also contributing to VPP.

But that "you" also includes myself, so I felt that I needed to thank you...

> 
>> At the moment we have good coverage of native drivers, and there is still an
>> option for people to use dpdk. It is now mainly up to driver vendors to
>> decide if they are happy with the performance they will get from the dpdk pmd
>> or they want better...
> 
> Yes, it is possible to use DPDK in VPP with degraded performance.
> If a user wants the best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
> 
> Anyway real performance benefits are in hardware device offloads
> which will be hard to implement in VPP native drivers.
> Support (investment) would be needed from vendors to make it happen.
> About offloads, VPP is not using crypto or compression drivers
> that DPDK provides (plus regex coming).

Nice marketing pitch for your company :)

> 
> VPP is CPU-based packet processing software.
> If users want to leverage hardware device offloads,
> truly DPDK-based software is required.
> If I understand your replies correctly, such software cannot be VPP.

Yes, DPDK is the centre of the universe...

So, dear Thomas, I can continue this discussion forever, but that is not
something I'm going to do, as it has started to become a trolling contest.
I can understand that you may be passionate about your project and that you
may think it is the greatest thing since sliced bread, but please accept
that other people have a different opinion. Instead of lecturing other
people on what they should do, if you are interested in dpdk being better
consumed, please take the feedback provided to you. I assume that you are
interested, as you showed up on this mailing list; if not, there was no reason
for starting this thread in the first place.

-- 
Damjan





Re: [vpp-dev] Ipv6 neighbor not getting discovered

2019-12-03 Thread Jon Loeliger via Lists.Fd.Io
On Tue, Dec 3, 2019 at 10:09 AM Rajith PR  wrote:

> Hello Team,
>
> During integration of our software with VPP 19.08, we have found that the IPv6
> neighbor does not get discovered on the first sw_if_index on which IPv6 is
> enabled.
>

Do you have a small test case available?


> Based on our understanding, "0" is a valid adjacency index. After changing 
> the code as below the problem seems to have been solved.
>
>   else
>     {
>       adj_index0 = radv_info->mcast_adj_index;
>       if (adj_index0 == ADJ_INDEX_INVALID)
>         error0 = ICMP6_ERROR_DST_LOOKUP_MISS;
>       else
>         {
>           next0 =
>             is_dropped ? next0 :
>             ICMP6_ROUTER_SOLICITATION_NEXT_REPLY_RW;
>           vnet_buffer (p0)->ip.adj_index[VLIB_TX] = adj_index0;
>         }
>     }
>
> Is this fix correct?
>
> I think this is correct.

> If yes, can this be fixed in the master branch please.
>
> Thanks,
>
> Rajith
>
>
HTH,
jdl


[vpp-dev] Ipv6 neighbor not getting discovered

2019-12-03 Thread Rajith PR
Hello Team,

During integration of our software with VPP 19.08, we have found that the IPv6
neighbor does not get discovered on the first sw_if_index on which IPv6 is
enabled.
On further analysis we found that it is due to radv_info->mcast_adj_index
being checked against "0" in the following code:

Function:

static_always_inline uword icmp6_router_solicitation (vlib_main_t * vm,
vlib_node_runtime_t * node, vlib_frame_t * frame)  :-

  else
    {
      adj_index0 = radv_info->mcast_adj_index;
      if (adj_index0 == 0)
        error0 = ICMP6_ERROR_DST_LOOKUP_MISS;
      else
        {
          next0 =
            is_dropped ? next0 :
            ICMP6_ROUTER_SOLICITATION_NEXT_REPLY_RW;
          vnet_buffer (p0)->ip.adj_index[VLIB_TX] = adj_index0;
        }
    }

Based on our understanding, "0" is a valid adjacency index. After
changing the code as below the problem seems to have been solved.

  else
    {
      adj_index0 = radv_info->mcast_adj_index;
      if (adj_index0 == ADJ_INDEX_INVALID)
        error0 = ICMP6_ERROR_DST_LOOKUP_MISS;
      else
        {
          next0 =
            is_dropped ? next0 :
            ICMP6_ROUTER_SOLICITATION_NEXT_REPLY_RW;
          vnet_buffer (p0)->ip.adj_index[VLIB_TX] = adj_index0;
        }
    }

Is this fix correct? If yes, can this be fixed in the master branch, please?

Thanks,

Rajith
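
For completeness: adjacency indices are plain u32 pool indices, so 0 is the
first valid entry and the "no adjacency" sentinel is the all-ones value
(ADJ_INDEX_INVALID, defined in src/vnet/adj/adj_types.h). A minimal sketch of
the distinction, written from memory rather than copied from the tree:

  #include <vnet/adj/adj_types.h>

  /* Sketch: comparing against 0 rejects the very first adjacency ever
     allocated -- e.g. the mcast adjacency of the first interface on which
     IPv6 is enabled -- while ADJ_INDEX_INVALID (~0) is the real sentinel. */
  static inline int
  mcast_adj_usable (adj_index_t adj_index0)
  {
    return adj_index0 != ADJ_INDEX_INVALID;   /* not: adj_index0 != 0 */
  }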


Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Thomas Monjalon
03/12/2019 13:12, Damjan Marion:
> > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > 03/12/2019 00:26, Damjan Marion:
> >> 
> >> Hi Thomas!
> >> 
> >> Inline...
> >> 
>  On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> >>> 
> >>> Hi all,
> >>> 
> >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> >>> Are there some benchmarks about the cost of converting, from one format
> >>> to the other one, during Rx/Tx operations?
> >> 
> >> We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
> >> driver performance and we are seeing significantly better performance with 
> >> native AVF.
> >> If you take a look at [1] you will see that the DPDK i40e driver provides
> >> 18.62 Mpps and exactly the same test with the native AVF driver is giving us
> >> around 24.86 Mpps.
[...]
> >> 
> >>> So why not improving DPDK integration in VPP to make it faster?
> >> 
> >> Yes, if we can get the freedom to use the parts of DPDK we want instead of
> >> being forced to adopt the whole DPDK ecosystem.
> >> For example, you cannot use dpdk drivers without using EAL, mempool,
> >> rte_mbuf... rte_eal_init is a monster which I have been hoping would
> >> disappear for a long time...
> > 
> > You could help to improve these parts of DPDK,
> > instead of spending time to try implementing few drivers.
> > Then VPP would benefit from a rich driver ecosystem.
> 
> Thank you for letting me know what could be better use of my time.

"You" was referring to VPP developers.
I think some other Cisco developers are also contributing to VPP.

> At the moment we have good coverage of native drivers, and there is still an
> option for people to use dpdk. It is now mainly up to driver vendors to
> decide if they are happy with the performance they will get from the dpdk pmd
> or they want better...

Yes, it is possible to use DPDK in VPP with degraded performance.
If a user wants the best performance with VPP and a real NIC,
a new driver must be implemented for VPP only.

Anyway real performance benefits are in hardware device offloads
which will be hard to implement in VPP native drivers.
Support (investment) would be needed from vendors to make it happen.
About offloads, VPP is not using crypto or compression drivers
that DPDK provides (plus regex coming).

VPP is CPU-based packet processing software.
If users want to leverage hardware device offloads,
truly DPDK-based software is required.
If I understand your replies correctly, such software cannot be VPP.




[vpp-dev] FD.io Jenkins Maintenance: 2019-12-10 1900 UTC to 2200 UTC

2019-12-03 Thread Vanessa Valderrama
*What:*

  * Jenkins
  o OS and security updates
  o Upgrade to 2.190.3
  o Plugin updates
  * Nexus
  o OS updates
  * Jira
  o OS updates
  * Gerrit
  o OS updates
  * Sonar
  o OS updates
  * OpenGrok
  o OS updates

*When:  *2019-12-10 1900 UTC to 2200 UTC

*Impact:*

Maintenance will require a reboot of each FD.io system. Jenkins will be
placed in shutdown mode at 1800 UTC. Please let us know if specific jobs
cannot be aborted.
The following systems will be unavailable during the maintenance window:

  *     Jenkins sandbox
  *     Jenkins production
  *     Nexus
  *     Jira
  *     Gerrit
  *     Sonar
  *     OpenGrok




Re: [vpp-dev] CSIT - performance tests failing on Taishan

2019-12-03 Thread Peter Mikus via Lists.Fd.Io
The latest update is that Benoit has no access over VPN, so he tried to replicate
the issue in a local lab (assuming x86).
I will do a quick fix in CSIT: I will disable the MLX driver on Taishan.

Peter Mikus
Engineer - Software
Cisco Systems Limited

> -Original Message-
> From: Juraj Linkeš 
> Sent: Tuesday, December 3, 2019 3:09 PM
> To: Benoit Ganne (bganne) ; Peter Mikus -X (pmikus -
> PANTHEON TECH SRO at Cisco) ; Maciek Konstantynowicz
> (mkonstan) ; vpp-dev ; csit-
> d...@lists.fd.io
> Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco)
> ; lijian.zh...@arm.com; Honnappa Nagarahalli
> 
> Subject: RE: CSIT - performance tests failing on Taishan
> 
> Hi Benoit,
> 
> Do you have access to FD.io lab? The Taishan servers are in it.
> 
> Juraj
> 
> -Original Message-
> From: Benoit Ganne (bganne) 
> Sent: Friday, November 29, 2019 4:03 PM
> To: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco)
> ; Juraj Linkeš ; Maciek
> Konstantynowicz (mkonstan) ; vpp-dev  d...@lists.fd.io>; csit-...@lists.fd.io
> Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco)
> ; lijian.zh...@arm.com; Honnappa Nagarahalli
> 
> Subject: RE: CSIT - performance tests failing on Taishan
> 
> Hi Peter, can I get access to the setup to investigate?
> 
> Best
> ben
> 
> > -Original Message-
> > From: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco)
> > 
> > Sent: vendredi 29 novembre 2019 11:08
> > To: Benoit Ganne (bganne) ; Juraj Linkeš
> > ; Maciek Konstantynowicz (mkonstan)
> > ; vpp-dev ;
> > csit-...@lists.fd.io
> > Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco)
> > ; Benoit Ganne (bganne) ;
> > lijian.zh...@arm.com; Honnappa Nagarahalli
> > 
> > Subject: RE: CSIT - performance tests failing on Taishan
> >
> > +dev lists
> >
> > Peter Mikus
> > Engineer - Software
> > Cisco Systems Limited
> >
> > > -Original Message-
> > > From: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco)
> > > Sent: Friday, November 29, 2019 11:06 AM
> > > To: Benoit Ganne (bganne) ; Juraj Linkeš
> > > ; Maciek Konstantynowicz (mkonstan)
> > > 
> > > Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco)
> > > ; Benoit Ganne (bganne) ;
> > > lijian.zh...@arm.com; Honnappa Nagarahalli
> > 
> > > Subject: CSIT - performance tests failing on Taishan
> > >
> > > Hello all,
> > >
> > > In CSIT we are observing the issue with Taishan boxes where
> > > performance tests are failing.
> > > There has been long misleading discussion about the potential issue,
> > root
> > > cause and what workaround to apply.
> > >
> > > Issue
> > > =
> > > VPP is being restarted after an attempt to read "show pci" over the
> > > socket on '/run/vpp/cli.sock'
> > > in a loop. This loop test is executed in CSIT towards VPP with
> > > default startup configuration via command below to check if VPP is
> > > really UP and responding.
> > >
> > > How to reproduce
> > > 
> > > for i in $(seq 1 120); do echo "show pci" | sudo socat - UNIX-
> > > CONNECT:/run/vpp/cli.sock; sudo netstat -ap | grep vpp; done
> > >
> > > The same can be reproduced using vppctl:
> > >
> > > for i in $(seq 1 120); do echo "show pci" | sudo vppctl; sudo
> > > netstat -
> > ap
> > > | grep vpp; done
> > >
> > > To eliminate the issue with test itself I used "show version"
> > > for i in $(seq 1 120); do echo "show version" | sudo socat - UNIX-
> > > CONNECT:/run/vpp/cli.sock; sudo netstat -ap | grep vpp; done
> > >
> > > This test is passing with "show version" and VPP is not restarted.
> > >
> > >
> > > Root cause
> > > ==
> > > The root cause seems to be:
> > >
> > > Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
> > > 0xbeb4f3d0 in format_vlib_pci_vpd (
> > > s=0x7fabe830 "0002:f9:00.0   0  15b3:1015   8.0 GT/s x8
> > > mlx5_core   CX4121A - ConnectX-4 LX SFP28", args
> > > =)
> > > at /w/workspace/vpp-arm-merge-master-
> > > ubuntu1804/src/vlib/pci/pci.c:230
> > > 230 /w/workspace/vpp-arm-merge-master-
> ubuntu1804/src/vlib/pci/pci.c:
> > > No such file or directory.
> > > (gdb)
> > > Continuing.
> > >
> > > Thread 1 "vpp_main" received signal SIGABRT, Aborted.
> > > __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
> > > 51  ../sysdeps/unix/sysv/linux/raise.c: No such file or
> directory.
> > > (gdb)
> > >
> > >
> > > Issue started after MLX was installed into Taishan.
> > >
> > >
> > > @Benoit Ganne (bganne) can you please help fixing the root cause?
> > >
> > > Thank you.
> > >
> > > Peter Mikus
> > > Engineer - Software
> > > Cisco Systems Limited



Re: [vpp-dev] CSIT - performance tests failing on Taishan

2019-12-03 Thread Juraj Linkeš
Hi Benoit,

Do you have access to FD.io lab? The Taishan servers are in it.

Juraj

-Original Message-
From: Benoit Ganne (bganne)  
Sent: Friday, November 29, 2019 4:03 PM
To: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) ; 
Juraj Linkeš ; Maciek Konstantynowicz (mkonstan) 
; vpp-dev ; csit-...@lists.fd.io
Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) ; 
lijian.zh...@arm.com; Honnappa Nagarahalli 
Subject: RE: CSIT - performance tests failing on Taishan

Hi Peter, can I get access to the setup to investigate?

Best
ben

> -Original Message-
> From: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) 
> 
> Sent: vendredi 29 novembre 2019 11:08
> To: Benoit Ganne (bganne) ; Juraj Linkeš 
> ; Maciek Konstantynowicz (mkonstan) 
> ; vpp-dev ; 
> csit-...@lists.fd.io
> Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) 
> ; Benoit Ganne (bganne) ; 
> lijian.zh...@arm.com; Honnappa Nagarahalli 
> 
> Subject: RE: CSIT - performance tests failing on Taishan
> 
> +dev lists
> 
> Peter Mikus
> Engineer - Software
> Cisco Systems Limited
> 
> > -Original Message-
> > From: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco)
> > Sent: Friday, November 29, 2019 11:06 AM
> > To: Benoit Ganne (bganne) ; Juraj Linkeš 
> > ; Maciek Konstantynowicz (mkonstan) 
> > 
> > Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) 
> > ; Benoit Ganne (bganne) ; 
> > lijian.zh...@arm.com; Honnappa Nagarahalli
> 
> > Subject: CSIT - performance tests failing on Taishan
> >
> > Hello all,
> >
> > In CSIT we are observing the issue with Taishan boxes where 
> > performance tests are failing.
> > There has been long misleading discussion about the potential issue,
> root
> > cause and what workaround to apply.
> >
> > Issue
> > =
> > VPP is being restarted after an attempt to read "show pci" over the 
> > socket on '/run/vpp/cli.sock'
> > in a loop. This loop test is executed in CSIT towards VPP with 
> > default startup configuration via command below to check if VPP is 
> > really UP and responding.
> >
> > How to reproduce
> > 
> > for i in $(seq 1 120); do echo "show pci" | sudo socat - UNIX- 
> > CONNECT:/run/vpp/cli.sock; sudo netstat -ap | grep vpp; done
> >
> > The same can be reproduced using vppctl:
> >
> > for i in $(seq 1 120); do echo "show pci" | sudo vppctl; sudo 
> > netstat -
> ap
> > | grep vpp; done
> >
> > To eliminate the issue with test itself I used "show version"
> > for i in $(seq 1 120); do echo "show version" | sudo socat - UNIX- 
> > CONNECT:/run/vpp/cli.sock; sudo netstat -ap | grep vpp; done
> >
> > This test is passing with "show version" and VPP is not restarted.
> >
> >
> > Root cause
> > ==
> > The root cause seems to be:
> >
> > Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
> > 0xbeb4f3d0 in format_vlib_pci_vpd (
> > s=0x7fabe830 "0002:f9:00.0   0  15b3:1015   8.0 GT/s x8
> > mlx5_core   CX4121A - ConnectX-4 LX SFP28", args
> > =)
> > at /w/workspace/vpp-arm-merge-master-
> > ubuntu1804/src/vlib/pci/pci.c:230
> > 230 /w/workspace/vpp-arm-merge-master-ubuntu1804/src/vlib/pci/pci.c:
> > No such file or directory.
> > (gdb)
> > Continuing.
> >
> > Thread 1 "vpp_main" received signal SIGABRT, Aborted.
> > __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
> > 51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> > (gdb)
> >
> >
> > Issue started after MLX was installed into Taishan.
> >
> >
> > @Benoit Ganne (bganne) can you please help fixing the root cause?
> >
> > Thank you.
> >
> > Peter Mikus
> > Engineer - Software
> > Cisco Systems Limited



Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Damjan Marion via Lists.Fd.Io


> 
> On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> 
> 03/12/2019 00:26, Damjan Marion:
>> 
>> Hi Thomas!
>> 
>> Inline...
>> 
 On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
>>> 
>>> Hi all,
>>> 
>>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
>>> Are there some benchmarks about the cost of converting, from one format
>>> to the other one, during Rx/Tx operations?
>> 
>> We are benchmarking both dpdk i40e PMD performance and native VPP AVF driver 
>> performance and we are seeing significantly better performance with native 
>> AVF.
>> If you take a look at [1] you will see that the DPDK i40e driver provides 18.62
>> Mpps and exactly the same test with the native AVF driver is giving us around
>> 24.86 Mpps.
> 
> Why not comparing with DPDK AVF?


i40e was simply there from day one...

> 
>> Thanks to the native AVF driver and the new buffer management code, we managed
>> to go below 100 clocks per packet for the whole ipv4 routing base test.
>> 
>> My understanding is that the performance difference is caused by 4 factors, but
>> I cannot support each of them with numbers as I never conducted detailed
>> testing.
>> 
>> - less work done in driver code, as we have the freedom to cherry-pick only the
>> data we need, while in the case of DPDK the PMD needs to be universal
> 
> For info, offloads are disabled by default now in DPDK.

Good to know...

> 
>> - no cost of metadata processing (rte_mbuf -> vlib_buffer_t conversion)
>> 
>> - less pressure on cache (we touch 2 cache lines less with the native driver for
>> each packet); this is especially observable on smaller devices with less cache
>> 
>> - faster buffer management code
>> 
>> 
>>> I'm sure there would be some benefits of switching VPP to natively use
>>> the DPDK mbuf allocated in mempools.
>> 
>> I don't agree with this statement; we have our own buffer management code and we
>> are not interested in using dpdk mempools. There are many use cases where we
>> don't need DPDK and we want VPP not to be dependent on DPDK code.
>> 
>>> What would be the drawbacks?
>> 
>> 
>>> Last time I asked this question, the answer was about compatibility with
>>> other driver backends, especially ODP. What happened?
>>> DPDK drivers are still the only external drivers used by VPP?
>> 
>> No, we still use DPDK drivers in many cases, but we also
>> have a lot of native drivers in VPP these days:
>> 
>> - intel AVF
>> - virtio
>> - vmxnet3
>> - rdma (for mlx4, mlx5 and other rdma-capable cards); direct verbs for mlx5
>> work in progress
>> - tap with virtio backend
>> - memif
>> - marvell pp2
>> - (af_xdp - work in progress)
>> 
>>> When using DPDK, more than 40 networking drivers are available:
>>>https://core.dpdk.org/supported/
>>> After 4 years of Open Source VPP, there are less than 10 native drivers:
>>>- virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
>>>- hardware drivers: ixge, avf, pp2
>>> And if looking at ixge driver, we can read:
>>> "
>>>This driver is not intended for production use and it is unsupported.
>>>It is provided for educational use only.
>>>Please use supported DPDK driver instead.
>>> "
>> 
>> yep, the ixgbe driver has not been maintained for a long time...
>> 
>>> So why not improving DPDK integration in VPP to make it faster?
>> 
>> Yes, if we can get the freedom to use the parts of DPDK we want instead of being
>> forced to adopt the whole DPDK ecosystem.
>> For example, you cannot use dpdk drivers without using EAL, mempool,
>> rte_mbuf... rte_eal_init is a monster which I have been hoping would
>> disappear for a long time...
> 
> You could help to improve these parts of DPDK,
> instead of spending time to try implementing few drivers.
> Then VPP would benefit from a rich driver ecosystem.


Thank you for letting me know what could be better use of my time.

At the moment we have good coverage of native drivers, and there is still an
option for people to use dpdk. It is now mainly up to driver vendors to decide
if they are happy with the performance they will get from the dpdk pmd or they
want better...

> 
> 
>> A good example of what would be a good fit for us is the rdma-core library; it
>> allows you to program the nic and fetch packets from it in a much more
>> lightweight way, and if you really want a super-fast datapath, there is the
>> direct verbs interface which gives you access to the tx/rx rings directly.
>> 
>>> DPDK mbuf has dynamic fields now; it can help to register metadata on 
>>> demand.
>>> And it is still possible to statically reserve some extra space for
>>> application-specific metadata in each packet.
>> 
>> I don't see this as a huge benefit; you still need to call rte_eal_init, you
>> still need to use dpdk mempools. Basically it still requires adoption of the
>> whole dpdk ecosystem, which we don't want...
>> 
>> 
>>> Other improvements, like meson packaging usable with pkg-config,
>>> were done during last years and may deserve to be considered.
>> 
>> I'm aware of that but I was not able to find a good justification to invest
>> time to change the existing scripting to move to meson. As VPP developers
>> typically don't need to compile DPDK very frequently, the current solution is
>> simply good enough...

Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Thomas Monjalon
03/12/2019 00:26, Damjan Marion:
> 
> Hi Thomas!
> 
> Inline...
> 
> > On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> > 
> > Hi all,
> > 
> > VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > Are there some benchmarks about the cost of converting, from one format
> > to the other one, during Rx/Tx operations?
> 
> We are benchmarking both dpdk i40e PMD performance and native VPP AVF driver 
> performance and we are seeing significantly better performance with native 
> AVF.
> If you take a look at [1] you will see that the DPDK i40e driver provides 18.62
> Mpps and exactly the same test with the native AVF driver is giving us around
> 24.86 Mpps.

Why not comparing with DPDK AVF?

> Thanks to the native AVF driver and the new buffer management code, we managed
> to go below 100 clocks per packet for the whole ipv4 routing base test.
> 
> My understanding is that the performance difference is caused by 4 factors, but
> I cannot support each of them with numbers as I never conducted detailed testing.
> 
> - less work done in driver code, as we have the freedom to cherry-pick only the
> data we need, while in the case of DPDK the PMD needs to be universal

For info, offloads are disabled by default now in DPDK.

> - no cost of metadata processing (rte_mbuf -> vlib_buffer_t conversion)
> 
> - less pressure on cache (we touch 2 cache lines less with the native driver for
> each packet); this is especially observable on smaller devices with less cache
> 
> - faster buffer management code
> 
> 
> > I'm sure there would be some benefits of switching VPP to natively use
> > the DPDK mbuf allocated in mempools.
> 
> I don't agree with this statement; we have our own buffer management code and we
> are not interested in using dpdk mempools. There are many use cases where we
> don't need DPDK and we want VPP not to be dependent on DPDK code.
> 
> > What would be the drawbacks?
> 
> 
> > Last time I asked this question, the answer was about compatibility with
> > other driver backends, especially ODP. What happened?
> > DPDK drivers are still the only external drivers used by VPP?
> 
> No, we still use DPDK drivers in many cases, but we also
> have a lot of native drivers in VPP these days:
> 
> - intel AVF
> - virtio
> - vmxnet3
> - rdma (for mlx4, mlx5 and other rdma-capable cards); direct verbs for mlx5
> work in progress
> - tap with virtio backend
> - memif
> - marvell pp2
> - (af_xdp - work in progress)
> 
> > When using DPDK, more than 40 networking drivers are available:
> > https://core.dpdk.org/supported/
> > After 4 years of Open Source VPP, there are less than 10 native drivers:
> > - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
> > - hardware drivers: ixge, avf, pp2
> > And if looking at ixge driver, we can read:
> > "
> > This driver is not intended for production use and it is unsupported.
> > It is provided for educational use only.
> > Please use supported DPDK driver instead.
> > "
> 
> yep, the ixgbe driver has not been maintained for a long time...
> 
> > So why not improving DPDK integration in VPP to make it faster?
> 
> Yes, if we can get the freedom to use the parts of DPDK we want instead of being
> forced to adopt the whole DPDK ecosystem.
> For example, you cannot use dpdk drivers without using EAL, mempool,
> rte_mbuf... rte_eal_init is a monster which I have been hoping would disappear
> for a long time...

You could help to improve these parts of DPDK,
instead of spending time to try implementing few drivers.
Then VPP would benefit from a rich driver ecosystem.


> A good example of what would be a good fit for us is the rdma-core library; it
> allows you to program the nic and fetch packets from it in a much more
> lightweight way, and if you really want a super-fast datapath, there is the
> direct verbs interface which gives you access to the tx/rx rings directly.
> 
> > DPDK mbuf has dynamic fields now; it can help to register metadata on 
> > demand.
> > And it is still possible to statically reserve some extra space for
> > application-specific metadata in each packet.
> 
> I don't see this as a huge benefit; you still need to call rte_eal_init, you
> still need to use dpdk mempools. Basically it still requires adoption of the
> whole dpdk ecosystem, which we don't want...
> 
> 
> > Other improvements, like meson packaging usable with pkg-config,
> > were done during last years and may deserve to be considered.
> 
> I'm aware of that but I was not able to find a good justification to invest
> time to change the existing scripting to move to meson. As VPP developers
> typically don't need to compile DPDK very frequently, the current solution is
> simply good enough...



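To make the conversion cost debated in this thread concrete, below is a rough,
purely illustrative sketch of the per-packet metadata translation an
rte_mbuf-based rx path has to perform before handing packets to VPP graph
nodes. It is not the actual dpdk-plugin code: the helper name is invented and
the field mapping is approximate.

  #include <rte_mbuf.h>
  #include <vlib/vlib.h>

  /* Illustrative only: copy the handful of rte_mbuf fields a VPP graph node
     cares about into the vlib_buffer_t metadata. The real dpdk plugin does
     this (plus flag/offload translation) for every packet, which is part of
     the per-packet cost and extra cache-line touches discussed above. */
  static inline void
  mbuf_metadata_to_vlib (struct rte_mbuf * mb, vlib_buffer_t * b)
  {
    b->current_length = mb->data_len;   /* data length of this segment */
    b->current_data = 0;                /* approximate: real code accounts
                                           for the mbuf headroom offset */

    /* Chained (multi-segment) packets also need the total length and a
       next-buffer link derived from mb->next (omitted here). */
    b->total_length_not_including_first_buffer = mb->pkt_len - mb->data_len;
    if (mb->nb_segs > 1)
      b->flags |= VLIB_BUFFER_NEXT_PRESENT | VLIB_BUFFER_TOTAL_LENGTH_VALID;
  }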