Re: [vpp-dev] efficient use of DPDK

2019-12-05 Thread Honnappa Nagarahalli


> >
> > Actually native drivers (like Mellanox or AVF) can be faster w/o buffer
> > conversion and tend to be faster than when used by DPDK. I suspect VPP is not
> > the only project to report this extra cost.
> It would be good to know other projects that report this extra cost. It will
> help support changes to DPDK.
> [JT] I may be wrong but I think there was a presentation about that last week
> during DPDK user conf in the US.
That was from me using VPP as an example. I am trying to explore solutions in
DPDK to bridge the gap between native drivers and DPDK, assuming such
situations exist in other applications (which could be private) too.


Re: [vpp-dev] efficient use of DPDK

2019-12-05 Thread Jerome Tollet via Lists.Fd.Io
inlined

On 05/12/2019 09:03, « vpp-dev@lists.fd.io on behalf of Honnappa Nagarahalli » wrote:



> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Jerome Tollet via
> Lists.Fd.Io
> Sent: Wednesday, December 4, 2019 9:33 AM
> To: tho...@monjalon.net
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] efficient use of DPDK
>
> Actually native drivers (like Mellanox or AVF) can be faster w/o buffer
> conversion and tend to be faster than when used by DPDK. I suspect VPP is not
> the only project to report this extra cost.
It would be good to know other projects that report this extra cost. It will
help support changes to DPDK.
[JT] I may be wrong but I think there was a presentation about that last week
during DPDK user conf in the US.

> Jerome
>
> On 04/12/2019 15:43, « Thomas Monjalon » wrote:
>
> 03/12/2019 22:11, Jerome Tollet (jtollet):
> > Thomas,
> > I am afraid you may be missing the point. VPP is a framework where plugins
> > are first class citizens. If a plugin requires leveraging offload (inline or
> > lookaside), it is more than welcome to do it.
> > There are multiple examples including hw crypto accelerators
> > (https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).
>
> OK I understand plugins are open.
> My point was about the efficiency of the plugins,
> given the need for buffer conversion.
> If some plugins are already efficient enough, great:
> it means there is no need for bringing effort in native VPP drivers.
>
>
> > On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> >
> > 03/12/2019 13:12, Damjan Marion:
> > > > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > > > 03/12/2019 00:26, Damjan Marion:
> > > >>
> > > >> Hi Thomas!
> > > >>
> > > >> Inline...
> > > >>
> > > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon
>  wrote:
> > > >>>
> > > >>> Hi all,
> > > >>>
> > > >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > > >>> Are there some benchmarks about the cost of converting, from one format
> > > >>> to the other one, during Rx/Tx operations?
> > > >>
> > > >> We are benchmarking both dpdk i40e PMD performance and native
> > > >> VPP AVF driver performance and we are seeing significantly better
> > > >> performance with native AVF.
> > > >> If you take a look at [1] you will see that DPDK i40e driver provides
> > > >> 18.62 Mpps and exactly the same test with native AVF driver is giving us
> > > >> around 24.86 Mpps.
> > [...]
> > > >>
> > > >>> So why not improving DPDK integration in VPP to make it faster?
> > > >>
> > > >> Yes, if we can get freedom to use parts of DPDK we want instead of
> > > >> being forced to adopt whole DPDK ecosystem.
> > > >> for example, you cannot use dpdk drivers without using EAL,
> > > >> mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it will
> > > >> disappear for long time...
> > > >
> > > > You could help to improve these parts of DPDK,
> > > > instead of spending time to try implementing few drivers.
> > > > Then VPP would benefit from a rich driver ecosystem.
> > >
> > > Thank you for letting me know what could be better use of my 
time.
> >
> > "You" was referring to VPP developers.
> > I think some other Cisco developers are also contributing to 
VPP.
> >
> > > At the moment we have good coverage of native drivers, and still there
> > > is an option for people to use dpdk. It is now mainly up to driver vendors to
> > > decide if they are happy with performance they will get from dpdk pmd or they
> > > want better...
> >
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If

Re: [vpp-dev] efficient use of DPDK

2019-12-05 Thread Honnappa Nagarahalli


> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Jerome Tollet via
> Lists.Fd.Io
> Sent: Wednesday, December 4, 2019 9:33 AM
> To: tho...@monjalon.net
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] efficient use of DPDK
>
> Actually native drivers (like Mellanox or AVF) can be faster w/o buffer
> conversion and tend to be faster than when used by DPDK. I suspect VPP is not
> the only project to report this extra cost.
It would be good to know other projects that report this extra cost. It will 
help support changes to DPDK.

> Jerome
>
> On 04/12/2019 15:43, « Thomas Monjalon » wrote:
>
> 03/12/2019 22:11, Jerome Tollet (jtollet):
> > Thomas,
> > I am afraid you may be missing the point. VPP is a framework where plugins
> > are first class citizens. If a plugin requires leveraging offload (inline or
> > lookaside), it is more than welcome to do it.
> > There are multiple examples including hw crypto accelerators
> > (https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).
>
> OK I understand plugins are open.
> My point was about the efficiency of the plugins,
> given the need for buffer conversion.
> If some plugins are already efficient enough, great:
> it means there is no need for bringing effort in native VPP drivers.
>
>
> > On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> >
> > 03/12/2019 13:12, Damjan Marion:
> > > > On 3 Dec 2019, at 09:28, Thomas Monjalon 
> wrote:
> > > > 03/12/2019 00:26, Damjan Marion:
> > > >>
> > > >> Hi Thomas!
> > > >>
> > > >> Inline...
> > > >>
> > > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon
>  wrote:
> > > >>>
> > > >>> Hi all,
> > > >>>
> > > >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > > >>> Are there some benchmarks about the cost of converting, from one format
> > > >>> to the other one, during Rx/Tx operations?
> > > >>
> > > >> We are benchmarking both dpdk i40e PMD performance and native
> > > >> VPP AVF driver performance and we are seeing significantly better
> > > >> performance with native AVF.
> > > >> If you take a look at [1] you will see that DPDK i40e driver provides
> > > >> 18.62 Mpps and exactly the same test with native AVF driver is giving us
> > > >> around 24.86 Mpps.
> > [...]
> > > >>
> > > >>> So why not improving DPDK integration in VPP to make it faster?
> > > >>
> > > >> Yes, if we can get freedom to use parts of DPDK we want instead of
> > > >> being forced to adopt whole DPDK ecosystem.
> > > >> for example, you cannot use dpdk drivers without using EAL,
> > > >> mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it will
> > > >> disappear for long time...
> > > >
> > > > You could help to improve these parts of DPDK,
> > > > instead of spending time to try implementing few drivers.
> > > > Then VPP would benefit from a rich driver ecosystem.
> > >
> > > Thank you for letting me know what could be better use of my time.
> >
> > "You" was referring to VPP developers.
> > I think some other Cisco developers are also contributing to VPP.
> >
> > > At the moment we have good coverage of native drivers, and still there
> > > is an option for people to use dpdk. It is now mainly up to driver vendors to
> > > decide if they are happy with performance they will get from dpdk pmd or they
> > > want better...
> >
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If a user wants best performance with VPP and a real NIC,
> > a new driver must be implemented for VPP only.
> >
> > Anyway real performance benefits are in hardware device offloads
> > which will be hard to implement in VPP native drivers.
> > Support (investment) would be needed from vendors to make it happen.
> > About offloads, VPP is not using crypto or compression drivers
> > that DPDK provides (plus regex coming).
> >
> > VPP i

Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Nitin Saxena
Hi Jerome,

Thanks for the clarification

Regards,
Nitin

> -Original Message-
> From: Jerome Tollet (jtollet) 
> Sent: Wednesday, December 4, 2019 11:30 PM
> To: Nitin Saxena ; Thomas Monjalon
> ; Damjan Marion 
> Cc: vpp-dev@lists.fd.io
> Subject: [EXT] Re: [vpp-dev] efficient use of DPDK
> 
> External Email
> 
> --
> Hi Nitin,
> 
> I am not necessarily speaking about Inline IPSec. I was just saying that VPP
> leaves you the choice to do both inline and lookaside types of offload.
> 
> Here is a public example of inline acceleration:
> https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/wp/wp-01295-hcl-segment-routing-over-ipv6-acceleration-using-intel-fpga-programmable-acceleration-card-n3000.pdf
> 
> Jerome
> 
> 
> 
> On 04/12/2019 18:19, « Nitin Saxena » wrote:
> 
> 
> 
> Hi Jerome,
> 
> 
> 
> I have query unrelated to the original thread.
> 
> 
> 
> >> There are other examples (lookaside and inline)
> 
> By inline do you mean "Inline IPSEC"? Could you please elaborate what you
> meant by inline offload in VPP?
> 
> 
> 
> Thanks,
> 
> Nitin
> 
> 
> 
> > -Original Message-
> 
> > From: vpp-dev@lists.fd.io  On Behalf Of Jerome Tollet
> > via Lists.Fd.Io
> 
> > Sent: Wednesday, December 4, 2019 9:00 PM
> 
> > To: Thomas Monjalon ; Damjan Marion
> 
> > 
> 
> > Cc: vpp-dev@lists.fd.io
> 
> > Subject: [EXT] Re: [vpp-dev] efficient use of DPDK
> 
> >
> 
> > External Email
> 
> >
> 
> > --
> 
> > Hi Thomas,
> 
> > I strongly disagree with your conclusions from this discussion:
> 
> > 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly
> not
> 
> > at the cost of performance. (It's actually the opposite ie AVF driver)
> 
> > 2) VPP is NOT exclusively CPU centric. I gave you the example of crypto
> 
> > offload based on Intel QAT cards (lookaside). There are other examples
> 
> > (lookaside and inline)
> 
> > 3) Plugins are free to use any sort of offload (and they do).
> 
> >
> 
> > Jerome
> 
> >
> 
> On 04/12/2019 15:19, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> 
> >
> 
> > 03/12/2019 20:01, Damjan Marion:
> 
> > > On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> 
> > > > 03/12/2019 13:12, Damjan Marion:
> 
> > > >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> 
> > > >>> 03/12/2019 00:26, Damjan Marion:
> 
> > > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> 
> > > >>>>> VPP has a buffer called vlib_buffer_t, while DPDK has
> rte_mbuf.
> 
> > > >>>>> Are there some benchmarks about the cost of converting,
> from
> 
> > one format
> 
> > > >>>>> to the other one, during Rx/Tx operations?
> 
> > > >>>>
> 
> > > >>>> We are benchmarking both dpdk i40e PMD performance and
> native
> 
> > VPP AVF driver performance and we are seeing significantly better
> 
> > performance with native AVF.
> 
> > > >>>> If you take a look at [1] you will see that DPDK i40e driver
> provides
> 
> > 18.62 Mpps and exactly the same test with native AVF driver is giving us
> 
> > around 24.86 Mpps.
> 
> > > > [...]
> 
> > > >>>>
> 
> > > >>>>> So why not improving DPDK integration in VPP to make it
> faster?
> 
> > > >>>>
> 
> > > >>>> Yes, if we can get freedom to use parts of DPDK we want
> instead of
> 
> > being forced to adopt whole DPDK ecosystem.
> 
> > > >>>> for example, you cannot use dpdk drivers without using EAL,
> 
> > mempool, rte_mbuf... rte_e

Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Thomas Monjalon
04/12/2019 16:29, Jerome Tollet (jtollet):
> Hi Thomas,
> I strongly disagree with your conclusions from this discussion:
> 
> 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at 
> the cost of performance. (It's actually the opposite ie AVF driver)

I mean performance cost when using DPDK from VPP.
Of course there is no cost when using native VPP driver.

> 2) VPP is NOT exclusively CPU centric. I gave you the example of crypto 
> offload based on Intel QAT cards (lookaside). There are other examples 
> (lookaside and inline)

Yes there is QAT lookaside and can be others.
But you pay the cost of buffer conversion each time you use a DPDK driver.

> 3) Plugins are free to use any sort of offload (and they do).

I understand from Ole Troan that the focus is more on generic features,
so you avoid hardware limitations.


> On 04/12/2019 15:19, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> 
> 03/12/2019 20:01, Damjan Marion:
> > On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > > 03/12/2019 13:12, Damjan Marion:
> > >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> > >>> 03/12/2019 00:26, Damjan Marion:
> >  On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > > VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > > Are there some benchmarks about the cost of converting, from one 
> format
> > > to the other one, during Rx/Tx operations?
> >  
> >  We are benchmarking both dpdk i40e PMD performance and native VPP 
> AVF driver performance and we are seeing significantly better performance 
> with native AVF.
> >  If you take a look at [1] you will see that DPDK i40e driver 
> provides 18.62 Mpps and exactly the same test with native AVF driver is 
> giving us around 24.86 Mpps.
> > > [...]
> >  
> > > So why not improving DPDK integration in VPP to make it faster?
> >  
> >  Yes, if we can get freedom to use parts of DPDK we want instead of 
> being forced to adopt whole DPDK ecosystem.
> >  for example, you cannot use dpdk drivers without using EAL, 
> mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it will 
> disappear for long time...
> 
> As stated below, I take this feedback, thanks.
> However it won't change VPP choice of not using rte_mbuf natively.
> 
> [...]
> > >> At the moment we have good coverage of native drivers, and still 
> there is an option for people to use dpdk. It is now mainly up to driver 
> vendors to decide if they are happy with performance they will get from dpdk 
> pmd or they want better...
> > > 
> > > Yes it is possible to use DPDK in VPP with degraded performance.
> > > If a user wants best performance with VPP and a real NIC,
> > > a new driver must be implemented for VPP only.
> > > 
> > > Anyway real performance benefits are in hardware device offloads
> > > which will be hard to implement in VPP native drivers.
> > > Support (investment) would be needed from vendors to make it happen.
> > > About offloads, VPP is not using crypto or compression drivers
> > > that DPDK provides (plus regex coming).
> > 
> > Nice marketing pitch for your company :)
> 
> I guess you mean Mellanox has a good offloads offering.
> But my point is about the end of Moore's law,
> and the offload trending of most of device vendors.
> However I truly respect the choice of avoiding device offloads.
> 
> > > VPP is a CPU-based packet processing software.
> > > If users want to leverage hardware device offloads,
> > > a truly DPDK-based software is required.
> > > If I understand well your replies, such software cannot be VPP.
> > 
> > Yes, DPDK is centre of the universe.
> 
> DPDK is where most of networking devices are supported in userspace.
> That's all.
> 
> 
> > So Dear Thomas, I can continue this discussion forever, but that is not 
> something I'm going to do as it started to be trolling contest.
> 
> I agree
> 
> I can understand that you may be passionate about your project and that 
> you maybe think that it is the greatest thing after sliced bread, but please 
> allow that other people have different opinion. Instead of giving the lessons 
> to other people what they should do, if you are interested for dpdk to be 
> better consumed, please take a feedback provided to you. I assume that you 
> are interested as you showed up on this mailing list, if not there was no 
> reason for starting this thread in the first place.
> 
> Thank you for the feedbacks, this discussion was required:
> 1/ it gives more motivation to improve EAL API
> 2/ it confirms the VPP design choice of not being DPDK-dependent (at a 
> performance cost)
> 3/ it confirms the VPP design choice of being focused on CPU-based 
> processing




Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Hi Nitin,
I am not necessarily speaking about Inline IPSec. I was just saying that VPP 
leaves you the choice to do both inline and lookaside types of offload.
Here is a public example of inline acceleration: 
https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/wp/wp-01295-hcl-segment-routing-over-ipv6-acceleration-using-intel-fpga-programmable-acceleration-card-n3000.pdf
Jerome
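
For readers less familiar with the terms used above: "lookaside" means the packet
is handed to an accelerator and collected again later, while "inline" means the
NIC transforms it on the Rx/Tx path itself. Below is a minimal sketch of the
lookaside pattern using the DPDK cryptodev burst API; device, queue-pair and
session setup are deliberately elided, and lookaside_poll_crypto() is a
hypothetical helper, not a VPP or DPDK API.

#include <rte_crypto.h>
#include <rte_cryptodev.h>

#define BURST 32

static uint16_t
lookaside_poll_crypto(uint8_t dev_id, uint16_t qp_id,
                      struct rte_crypto_op **pending, uint16_t n_pending,
                      struct rte_crypto_op **done)
{
    /* 1. hand prepared ops to the accelerator (e.g. QAT); non-blocking */
    uint16_t enq = rte_cryptodev_enqueue_burst(dev_id, qp_id,
                                               pending, n_pending);

    /* 2. the CPU is free to keep processing other packets here ... */

    /* 3. later, collect whatever the device has finished */
    uint16_t deq = rte_cryptodev_dequeue_burst(dev_id, qp_id, done, BURST);

    (void)enq; /* a real caller would retry ops the device did not accept */
    return deq;
}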

On 04/12/2019 18:19, « Nitin Saxena » wrote:

Hi Jerome,

I have query unrelated to the original thread. 

>> There are other examples (lookaside and inline)
By inline do you mean "Inline IPSEC"? Could you please elaborate what you 
meant by inline offload in VPP?

Thanks,
Nitin

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Jerome Tollet
> via Lists.Fd.Io
> Sent: Wednesday, December 4, 2019 9:00 PM
> To: Thomas Monjalon ; Damjan Marion
> 
> Cc: vpp-dev@lists.fd.io
> Subject: [EXT] Re: [vpp-dev] efficient use of DPDK
> 
> External Email
> 
> --
> Hi Thomas,
> I strongly disagree with your conclusions from this discussion:
> 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not
> at the cost of performance. (It's actually the opposite ie AVF driver)
> 2) VPP is NOT exclusively CPU centric. I gave you the example of crypto
> offload based on Intel QAT cards (lookaside). There are other examples
> (lookaside and inline)
> 3) Plugins are free to use any sort of offload (and they do).
> 
> Jerome
> 
> On 04/12/2019 15:19, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> 
> 03/12/2019 20:01, Damjan Marion:
> > On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > > 03/12/2019 13:12, Damjan Marion:
> > >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> > >>> 03/12/2019 00:26, Damjan Marion:
> > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > >>>>> VPP has a buffer called vlib_buffer_t, while DPDK has 
rte_mbuf.
> > >>>>> Are there some benchmarks about the cost of converting, from
> one format
> > >>>>> to the other one, during Rx/Tx operations?
> > >>>>
> > >>>> We are benchmarking both dpdk i40e PMD performance and native
> VPP AVF driver performance and we are seeing significantly better
> performance with native AVF.
> > >>>> If you take a look at [1] you will see that DPDK i40e driver 
provides
> 18.62 Mpps and exactly the same test with native AVF driver is giving us
> around 24.86 Mpps.
> > > [...]
> > >>>>
> > >>>>> So why not improving DPDK integration in VPP to make it 
faster?
> > >>>>
> > >>>> Yes, if we can get freedom to use parts of DPDK we want 
instead of
> being forced to adopt whole DPDK ecosystem.
> > >>>> for example, you cannot use dpdk drivers without using EAL,
> mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it 
will
> disappear for long time...
> 
> As stated below, I take this feedback, thanks.
> However it won't change VPP choice of not using rte_mbuf natively.
> 
> [...]
> > >> At the moment we have good coverage of native drivers, and still
> there is an option for people to use dpdk. It is now mainly up to driver vendors
> to decide if they are happy with performance they will get from dpdk pmd or
> they want better...
> > >
> > > Yes it is possible to use DPDK in VPP with degraded performance.
> > > If a user wants best performance with VPP and a real NIC,
> > > a new driver must be implemented for VPP only.
> > >
> > > Anyway real performance benefits are in hardware device offloads
> > > which will be hard to implement in VPP native drivers.
> > > Support (investment) would be needed from vendors to make it
> happen.
> > > About offloads, VPP is not using crypto or compression drivers
> > > that DPDK provides (plus regex coming).
> >
> > Nice marketing pitch for your company :)
> 
> I guess you mean Mellanox has a good offloads offering.
> But my point is about the end of Moore's law

Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Nitin Saxena
Hi Jerome,

I have query unrelated to the original thread. 

>> There are other examples (lookaside and inline)
By inline do you mean "Inline IPSEC"? Could you please elaborate what you meant 
by inline offload in VPP?

Thanks,
Nitin

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Jerome Tollet
> via Lists.Fd.Io
> Sent: Wednesday, December 4, 2019 9:00 PM
> To: Thomas Monjalon ; Damjan Marion
> 
> Cc: vpp-dev@lists.fd.io
> Subject: [EXT] Re: [vpp-dev] efficient use of DPDK
> 
> External Email
> 
> --
> Hi Thomas,
> I strongly disagree with your conclusions from this discussion:
> 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not
> at the cost of performance. (It's actually the opposite ie AVF driver)
> 2) VPP is NOT exclusively CPU centric. I gave you the example of crypto
> offload based on Intel QAT cards (lookaside). There are other examples
> (lookaside and inline)
> 3) Plugins are free to use any sort of offload (and they do).
> 
> Jerome
> 
> On 04/12/2019 15:19, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> 
> 03/12/2019 20:01, Damjan Marion:
> > On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > > 03/12/2019 13:12, Damjan Marion:
> > >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> > >>> 03/12/2019 00:26, Damjan Marion:
> > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > >>>>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > >>>>> Are there some benchmarks about the cost of converting, from
> one format
> > >>>>> to the other one, during Rx/Tx operations?
> > >>>>
> > >>>> We are benchmarking both dpdk i40e PMD performance and native
> VPP AVF driver performance and we are seeing significantly better
> performance with native AVF.
> > >>>> If you take a look at [1] you will see that DPDK i40e driver 
> provides
> 18.62 Mpps and exactly the same test with native AVF driver is giving us
> around 24.86 Mpps.
> > > [...]
> > >>>>
> > >>>>> So why not improving DPDK integration in VPP to make it faster?
> > >>>>
> > >>>> Yes, if we can get freedom to use parts of DPDK we want instead of
> being forced to adopt whole DPDK ecosystem.
> > >>>> for example, you cannot use dpdk drivers without using EAL,
> mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it will
> disappear for long time...
> 
> As stated below, I take this feedback, thanks.
> However it won't change VPP choice of not using rte_mbuf natively.
> 
> [...]
> > >> At the moment we have good coverage of native drivers, and still
> there is an option for people to use dpdk. It is now mainly up to driver vendors
> to decide if they are happy with performance they will get from dpdk pmd or
> they want better...
> > >
> > > Yes it is possible to use DPDK in VPP with degraded performance.
> > > If a user wants best performance with VPP and a real NIC,
> > > a new driver must be implemented for VPP only.
> > >
> > > Anyway real performance benefits are in hardware device offloads
> > > which will be hard to implement in VPP native drivers.
> > > Support (investment) would be needed from vendors to make it
> happen.
> > > About offloads, VPP is not using crypto or compression drivers
> > > that DPDK provides (plus regex coming).
> >
> > Nice marketing pitch for your company :)
> 
> I guess you mean Mellanox has a good offloads offering.
> But my point is about the end of Moore's law,
> and the offload trending of most of device vendors.
> However I truly respect the choice of avoiding device offloads.
> 
> > > VPP is a CPU-based packet processing software.
> > > If users want to leverage hardware device offloads,
> > > a truly DPDK-based software is required.
> > > If I understand well your replies, such software cannot be VPP.
> >
> > Yes, DPDK is centre of the universe.
> 
> DPDK is where most of networking devices are supported in userspace.
> That's all.
> 
> 
> > So Dear Thomas, I can continue this discussion forever, but that is not
> something I'm going to do as it started to be trolling contest.
> 
> I agree
> 
> > I 

Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Actually native drivers (like Mellanox or AVF) can be faster w/o buffer 
conversion and tend to be faster than when used by DPDK. I suspect VPP is not 
the only project to report this extra cost.
Jerome

On 04/12/2019 15:43, « Thomas Monjalon » wrote:

03/12/2019 22:11, Jerome Tollet (jtollet):
> Thomas,
> I am afraid you may be missing the point. VPP is a framework where 
plugins are first class citizens. If a plugin requires leveraging offload 
(inline or lookaside), it is more than welcome to do it.
> There are multiple examples including hw crypto accelerators 
(https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).

OK I understand plugins are open.
My point was about the efficiency of the plugins,
given the need for buffer conversion.
If some plugins are already efficient enough, great:
it means there is no need for bringing effort in native VPP drivers.


> On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> 
> 03/12/2019 13:12, Damjan Marion:
> > > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > > 03/12/2019 00:26, Damjan Marion:
> > >> 
> > >> Hi Thomas!
> > >> 
> > >> Inline...
> > >> 
> >  On 2 Dec 2019, at 23:35, Thomas Monjalon  
wrote:
> > >>> 
> > >>> Hi all,
> > >>> 
> > >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > >>> Are there some benchmarks about the cost of converting, from one format
> > >>> to the other one, during Rx/Tx operations?
> > >> 
> > >> We are benchmarking both dpdk i40e PMD performance and native 
VPP AVF driver performance and we are seeing significantly better performance 
with native AVF.
> > >> If you take a look at [1] you will see that DPDK i40e driver
> > >> provides 18.62 Mpps and exactly the same test with native AVF driver is giving
> > >> us around 24.86 Mpps.
> [...]
> > >> 
> > >>> So why not improving DPDK integration in VPP to make it faster?
> > >> 
> > >> Yes, if we can get freedom to use parts of DPDK we want instead 
of being forced to adopt whole DPDK ecosystem.
> > >> for example, you cannot use dpdk drivers without using EAL, 
mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it will 
disappear for long time...
> > > 
> > > You could help to improve these parts of DPDK,
> > > instead of spending time to try implementing few drivers.
> > > Then VPP would benefit from a rich driver ecosystem.
> > 
> > Thank you for letting me know what could be better use of my time.
> 
> "You" was referring to VPP developers.
> I think some other Cisco developers are also contributing to VPP.
> 
> > At the moment we have good coverage of native drivers, and still
> > there is an option for people to use dpdk. It is now mainly up to driver vendors
> > to decide if they are happy with performance they will get from dpdk pmd or they
> > want better...
> 
> Yes it is possible to use DPDK in VPP with degraded performance.
> If a user wants best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
> 
> Anyway real performance benefits are in hardware device offloads
> which will be hard to implement in VPP native drivers.
> Support (investment) would be needed from vendors to make it happen.
> About offloads, VPP is not using crypto or compression drivers
> that DPDK provides (plus regex coming).
> 
> VPP is a CPU-based packet processing software.
> If users want to leverage hardware device offloads,
> a truly DPDK-based software is required.
> If I understand well your replies, such software cannot be VPP.







Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Hi Thomas,
I strongly disagree with your conclusions from this discussion:
1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at 
the cost of performance. (It's actually the opposite ie AVF driver)
2) VPP is NOT exclusively CPU centric. I gave you the example of crypto offload 
based on Intel QAT cards (lookaside). There are other examples (lookaside and 
inline)
3) Plugins are free to use any sort of offload (and they do). 

Jerome

On 04/12/2019 15:19, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:

03/12/2019 20:01, Damjan Marion:
> On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > 03/12/2019 13:12, Damjan Marion:
> >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> >>> 03/12/2019 00:26, Damjan Marion:
>  On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > Are there some benchmarks about the cost of converting, from one 
format
> > to the other one, during Rx/Tx operations?
>  
>  We are benchmarking both dpdk i40e PMD performance and native VPP 
AVF driver performance and we are seeing significantly better performance with 
native AVF.
>  If you take a look at [1] you will see that DPDK i40e driver
>  provides 18.62 Mpps and exactly the same test with native AVF driver is giving
>  us around 24.86 Mpps.
> > [...]
>  
> > So why not improving DPDK integration in VPP to make it faster?
>  
>  Yes, if we can get freedom to use parts of DPDK we want instead of 
being forced to adopt whole DPDK ecosystem.
>  for example, you cannot use dpdk drivers without using EAL, mempool, 
rte_mbuf... rte_eal_init is monster which I was hoping that it will disappear 
for long time...

As stated below, I take this feedback, thanks.
However it won't change VPP choice of not using rte_mbuf natively.

[...]
> >> At the moment we have good coverage of native drivers, and still there
> >> is an option for people to use dpdk. It is now mainly up to driver vendors to
> >> decide if they are happy with performance they will get from dpdk pmd or they
> >> want better...
> > 
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If a user wants best performance with VPP and a real NIC,
> > a new driver must be implemented for VPP only.
> > 
> > Anyway real performance benefits are in hardware device offloads
> > which will be hard to implement in VPP native drivers.
> > Support (investment) would be needed from vendors to make it happen.
> > About offloads, VPP is not using crypto or compression drivers
> > that DPDK provides (plus regex coming).
> 
> Nice marketing pitch for your company :)

I guess you mean Mellanox has a good offloads offering.
But my point is about the end of Moore's law,
and the offload trending of most of device vendors.
However I truly respect the choice of avoiding device offloads.

> > VPP is a CPU-based packet processing software.
> > If users want to leverage hardware device offloads,
> > a truly DPDK-based software is required.
> > If I understand well your replies, such software cannot be VPP.
> 
> Yes, DPDK is centre of the universe.

DPDK is where most of networking devices are supported in userspace.
That's all.


> So Dear Thomas, I can continue this discussion forever, but that is not 
something I'm going to do as it started to be trolling contest.

I agree

> I can understand that you may be passionate about your project and that 
you maybe think that it is the greatest thing after sliced bread, but please 
allow that other people have different opinion. Instead of giving the lessons 
to other people what they should do, if you are interested for dpdk to be 
better consumed, please take a feedback provided to you. I assume that you are 
interested as you showed up on this mailing list, if not there was no reason 
for starting this thread in the first place.

Thank you for the feedbacks, this discussion was required:
1/ it gives more motivation to improve EAL API
2/ it confirms the VPP design choice of not being DPDK-dependent (at a 
performance cost)
3/ it confirms the VPP design choice of being focused on CPU-based 
processing






Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Thomas Monjalon
03/12/2019 22:11, Jerome Tollet (jtollet):
> Thomas,
> I am afraid you may be missing the point. VPP is a framework where plugins 
> are first class citizens. If a plugin requires leveraging offload (inline or 
> lookaside), it is more than welcome to do it.
> There are multiple examples including hw crypto accelerators 
> (https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).

OK I understand plugins are open.
My point was about the efficiency of the plugins,
given the need for buffer conversion.
If some plugins are already efficient enough, great:
it means there is no need for bringing effort in native VPP drivers.


> On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:
> 
> 03/12/2019 13:12, Damjan Marion:
> > > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > > 03/12/2019 00:26, Damjan Marion:
> > >> 
> > >> Hi Thomas!
> > >> 
> > >> Inline...
> > >> 
> >  On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> > >>> 
> > >>> Hi all,
> > >>> 
> > >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > >>> Are there some benchmarks about the cost of converting, from one 
> format
> > >>> to the other one, during Rx/Tx operations?
> > >> 
> > >> We are benchmarking both dpdk i40e PMD performance and native VPP 
> AVF driver performance and we are seeing significantly better performance 
> with native AVF.
> > >> If you take a look at [1] you will see that DPDK i40e driver 
> provides 18.62 Mpps and exactly the same test with native AVF driver is 
> giving us around 24.86 Mpps.
> [...]
> > >> 
> > >>> So why not improving DPDK integration in VPP to make it faster?
> > >> 
> > >> Yes, if we can get freedom to use parts of DPDK we want instead of 
> being forced to adopt whole DPDK ecosystem.
> > >> for example, you cannot use dpdk drivers without using EAL, mempool, 
> rte_mbuf... rte_eal_init is monster which I was hoping that it will disappear 
> for long time...
> > > 
> > > You could help to improve these parts of DPDK,
> > > instead of spending time to try implementing few drivers.
> > > Then VPP would benefit from a rich driver ecosystem.
> > 
> > Thank you for letting me know what could be better use of my time.
> 
> "You" was referring to VPP developers.
> I think some other Cisco developers are also contributing to VPP.
> 
> > At the moment we have good coverage of native drivers, and still there 
> is an option for people to use dpdk. It is now mainly up to driver vendors to 
> decide if they are happy with performance they will get from dpdk pmd or they 
> want better...
> 
> Yes it is possible to use DPDK in VPP with degraded performance.
> If a user wants best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
> 
> Anyway real performance benefits are in hardware device offloads
> which will be hard to implement in VPP native drivers.
> Support (investment) would be needed from vendors to make it happen.
> About offloads, VPP is not using crypto or compression drivers
> that DPDK provides (plus regex coming).
> 
> VPP is a CPU-based packet processing software.
> If users want to leverage hardware device offloads,
> a truly DPDK-based software is required.
> If I understand well your replies, such software cannot be VPP.





Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Thomas Monjalon
04/12/2019 15:25, Ole Troan:
> Thomas,
> 
> > 2/ it confirms the VPP design choice of not being DPDK-dependent (at a 
> > performance cost)
> 
> Do you have any examples/features where a DPDK/offload solution would be 
> performing better than VPP?
> Any numbers?

No sorry, I am not benchmarking VPP.
I am just referring at the obvious cost of converting packet metadata
from DPDK mbuf to VPP buffer.
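
For illustration, the conversion being referred to is on the order of the
following per-packet metadata copy on Rx. This is only a sketch: app_buffer_t
and rx_translate() are hypothetical stand-ins, not VPP's actual vlib_buffer_t
layout or its dpdk plugin code; only the rte_mbuf fields are real.

#include <stdint.h>
#include <rte_mbuf.h>

typedef struct {
    uint16_t current_data;    /* offset of payload within the buffer */
    uint16_t current_length;  /* bytes in this segment */
    uint32_t total_length;    /* bytes in the whole packet */
    uint32_t flags;
} app_buffer_t;

static inline void
rx_translate(const struct rte_mbuf *mb, app_buffer_t *b)
{
    /* each load/store is cheap, but doing this for every packet in every
     * Rx burst is the extra cost debated in this thread */
    b->current_data   = mb->data_off;
    b->current_length = mb->data_len;
    b->total_length   = mb->pkt_len;
    b->flags          = (mb->nb_segs > 1) ? 1u : 0u; /* e.g. a chained-buffer flag */
}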




Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Thomas Monjalon
03/12/2019 20:56, Ole Troan:
> Interesting discussion.
> 
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If a user wants best performance with VPP and a real NIC,
> > a new driver must be implemented for VPP only.
> > 
> > Anyway real performance benefits are in hardware device offloads
> > which will be hard to implement in VPP native drivers.
> > Support (investment) would be needed from vendors to make it happen.
> > About offloads, VPP is not using crypto or compression drivers
> > that DPDK provides (plus regex coming).
> > 
> > VPP is a CPU-based packet processing software.
> > If users want to leverage hardware device offloads,
> > a truly DPDK-based software is required.
> > If I understand well your replies, such software cannot be VPP.
> 
> I don't think that we are principled against having features run on the NIC 
> as such.
> VPP is a framework for building forwarding applications.
> That often implies doing lots of funky stuff with packets.
> 
> And more often than we like hardware offload creates problems.
> Be it architecturally with layer violations like GSO/GRO.
> Correct handling of IPv6 extension headers, fragments.
> Dealing with various encaps and tunnels.
> Or doing symmetric RSS on two sides of a NAT.
> Or even other transport protocols than TCP/UDP.
> 
> And it's not like there is any consistency across NICs:
> http://doc.dpdk.org/guides/nics/overview.html#id1
> 
> We don't want VPP to only support DPDK drivers.
> It's of course a tradeoff, and it's not like we don't have experience with 
> hardware to do packet forwarding.

I agree, of course there are a lot of experience in Cisco!

> At least in my own experience, as soon as you want to have features running 
> in hardware, you lose a lot of flexibility and agility.
> That's just the name of that game. Living under the yoke of hardware 
> limitations is something I've tried to escape for 20 years.
> I'm probably not alone, and that's why you are seeing some pushback...

Thank you for this interesting point of view.
I understand and I think it is good to have different choices
being implemented and experimented in various Open Source softwares.
In order to motivate others to experiment with the opposite,
i.e. implementing dataplanes with aggressive device offloads,
I think it is good to advertise clearly the VPP design choices.


> VPP performance for the core features is largely bound by PCI-e bandwidth 
> (with all caveats coming with that statement obviously).
> It's not like that type of platform is going to grow a terabit switching 
> fabric any time soon...
> Software forwarding lags probably a decade and an order of magnitude. That it 
> makes up for in agility, flexibility, scale...
> If you don't want that, wouldn't you just build something with a Trident 4? 
> ;-)

Good point, this is all about use cases and trade-off.




Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Ole Troan
Thomas,

> 2/ it confirms the VPP design choice of not being DPDK-dependent (at a 
> performance cost)

Do you have any examples/features where a DPDK/offload solution would be 
performing better than VPP?
Any numbers?

Best regards,
Ole


Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Thomas Monjalon
03/12/2019 20:01, Damjan Marion:
> On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > 03/12/2019 13:12, Damjan Marion:
> >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> >>> 03/12/2019 00:26, Damjan Marion:
>  On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > Are there some benchmarks about the cost of converting, from one format
> > to the other one, during Rx/Tx operations?
>  
>  We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
>  driver performance and we are seeing significantly better performance 
>  with native AVF.
>  If you take a look at [1] you will see that DPDK i40e driver provides 
>  18.62 Mpps and exactly the same test with native AVF driver is giving us 
>  around 24.86 Mpps.
> > [...]
>  
> > So why not improving DPDK integration in VPP to make it faster?
>  
>  Yes, if we can get freedom to use parts of DPDK we want instead of being 
>  forced to adopt whole DPDK ecosystem.
>  for example, you cannot use dpdk drivers without using EAL, mempool, 
>  rte_mbuf... rte_eal_init is monster which I was hoping that it will 
>  disappear for long time...

As stated below, I take this feedback, thanks.
However it won't change VPP choice of not using rte_mbuf natively.

[...]
> >> At the moment we have good coverage of native drivers, and still there is 
> >> an option for people to use dpdk. It is now mainly up to driver vendors to 
> >> decide if they are happy with performance they will get from dpdk pmd or 
> >> they want better...
> > 
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If a user wants best performance with VPP and a real NIC,
> > a new driver must be implemented for VPP only.
> > 
> > Anyway real performance benefits are in hardware device offloads
> > which will be hard to implement in VPP native drivers.
> > Support (investment) would be needed from vendors to make it happen.
> > About offloads, VPP is not using crypto or compression drivers
> > that DPDK provides (plus regex coming).
> 
> Nice marketing pitch for your company :)

I guess you mean Mellanox has a good offloads offering.
But my point is about the end of Moore's law,
and the offload trending of most of device vendors.
However I truly respect the choice of avoiding device offloads.

> > VPP is a CPU-based packet processing software.
> > If users want to leverage hardware device offloads,
> > a truly DPDK-based software is required.
> > If I understand well your replies, such software cannot be VPP.
> 
> Yes, DPDK is centre of the universe.

DPDK is where most of networking devices are supported in userspace.
That's all.


> So Dear Thomas, I can continue this discussion forever, but that is not 
> something I'm going to do as it started to be trolling contest.

I agree

> I can understand that you may be passionate about your project and that you 
> maybe think that it is the greatest thing after sliced bread, but please 
> allow that other people have different opinion. Instead of giving the lessons 
> to other people what they should do, if you are interested for dpdk to be 
> better consumed, please take a feedback provided to you. I assume that you 
> are interested as you showed up on this mailing list, if not there was no 
> reason for starting this thread in the first place.

Thank you for the feedbacks, this discussion was required:
1/ it gives more motivation to improve EAL API
2/ it confirms the VPP design choice of not being DPDK-dependent (at a 
performance cost)
3/ it confirms the VPP design choice of being focused on CPU-based processing
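
As a concrete illustration of the EAL dependency criticised in the quoted
exchange ("you cannot use dpdk drivers without using EAL, mempool,
rte_mbuf..."), the minimal bootstrap every DPDK-based application has to run
is sketched below; the arguments and error handling are illustrative only.

#include <stdio.h>
#include <rte_eal.h>

int
main(int argc, char **argv)
{
    /* rte_eal_init() parses EAL options, reserves hugepage memory, probes
     * buses/devices and launches lcore worker threads before the
     * application gets control back */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }

    /* only past this point can mempools, mbufs and ethdev queues be used */

    rte_eal_cleanup();
    return 0;
}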




Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Jim Thompson via Lists.Fd.Io


> On Dec 3, 2019, at 12:56 PM, Ole Troan  wrote:
> 
> If you don't want that, wouldn't you just build something with a Trident 4? 
> ;-)

Or Tofino, if you want to go that direction. Even then, the amount of 
packet-processing (especially the edge/exception conditions) can overwhelm a 
hardware-based approach.
I’ve recently seen architectures where a VPP-based solution is the “slow path”, 
taking the exception traffic that a Tofino-based forwarder can’t handle.

But VPP is an open source project, and DPDK is also an open source project.  
Similar technologies, but, at least to us, a very different style of 
interaction.  I’m here to suggest that “efficient use of 
$PROJECT” can also be measured by the accommodation of patches.
We've had trouble getting bugs patched in DPDK drivers where we even provided a 
pointer to the exact solution and/or code to fix.

Specifics, if you like:

• ixgbe x552 SFP+ link delay — We had to push multiple times and go as 
far as to pester individuals (itself a violation of the ‘rules’ as contributors 
are only supposed to email d...@dpdk.org, based on the 
comment “Please avoid private emails” at the top of the MAINTAINERS file.

• i40e did not advertise support for scatter/gather on a PF, but did on 
a VF.  This was the quickest resolution: 11 days after submitting it, someone 
emailed and said “OK”,  so we enabled it.
https://bugs.dpdk.org/show_bug.cgi?id=92

• A month or two ago we submitted a report that the X557 1Gb copper 
ports on the C3000 were advertising that they support higher speeds when they 
do not.  We haven’t heard a word back about it.
Perhaps this is similar to what Damjan referenced when he suggested 
that the ixgbe driver seems largely unmaintained.

To be fair, these are all Intel, not Mellanox, but my point is, and entirely in 
the converse to our experience with DPDK: VPP has been entirely responsive 
to submitted patches for 3 years running.  We don’t get everything accepted, 
(nor are we asking to) but we do have 160 commits which have been upstreamed.  
This is dwarfed by Cisco, and Intel has about 100 more, but the point is not 
how much we’ve contributed, but the relative ease of contribution to upstream 
in DPDK vs. VPP.

Jim


Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Jerome Tollet via Lists.Fd.Io
Thomas,
I am afraid you may be missing the point. VPP is a framework where plugins are 
first class citizens. If a plugin requires leveraging offload (inline or 
lookaside), it is more than welcome to do it.
There are multiple examples including hw crypto accelerators 
(https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).
 
Jerome

On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon » wrote:

03/12/2019 13:12, Damjan Marion:
> > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > 03/12/2019 00:26, Damjan Marion:
> >> 
> >> Hi Thomas!
> >> 
> >> Inline...
> >> 
>  On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> >>> 
> >>> Hi all,
> >>> 
> >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> >>> Are there some benchmarks about the cost of converting, from one 
format
> >>> to the other one, during Rx/Tx operations?
> >> 
> >> We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
driver performance and we are seeing significantly better performance with 
native AVF.
> >> If you take a look at [1] you will see that DPDK i40e driver provides 
18.62 Mpps and exactly the same test with native AVF driver is giving us around 
24.86 Mpps.
[...]
> >> 
> >>> So why not improving DPDK integration in VPP to make it faster?
> >> 
> >> Yes, if we can get freedom to use parts of DPDK we want instead of 
being forced to adopt whole DPDK ecosystem.
> >> for example, you cannot use dpdk drivers without using EAL, mempool, 
rte_mbuf... rte_eal_init is monster which I was hoping that it will disappear 
for long time...
> > 
> > You could help to improve these parts of DPDK,
> > instead of spending time to try implementing few drivers.
> > Then VPP would benefit from a rich driver ecosystem.
> 
> Thank you for letting me know what could be better use of my time.

"You" was referring to VPP developers.
I think some other Cisco developers are also contributing to VPP.

> At the moment we have good coverage of native drivers, and still there is 
an option for people to use dpdk. It is now mainly up to driver vendors to 
decide if they are happy with performance they will get from dpdk pmd or they 
want better...

Yes it is possible to use DPDK in VPP with degraded performance.
If a user wants best performance with VPP and a real NIC,
a new driver must be implemented for VPP only.

Anyway real performance benefits are in hardware device offloads
which will be hard to implement in VPP native drivers.
Support (investment) would be needed from vendors to make it happen.
About offloads, VPP is not using crypto or compression drivers
that DPDK provides (plus regex coming).

VPP is a CPU-based packet processing software.
If users want to leverage hardware device offloads,
a truly DPDK-based software is required.
If I understand well your replies, such software cannot be VPP.






Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Ole Troan
Interesting discussion.

> Yes it is possible to use DPDK in VPP with degraded performance.
> If a user wants best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
> 
> Anyway real performance benefits are in hardware device offloads
> which will be hard to implement in VPP native drivers.
> Support (investment) would be needed from vendors to make it happen.
> About offloads, VPP is not using crypto or compression drivers
> that DPDK provides (plus regex coming).
> 
> VPP is a CPU-based packet processing software.
> If users want to leverage hardware device offloads,
> a truly DPDK-based software is required.
> If I understand well your replies, such software cannot be VPP.

I don't think that we are principled against having features run on the NIC as 
such.
VPP is a framework for building forwarding applications.
That often implies doing lots of funky stuff with packets.

And more often than we like hardware offload creates problems.
Be it architecturally with layer violations like GSO/GRO.
Correct handling of IPv6 extension headers, fragments.
Dealing with various encaps and tunnels.
Or doing symmetric RSS on two sides of a NAT.
Or even other transport protocols than TCP/UDP.

And it's not like there is any consistency across NICs:
http://doc.dpdk.org/guides/nics/overview.html#id1
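
To make the symmetric-RSS point above concrete: for a NAT, both directions of a
flow must land on the same core, which needs a direction-independent hash, and
that is the kind of behaviour that is hard to guarantee uniformly across NICs.
The helper below is a hypothetical sketch of that property only (field and
function names are illustrative, not VPP or DPDK API).

#include <stdint.h>

struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

static inline uint32_t
symmetric_flow_hash(const struct flow_key *k)
{
    /* XOR-folding the src/dst pairs makes the hash identical for (A->B)
     * and (B->A), so request and reply hit the same queue/core */
    uint32_t ip_fold   = k->src_ip ^ k->dst_ip;
    uint32_t port_fold = (uint32_t)(k->src_port ^ k->dst_port);
    uint32_t h = ip_fold * 0x9e3779b1u;   /* simple mixing constants */
    h ^= port_fold * 0x85ebca6bu;
    return h;
}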

We don't want VPP to only support DPDK drivers.
It's of course a tradeoff, and it's not like we don't have experience with 
hardware to do packet forwarding.
At least in my own experience, as soon as you want to have features running in 
hardware, you lose a lot of flexibility and agility.
That's just the name of that game. Living under the yoke of hardware 
limitations is something I've tried to escape for 20 years.
I'm probably not alone, and that's why you are seeing some pushback...

VPP performance for the core features is largely bound by PCI-e bandwidth (with 
all caveats coming with that statement obviously).
It's not like that type of platform is going to grow a terabit switching fabric 
any time soon...
Software forwarding lags probably a decade and an order of magnitude. That it 
makes up for in agility, flexibility, scale...
If you don't want that, wouldn't you just build something with a Trident 4? ;-)

Best regards,
Ole





Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Damjan Marion via Lists.Fd.Io


> On 3 Dec 2019, at 17:06, Thomas Monjalon  wrote:
> 
> 03/12/2019 13:12, Damjan Marion:
>>> On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
>>> 03/12/2019 00:26, Damjan Marion:
 
 Hi Thomas!
 
 Inline...
 
>> On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> 
> Hi all,
> 
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format
> to the other one, during Rx/Tx operations?
 
 We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
 driver performance and we are seeing significantly better performance with 
 native AVF.
 If you take a look at [1] you will see that DPDK i40e driver provides 
 18.62 Mpps and exactly the same test with native AVF driver is giving us 
 around 24.86 Mpps.
> [...]
 
> So why not improving DPDK integration in VPP to make it faster?
 
 Yes, if we can get freedom to use parts of DPDK we want instead of being 
 forced to adopt whole DPDK ecosystem.
 for example, you cannot use dpdk drivers without using EAL, mempool, 
 rte_mbuf... rte_eal_init is monster which I was hoping that it will 
 disappear for long time...
>>> 
>>> You could help to improve these parts of DPDK,
>>> instead of spending time to try implementing few drivers.
>>> Then VPP would benefit from a rich driver ecosystem.
>> 
>> Thank you for letting me know what could be better use of my time.
> 
> "You" was referring to VPP developers.
> I think some other Cisco developers are also contributing to VPP.

But that "you" also includes myself, so i felt that i need to thank you...

> 
>> At the moment we have good coverage of native drivers, and still there is an 
>> option for people to use dpdk. It is now mainly up to driver vendors to 
>> decide if they are happy with performance they will get from dpdk pmd or they 
>> want better...
> 
> Yes, it is possible to use DPDK in VPP with degraded performance.
> If a user wants the best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
> 
> Anyway, real performance benefits are in hardware device offloads,
> which will be hard to implement in VPP native drivers.
> Support (investment) would be needed from vendors to make it happen.
> About offloads, VPP is not using the crypto or compression drivers
> that DPDK provides (plus regex coming).

Nice marketing pitch for your company :)

> 
> VPP is CPU-based packet processing software.
> If users want to leverage hardware device offloads,
> truly DPDK-based software is required.
> If I understand your replies correctly, such software cannot be VPP.

Yes, DPDK is the centre of the universe.

So, dear Thomas, I could continue this discussion forever, but that is not 
something I'm going to do, as it has started to become a trolling contest.
I can understand that you may be passionate about your project and that you 
may think it is the greatest thing since sliced bread, but please allow 
other people to have a different opinion. Instead of lecturing other people 
about what they should do, if you are interested in DPDK being consumed more 
easily, please take the feedback provided to you. I assume that you are 
interested, as you showed up on this mailing list; if not, there was no reason 
to start this thread in the first place.

-- 
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14770): https://lists.fd.io/g/vpp-dev/message/14770
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Thomas Monjalon
03/12/2019 13:12, Damjan Marion:
> > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > 03/12/2019 00:26, Damjan Marion:
> >> 
> >> Hi Thomas!
> >> 
> >> Inline...
> >> 
>  On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> >>> 
> >>> Hi all,
> >>> 
> >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> >>> Are there some benchmarks about the cost of converting, from one format
> >>> to the other one, during Rx/Tx operations?
> >> 
> >> We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
> >> driver performance and we are seeing significantly better performance with 
> >> native AVF.
> >> If you take a look at [1] you will see that the DPDK i40e driver provides 
> >> 18.62 Mpps and exactly the same test with the native AVF driver is giving us 
> >> around 24.86 Mpps.
[...]
> >> 
> >>> So why not improve DPDK integration in VPP to make it faster?
> >> 
> >> Yes, if we can get the freedom to use the parts of DPDK we want instead of 
> >> being forced to adopt the whole DPDK ecosystem.
> >> For example, you cannot use DPDK drivers without using EAL, mempool, 
> >> rte_mbuf... rte_eal_init is a monster which I have long been hoping would 
> >> disappear...
> > 
> > You could help to improve these parts of DPDK,
> > instead of spending time trying to implement a few drivers.
> > Then VPP would benefit from a rich driver ecosystem.
> 
> Thank you for letting me know what could be a better use of my time.

"You" was referring to VPP developers.
I think some other Cisco developers are also contributing to VPP.

>> At the moment we have good coverage of native drivers, and there is still an 
>> option for people to use DPDK. It is now mainly up to driver vendors to 
>> decide if they are happy with the performance they will get from the DPDK 
>> PMD or they want better...

Yes, it is possible to use DPDK in VPP with degraded performance.
If a user wants the best performance with VPP and a real NIC,
a new driver must be implemented for VPP only.

Anyway, real performance benefits are in hardware device offloads,
which will be hard to implement in VPP native drivers.
Support (investment) would be needed from vendors to make it happen.
About offloads, VPP is not using the crypto or compression drivers
that DPDK provides (plus regex coming).

VPP is CPU-based packet processing software.
If users want to leverage hardware device offloads,
truly DPDK-based software is required.
If I understand your replies correctly, such software cannot be VPP.


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14767): https://lists.fd.io/g/vpp-dev/message/14767
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Damjan Marion via Lists.Fd.Io


> 
> On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> 
> 03/12/2019 00:26, Damjan Marion:
>> 
>> Hi Thomas!
>> 
>> Inline...
>> 
 On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
>>> 
>>> Hi all,
>>> 
>>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
>>> Are there some benchmarks about the cost of converting, from one format
>>> to the other one, during Rx/Tx operations?
>> 
>> We are benchmarking both dpdk i40e PMD performance and native VPP AVF driver 
>> performance and we are seeing significantly better performance with native 
>> AVF.
>> If you take a look at [1] you will see that the DPDK i40e driver provides 
>> 18.62 Mpps and exactly the same test with the native AVF driver is giving us 
>> around 24.86 Mpps.
> 
> Why not compare with DPDK AVF?


i40e was simply there from day one...

> 
>> Thanks to the native AVF driver and the new buffer management code we managed 
>> to go below 100 clocks per packet for the whole ipv4 routing base test. 
>> 
>> My understanding is that the performance difference is caused by 4 factors, 
>> but I cannot support each of them with numbers as I never conducted detailed 
>> testing.
>> 
>> - less work done in driver code, as we have the freedom to cherry-pick only 
>> the data we need, and in the case of DPDK, the PMD needs to be universal
> 
> For info, offloads are disabled by default now in DPDK.

Good to know...

> 
>> - no cost of metadata processing (rte_mbuf -> vlib_buffer_t) conversion
>> 
>> - less pressure on cache (we touch 2 cachelines less with the native driver 
>> for each packet); this is especially observable on smaller devices with less cache
>> 
>> - faster buffer management code
>> 
>> 
>>> I'm sure there would be some benefits of switching VPP to natively use
>>> the DPDK mbuf allocated in mempools.
>> 
>> I don't agree with this statement; we have our own buffer management code and 
>> we are not interested in using DPDK mempools. There are many use cases where 
>> we don't need DPDK and we want VPP not to be dependent on DPDK code.
>> 
>>> What would be the drawbacks?
>> 
>> 
>>> Last time I asked this question, the answer was about compatibility with
>>> other driver backends, especially ODP. What happened?
>>> DPDK drivers are still the only external drivers used by VPP?
>> 
>> No, we still use DPDK drivers in many cases, but we also 
>> have a lot of native drivers in VPP these days:
>> 
>> - intel AVF
>> - virtio
>> - vmxnet3
>> - rdma (for mlx4, mlx5 and other rdma-capable cards), direct verbs for mlx5 
>> work in progress
>> - tap with virtio backend
>> - memif
>> - marvell pp2
>> - (af_xdp - work in progress)
>> 
>>> When using DPDK, more than 40 networking drivers are available:
>>>https://core.dpdk.org/supported/
>>> After 4 years of Open Source VPP, there are less than 10 native drivers:
>>>- virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
>>>- hardware drivers: ixge, avf, pp2
>>> And if looking at ixge driver, we can read:
>>> "
>>>This driver is not intended for production use and it is unsupported.
>>>It is provided for educational use only.
>>>Please use supported DPDK driver instead.
>>> "
>> 
>> yep, the ixgbe driver has not been maintained for a long time...
>> 
>>> So why not improve DPDK integration in VPP to make it faster?
>> 
>> Yes, if we can get the freedom to use the parts of DPDK we want instead of 
>> being forced to adopt the whole DPDK ecosystem.
>> For example, you cannot use DPDK drivers without using EAL, mempool, 
>> rte_mbuf... rte_eal_init is a monster which I have long been hoping would 
>> disappear...
> 
> You could help to improve these parts of DPDK,
> instead of spending time trying to implement a few drivers.
> Then VPP would benefit from a rich driver ecosystem.


Thank you for letting me know what could be a better use of my time.

At the moment we have good coverage of native drivers, and there is still an 
option for people to use DPDK. It is now mainly up to driver vendors to decide 
if they are happy with the performance they will get from the DPDK PMD or they 
want better...

> 
> 
>> A good example of what would be a good fit for us is the rdma-core library; 
>> it allows you to program the NIC and fetch packets from it in a much more 
>> lightweight way, and if you really want to have a super-fast datapath, there 
>> is the direct verbs interface which gives you access to tx/rx rings directly.
>> 
>>> DPDK mbuf has dynamic fields now; it can help to register metadata on 
>>> demand.
>>> And it is still possible to statically reserve some extra space for
>>> application-specific metadata in each packet.
>> 
>> I don't see this as a huge benefit; you still need to call rte_eal_init, you 
>> still need to use DPDK mempools. Basically it still requires adoption of the 
>> whole DPDK ecosystem, which we don't want...
>> 
>> 
>>> Other improvements, like meson packaging usable with pkg-config,
>>> were done during the last few years and may deserve to be considered.
>> 
>> I'm aware of that but I was not able to find a good justification to invest 
>> time in changing the existing scripting to move to meson. As VPP developers 
>> typically don't need to compile DPDK very frequently, the current solution is 
>> simply good enough...

Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Thomas Monjalon
03/12/2019 00:26, Damjan Marion:
> 
> Hi Thomas!
> 
> Inline...
> 
> > On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> > 
> > Hi all,
> > 
> > VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > Are there some benchmarks about the cost of converting, from one format
> > to the other one, during Rx/Tx operations?
> 
> We are benchmarking both dpdk i40e PMD performance and native VPP AVF driver 
> performance and we are seeing significantly better performance with native 
> AVF.
> If you take a look at [1] you will see that the DPDK i40e driver provides 18.62 
> Mpps and exactly the same test with the native AVF driver is giving us around 
> 24.86 Mpps.

Why not compare with DPDK AVF?

> Thanks to the native AVF driver and the new buffer management code we managed 
> to go below 100 clocks per packet for the whole ipv4 routing base test. 
> 
> My understanding is that the performance difference is caused by 4 factors, but 
> I cannot support each of them with numbers as I never conducted detailed testing.
> 
> - less work done in driver code, as we have the freedom to cherry-pick only the 
> data we need, and in the case of DPDK, the PMD needs to be universal

For info, offloads are disabled by default now in DPDK.

> - no cost of metadata processing (rte_mbuf -> vlib_buffer_t) conversion
> 
> - less pressure on cache (we touch 2 cachelines less with the native driver for 
> each packet); this is especially observable on smaller devices with less cache
> 
> - faster buffer management code
> 
> 
> > I'm sure there would be some benefits of switching VPP to natively use
> > the DPDK mbuf allocated in mempools.
> 
> I don't agree with this statement; we have our own buffer management code and 
> we are not interested in using DPDK mempools. There are many use cases where we 
> don't need DPDK and we want VPP not to be dependent on DPDK code.
> 
> > What would be the drawbacks?
> 
> 
> > Last time I asked this question, the answer was about compatibility with
> > other driver backends, especially ODP. What happened?
> > DPDK drivers are still the only external drivers used by VPP?
> 
> No, we still use DPDK drivers in many cases, but we also 
> have a lot of native drivers in VPP these days:
> 
> - intel AVF
> - virtio
> - vmxnet3
> - rdma (for mlx4, mlx5 and other rdma-capable cards), direct verbs for mlx5 
> work in progress
> - tap with virtio backend
> - memif
> - marvell pp2
> - (af_xdp - work in progress)
> 
> > When using DPDK, more than 40 networking drivers are available:
> > https://core.dpdk.org/supported/
> > After 4 years of Open Source VPP, there are less than 10 native drivers:
> > - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
> > - hardware drivers: ixge, avf, pp2
> > And if looking at ixge driver, we can read:
> > "
> > This driver is not intended for production use and it is unsupported.
> > It is provided for educational use only.
> > Please use supported DPDK driver instead.
> > "
> 
> yep, the ixgbe driver has not been maintained for a long time...
> 
> > So why not improve DPDK integration in VPP to make it faster?
> 
> Yes, if we can get the freedom to use the parts of DPDK we want instead of 
> being forced to adopt the whole DPDK ecosystem.
> For example, you cannot use DPDK drivers without using EAL, mempool, 
> rte_mbuf... rte_eal_init is a monster which I have long been hoping would 
> disappear...

You could help to improve these parts of DPDK,
instead of spending time trying to implement a few drivers.
Then VPP would benefit from a rich driver ecosystem.


> A good example of what would be a good fit for us is the rdma-core library; it 
> allows you to program the NIC and fetch packets from it in a much more 
> lightweight way, and if you really want to have a super-fast datapath, there is 
> the direct verbs interface which gives you access to tx/rx rings directly.
> 
> > DPDK mbuf has dynamic fields now; it can help to register metadata on 
> > demand.
> > And it is still possible to statically reserve some extra space for
> > application-specific metadata in each packet.
> 
> I don't see this as a huge benefit; you still need to call rte_eal_init, you 
> still need to use DPDK mempools. Basically it still requires adoption of the 
> whole DPDK ecosystem, which we don't want...
> 
> 
> > Other improvements, like meson packaging usable with pkg-config,
> were done during the last few years and may deserve to be considered.
> 
> I'm aware of that but I was not able to find a good justification to invest 
> time in changing the existing scripting to move to meson. As VPP developers 
> typically don't need to compile DPDK very frequently, the current solution is 
> simply good enough...



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14761): https://lists.fd.io/g/vpp-dev/message/14761
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-02 Thread Honnappa Nagarahalli
Thanks for bringing up the discussion

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Thomas
> Monjalon via Lists.Fd.Io
> Sent: Monday, December 2, 2019 4:35 PM
> To: vpp-dev@lists.fd.io
> Cc: vpp-dev@lists.fd.io
> Subject: [vpp-dev] efficient use of DPDK
> 
> Hi all,
> 
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format to
> the other one, during Rx/Tx operations?
> 
> I'm sure there would be some benefits of switching VPP to natively use the
> DPDK mbuf allocated in mempools.
> What would be the drawbacks?
> 
> Last time I asked this question, the answer was about compatibility with
> other driver backends, especially ODP. What happened?
I think the ODP4VPP project was closed some time back. I do not know of anyone 
working on this project anymore.

> DPDK drivers are still the only external drivers used by VPP?
> 
> When using DPDK, more than 40 networking drivers are available:
>   https://core.dpdk.org/supported/
> After 4 years of Open Source VPP, there are less than 10 native drivers:
>   - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
>   - hardware drivers: ixge, avf, pp2
> And if looking at ixge driver, we can read:
> "
>   This driver is not intended for production use and it is unsupported.
>   It is provided for educational use only.
>   Please use supported DPDK driver instead.
> "
> 
> So why not improve DPDK integration in VPP to make it faster?
> 
> DPDK mbuf has dynamic fields now; it can help to register metadata on
> demand.
> And it is still possible to statically reserve some extra space for 
> application-
> specific metadata in each packet.
> 
> Other improvements, like meson packaging usable with pkg-config, were
> done during the last few years and may deserve to be considered.
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14760): https://lists.fd.io/g/vpp-dev/message/14760
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-02 Thread Damjan Marion via Lists.Fd.Io

Hi Thomas!

Inline...

> On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> 
> Hi all,
> 
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format
> to the other one, during Rx/Tx operations?

We are benchmarking both dpdk i40e PMD performance and native VPP AVF driver 
performance and we are seeing significantly better performance with native AVF.
If you take a look at [1] you will see that the DPDK i40e driver provides 18.62 
Mpps and exactly the same test with the native AVF driver is giving us around 
24.86 Mpps.

Thanks to the native AVF driver and the new buffer management code we managed to 
go below 100 clocks per packet for the whole ipv4 routing base test. 
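
A quick sanity check on those numbers, assuming the roughly 2.5 GHz Skylake 
cores of the 3n-skx testbed referenced in [1]:

\[
\frac{2.5 \times 10^{9}\ \text{cycles/s}}{100\ \text{cycles/packet}}
\approx 25\ \text{Mpps per core},
\]

which lines up with the ~24.86 Mpps measured with the native AVF driver.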

My understanding is that the performance difference is caused by 4 factors, but I 
cannot support each of them with numbers as I never conducted detailed testing.

- less work done in driver code, as we have the freedom to cherry-pick only the 
data we need, and in the case of DPDK, the PMD needs to be universal

- no cost of metadata processing (rte_mbuf -> vlib_buffer_t) conversion (see the 
sketch after this list)

- less pressure on cache (we touch 2 cachelines less with the native driver for 
each packet); this is especially observable on smaller devices with less cache

- faster buffer management code
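
To make the metadata-conversion cost above concrete, here is a minimal,
hypothetical sketch of the per-packet copying that an rte_mbuf -> application-
buffer translation implies. The app_buffer_t struct and its field names are
invented for illustration and deliberately simplified; they are not the real
vlib_buffer_t layout, and only the rte_mbuf fields are actual DPDK ones.

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical, simplified application buffer metadata (NOT the real
 * vlib_buffer_t layout) -- just enough to show what has to be copied
 * per packet when translating from an rte_mbuf. */
typedef struct {
    int16_t  current_data;    /* offset of payload within the buffer */
    uint16_t current_length;  /* bytes in this segment               */
    uint32_t flags;           /* e.g. a "next segment present" bit   */
    uint32_t total_length;    /* full packet length across segments  */
} app_buffer_t;

/* Per-packet translation: every field touched here is a load from the
 * mbuf cacheline and a store to the application's metadata cacheline,
 * which is exactly the overhead a native driver avoids by writing the
 * application's metadata directly. */
static inline void
mbuf_to_app_buffer(const struct rte_mbuf *mb, app_buffer_t *b)
{
    b->current_data   = (int16_t)(mb->data_off - RTE_PKTMBUF_HEADROOM);
    b->current_length = mb->data_len;
    b->total_length   = mb->pkt_len;
    b->flags          = (mb->nb_segs > 1) ? 1u : 0u;
}

Run once per packet on top of the work the PMD already did, this is pure
overhead compared with a driver that fills the application's metadata in the
first place.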


> 
> I'm sure there would be some benefits of switching VPP to natively use
> the DPDK mbuf allocated in mempools.

I don't agree with this statement; we have our own buffer management code and we 
are not interested in using DPDK mempools. There are many use cases where we 
don't need DPDK and we want VPP not to be dependent on DPDK code.

> What would be the drawbacks?



> 
> Last time I asked this question, the answer was about compatibility with
> other driver backends, especially ODP. What happened?
> DPDK drivers are still the only external drivers used by VPP?

No, we still use DPDK drivers in many cases, but we also 
have a lot of native drivers in VPP these days:

- intel AVF
- virtio
- vmxnet3
- rdma (for mlx4, mlx5 and other rdma-capable cards), direct verbs for mlx5 work 
in progress
- tap with virtio backend
- memif
- marvell pp2
- (af_xdp - work in progress)

> 
> When using DPDK, more than 40 networking drivers are available:
>   https://core.dpdk.org/supported/
> After 4 years of Open Source VPP, there are less than 10 native drivers:
>   - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
>   - hardware drivers: ixge, avf, pp2
> And if looking at ixge driver, we can read:
> "
>   This driver is not intended for production use and it is unsupported.
>   It is provided for educational use only.
>   Please use supported DPDK driver instead.
> "

yep, the ixgbe driver has not been maintained for a long time...

> 
> So why not improve DPDK integration in VPP to make it faster?

Yes, if we can get the freedom to use the parts of DPDK we want instead of being 
forced to adopt the whole DPDK ecosystem.
For example, you cannot use DPDK drivers without using EAL, mempool, 
rte_mbuf... rte_eal_init is a monster which I have long been hoping would 
disappear...
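
For readers outside the DPDK world, here is a hedged sketch of roughly what
adopting that ecosystem means before a PMD can hand over a single packet; the
port number, pool and ring sizes below are arbitrary illustration values.

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    /* 1. EAL init: hugepages, IOVA mode, PCI scan, lcore setup, ... */
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* 2. An rte_mempool of rte_mbufs is mandatory for any PMD rx queue. */
    struct rte_mempool *mp = rte_pktmbuf_pool_create(
        "pkt_pool", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL)
        return -1;

    /* 3. Ethdev setup: one rx and one tx queue on port 0. */
    struct rte_eth_conf conf = { 0 };
    uint16_t port = 0;
    if (rte_eth_dev_configure(port, 1, 1, &conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, 1024, rte_socket_id(), NULL, mp) < 0 ||
        rte_eth_tx_queue_setup(port, 0, 1024, rte_socket_id(), NULL) < 0 ||
        rte_eth_dev_start(port) < 0)
        return -1;

    /* 4. Only now can packets be polled -- and always as rte_mbufs. */
    struct rte_mbuf *burst[32];
    uint16_t n = rte_eth_rx_burst(port, 0, burst, 32);
    (void)n;
    return 0;
}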

A good example of what would be a good fit for us is the rdma-core library; it 
allows you to program the NIC and fetch packets from it in a much more 
lightweight way, and if you really want to have a super-fast datapath, there is 
the direct verbs interface which gives you access to tx/rx rings directly.
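
As a rough illustration of that lighter-weight setup path (a sketch only, not
the actual VPP rdma plugin code), the rdma-core entry points look like this; a
raw-Ethernet queue pair (IBV_QPT_RAW_PACKET) plus flow-steering rules would be
created on top of this before packets can be polled:

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (devs == NULL || num == 0)
        return -1;

    /* Open the first rdma-capable device and grab the basic resources:
     * no EAL, no hugepage configuration, no mempool library required. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (ctx == NULL)
        return -1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_cq *cq = ibv_create_cq(ctx, 1024, NULL, NULL, 0);
    if (pd == NULL || cq == NULL)
        return -1;

    /* A real driver would now create an IBV_QPT_RAW_PACKET queue pair,
     * register its own buffers with ibv_reg_mr() and attach steering
     * rules with ibv_create_flow(), all while keeping the application's
     * native buffer layout. */
    printf("using %s\n", ibv_get_device_name(devs[0]));

    ibv_free_device_list(devs);
    return 0;
}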

> DPDK mbuf has dynamic fields now; it can help to register metadata on demand.
> And it is still possible to statically reserve some extra space for
> application-specific metadata in each packet.

I don't see this as a huge benefit; you still need to call rte_eal_init, you 
still need to use DPDK mempools. Basically it still requires adoption of the 
whole DPDK ecosystem, which we don't want...


> Other improvements, like meson packaging usable with pkg-config,
> were done during the last few years and may deserve to be considered.

I'm aware of that but I was not able to find a good justification to invest time 
in changing the existing scripting to move to meson. As VPP developers typically 
don't need to compile DPDK very frequently, the current solution is simply good 
enough...

-- 
Damjan 

[1] 
https://docs.fd.io/csit/master/report/vpp_performance_tests/packet_throughput_graphs/ip4-3n-skx-xxv710.html#


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14759): https://lists.fd.io/g/vpp-dev/message/14759
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] efficient use of DPDK

2019-12-02 Thread Thomas Monjalon
Hi all,

VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
Are there some benchmarks about the cost of converting, from one format
to the other one, during Rx/Tx operations?

I'm sure there would be some benefits of switching VPP to natively use
the DPDK mbuf allocated in mempools.
What would be the drawbacks?

Last time I asked this question, the answer was about compatibility with
other driver backends, especially ODP. What happened?
DPDK drivers are still the only external drivers used by VPP?

When using DPDK, more than 40 networking drivers are available:
https://core.dpdk.org/supported/
After 4 years of Open Source VPP, there are less than 10 native drivers:
- virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
- hardware drivers: ixge, avf, pp2
And if looking at ixge driver, we can read:
"
This driver is not intended for production use and it is unsupported.
It is provided for educational use only.
Please use supported DPDK driver instead.
"

So why not improve DPDK integration in VPP to make it faster?

DPDK mbuf has dynamic fields now; it can help to register metadata on demand.
And it is still possible to statically reserve some extra space for
application-specific metadata in each packet.
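
For readers who have not seen them yet, here is a minimal sketch of the
dynamic-field API added in DPDK 19.11; the field name and the uint32_t payload
below are invented purely for illustration.

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Offset of the application's per-packet field inside rte_mbuf,
 * resolved once at startup. */
static int meta_offset = -1;

static int register_app_metadata(void)
{
    static const struct rte_mbuf_dynfield desc = {
        .name = "example_app_metadata",   /* illustrative name */
        .size = sizeof(uint32_t),
        .align = __alignof__(uint32_t),
    };
    meta_offset = rte_mbuf_dynfield_register(&desc);
    return meta_offset;                   /* negative on failure */
}

static inline void set_app_metadata(struct rte_mbuf *m, uint32_t v)
{
    /* RTE_MBUF_DYNFIELD resolves to a typed pointer into this mbuf. */
    *RTE_MBUF_DYNFIELD(m, meta_offset, uint32_t *) = v;
}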

Other improvements, like meson packaging usable with pkg-config,
were done during the last few years and may deserve to be considered.


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14758): https://lists.fd.io/g/vpp-dev/message/14758
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-