RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

2021-11-15 Thread Francesco Montorsi
Hi Asaf,
Thanks for your quick answer.
I’m trying to upgrade, will update you shortly.
However, from reading the full email thread

https://inbox.dpdk.org/users/dm8pr12mb5494459b49353faccacef3c3cd...@dm8pr12mb5494.namprd12.prod.outlook.com/t/#mc9927dd8f5f092d5042d95fa520b29765d17ddf8

it seems that upgrading does not fix this problem (at least it didn’t fix it 
for Yan, as far as I can see).
So please check on your side if possible.
Reproducing the problem just requires overloading the receiver side with too 
many PPS…
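
A minimal sender sketch of the kind of overload I mean (assuming a 19.11-era 
testpmd; the PCI address, core list and ring sizes below are placeholders for 
your own setup):

  # Hypothetical sender: flood the receiver with small packets in txonly mode.
  # 0000:3b:00.0, the cores and the descriptor counts are placeholders.
  sudo ./testpmd -l 0-3 -n 4 -w 0000:3b:00.0 -- \
       --forward-mode=txonly --txonly-multi-flow \
       --burst=64 --txd=1024 --stats-period=1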

Thanks a lot,
Francesco

From: Asaf Penso 
Sent: Thursday, November 11, 2021 6:28 AM
To: Francesco Montorsi ; Yan, Xiaoping (NSB - 
CN/Hangzhou) ; Gerry Wan ; 
Slava Ovsiienko ; Matan Azrad ; 
Raslan Darawsheh 
Cc: Martin Weiser ; David Marchand 
; users@dpdk.org
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and 
rx_good_packets

Hello Francesco,
To confirm the issue still exists, could you try the latest 19.11 LTS? 19.11.5 
is a bit outdated and is missing a lot of DPDK fixes.
In the meantime, I'll check internally about this issue and update you.
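A possible way to grab the tip of the 19.11 LTS branch for such a test 
(assuming plain upstream DPDK can be tried alongside the 6WindGate build; 
these are the usual upstream clone/build steps, not anything specific to this 
setup):

  # Check out and build the current head of the 19.11 stable branch.
  git clone https://dpdk.org/git/dpdk-stable -b 19.11 dpdk-19.11-lts
  cd dpdk-19.11-lts
  meson build && ninja -C build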

Regards,
Asaf Penso

From: Francesco Montorsi 
mailto:francesco.monto...@infovista.com>>
Sent: Thursday, November 11, 2021 1:53:43 AM
To: Yan, Xiaoping (NSB - CN/Hangzhou) 
mailto:xiaoping@nokia-sbell.com>>; Gerry Wan 
mailto:ger...@stanford.edu>>; Asaf Penso 
mailto:as...@nvidia.com>>; Slava Ovsiienko 
mailto:viachesl...@nvidia.com>>; Matan Azrad 
mailto:ma...@nvidia.com>>; Raslan Darawsheh 
mailto:rasl...@nvidia.com>>
Cc: Martin Weiser 
mailto:martin.wei...@allegro-packets.com>>; 
David Marchand mailto:david.march...@redhat.com>>; 
users@dpdk.org mailto:users@dpdk.org>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and 
rx_good_packets


RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

2021-11-15 Thread Francesco Montorsi
Hi all,
I hit the exact same problem reported by Yan.
I’m using:

  *   2 Mellanox CX5 MT28800 installed on 2 different servers, connected 
together
  *   Device FW (as reported by DPDK): 16.31.1014
  *   DPDK 19.11.5 (from 6WindGate actually)

I sent roughly 360M packets from one server to the other using “testpmd” (in 
--forward-mode=txonly).
My DPDK application on the other server is reporting the following xstats 
counters:

CounterName                  PORT0        PORT1        TOTAL
rx_good_packets:  76727920,   0,76727920
tx_good_packets: 0,   0,   0
rx_good_bytes:  4910586880,   0,  4910586880
tx_good_bytes:   0,   0,   0
rx_missed_errors:0,   0,   0
rx_errors:   0,   0,   0
tx_errors:   0,   0,   0
rx_mbuf_allocation_errors:   0,   0,   0
rx_q0packets:0,   0,   0
rx_q0bytes:  0,   0,   0
rx_q0errors: 0,   0,   0
rx_q1packets:0,   0,   0
rx_q1bytes:  0,   0,   0
rx_q1errors: 0,   0,   0
rx_q2packets:0,   0,   0
rx_q2bytes:  0,   0,   0
rx_q2errors: 0,   0,   0
rx_q3packets:0,   0,   0
rx_q3bytes:  0,   0,   0
rx_q3errors: 0,   0,   0
rx_q4packets:0,   0,   0
rx_q4bytes:  0,   0,   0
rx_q4errors: 0,   0,   0
rx_q5packets: 76727920,   0,76727920
rx_q5bytes: 4910586880,   0,  4910586880
rx_q5errors: 0,   0,   0
rx_q6packets:0,   0,   0
rx_q6bytes:  0,   0,   0
rx_q6errors: 0,   0,   0
rx_q7packets:0,   0,   0
rx_q7bytes:  0,   0,   0
rx_q7errors: 0,   0,   0
rx_q8packets:0,   0,   0
rx_q8bytes:  0,   0,   0
rx_q8errors: 0,   0,   0
rx_q9packets:0,   0,   0
rx_q9bytes:  0,   0,   0
rx_q9errors: 0,   0,   0
rx_q10packets:   0,   0,   0
rx_q10bytes: 0,   0,   0
rx_q10errors:0,   0,   0
rx_q11packets:   0,   0,   0
rx_q11bytes: 0,   0,   0
rx_q11errors:0,   0,   0
tx_q0packets:0,   0,   0
tx_q0bytes:  0,   0,   0
rx_wqe_err:  0,   0,   0
rx_port_unicast_packets: 360316064,   0,   360316064
rx_port_unicast_bytes: 23060228096,   0, 23060228096
tx_port_unicast_packets: 0,   0,   0
tx_port_unicast_bytes:   0,   0,   0
rx_port_multicast_packets:   0,   0,   0
rx_port_multicast_bytes: 0,   0,   0
tx_port_multicast_packets:   0,   0,   0
tx_port_multicast_bytes: 0,   0,   0
rx_port_broadcast_packets:   0,   0,   0
rx_port_broadcast_bytes: 0,   0,   0
tx_port_broadcast_packets:   0,   0,   0
tx_port_broadcast_bytes: 0,   0,   0
tx_packets_phy:  0,   0,   0
rx_packets_phy:  0,   0,   0
rx_crc_errors_phy:   0,   0,   0
tx_bytes_phy:0,   0,   0
rx_bytes_phy:0,   0,   0
rx_in_range_len_errors_phy:  0,   0,   0
rx_symbol_err_phy:   0,   0,   0
rx_discards_phy: 0,   0,   0
tx_discards_phy: 0,   0,   0
tx_errors_phy:   0,   0,   0
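
Note the gap between rx_port_unicast_packets (360,316,064) and rx_good_packets 
(76,727,920): the port-level counter sees far more packets than the PMD delivers 
as good packets. A rough sketch for watching these counters live from a secondary 
process while the application is under load (the binary name, --file-prefix and 
port mask are placeholders; on mlx5 the rx_out_of_buffer xstat may also be worth 
watching, since as far as I understand it counts packets dropped for lack of Rx 
buffers):

  # Poll the relevant xstats once per second from a secondary process.
  # Depending on the build system the binary is dpdk-proc-info or dpdk-procinfo;
  # match --file-prefix and the port mask to the primary application.
  while true; do
      sudo ./dpdk-proc-info --file-prefix my_app -- -p 0x1 --xstats \
          | grep -E 'rx_good_packets|rx_port_unicast_packets|rx_missed_errors|rx_out_of_buffer'
      sleep 1
  done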

Re: release schedule change proposal

2021-11-15 Thread Shepard Siegel
> Opinions?

Atomic Rules has been releasing our Arkville product in lockstep with DPDK
for the past 19 quarters. Our FPGA solution has the added burden of testing
with async releases of FPGA vendor CAD tools. Although we have gotten used
to the quarterly cadence, for the reasons given by Thomas and others,
Atomic Rules supports the move to a three-releases-per-year schedule.

Shepard Siegel, CTO and Founder
atomicrules.com


Re: release schedule change proposal

2021-11-15 Thread Stephen Hemminger
On Mon, 15 Nov 2021 15:58:15 +0100
Thomas Monjalon  wrote:

> For the last 5 years, DPDK has been doing 4 releases per year,
> in February, May, August and November (the LTS one):
>   .02   .05   .08   .11 (LTS)
> 
> This schedule has multiple issues:
>   - clash with China's Spring Festival
>   - too many rushes, impacting maintainers & testers
>   - not much buffer, impacting proposal period
> 
> I propose to switch to a new schedule with 3 releases per year:
>   .03  .07  .11 (LTS)

This nicely adapts to the natural slowdown due to holidays
in December and August.


Re: release schedule change proposal

2021-11-15 Thread Kevin Traynor

On 15/11/2021 14:58, Thomas Monjalon wrote:

For the last 5 years, DPDK has been doing 4 releases per year,
in February, May, August and November (the LTS one):
.02   .05   .08   .11 (LTS)

This schedule has multiple issues:
- clash with China's Spring Festival
- too many rushes, impacting maintainers & testers
- not much buffer, impacting proposal period

I propose to switch to a new schedule with 3 releases per year:
.03  .07  .11 (LTS)

New LTS branch would start at the same time of the year as before.
There would be one less intermediate release during spring/summer:
.05 and .08 intermediate releases would become a single .07.
I think it has almost no impact for the users.
This change could be done starting next year.

In detail, this is how we could extend some milestones:

ideal schedule so far (in 13 weeks):
proposal deadline: 4
rc1 - API freeze: 5
rc2 - PMD features freeze: 2
rc3 - app features freeze: 1
rc4 - last chance to fix: 1
release: 0

proposed schedule (in 17 weeks):
proposal deadline: 4
rc1 - API freeze: 7
rc2 - PMD features freeze: 3
rc3 - app features freeze: 1
rc4 - more fixes: 1
rc5 - last chance buffer: 1
release: 0

Opinions?




Someone else might comment if they spot something, but to me it looks ok 
for the RH distro and the OVS project.


The RH distro is also using DPDK .11, whose release date is not changing. 
(+cc Timothy/Flavio)


The OVS project also integrates only the DPDK .11 release, and aims to do 
that by EOY to make the next OVS release. DPDK stable releases are 
integrated into older OVS branches when available. I don't think older 
OVS branch releases have a strict release schedule, and having the latest 
stable DPDK release is not a blocker anyway. (+cc Ilya/Ian/ovs-discuss)




Re: release schedule change proposal

2021-11-15 Thread Jerin Jacob
On Mon, Nov 15, 2021 at 8:42 PM Luca Boccassi  wrote:
>
> On Mon, 2021-11-15 at 15:58 +0100, Thomas Monjalon wrote:
> > For the last 5 years, DPDK has been doing 4 releases per year,
> > in February, May, August and November (the LTS one):
> >   .02   .05   .08   .11 (LTS)
> >
> > This schedule has multiple issues:
> >   - clash with China's Spring Festival
> >   - too many rushes, impacting maintainers & testers
> >   - not much buffer, impacting proposal period
> >
> > I propose to switch to a new schedule with 3 releases per year:
> >   .03  .07  .11 (LTS)


+1


> >
> > New LTS branch would start at the same time of the year as before.
> > There would be one less intermediate release during spring/summer:
> > .05 and .08 intermediate releases would become a single .07.
> > I think it has almost no impact for the users.
> > This change could be done starting next year.
> >
> > In detail, this is how we could extend some milestones:
> >
> >   ideal schedule so far (in 13 weeks):
> >   proposal deadline: 4
> >   rc1 - API freeze: 5
> >   rc2 - PMD features freeze: 2
> >   rc3 - app features freeze: 1
> >   rc4 - last chance to fix: 1
> >   release: 0
> >
> >   proposed schedule (in 17 weeks):
> >   proposal deadline: 4
> >   rc1 - API freeze: 7
> >   rc2 - PMD features freeze: 3
> >   rc3 - app features freeze: 1
> >   rc4 - more fixes: 1
> >   rc5 - last chance buffer: 1
> >   release: 0
> >
> > Opinions?
>
> We upload only LTS releases to Debian/Ubuntu, so as long as those stay
> the same as it is proposed here, no problem for us.
>
> --
> Kind regards,
> Luca Boccassi


Re: release schedule change proposal

2021-11-15 Thread Luca Boccassi
On Mon, 2021-11-15 at 15:58 +0100, Thomas Monjalon wrote:
> For the last 5 years, DPDK has been doing 4 releases per year,
> in February, May, August and November (the LTS one):
>   .02   .05   .08   .11 (LTS)
> 
> This schedule has multiple issues:
>   - clash with China's Spring Festival
>   - too many rushes, impacting maintainers & testers
>   - not much buffer, impacting proposal period
> 
> I propose to switch to a new schedule with 3 releases per year:
>   .03  .07  .11 (LTS)
> 
> New LTS branch would start at the same time of the year as before.
> There would be one less intermediate release during spring/summer:
> .05 and .08 intermediate releases would become a single .07.
> I think it has almost no impact for the users.
> This change could be done starting next year.
> 
> In detail, this is how we could extend some milestones:
> 
>   ideal schedule so far (in 13 weeks):
>   proposal deadline: 4
>   rc1 - API freeze: 5
>   rc2 - PMD features freeze: 2
>   rc3 - app features freeze: 1
>   rc4 - last chance to fix: 1
>   release: 0
> 
>   proposed schedule (in 17 weeks):
>   proposal deadline: 4
>   rc1 - API freeze: 7
>   rc2 - PMD features freeze: 3
>   rc3 - app features freeze: 1
>   rc4 - more fixes: 1
>   rc5 - last chance buffer: 1
>   release: 0
> 
> Opinions?

We upload only LTS releases to Debian/Ubuntu, so as long as those stay
the same as it is proposed here, no problem for us.

-- 
Kind regards,
Luca Boccassi


release schedule change proposal

2021-11-15 Thread Thomas Monjalon
For the last 5 years, DPDK has been doing 4 releases per year,
in February, May, August and November (the LTS one):
.02   .05   .08   .11 (LTS)

This schedule has multiple issues:
- clash with China's Spring Festival
- too many rushes, impacting maintainers & testers
- not much buffer, impacting proposal period

I propose to switch to a new schedule with 3 releases per year:
.03  .07  .11 (LTS)

New LTS branch would start at the same time of the year as before.
There would be one less intermediate release during spring/summer:
.05 and .08 intermediate releases would become a single .07.
I think it has almost no impact for the users.
This change could be done starting next year.

In detail, this is how we could extend some milestones:

ideal schedule so far (in 13 weeks):
proposal deadline: 4
rc1 - API freeze: 5
rc2 - PMD features freeze: 2
rc3 - app features freeze: 1
rc4 - last chance to fix: 1
release: 0

proposed schedule (in 17 weeks):
proposal deadline: 4
rc1 - API freeze: 7
rc2 - PMD features freeze: 3
rc3 - app features freeze: 1
rc4 - more fixes: 1
rc5 - last chance buffer: 1
release: 0

Opinions?




RE: Pdump Didn't capture the packet

2021-11-15 Thread Pattan, Reshma


From: 廖書華 
Sent: Monday, November 15, 2021 5:39 AM
To: users@dpdk.org
Cc: 林庭安 
Subject: Pdump Didn't capture the packet

Dear all,

Currently, I want to use pdump to capture traffic from our DPDK application. 
Unfortunately, the pdump side did not capture any packets, and pdump did not 
print any error. Our application also did not print any log related to pdump.
- Here's the log of pdump
[oran@localhost pdump]$ sudo ./dpdk-pdump --file-prefix wls_1 -- --pdump 
'port=0,queue=*,tx-dev=/home/oran/Music/tx.pcap,rx-dev=/home/oran/Music/rx.pcap'



Primary process is no longer active, exiting...

[Reshma]: From this log it is clear that the primary application is not 
running. Rerun the primary application and, in another terminal, run the 
pdump application.
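
For example, a rough sketch of that sequence (the application name and its EAL 
options are placeholders, the --file-prefix and pcap paths are taken from the 
log above, and the primary must have the pdump framework enabled, e.g. via 
rte_pdump_init(); testpmd does this by default when built with the pdump 
library):

  # Terminal 1: start the primary DPDK application first and keep it running.
  # Its --file-prefix must match the one passed to dpdk-pdump below.
  sudo ./your_primary_app --file-prefix wls_1 <other EAL/app options>

  # Terminal 2: attach dpdk-pdump as a secondary process only once the primary is up.
  sudo ./dpdk-pdump --file-prefix wls_1 -- \
      --pdump 'port=0,queue=*,tx-dev=/home/oran/Music/tx.pcap,rx-dev=/home/oran/Music/rx.pcap'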

Best Regards,
Shu-hua, Liao