The octeontx2 driver is removed. Can we close this patch?
01/02/2022 07:30, Rakesh Kudurumalla:
> ping
>
> > -----Original Message-----
> > From: Rakesh Kudurumalla
> > Sent: Monday, January 10, 2022 2:35 PM
> > To: Thomas Monjalon <tho...@monjalon.net>; Jerin Jacob Kollanukkaran
> > <jer...@marvell.com>
> > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > ajit.khapa...@broadcom.com
> > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > queues are full and Tx fails
> >
> > ping
> >
> > > -----Original Message-----
> > > From: Rakesh Kudurumalla
> > > Sent: Monday, December 13, 2021 12:10 PM
> > > To: Thomas Monjalon <tho...@monjalon.net>; Jerin Jacob Kollanukkaran
> > > <jer...@marvell.com>
> > > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > ajit.khapa...@broadcom.com
> > > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > > queues are full and Tx fails
> > >
> > > > -----Original Message-----
> > > > From: Thomas Monjalon <tho...@monjalon.net>
> > > > Sent: Monday, November 29, 2021 2:44 PM
> > > > To: Rakesh Kudurumalla <rkuduruma...@marvell.com>; Jerin Jacob
> > > > Kollanukkaran <jer...@marvell.com>
> > > > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > > ajit.khapa...@broadcom.com
> > > > Subject: Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > > > queues are full and Tx fails
> > > >
> > > > 29/11/2021 09:52, Rakesh Kudurumalla:
> > > > > From: Thomas Monjalon <tho...@monjalon.net>
> > > > > > 22/11/2021 08:59, Rakesh Kudurumalla:
> > > > > > > From: Thomas Monjalon <tho...@monjalon.net>
> > > > > > > > 20/07/2021 18:50, Rakesh Kudurumalla:
> > > > > > > > > Current pmd_perf_autotest() in continuous mode tries to
> > > > > > > > > enqueue MAX_TRAFFIC_BURST completely before starting the
> > > > > > > > > test. Some drivers cannot accept complete MAX_TRAFFIC_BURST
> > > > > > > > > even though rx+tx desc count can fit it.
> > > > > > > >
> > > > > > > > Which driver is failing to do so?
> > > > > > > > Why can it not enqueue 32 packets?
> > > > > > >
> > > > > > > The octeontx2 driver is failing to enqueue because hardware
> > > > > > > buffers are full before the test.
> > > >
> > > > Aren't you stopping the support of octeontx2?
> > > > Why do you care now?
> > >
> > > Yes, we are not supporting octeontx2, but this issue is observed in
> > > the cnxk driver; the current patch fixes the same.
> > >
> > > > > > Why are hardware buffers full?
> > > > >
> > > > > Hardware buffers are full because the number of descriptors in
> > > > > continuous mode is less than MAX_TRAFFIC_BURST, so if the enqueue
> > > > > fails, there is no way hardware can drop the packets.
> > > > > pmd_perf_autotest evaluates performance after enqueueing packets
> > > > > initially.
> > > > >
> > > > > > > pmd_perf_autotest() in continuous mode tries to enqueue
> > > > > > > MAX_TRAFFIC_BURST (2048) before starting the test.
> > > > > > >
> > > > > > > > > This patch changes behaviour to stop enqueuing after a few
> > > > > > > > > retries.
> > > > > > > >
> > > > > > > > If there is a real limitation, there will be issues in more
> > > > > > > > places than this test program.
> > > > > > > > I feel it should be addressed either in the driver or at the
> > > > > > > > ethdev level.
> > > > > > > >
> > > > > > > > [...]
> > > > > > > > > @@ -480,10 +483,19 @@ main_loop(__rte_unused void *args)
> > > > > > > > >  			nb_tx = RTE_MIN(MAX_PKT_BURST, num);
> > > > > > > > >  			nb_tx = rte_eth_tx_burst(portid, 0,
> > > > > > > > >  						&tx_burst[idx], nb_tx);
> > > > > > > > > +			if (nb_tx == 0)
> > > > > > > > > +				retry_cnt++;
> > > > > > > > >  			num -= nb_tx;
> > > > > > > > >  			idx += nb_tx;
> > > > > > > > > +			if (retry_cnt == MAX_RETRY_COUNT) {
> > > > > > > > > +				retry_cnt = 0;
> > > > > > > > > +				break;
> > > > > > > > > +			}
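
For context, below is a minimal standalone sketch of the retry-bounded
enqueue loop that the hunk above adds to main_loop(). The helper name
enqueue_with_retry and the MAX_RETRY_COUNT value are illustrative
assumptions, not the patch's actual definitions (the patch defines
MAX_RETRY_COUNT elsewhere in the file and its value is not shown here).
Unlike the hunk, this sketch also resets the retry counter whenever a
burst makes progress, so only consecutive failures count against the
budget.

#include <stdint.h>

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_PKT_BURST 32
/* Illustrative retry budget; the patch's actual value is not shown above. */
#define MAX_RETRY_COUNT 64

/*
 * Try to enqueue 'num' mbufs on queue 0 of 'port_id'. If rte_eth_tx_burst()
 * accepts nothing MAX_RETRY_COUNT times in a row (e.g. because the driver's
 * hardware buffers are already full), give up instead of spinning forever,
 * which is the hang the patch avoids. Returns the number of packets that
 * were actually enqueued.
 */
static uint32_t
enqueue_with_retry(uint16_t port_id, struct rte_mbuf **tx_burst, uint32_t num)
{
	uint32_t idx = 0;
	uint32_t retry_cnt = 0;

	while (num > 0) {
		uint16_t nb_tx = RTE_MIN(MAX_PKT_BURST, num);

		nb_tx = rte_eth_tx_burst(port_id, 0, &tx_burst[idx], nb_tx);
		if (nb_tx == 0) {
			/* Nothing accepted: spend one unit of retry budget. */
			if (++retry_cnt == MAX_RETRY_COUNT)
				break; /* queue stayed full; stop here */
			continue;
		}
		retry_cnt = 0; /* progress made; reset the retry budget */
		num -= nb_tx;
		idx += nb_tx;
	}

	return idx;
}

A caller such as the test's setup phase can then compare the return value
against the requested burst size to learn how many packets actually made it
into the queues, rather than blocking until all of them are accepted.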