[dpdk-users] Difference between APP and LIB

2018-05-24 Thread Filip Janiszewski
Hi, I have a weird situation: if I build my code using rte.app.mk, then after calling rte_eal_init, rte_eth_dev_count returns the proper number of NICs. The very same code built with rte.lib.mk into a *.a library does not recognize any NIC (if I include the lib in another test application that only

[dpdk-users] Correct setup of sfc

2018-06-13 Thread Filip Janiszewski
Hi, I'm trying to test a SF card (Flareon Ultra SFN7142Q Dual-Port 40GbE) in our testing box, the details of the device are: . Solarstorm firmware update utility [v7.1.1] Copyright Solarflare Communications 2006-2018, Level 5 Networks 2002-2005 enp101s0f0 - MAC: 00-0F-53-2C-3A-10 Firmware

Re: [dpdk-users] Correct setup of sfc

2018-06-13 Thread Filip Janiszewski
Hi Andrew, > PCI devices of Solarflare NIC should be bound to vfio, uio-pci-generic or > igb_uio (part of DPDK) module. In the case of Solarflare NICs, Linux > driver is > not required and not used in DPDK. > > So, you should load one of above modules (depending on your server > IOMMU

Re: [dpdk-users] Correct setup of sfc

2018-06-15 Thread Filip Janiszewski
018 dmar3 -> ../../devices/virtual/iommu/dmar3 . Thanks everybody for the support. Il 15/06/18 19:02, Filip Janiszewski ha scritto: > Adding here also Rami Rosen to continue just one thread. > > First of all thanks for replying, now here's the current status: > > It seems that t

Re: [dpdk-users] Correct setup of sfc

2018-06-15 Thread Filip Janiszewski
to find some specific option for that, as the BIOS suggests that "Intel Virtualization" is enabled, but that might refer to what we need here. Filip Il 15/06/18 11:44, Andrew Rybchenko ha scritto: > On 06/13/2018 10:14 PM, Filip Janiszewski wrote: >> Hi Andrew, >> >> &g

[dpdk-users] DPDK device name

2018-04-19 Thread Filip Janiszewski
Hi, Is there any mapping between the port names I see while using 'ifconfig' and the port numbers I use normally to handle NIC ports? In other words, is there a way to identify which port number corresponds to a given Linux interface name? Thanks -- BR, Filip +48 666 369 823

[dpdk-users] Packets drop while fetching with rte_eth_rx_burst

2018-03-25 Thread Filip Janiszewski
Hi Everybody, I have a weird drop problem, and the best way to understand my question is to look at this simple snippet (cleaned of all the irrelevant stuff): while( 1 ) { if( config->running == false ) { break; } num_of_pkt = rte_eth_rx_burst( config->port_id,
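The snippet is truncated by the archive; for context, a typical rte_eth_rx_burst polling loop of the kind described looks roughly like this. A hedged sketch: BURST_SIZE, the queue id, and the config fields are assumptions, not from the original mail.

```c
#include <stdbool.h>
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32 /* assumed burst size, not from the original mail */

/* Minimal sketch of a DPDK receive loop: poll the RX queue in a tight
 * loop and free each mbuf after processing. */
static void rx_loop(uint16_t port_id, uint16_t queue_id, volatile bool *running)
{
    struct rte_mbuf *mbufs[BURST_SIZE];

    while (*running) {
        /* Fetch up to BURST_SIZE packets from the RX queue. */
        uint16_t num_of_pkt = rte_eth_rx_burst(port_id, queue_id,
                                               mbufs, BURST_SIZE);
        for (uint16_t i = 0; i < num_of_pkt; ++i) {
            /* ... process mbufs[i] ... */
            rte_pktmbuf_free(mbufs[i]);
        }
    }
}
```

If the loop cannot drain the queue fast enough, the NIC's RX descriptor ring fills up and the hardware drop counters (imissed/rx_phy discards) start to increase, which is the drop scenario this thread is about.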

Re: [dpdk-users] 回覆﹕ Packets drop while fetching with rte_eth_rx_burst

2018-03-25 Thread Filip Janiszewski
on are you using? You can take a look at the source code > of dpdk; the rxdrop counter may not be implemented in dpdk, so you always > get 0 in rxdrop. > > Thanks, > Marco > > 18/3/25 (Sun), Filip Janiszewski <cont...@filipja

[dpdk-users] testpmd and jumbo frames

2019-02-25 Thread Filip Janiszewski
Hi, Can someone suggest the proper configuration required to test jumbo frames along with scatter mode using testpmd? I'm running the tool as follows: . sudo ./testpmd -l 0-3 -n 4 -- -i --max-pkt-len=9600 --enable-scatter --txd=512 --rxd=512 --forward-mode=flowgen . then attempting to set the
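The follow-up in this thread suggests the missing piece is the mbuf size. A hedged sketch of the adjusted invocation (the exact `--mbuf-size` value below is an assumption; the value in the reply is truncated in the archive):

```shell
# Jumbo frames with testpmd: with the default ~2KB mbufs, a 9600-byte frame
# must either be scattered across chained segments or fit a larger mbuf.
# The 9728-byte mbuf size here is an assumed example, not from the thread.
sudo ./testpmd -l 0-3 -n 4 -- -i \
    --max-pkt-len=9600 --enable-scatter \
    --mbuf-size=9728 \
    --txd=512 --rxd=512 --forward-mode=flowgen
```

With a large enough mbuf size, each jumbo frame fits in a single segment; with the default size, `--enable-scatter` is what allows the PMD to split one frame across several mbufs.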

Re: [dpdk-users] testpmd and jumbo frames

2019-02-26 Thread Filip Janiszewski
meter. > For example, add "--mbuf-size=1" and then try "set txpkts 9036" > from the testpmd cli > > Regards, > Rami Rosen > > On Tue, 26 Feb 2019 at 08:49, Filip Janiszewski > wrote: >> >> Hi, >> >> Can someone suggest th

[dpdk-users] Not supported card on board, is segfault expected?

2019-02-27 Thread Filip Janiszewski
Hi, I'm running DPDK 18.05 with a NIC that is not present in the list of supported devices: MCX512F-ACAT (ConnectX®-5 EN network interface card, 25GbE dual-port SFP28, PCIe3.0 x16, tall bracket); during rte_eal_init there's a segfault: . #0 0x777a98ce in mlx5_alloc_td () from

[dpdk-users] Time-stamping from 18.05 to 19.02

2019-03-01 Thread Filip Janiszewski
Hi, In order to understand how DPDK handles HW timestamps (which is very confusing between versions) I've prepared a small DPDK test application which captures packets on a given port and prints some information, like pkt_len, timestamp, etc. - very basic stuff. I've enabled the offload

[dpdk-users] Can't capture Jumbo Frames with rte.lib.mk

2019-03-11 Thread Filip Janiszewski
Hi, (The complete code of the example application is available at https://github.com/fjanisze/dpdk-jf-test) While working on the sample code I made a strange discovery: the same identical piece of software works and captures jumbo frames when compiled with $(RTE_SDK)/mk/rte.extapp.mk, but it does

Re: [dpdk-users] Can't capture Jumbo Frames with rte.lib.mk

2019-03-11 Thread Filip Janiszewski
14:41, Filip Janiszewski ha scritto: > Hi, > > (The complete code of the example application is available at > https://github.com/fjanisze/dpdk-jf-test) > > While working on the sample code I made a strange discovery, the same > identical piece of software works and capt

[dpdk-users] Segfault while running on older CPU

2019-02-06 Thread Filip Janiszewski
Hi Everybody, We have one 'slightly' older machine (well, a very old CPU) in our Lab that seems to crash DPDK on every execution attempt. I was wondering if anybody has encountered a similar issue and if there's a change in the DPDK config that might remedy the problem; this is the stack trace of the

Re: [dpdk-users] flow rule rejected by device

2019-02-16 Thread Filip Janiszewski
it should be traceable in the driver. > > On Sat, Feb 16, 2019 at 6:20 AM Filip Janiszewski > mailto:cont...@filipjaniszewski.com>> wrote: > > Hi All, > > I have a weird issue with an re-branded Mellanox card (sold as HP, but > MLX hardware: HP InfiniBa

Re: [dpdk-users] RX of multi-segment jumbo frames

2019-02-14 Thread Filip Janiszewski
configuration (DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_JUMBO_FRAME) The JF are reported as ierror in rte_eth_stats. Thanks Il 09/02/19 16:36, Wiles, Keith ha scritto: > > >> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski >> wrote: >> >> >> >>

Re: [dpdk-users] Segfault while running on older CPU

2019-02-14 Thread Filip Janiszewski
Il 14/02/19 15:46, Van Haaren, Harry ha scritto: >> -Original Message- >> From: users [mailto:users-boun...@dpdk.org] On Behalf Of Filip Janiszewski >> Sent: Thursday, February 14, 2019 2:15 PM >> To: Wiles, Keith >> Cc: users@dpdk.org >> Subj

[dpdk-users] flow rule rejected by device

2019-02-16 Thread Filip Janiszewski
Hi All, I have a weird issue with an re-branded Mellanox card (sold as HP, but MLX hardware: HP InfiniBand FDR/EN 10/40Gb Dual Port 544FLR-QSFP Adapter) which is failing to start (rte_eth_dev_start) with the following message: . PMD: net_mlx4: 0xebbf40: cannot attach flow rules (code 95,

Re: [dpdk-users] RX of multi-segment jumbo frames

2019-02-09 Thread Filip Janiszewski
Il 09/02/19 14:51, Wiles, Keith ha scritto: > > >> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski >> wrote: >> >> Hi, >> >> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellonox card >> using DPDK, I've configured the DEV_RX

Re: [dpdk-users] Segfault while running on older CPU

2019-02-14 Thread Filip Janiszewski
with CONFIG_RTE_EXEC_ENV set to 'native' instead of 'linuxapp' but nothing changes. Did anybody have a similar issue? Any suggestion? Thanks Il 06/02/19 11:47, Filip Janiszewski ha scritto: > Hi Everybody, > > We have one 'slightly' older machine (well, very old CPU.) in our Lab > that seems to crash D

Re: [dpdk-users] Segfault while running on older CPU

2019-02-14 Thread Filip Janiszewski
and then setting CONFIG_RTE_MACHINE="default" in the x86_64-native-linuxapp-gcc/.config file, but I'm not sure whether it's picking it up while building (make T=x86_64-native-linuxapp-gcc DESTDIR=. -j28). Il 14/02/19 14:34, Wiles, Keith ha scritto: > > >> On Feb 14, 2019, at 7:04

[dpdk-users] RX of multi-segment jumbo frames

2019-02-09 Thread Filip Janiszewski
Hi, I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo frames only if the mbuf is large enough to contain the whole packet; is there
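For context, on the DPDK API of that era (18.05 to 19.02), receiving frames larger than one mbuf requires combining the jumbo-frame and scatter offloads at both port and queue level, so the PMD delivers the frame as a chain of mbuf segments. A hedged sketch under those assumptions, not the poster's actual code (the max length, descriptor count, and single-queue setup are illustrative):

```c
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch: configure one port with one RX queue so that jumbo frames are
 * accepted and scattered across chained mbufs when they exceed the mbuf
 * data room. Error handling reduced to a single return code. */
static int setup_jumbo_port(uint16_t port_id, struct rte_mempool *pool)
{
    struct rte_eth_conf conf = {0};

    conf.rxmode.max_rx_pkt_len = 9000; /* assumed jumbo frame size */
    conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER;

    if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
        return -1;

    /* Repeat the offload flags at queue level, as the mail describes
     * doing for rte_eth_rxconf. */
    struct rte_eth_rxconf rxconf = {0};
    rxconf.offloads = conf.rxmode.offloads;

    return rte_eth_rx_queue_setup(port_id, 0, 512,
                                  rte_eth_dev_socket_id(port_id),
                                  &rxconf, pool);
}
```

After a scattered receive, `rte_pktmbuf_chain`-style segment walking (mbuf `next` pointers, `nb_segs`) is needed on the RX path, since the payload no longer lives in a single contiguous buffer.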

[dpdk-users] Linking to multiple DPDK .so builds

2019-07-09 Thread Filip Janiszewski
Hi, I have an issue that's better explained by an example: we're shipping a tool to different customers with different environments; sometimes they run on Mellanox hardware, sometimes on Intel, etc. The main DPDK application is released in multiple builds to accommodate every customer environment,

[dpdk-users] couldn't find suitable memseg_list

2019-04-25 Thread Filip Janiszewski
Hi, While allocating a memory pool with rte_pktmbuf_pool_create, I'm getting this EAL print: . EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list . Followed by the error "Cannot allocate memory"; the number of elements I'm trying to allocate is 16777216, the size 800

[dpdk-users] DPDK and isolcpus

2019-07-26 Thread Filip Janiszewski
Hi, I've configured a bunch of my box's cores with isolcpus and nohz_full in an attempt to squeeze a little more performance out of them, but apparently I can't use those cores in DPDK anymore, as it seems that rte_lcore_count reports only cores which are not isolated; also I can't launch any thread

[dpdk-users] Swapping membuf pools while running

2020-09-03 Thread Filip Janiszewski
Hi, Is there a way to swap the currently configured mempool for a given queue while the nic is up and running (so, without reconfiguration)? The scenario would be: a) capturing packets (rx_burst) in a loop b) mempool configured for the queue (while calling rte_eth_rx_queue_setup) gets filled up

Re: [dpdk-users] Round-robin packet distribution

2020-08-31 Thread Filip Janiszewski
t; powerful, it may be worth it to have a PM-to-PM meeting asking about >> feature, or CC one of the maintainer. Devs do not always look at all >> mails. >> >> Cheers, >> >> Tom >> >> Le 18/08/2020 à 16:27, Filip Janiszewski a écrit : >>> Do y

Re: [dpdk-users] Round-robin packet distribution

2020-08-18 Thread Filip Janiszewski
he same time when you receive a burst of similar packets. > > Tom > > Le 24/07/2020 à 12:05, Filip Janiszewski a écrit : >> Hi, >> >> Is there a way in DPDK to configure the NIC to distribute the incoming >> packets to multiple queues in a round robin fashion?

Re: [dpdk-users] Round-robin packet distribution

2020-08-18 Thread Filip Janiszewski
up to first 64b of > the > packet - at least in e1000, ixgbe and i40e. > > It will be easier if you can provide some details about you network > traffic. > > Paweł > > On 18.08.2020 14:40, Filip Janiszewski wrote: >> Hi, >> >> We had a look at that, and d

[dpdk-users] Bigger mempool leads to worst performances.

2020-06-17 Thread Filip Janiszewski
Hi All, I'm very aware the question is generic, but we can't really understand what the problem would be here. In short, we have capture software running smoothly at 20GbE and capturing everything; recently we switched gear and increased the amount of data, and encountered some drops. One

[dpdk-users] Round-robin packet distribution

2020-07-24 Thread Filip Janiszewski
Hi, Is there a way in DPDK to configure the NIC to distribute the incoming packets to multiple queues in a round robin fashion? Without taking into account the payload/headers or type of packet, just plain round robin distribution to multiple queues. I'm struggling to obtain a fair mechanism

[dpdk-users] rte_ring_dequeue returns 0 but pointer is null

2020-12-04 Thread Filip Janiszewski
Hi, Given this sample code, running on DPDK 20.08: if( rte_ring_dequeue( master->buffers, ( void** ) ) ) { // Handle situation } else { // do stuff with data } We're encountering a situation where data is null but the function
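Note that rte_ring_dequeue() returns 0 on success and non-zero (-ENOENT) when the ring is empty, so the "else" branch in the snippet is the success path. A hedged sketch of the full pattern with the object pointer spelled out (the archive stripped part of the cast from the original; the variable name here is an assumption):

```c
#include <rte_ring.h>

/* Sketch of the dequeue pattern from the mail. A NULL object on a
 * *successful* dequeue means a NULL pointer was enqueued in the first
 * place (e.g. a producer-side bug or an unsynchronized enqueue), since
 * the ring itself only hands back what was put in. */
static void consume_one(struct rte_ring *ring)
{
    void *data = NULL;

    if (rte_ring_dequeue(ring, &data) != 0) {
        /* Ring empty (-ENOENT): handle situation. */
        return;
    }
    if (data == NULL) {
        /* Success return but NULL object: inspect the producer side. */
        return;
    }
    /* ... do stuff with data ... */
}
```

A useful check in this situation is to assert non-NULL at every enqueue site, which localizes whether the NULL enters the ring or appears on the consumer side.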

[dpdk-users] pkt mempool with custom refcnt

2021-01-20 Thread Filip Janiszewski
Hi, Is there a way to create/configure a mempool in such a way that, while RX bursting, the packets created from the pool have a refcnt greater than the default value of 1? I can iterate over all the mbufs and update the refcnt value manually for each received burst before pushing the bufs for

[dpdk-users] mempool item alignment

2021-01-04 Thread Filip Janiszewski
Hi, Is there a way to force the items allocated from a mempool to have a certain alignment? I've attempted to add: . #ifdef RTE_MEMPOOL_ALIGN #undef RTE_MEMPOOL_ALIGN #endif #define RTE_MEMPOOL_ALIGN 512 . before including the rte mempool header, but the pointers I get from rte_mempool_get

[dpdk-users] rte_mempool_get returning null

2021-06-30 Thread Filip Janiszewski
Hi, What would be the reason for rte_mempool_get returning null even if rte_mempool_avail_count returns a big positive number? I know it's positive since I've added a print in my code to test that. I have a strange issue where once I extract N items from the pool (where N is the exact size of the

Re: [dpdk-users] rte_mempool_get returning null

2021-06-30 Thread Filip Janiszewski
Wed, Jun 30, 2021, 6:51 PM Filip Janiszewski > mailto:cont...@filipjaniszewski.com>> wrote: > > Hi, > > What would be the reason for rte_mempool_get returning null even if > rte_mempool_avail_count returns a big positive number? I know is > positive

[dpdk-users] Performance of rte_eth_stats_get

2021-05-19 Thread Filip Janiszewski
Hi, Is it safe to call rte_eth_stats_get while capturing from the port? I'm mostly concerned about performance, i.e. whether rte_eth_stats_get will in any way impact the port performance. In the application I plan to call the function from a thread that is not directly involved in the capture; there's

[dpdk-users] EAL: UIO_RESOURCE_LIST tailq is already registered

2021-03-23 Thread Filip Janiszewski
Hi All, All of a sudden my DPDK-based app started to crash at the very beginning, when pretty much nothing has been done yet: . root build : ./DPDK_BASED_APP EAL: UIO_RESOURCE_LIST tailq is already registered PANIC in tailqinitfn_rte_uio_tailq(): Cannot initialize tailq: UIO_RESOURCE_LIST 6:

[dpdk-users] Intel XL710, timestamping

2021-03-03 Thread Filip Janiszewski
Hi, Is there a way to enable HW timestamps with this Intel card using DPDK by means of the IEEE1588 API functions? (DPDK 20.08). In my understanding the NIC should support hardware timestamping of RX packets if: "A separate PTP application would be required that communicates directly with

[dpdk-users] Proper configuration of pktgen

2021-04-14 Thread Filip Janiszewski
Hi, I'm trying to generate 56G worth of data with pktgen (on a 56G link with two mellanox endpoints), using this simple configuration: . pktgen -l 5-19 -- -P -m "[6:10-19].0" . On a Linux machine that is doing nothing but running pktgen, with such a command I'm able to send around 50G of data,

[dpdk-users] Configuring port input set, NIC XL710

2021-02-15 Thread Filip Janiszewski
Hi, I'm testing an Intel 700-Series (XL710) NIC using testpmd, given the instructions from here https://software.intel.com/content/www/us/en/develop/articles/intel-ethernet-controller-700-series-hash-and-flow-director-filters.html I wanted to modify the inset such that a UDP header checksum

Re: [dpdk-users] MLX ConnectX-4 Discarding packets

2021-09-12 Thread Filip Janiszewski
Alright, nailed it down to a wrong preferred PCIe device in the BIOS configuration; it had not been changed after the NIC was moved to another PCIe slot. Now the EPYC is going really great, getting 100Gbps rate easily. Thanks. Il 9/11/21 4:34 PM, Filip Janiszewski ha scritto: > I wan

[dpdk-users] pktgen not showing any capture.

2021-09-10 Thread Filip Janiszewski
Hi, While attempting to capture with pktgen, I see the counter rx_steer_missed_packets increasing in ethtool and nothing being captured. In pktgen, 'page stats' is always empty and 'page xstats' shows something is received, but I guess nothing is delivered to the queues. How should pktgen be

[dpdk-users] MLX ConnectX-4 Discarding packets

2021-09-10 Thread Filip Janiszewski
Hi, I've switched a 100GbE MLX ConnectX-4 card from an Intel Xeon server to an AMD EPYC server (running a 75F3 CPU, 256GiB of RAM and PCIe4 lanes), and using the same capture software we can't get any faster than 10Gbps; when exceeding that speed, regardless of the number of queues configured, the

Re: [dpdk-users] MLX ConnectX-4 Discarding packets

2021-09-11 Thread Filip Janiszewski
are configured - I've not observed this situation on the Intel server, where adding more queues/cores scales to higher throughput. This issue has been verified now with both Mellanox and Intel (810 series, 100GbE) NICs. Has anybody encountered anything similar? Thanks Il 9/10/21 3:34 PM, Filip

Re: [dpdk-users] MLX ConnectX-4 Discarding packets

2021-09-11 Thread Filip Janiszewski
mail), the Xeon doesn't lose anything. *Confusion!* Il 9/11/21 4:19 PM, Filip Janiszewski ha scritto: > Thanks, > > I knew that document and we've implemented many of those settings/rules, > but perhaps there's a crucial one I've forgotten? Wonder which one. > > Anyway, increa

Re: [dpdk-users] MLX ConnectX-4 Discarding packets

2021-09-11 Thread Filip Janiszewski
; Hope it helps. > > Cheers, > Steffen Weise > > >> Am 11.09.2021 um 10:56 schrieb Filip Janiszewski >> : >> >> I ran more tests, >> >> This AMD server is a bit confusing, I can tune it to capture 28Mpps (64 >> bytes frame) from on

Failed to create flow rule using E810 while setting priority 1

2021-11-30 Thread Filip Janiszewski
Hi, For some reason this rule cannot be created (DPDK 21.11, Intel E810-2CQDA2): . testpmd> flow create 0 ingress priority 1 pattern eth type spec 0x8000 type mask 0x8000 / end actions drop / end ice_flow_create(): Failed to create flow port_flow_complain(): Caught PMD error type 2 (flow rule

rte flow rule not clear with DPDK 21.11 and Intel E810

2021-11-30 Thread Filip Janiszewski
Hi, Is there any sensible reason for which this flow rule works: . testpmd> flow create 0 ingress pattern eth / ipv4 dst spec 199.168.152.2 dst mask 255.255.0.255 / end actions queue index 1 / end Flow rule #0 created . But this one not?: . testpmd> flow create 0 ingress pattern eth / ipv4 dst

Re: rte flow rule not clear with DPDK 21.11 and Intel E810

2021-12-01 Thread Filip Janiszewski
wrong with testpmd, or can anybody confirm that this is just not working until fixed? Thanks Il 12/1/21 8:48 AM, Filip Janiszewski ha scritto: > Hi, > > Is there any sensible reason for which this flow rule works: > > . > testpmd> flow create 0 ingress pattern eth / ipv4 dst sp

Re: flow rule to drop all the packets

2021-12-01 Thread Filip Janiszewski
to capture just the packets I'm interested in. So I guess if there's some Intel guy, they can have a look at why nothing like this is supported in the DPDK ICE driver (or perhaps the NIC?) Thanks Il 11/30/21 4:24 PM, Filip Janiszewski ha scritto: > Hi, > > Is there a way to create a flow rule t

rte flow priority not working

2021-11-30 Thread Filip Janiszewski
Hi, I have an Intel E810 NIC, and two flow rules: .
testpmd> flow list 0
ID   Group   Prio   Attr   Rule
1    0       0      i--    ETH IPV4 => QUEUE
0    0       1      i--    ETH IPV4 => DROP
. The one with priority 0 steers packets with a certain IP to queue 1, while the rule

flow rule to drop all the packets

2021-11-30 Thread Filip Janiszewski
Hi, Is there a way to create a flow rule that drops all the eth packets? I've attempted to set up a flow rule that matches any ether type, but it's never validated; for example a rule like: . rte_flow_item_eth eth_spec{}; eth_spec.hdr.ether_type = RTE_BE16(0x8100); rte_flow_item_eth
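For reference, in the rte_flow API an ETH pattern item whose spec and mask are left NULL matches any Ethernet frame, so a drop-all rule does not need to enumerate ether_type values. A hedged sketch of that approach (whether the PMD in question, ICE per this thread, accepts such a wildcard rule is device-dependent):

```c
#include <stdint.h>
#include <rte_flow.h>

/* Sketch: validate and create an ingress rule that drops every Ethernet
 * frame. Leaving spec/mask NULL on the ETH item means "match anything"
 * per rte_flow semantics; the drop action discards matched packets. */
static struct rte_flow *drop_all(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH }, /* NULL spec: match any frame */
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Validate first, as the thread does; creation may still fail on
     * PMDs that reject wildcard ETH matching. */
    if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, err);
}
```

If validation fails, `err->message` usually carries a PMD-specific reason, which is the quickest way to tell whether the wildcard match itself is the unsupported part.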

[dpdk-users] Mellanox NIC, DPDK 21.05 disappearing packets

2021-07-28 Thread Filip Janiszewski
Hi, I've noticed that some packets are disappearing while capturing with a Mellanox ConnectX4, from the xstats I can see that rx_good_packets != rx_phy_packets but no error counter of any kind is increased, how do I know what happened to those packets? Thanks -- BR, Filip +48 666 369 823

Re: rte_pktmbuf_free_bulk vs rte_pktmbuf_free

2022-01-11 Thread Filip Janiszewski
Il 1/11/22 7:02 PM, Stephen Hemminger ha scritto: > On Tue, 11 Jan 2022 13:12:24 +0100 > Filip Janiszewski wrote: > >> Hi, >> >> Is there any specific reason why using rte_pktmbuf_free_bulk seems to be >> much slower than rte_pktmbuf_free in a loop? (DPDK 21

rte_pktmbuf_free_bulk vs rte_pktmbuf_free

2022-01-11 Thread Filip Janiszewski
Hi, Is there any specific reason why using rte_pktmbuf_free_bulk seems to be much slower than rte_pktmbuf_free in a loop? (DPDK 21.11) I ran a bunch of tests on a 50GbE link where I'm getting packet drops (running with too few RX cores on purpose, for performance verification) and when

mlnx_qos configuration while dropping with DPDK 21.11

2022-02-07 Thread Filip Janiszewski
Hi, I have an issue where my Mellanox card can't get any faster than 75Mpps (64-byte frames) before starting to drop, and even that 75Mpps speed is maintained just for a short time; then it gets even slower, to around 55Mpps. (Not having the same issue with an Intel E810 using the same exact setup

ConnectX-6 timestamps

2022-04-09 Thread Filip Janiszewski
Hello, We have a customer with a brand new ConnectX-6 card (SW running on top of DPDK 21.02) that is observing timestamps going backward every few seconds (no strict time order). I would expect the card to always timestamp the packets in monotonically increasing order even if no timestamping

DPDK 22.03 substantially slower with Intel E810-C

2022-08-04 Thread Filip Janiszewski
Hello, DPDK 22.03 contains quite some changes to the ICE driver and the implementation for the Intel E810-C card. I'm running some tests, and while switching to this new version from 21.02 I see a performance degradation of around 30% using 4 capture cores at a 40Gbps rate (64-byte frames),