[MLX5, Windows] Flow rules are limited

2022-09-27 Thread Antoine POLLENUS
Hello,

I'm trying to receive packets with DPDK on Windows using a ConnectX-6 Dx. I need
to redirect some network traffic to a specific queue.

I have set DevxEnabled to true and DevxFsRules to 0xff.

When I try to set up a flow using testpmd, launched with this command:

./dpdk-testpmd -l 2-3 -n 4 -a 5e:00.0 --log-level=8 
--log-level=pmd.common.mlx5:8 --log-level=pmd.net.mlx5:8  -- --socket-num=0 
--burst=64 --txd=4096 --rxd=1024 --mbcache=512 --rxq=1 --txq=0 --nb-cores=1 
--txpkts=1500 -i --forward-mode=rxonly  --flow-isolate-all

testpmd> flow create 0 ingress pattern eth / ipv4 / end actions queue index 0 / 
end
mlx5_net: port 0 group=0 transfer=0 external=1 fdb_def_rule=0 translate=STANDARD
mlx5_net: port 0 group=0 table=0
mlx5_common: mlx5 list NIC_ingress_0_0_matcher_list was created.
mlx5_common: mlx5 list Mellanox ConnectX-6 Dx Adapter_ entry 0196D5E84990 
new: 1.
mlx5_net: table_level 0 table_id 0 tunnel 0 group 0 registered.
mlx5_common: mlx5 list NIC_ingress_0_0_matcher_list entry 0196EE1E8E40 new: 
1.
mlx5_common: mlx5 list hrxq entry 0196EE1E6300 new: 1.
Flow rule #0 created


I see the flow is created correctly.

But when I try to filter on the destination IP, I get an error:

testpmd> flow create 0 ingress pattern eth / ipv4 dst is 10.10.1.185 / end 
actions queue index 0 / end
mlx5_net: port 0 group=0 transfer=0 external=1 fdb_def_rule=0 translate=STANDARD
mlx5_net: port 0 group=0 table=0
mlx5_common: mlx5 list Mellanox ConnectX-6 Dx Adapter_ entry 0196D5E849E8 
ref: 2.
mlx5_net: table_level 0 table_id 0 tunnel 0 group 0 registered.000196D5E84990 
new: 1.
mlx5_common: mlx5 list NIC_ingress_0_0_matcher_list entry 0196EE1E5E80 new: 
1.
mlx5_common: mlx5 list hrxq entry 0196EE1E6380 ref: 2.0196EE1E8E40 new: 
1.
mlx5_common: mlx5 list NIC_ingress_0_0_matcher_list entry 0196EE1E5E80 
removed.
port_flow_complain(): Caught PMD error type 1 (cause unspecified): hardware 
refuses to create flow: Invalid argument

I also tried to filter on the source Ethernet MAC and I get the same error, but
on the destination MAC it works:
testpmd> flow create 0 ingress pattern eth dst is 10:10:10:10:10:10 / ipv4 / 
end actions queue index 0 / end
mlx5_net: port 0 group=0 transfer=0 external=1 fdb_def_rule=0 translate=STANDARD
mlx5_net: port 0 group=0 table=0
mlx5_common: mlx5 list Mellanox ConnectX-6 Dx Adapter_ entry 0196D5E849E8 
ref: 2.
mlx5_net: table_level 0 table_id 0 tunnel 0 group 0 registered.
mlx5_common: mlx5 list NIC_ingress_0_0_matcher_list entry 0196EE1E5E80 new: 
1.
mlx5_common: mlx5 list hrxq entry 0196EE1E6380 ref: 2.
Flow rule #1 created

testpmd> flow create 0 ingress pattern eth src is 10:10:10:10:10:10 / ipv4 / 
end actions queue index 0 / end
mlx5_net: port 0 group=0 transfer=0 external=1 fdb_def_rule=0 translate=STANDARD
mlx5_net: port 0 group=0 table=0
mlx5_common: mlx5 list Mellanox ConnectX-6 Dx Adapter_ entry 0196D5E849E8 
ref: 2.
mlx5_net: table_level 0 table_id 0 tunnel 0 group 0 registered.
mlx5_common: mlx5 list NIC_ingress_0_0_matcher_list entry 0196EE1E5E80 new: 
1.
mlx5_common: mlx5 list hrxq entry 0196EE1E6380 ref: 2.
mlx5_common: mlx5 list NIC_ingress_0_0_matcher_list entry 0196EE1E5E80 
removed.
port_flow_complain(): Caught PMD error type 1 (cause unspecified): hardware 
refuses to create flow: Invalid argument
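
For reference, these testpmd rules map onto plain rte_flow requests. A minimal
sketch of the IPv4 destination match in C (the first rule that fails above;
port id, queue index and address are taken from the example, error handling is
trimmed):

#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

/* Match eth / ipv4 dst 10.10.1.185 and steer to Rx queue 0 on port 0. */
static struct rte_flow *
create_ipv4_dst_rule(uint16_t port_id, struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_ipv4 ip_spec = {
                .hdr.dst_addr = RTE_BE32(RTE_IPV4(10, 10, 1, 185)),
        };
        struct rte_flow_item_ipv4 ip_mask = {
                .hdr.dst_addr = RTE_BE32(0xffffffff),
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                  .spec = &ip_spec, .mask = &ip_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
}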


Is this a limitation of the Windows version, or am I doing something wrong?

Regards,

Antoine Pollenus



[MLX5] Tx scheduling strange behavior on ConnectX-6 Dx

2022-09-14 Thread Antoine POLLENUS
Hello,

I'm trying to use the Tx scheduling feature on a ConnectX-6 Dx with the latest
firmware and DPDK 21.11, but I see some strange behavior.

When I give the NIC the timestamp through the dynamic mbuf field, the packets
go out way too early.

Time I want : 166238592556794
Received time : 1662385925469600500

The time I want is 100 ms in the future relative to the rte_eth_tx_burst call.
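
For context, this is roughly how the timestamp is attached to each packet
before rte_eth_tx_burst (a simplified sketch: the dynfield/dynflag values are
the ones looked up as in txonly.c, and the 100 ms offset and CLOCK_REALTIME
timebase are assumptions matching the description above):

#include <time.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Looked up once via rte_mbuf_dynfield_lookup()/rte_mbuf_dynflag_lookup(). */
static int timestamp_off;
static uint64_t timestamp_mask;

/* Schedule one mbuf 100 ms in the future, in nanoseconds; assumes the NIC
 * clock is synchronized to CLOCK_REALTIME via ptp4l/phc2sys. */
static void
set_tx_time(struct rte_mbuf *m)
{
        struct timespec ts;
        uint64_t when;

        clock_gettime(CLOCK_REALTIME, &ts);
        when = (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec
               + 100ULL * 1000 * 1000;

        *RTE_MBUF_DYNFIELD(m, timestamp_off, uint64_t *) = when;
        m->ol_flags |= timestamp_mask;
}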

Looking at the xstats, I see no packets scheduled in the past and none too far
in the future, but as shown below I do see tx_pp_sync_lost,
tx_pp_missed_interrupt_errors and tx_pp_rearm_queue_errors, which looks like
misbehavior.

tx_good_packets:83160
tx_good_bytes:528950026
tx_q0_packets:83160
tx_q0_bytes:528950026
rx_multicast_packets:64
rx_multicast_bytes:6318
tx_multicast_packets:66772
tx_multicast_bytes:94358324
tx_phy_packets:66750
rx_phy_packets:64
tx_phy_bytes:94595646
rx_phy_bytes:6318
tx_pp_missed_interrupt_errors:2
tx_pp_rearm_queue_errors:2
tx_pp_jitter:40
tx_pp_sync_lost:1

For information, my tx_pp devarg is set to 500. The NIC is locked to PTP with
ptp4l, and phc2sys is used to synchronize the system clock.

Another strange thing is that the data rate and the pacing seem OK, but every
packet goes out too early by a constant offset.

Sometimes I also see tx_pp_wander going really high at the start of the
transmission (more than 3000).

I tried with testpmd and see no errors, so it seems the problem is caused by
something I do.

My question is: what could cause the errors I see in the xstats? I think they
are the key to my problem.

Could you also explain a bit what those tx_pp xstats represent? Even when
looking at the source code it doesn't seem clear.

Thank you in advance for your help.

Regards,

Antoine Pollenus





Flow filtering issue: flow not deleted on NIC at time of rte_flow_destroy call

2022-07-08 Thread Antoine POLLENUS
Hello,

We have some issues with the flow filtering API.

When deleting a filter, just after rte_flow_destroy we clear the received
packets by draining all packets from that mempool.
The issue is that the NIC still delivers packets into that mempool after
rte_flow_destroy has returned.
We then reuse this mempool to receive packets from a different origin, and we
see that it still contains packets from the previous origin.

The filter we use is simply a queue action matching on the IP addresses and the
UDP source and destination ports.

The questions are:
- Is rte_flow_destroy asynchronous on the NIC side? We see the filter no longer
exists in DPDK.
- If yes, is there a way to know when the filter is effectively deleted on the
NIC?
- Is there a way to reset the mempool in a clean way without deleting it?

At this stage, the only fix we found to avoid this issue is to sleep for one
second after the rte_flow_destroy call and then receive the remaining packets
still present in the mempool.
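
For reference, the workaround looks roughly like this (a simplified sketch;
the queue id and burst size are illustrative):

#include <stdio.h>
#include <unistd.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_mbuf.h>

/* Destroy the rule, then wait and drain whatever the NIC still delivered into
 * the queue/mempool before reusing it for the next origin. */
static void
destroy_and_drain(uint16_t port_id, uint16_t queue_id, struct rte_flow *flow)
{
        struct rte_flow_error err;
        struct rte_mbuf *bufs[32];
        uint16_t nb;

        if (rte_flow_destroy(port_id, flow, &err) < 0)
                printf("destroy failed: %s\n",
                       err.message ? err.message : "(no stated reason)");

        sleep(1); /* crude: give the NIC time to stop matching the old rule */

        while ((nb = rte_eth_rx_burst(port_id, queue_id, bufs, 32)) > 0)
                rte_pktmbuf_free_bulk(bufs, nb);
}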

Hope I'll find help here,

Regards,

Antoine Pollenus


RE: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK

2022-05-02 Thread Antoine POLLENUS
I'm not really familiar with testpmd; how am I supposed to do that?

From: Asaf Penso [mailto:as...@nvidia.com]
Sent: Monday, May 2, 2022 11:30
To: Antoine POLLENUS ; users@dpdk.org; Slava Ovsiienko 

Subject: Re: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK

For example, I don't see you add the tx_pp devarg as part of the testpmd 
command line.

Regards,
Asaf Penso

From: Antoine POLLENUS <a.polle...@deltacast.tv>
Sent: Monday, May 2, 2022 11:53:18 AM
To: Asaf Penso <as...@nvidia.com>; users@dpdk.org; Slava Ovsiienko
<viachesl...@nvidia.com>
Subject: RE: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK


Thanks for your answer.

I had already read the doc on the subject but can't make it work in testpmd.
I haven't implemented it myself at this stage, but it seems I'm missing
something.

Do I need to enable a specific offload?



From: Asaf Penso [mailto:as...@nvidia.com]
Sent: Monday, May 2, 2022 09:59
To: Antoine POLLENUS <a.polle...@deltacast.tv>; users@dpdk.org; Slava Ovsiienko
<viachesl...@nvidia.com>
Subject: RE: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK



Hello Antoine,

Have you had a look into the mlx5 documentation?
http://doc.dpdk.org/guides/nics/mlx5.html
Please look for tx_pp.

I'm adding @Slava Ovsiienko <viachesl...@nvidia.com> in case you need further
support.

Regards,
Asaf Penso

From: Antoine POLLENUS <a.polle...@deltacast.tv>
Sent: Thursday, April 28, 2022 3:25 PM
To: users@dpdk.org
Subject: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK



Hello,

DPDK Version: 21.11
Firmware version : 22.32.1010
MLNX_OFED version: MLNX_OFED_LINUX-5.5-1.0.3.2-ubuntu20.04-x86_64

We are trying to use the DPDK Tx scheduling feature on a ConnectX-6 Dx adapter,
but the feature is not working for us.

The test uses testpmd in txonly mode.

Here are the commands used:

sudo ./dpdk-testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=1
--eth-peer=0,01:00:5e:00:00:08 --tx-ip=10.10.1.168,239.0.0.8
testpmd> set fwd txonly
testpmd> set burst 64
testpmd> set txtimes 100,1

By doing this I expect the feature to work. Am I missing something?

I also added a print in txonly.c and can clearly see that the feature is not
enabled:

dynf = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
if (dynf >= 0)
        timestamp_mask = 1ULL << dynf;
dynf = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
if (dynf >= 0)
        timestamp_off = dynf;

Both functions (rte_mbuf_dynfield_lookup and rte_mbuf_dynflag_lookup) return -1.

I also tried to enable the offload explicitly:

testpmd> port config 0 tx_offload send_on_timestamp on

but when doing this DPDK tells me the port does not have this offload
capability.

Hope you will be able to help me.

Regards

Antoine


RE: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK

2022-05-02 Thread Antoine POLLENUS
Thanks for your answer.

I had already read the doc on the subject but can't make it work in testpmd.
I haven't implemented it myself at this stage, but it seems I'm missing
something.

Do I need to enable a specific offload?


From: Asaf Penso [mailto:as...@nvidia.com]
Sent: Monday, May 2, 2022 09:59
To: Antoine POLLENUS ; users@dpdk.org; Slava Ovsiienko 

Subject: RE: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK

Hello Antoine,

Have you had a look into the mlx5 documentation?
http://doc.dpdk.org/guides/nics/mlx5.html
Please look for tx_pp.

I'm adding @Slava Ovsiienko <viachesl...@nvidia.com> in case you need further
support.

Regards,
Asaf Penso

From: Antoine POLLENUS <a.polle...@deltacast.tv>
Sent: Thursday, April 28, 2022 3:25 PM
To: users@dpdk.org
Subject: [ConnectX 6Dx]Issue using Tx scheduling feature in DPDK

Hello,

DPDK Version: 21.11
Firmware version : 22.32.1010
MLNX_OFED version: MLNX_OFED_LINUX-5.5-1.0.3.2-ubuntu20.04-x86_64

We are trying to use the DPDK Tx scheduling feature on a ConnectX-6 Dx adapter,
but the feature is not working for us.

The test uses testpmd in txonly mode.

Here are the commands used:

sudo ./dpdk-testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=1 
--eth-peer=0,01:00:5e:00:00:08 --tx-ip=10.10.1.168,239.0.0.8
testpmd> set fwd txonly
testpmd> set burst 64
testpmd> set txtimes 100,1

By doing this I expect the feature to work. Am I missing something?

I also added a print in txonly.c and can clearly see that the feature is not
enabled:

dynf = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
if (dynf >= 0)
        timestamp_mask = 1ULL << dynf;
dynf = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
if (dynf >= 0)
        timestamp_off = dynf;

Both functions (rte_mbuf_dynfield_lookup and rte_mbuf_dynflag_lookup) return -1.

I also tried to enable the offload explicitly:

testpmd> port config 0 tx_offload send_on_timestamp on
but when doing this DPDK tells me the port does not have this offload
capability.

Hope you will be able to help me.

Regards

Antoine


[ConnectX 6Dx]Issue using Tx scheduling feature in DPDK

2022-04-28 Thread Antoine POLLENUS
Hello,

DPDK Version: 21.11
Firmware version : 22.32.1010
MLNX_OFED version: MLNX_OFED_LINUX-5.5-1.0.3.2-ubuntu20.04-x86_64

We are trying to use the DPDK Tx scheduling feature on a ConnectX-6 Dx adapter,
but the feature is not working for us.

The test uses testpmd in txonly mode.

Here are the commands used:

sudo ./dpdk-testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=1 
--eth-peer=0,01:00:5e:00:00:08 --tx-ip=10.10.1.168,239.0.0.8
testpmd> set fwd txonly
testpmd> set burst 64
testpmd> set txtimes 100,1

By doing this I expect the feature to work. Am I missing something?

I also added a print in txonly.c and can clearly see that the feature is not
enabled:

dynf = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
if (dynf >= 0)
        timestamp_mask = 1ULL << dynf;
dynf = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
if (dynf >= 0)
        timestamp_off = dynf;

Both functions (rte_mbuf_dynfield_lookup and rte_mbuf_dynflag_lookup) return -1.

I also tried to enable the offload explicitly:

testpmd> port config 0 tx_offload send_on_timestamp on
but when doing this DPDK tells me the port does not have this offload
capability.
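
For what it's worth, rte_mbuf_dynflag_lookup()/rte_mbuf_dynfield_lookup()
return -1 until the Tx timestamp field and flag have been registered by
someone. A hedged sketch of how an application can register them itself and
request the offload (names as in recent DPDK releases; older ones use
DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, and for mlx5 the tx_pp devarg also needs to
be set on the device, as pointed out in the reply above):

#include <rte_ethdev.h>
#include <rte_mbuf_dyn.h>

static int timestamp_off;
static uint64_t timestamp_mask;

/* Register the Tx timestamp dynfield/dynflag and request the offload.
 * Assumes the port was allowlisted with the tx_pp devarg, e.g.
 * "-a <pci>,tx_pp=500" on the EAL command line. */
static int
setup_tx_scheduling(uint16_t port_id, struct rte_eth_conf *conf)
{
        struct rte_eth_dev_info info;
        uint64_t flag;
        int off;

        if (rte_mbuf_dyn_tx_timestamp_register(&off, &flag) < 0)
                return -1;
        timestamp_off = off;
        timestamp_mask = flag;

        if (rte_eth_dev_info_get(port_id, &info) < 0)
                return -1;
        if (!(info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP))
                return -1; /* PMD does not report the capability */

        conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
        return 0;
}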

Hope you will be able to help me.

Regards

Antoine


Understand main Lcore

2022-02-14 Thread Antoine POLLENUS
Hello,

We are struggling to understand how the main lcore works and what exactly it
does when it runs.

This need for understanding comes from issues we see when setting the main
lcore on core 0 of NUMA node 0.
When we do that, we see latency spikes in the packet interval time, as if
tx_burst were blocked for 1 ms, despite running on a different lcore than the
main one.

The spikes are really annoying because our use case is time sensitive.
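
For illustration only (cores and options are hypothetical, just to show which
knobs are involved), the placement is controlled by EAL arguments such as:

./app -l 0,2-5 --main-lcore 0 ...   (main lcore on core 0, workers on 2-5)
./app -l 2-5 --main-lcore 2 ...     (main lcore moved off core 0)

(--main-lcore is called --master-lcore on older DPDK releases.)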

The questions are:

- What is the exact role of the main lcore?
- How does the main lcore work?
- Do you have a recommendation on which CPU core to assign the main lcore to?
- Why would we observe the packet interval spikes only on core 0?
- Does having interrupts on the main lcore have an impact on the other lcores?

Thank you in advance for your help,

Antoine


[dpdk-users] [Broadcom BNXT] Link event not working

2021-09-17 Thread Antoine POLLENUS
Hi,

I'm experiencing some issues with a Broadcom P225P and DPDK version 19.11.3.

We register a handler to take care of link status changes, as specified in the
DPDK documentation.
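
For reference, the registration looks roughly like this (a simplified sketch;
the callback body is illustrative, and LSC interrupts must be enabled in the
port configuration):

#include <stdio.h>
#include <rte_ethdev.h>

static int
on_link_event(uint16_t port_id, enum rte_eth_event_type event,
              void *cb_arg, void *ret_param)
{
        struct rte_eth_link link;

        (void)event; (void)cb_arg; (void)ret_param;
        rte_eth_link_get_nowait(port_id, &link);
        printf("port %u link is %s\n", (unsigned int)port_id,
               link.link_status ? "up" : "down");
        return 0;
}

static void
register_link_callback(uint16_t port_id)
{
        /* Note: dev_conf.intr_conf.lsc = 1 must be set when configuring the
         * port, otherwise no link status change interrupts are delivered. */
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                                      on_link_event, NULL);
}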

But with this specific board the event is never triggered. I tested by plugging 
and unplugging the SFP28 cable.

The issue is kind of problematic because our code relies on this event.

I've tested with an Intel XXV710-DA2, and with that board there is no problem.

Is it normal that this link status event does not work on Broadcom?

Is it fixed in a higher version?

Regards,

Antoine Pollenus


Re: [dpdk-users] Issues with rte_flow_destroy

2021-06-01 Thread Antoine POLLENUS
Hello,

I've also tried with a Broadcom P225P and I hit a different issue there.

I get an error when destroying the filter:

bnxt_hwrm_clear_ntuple_filter(): error 1:16:00c00014:
destroy 2 message: Failed to destroy flow.

I'm really wondering whether I'm doing something wrong; I really need some help
with these issues.

Regards,

Antoine Pollenus


Re: [dpdk-users] Issues with rte_flow_destroy

2021-05-28 Thread Antoine POLLENUS
I tried with various versions, from 19.11 to the latest.

Could it be an issue with the firmware of the Intel cards I'm using?

If so, how can I get that firmware version?

Have you tried with my code in the flow_filtering example?

Regards,

Antoine Pollenus


Re: [dpdk-users] Issues with rte_flow_destroy

2021-05-26 Thread Antoine POLLENUS
I've also tested the flow director capabilities directly through ethtool, and
there it seems there is no issue: I can build the same workflow I want.

I've also tested through DPDK with an XL710 (40G) and I have the same issue, so
it seems the problem comes from somewhere in the i40e functions, maybe the
validate step.

This is really blocking for us and I have no idea how to fix it or work around
it.

Thank you in advance for your help,

Regards,

Antoine Pollenus


[dpdk-users] Issues with rte_flow_destroy

2021-05-25 Thread Antoine POLLENUS
Hi,

I'm experiencing some issues using the flow API with an Intel XXV710 (i40e).

I managed to reproduce it in the flow_filtering sample.

I'm creating one flow, then deleting it, and then creating another with a basic
change:
#define SRC_IP ((0<<24) + (0<<16) + (0<<8) + 0) /* src ip = 0.0.0.0 */
#define SRC_IP_1 ((192<<24) + (168<<16) + (1<<8) + 3) /* src ip = 192.168.1.3 */
#define DEST_IP ((192<<24) + (168<<16) + (1<<8) + 1) /* dest ip = 192.168.1.1 */
#define DEST_IP_1 ((192<<24) + (168<<16) + (1<<8) + 2) /* dest ip = 192.168.1.2 */

/* First rule: any source IP, destination IP 192.168.1.1. */
flow = generate_ipv4_flow(port_id, selected_queue,
                SRC_IP, EMPTY_MASK,
                DEST_IP, FULL_MASK, &error);
if (!flow) {
        printf("Flow can't be created %d message: %s\n",
                error.type,
                error.message ? error.message : "(no stated reason)");
        rte_exit(EXIT_FAILURE, "error in creating flow");
}

//Deleting the rule
int returned;
returned = rte_flow_destroy(port_id, flow, &error);
if (returned < 0) {
        printf("destroy %d message: %s\n",
                error.type,
                error.message ? error.message : "(no stated reason)");
}

//Generating another rule: source IP 192.168.1.3, destination IP 192.168.1.2
flow1 = generate_ipv4_flow(port_id, selected_queue,
                SRC_IP_1, FULL_MASK,
                DEST_IP_1, FULL_MASK, &error);
if (!flow1) {
        printf("Flow can't be created %d message: %s\n",
                error.type,
                error.message ? error.message : "(no stated reason)");
        rte_exit(EXIT_FAILURE, "error in creating flow");
}

When doing that, I always get an error on the second flow I want to add:

Flow can't be created 13 message: Conflict with the first rule's input set.

The rule indeed uses the same pattern as the previous one, but with the source
IP (and its mask) changed as well as the destination IP.

The strange thing is that a destroy has been done on the previous rule, so it
should not be there anymore.

Am I doing something wrong, or is there a bug in the destroy function?

Thank you in advance for your answer,

Regards,

Antoine Pollenus


[dpdk-users] [rte_flow]How to redirect all non matching traffic to a specific queue

2019-07-17 Thread Antoine POLLENUS
Hello,

I have a problem in my DPDK implementation.

I'm redirecting/filtering ingress traffic to a specific queue depending on the
UDP port, using rte_flow.

Now that this works, I would like to redirect all non-matching packets to
another specific queue.

How can I do that with rte_flow?
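
One approach that may work, depending on what the PMD supports, is to add a
lower-priority catch-all rule next to the UDP rules (a sketch; the queue index
and priority value are illustrative, and rte_flow priority support varies per
driver):

#include <rte_flow.h>

/* Catch-all: any Ethernet frame not taken by a higher-priority rule goes to
 * 'default_queue'. In rte_flow, priority 0 is the highest, so the specific
 * UDP rules would use priority 0 and this rule a larger value. */
static struct rte_flow *
create_default_rule(uint16_t port_id, uint16_t default_queue,
                    struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1, .priority = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = default_queue };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
}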

Thank you in advance for your answer.

Regards,

Antoine Pollenus


Re: [dpdk-users] What is the best threading technology when using DPDK ?

2019-07-01 Thread Antoine POLLENUS
Thanks a lot for your answer. I can now see clearly what I have to do in my 
implementation.

Regards

Antoine Pollenus

-Original Message-
From: Van Haaren, Harry [mailto:harry.van.haa...@intel.com] 
Sent: Monday, July 1, 2019 17:39
To: Antoine POLLENUS ; users@dpdk.org
Subject: RE: What is the best threading technology when using DPDK ?

> -Original Message-
> From: users [mailto:users-boun...@dpdk.org] On Behalf Of Antoine 
> POLLENUS
> Sent: Monday, July 1, 2019 2:20 PM
> To: users@dpdk.org
> Subject: [dpdk-users] What is the best threading technology when using 
> DPDK ?
> 
> Hello,

Hi Antoine,


> I'm developing a time-critical application using DPDK that requires
> multithreading. I'm wondering what threading technology I should use?
> 
> -  Can I use the standard pthread library, and if yes, is there a
> trade-off in terms of performance?
> 
> -  I see on this page that an lthread library also exists but is kind
> of limited in terms of functionality:
> https://doc.dpdk.org/guides/sample_app_ug/performance_thread.html
> 
> -  I see also that we can launch a function on another lcore using
> rte_eal_remote_launch(...)
> 
> Is there a recommendation, when using DPDK, to use one threading
> technology or another?

Good questions to ask, I'll bullet a few thoughts in reply;

- DPDK provides its own threading APIs that, depending on the platform, call
the OS-native implementation. For Linux this means pthreads. So by using DPDK's
thread APIs you're really using pthreads, but with a wrapper layer. This
wrapper layer means that you can recompile against other targets (Windows
support is WIP for DPDK) and you won't have to change your threading code.

- Lthreads are a way of scheduling large numbers of work items on a lower
number of "real" threads. Think of it as a scheduler implementation (like any
OS has, to multiplex tasks onto HW CPU cores). If you are running in a
time-critical domain, general practice is to avoid multiplexing and to dedicate
resources to the time-critical work. In short: I suggest you run a DPDK lcore
dedicated to the time-critical task, and do not use lthreads.

- The DPDK threading APIs use rte_eal_remote_launch() to "spawn" a worker
thread onto a given hardware thread of a CPU. (With hyper-threading, i.e.
running 2 "logical" threads on one "physical" core, this enumeration becomes a
little more complex, but is still valid.) DPDK uses this feature to do core
pinning, which means that a worker pthread is affinitized to a specific
hardware thread on the CPU. This stops the Linux scheduler from moving the
software thread to a different CPU core/thread, which is desirable as you want
to minimize jitter for time-sensitive workloads (switching to a different CPU
core/thread requires work, and hence takes time).

- For time-sensitive processing, my recommendation would be to spawn a worker
thread into a busy loop for the time-critical task. If possible, it is best to
dedicate that CPU to the task and not put any other work on that thread; this
will minimize the jitter/latency. A minimal sketch is shown after these notes.

- Investigate the "isolcpus" kernel boot parameter, and IRQ affinities, if you
have not already done so, to reduce jitter caused by the Linux scheduler and
IRQ subsystem interfering with the DPDK thread.


> Regards
> 
> Antoine Pollenus


Hope the above helps! Regards, -Harry


[dpdk-users] What is the best threading technology when using DPDK ?

2019-07-01 Thread Antoine POLLENUS
Hello,

I'm developing a time-critical application using DPDK that requires
multithreading. I'm wondering what threading technology I should use?


-  Can I use the standard pthread library, and if yes, is there a trade-off
in terms of performance?


-  I see on this page that an lthread library also exists but is kind of
limited in terms of functionality:
https://doc.dpdk.org/guides/sample_app_ug/performance_thread.html


-  I see also that we can launch a function on another lcore using
rte_eal_remote_launch(...)

Is there a recommendation, when using DPDK, to use one threading technology or
another?

Regards

Antoine Pollenus