Re: dpdk-testpmd works but dpdk pktgen crashes on startup with MLX5 card

2023-07-12 Thread Wiles, Keith
From: Maayan Kashani 
Date: Wednesday, July 12, 2023 at 8:57 AM
To: Antonio Di Bacco , users@dpdk.org 
Cc: Raslan Darawsheh , Ali Alnubani 
Subject: RE: dpdk-testpmd works but dpdk pktgen crashes on startup with MLX5 
card
Hi, Antonio,
Sorry for the late reply,
Thanks for bringing this issue to our attention.
We need to investigate it and will share more data once we have it.

Regards,
Maayan Kashani

> -Original Message-
> From: Antonio Di Bacco 
> Sent: Wednesday, 12 July 2023 0:52
> To: users@dpdk.org
> Subject: dpdk-testpmd works but dpdk pktgen crashes on startup with MLX5
> card
>
> External email: Use caution opening links or attachments
>
>
> If I try to use dpdk-pktgen on a MLX5 card, I get this SIGSEGV
>
> [user@dhcp-10-84-89-229 pktgen-dpdk]$  sudo
> LD_LIBRARY_PATH=/usr/local/lib64 ./usr/local/bin/pktgen -l50-54  -n 2  --allow
> c1:00.0 -- -P -m "52.1"

Hope the format is correct; I told macOS Outlook to reply as text, but it never 
seems to work. ☹

I noticed you define lcores with -l 50-54, which means lcore 50 is used for 
timers and display output, and lcores 51-54 are used for ports.
The one thing I see here is that you define an lcore.port mapping of -m “52.1”, 
meaning lcore 52 and port 1. You only have one port, which means it should be -m 
“52.0”; the other unused lcores will be reported as not used. Looks like I need 
to add some tests to detect this problem. ☹

I hope this helps. I did not see this email as I have a filter set to detect a 
subject line with Pktgen in the text.

>
> *** Copyright(c) <2010-2023>, Intel Corporation. All rights reserved.
> *** Pktgen  created by: Keith Wiles -- >>> Powered by DPDK <<<
>
> 0: mlx5_pci9  1   15b3:1019/c1:00.0
>
>
>
> *** Unable to create capture memzone for socket ID 2
> *** Unable to create capture memzone for socket ID 3
> *** Unable to create capture memzone for socket ID 4
> *** Unable to create capture memzone for socket ID 5
> *** Unable to create capture memzone for socket ID 6
> *** Unable to create capture memzone for socket ID 7
>  repeating message
> 
> *** Unable to create capture memzone for socket ID 219
> *** Unable to create capture memzone for socket ID 220
> *** Unable to create capture memzone for socket ID 221
> *** Unable to create capture memzone for socket ID 222
> WARNING: Nothing to do on lcore 51: exiting
> WARNING: Nothing to do on lcore 53: exiting
> WARNING: Nothing to do on lcore 54: exiting
> - Ports 0-0 of 1 Copyright(c) <2010-2023>, Intel Corporation
>   Port:Flags:
> Link State  :
> Pkts/s Rx   :
>Tx   :
> MBits/s Rx/Tx   :
> Pkts/s Rx Max   :
>Tx Max   :
> Broadcast   :
> Multicast   :
> Sizes 64:
>   65-127:
>   128-255   :
>   256-511   :
>   512-1023  :
>   1024-1518 :
> Runts/Jumbos:
> ARP/ICMP Pkts   :
> Errors Rx/Tx:
> Total Rx Pkts   :
>   Tx Pkts   :
>   Rx/Tx MBs :
> TCP Flags   :
> TCP Seq/Ack :
> Pattern Type:
> Tx Count/% Rate :
> Pkt Size/Rx:Tx Burst:
> TTL/Port Src/Dest   :
> Pkt Type:VLAN ID:
> 802.1p CoS/DSCP/IPP :
> VxLAN Flg/Grp/vid   :
> IP  Destination :
> Source  :
> MAC Destination :
> Source  :
> NUMA/Vend:ID/PCI:
> -- Pktgen 23.06.1 (DPDK 22.11.2)  Powered by DPDK  (pid:20433) 
> 
>
>
> == Pktgen got a Segment Fault
>
> Obtained 11 stack frames.
> ./usr/local/bin/pktgen() [0x43f1b8]
> /lib64/libc.so.6(+0x54df0) [0x7fe22a2a3df0]
> ./usr/local/bin/pktgen() [0x458859]
> ./usr/local/bin/pktgen() [0x4592cc]
> ./usr/local/bin/pktgen() [0x43d6d9]
> ./usr/local/bin/pktgen() [0x43d73a]
> ./usr/local/bin/pktgen() [0x41cd10]
> ./usr/local/bin/pktgen() [0x43f601]
> /lib64/libc.so.6(+0x3feb0) [0x7fe22a28eeb0]
> /lib64/libc.so.6(__libc_start_main+0x80) [0x7fe22a28ef60]
> ./usr/local/bin/pktgen() [0x404bf5]
>
>
> Testpmd works fine on the same card.
>
> Anyone can give me a suggestion?
>
> Best regards.


Re: [dpdk-users] DPDK-PKTGEN crashes when using Jumbo frame in AWS EC2

2021-05-05 Thread Wiles, Keith
The DPDK code states:

#define DEV_TX_OFFLOAD_MULTI_SEGS   0x8000 /**< Device supports multi segment send. */

That is all I know about this option, maybe someone else can explain how the 
drivers use this option.

From: Anand Gupta 
Date: Wednesday, May 5, 2021 at 7:58 AM
To: Wiles, Keith 
Cc: users@dpdk.org 
Subject: RE: [dpdk-users] DPDK-PKTGEN crashes when using Jumbo frame in AWS EC2
Hi Keith,

Thanks for the reply.

Can you explain, if possible, what “DEV_TX_OFFLOAD_MULTI_SEGS” does if it is 
used in Tx offload? What is its use in a jumbo frame scenario?

Thanks,
Anand

From: Wiles, Keith 
Sent: Wednesday, May 5, 2021 6:19 PM
To: Anand Gupta 
Cc: users@dpdk.org
Subject: Re: [dpdk-users] DPDK-PKTGEN crashes when using Jumbo frame in AWS EC2

We can try to test for multi-segs in Pktgen: in the file app/pktgen-port-cfg.c 
you can search for MULTI_SEGS and add the test.

if (info->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)
        conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

From: Anand Gupta <anand.gu...@keysight.com>
Date: Wednesday, May 5, 2021 at 6:54 AM
To: Wiles, Keith <keith.wi...@intel.com>
Cc: users@dpdk.org <users@dpdk.org>
Subject: RE: [dpdk-users] DPDK-PKTGEN crashes when using Jumbo frame in AWS EC2
Hi Keith,

I found the root cause of the problem "requested Tx offloads 0x8000 doesn't 
match Tx offloads capabilities 0xe" in EC2 instances.
The DPDK ENA driver does not have the "DEV_TX_OFFLOAD_MULTI_SEGS" capability in 
its Tx offload capabilities, and in Pktgen, when jumbo frames are enabled, the 
Tx offload configuration adds "DEV_TX_OFFLOAD_MULTI_SEGS".

Is "DEV_TX_OFFLOAD_MULTI_SEGS" necessary when jumbo frames are enabled in 
Pktgen, or can we first check whether the device supports the offload and only 
then add "DEV_TX_OFFLOAD_MULTI_SEGS"?

What is the mechanism of "DEV_TX_OFFLOAD_MULTI_SEGS" when enabling Jumbo frames?

Thanks,
Anand
-Original Message-
From: Wiles, Keith <keith.wi...@intel.com>
Sent: Friday, February 12, 2021 7:26 PM
To: Anand Gupta <anand.gu...@keysight.com>
Cc: users@dpdk.org <users@dpdk.org>
Subject: Re: [dpdk-users] DPDK-PKTGEN crashes when using Jumbo frame in AWS EC2

CAUTION: This message originates from an external sender.

> On Feb 11, 2021, at 11:03 PM, Anand Gupta <anand.gu...@keysight.com> wrote:
>
> Hello All,
>
> I'm trying to run DPDK-PKTGEN in AWS EC2 instances and am able to send 
> packets up to 1.5 KB.
> If we use Jumbo frame packets then we are getting error "requested Tx 
> offloads 0x8000 doesn't match Tx offloads capabilities 0xe".
>
> Setup:
> Ubuntu: 16.04
> Driver: ENA 2.0.3K
> DPDK: 18.11.1
> PKTGEN: 3.7.1

Please update to the latest Pktgen 21.02.0 with the latest DPDK version. If you 
cannot upgrade, then please look at the latest version, as some work on jumbo 
frames was done a few months ago.
>
> Can anyone help me with this issue ? Is anything I'm missing ?
>
> Thanks,
> Anand


Re: [dpdk-users] Running Pktgen with Mellanox CX5

2021-04-09 Thread Wiles, Keith


> On Apr 8, 2021, at 7:08 PM, Narcisa Ana Maria Vasile 
>  wrote:
> 
> Thanks for the quick reply! Just to clarify, the libmlx4 doesn't get 
> installed when installing the latest Mellanox drivers.
> Instead, the libmlx5 is installed, which I believe is the right one to use 
> with Cx5s.
> 
> It looks like pktgen needs libmlx4 to run, so does this mean that it is only 
> working with older Mellanox tools and NICs?
> I could try installing libmlx4, but my understanding was that libmlx4 is for 
> CX3s and libmlx5 is for CX4s and CX5s.

Pktgen has no requirement to work with any mlx library or hardware, for that 
matter. I do not know what this problem is, but I believe it is not a Pktgen 
problem. Did you try testpmd or any of the DPDK examples? If they work with 
mlx, then I can look at why Pktgen does not work in this case. If anything, 
libmlx5 is referencing the mlx4 library for some reason, and when you added the 
mlx5 PMD it also needed mlx4.

BTW, notice the error message states "EAL: libmlx4.so.1: cannot open shared 
object file: No such file or directory". EAL is DPDK initialization, not Pktgen.
> 
> -Original Message-
> From: Wiles, Keith  
> Sent: Thursday, April 8, 2021 2:12 PM
> To: Narcisa Ana Maria Vasile 
> Cc: users@dpdk.org; Kevin Daniel (WIPRO LIMITED) ; 
> Omar Cardona 
> Subject: [EXTERNAL] Re: Running Pktgen with Mellanox CX5
> 
> 
> 
>> On Apr 8, 2021, at 1:47 PM, Narcisa Ana Maria Vasile 
>>  wrote:
>> 
>> Hi,
>> 
>> I’m trying to run pktgen (latest ‘master’ branch) with a Mellanox CX5 NIC on 
>> Ubuntu 20.04.
>> I’ve installed the latest Mellanox drivers 
>> (MLNX_OFED_LINUX-5.3-1.0.0.1-ubuntu20.04-x86_64.iso).
>> I’ve compiled and installed DPDK successfully (latest ‘main’ branch).
>> 
>> As you can see below, I’m getting an error message saying “libmlx4.so.1: 
>> cannot open shared object file: No such file or directory”.
>> I am able to run other DPDK applications such as ‘testpmd’.
>> 
>> Is pktgen supported with the latest Mellanox drivers on CX5? Thank you!
>> 
>> --
>> pktgen -l 1,3,5 -a 04:00.0 -d librte_net_mlx5.so -- -P -m "[3:5].0" -T
>> 
>> Copyright(c) <2010-2021>, Intel Corporation. All rights reserved. Powered by 
>> DPDK
>> EAL: Detected 20 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected shared linkage of DPDK
>> EAL: libmlx4.so.1: cannot open shared object file: No such file or directory
>> EAL: FATAL: Cannot init plugins
>> EAL: Cannot init plugins
> 
> I have not built anything with mlx in a long time. My guess is that 
> libmlx4.so.1 is not located in a place the system can pick up. Maybe you need 
> to set LD_LIBRARY_PATH to the path where this library is located. I see you 
> included the DPDK PMD, but you still need to tell applications where to 
> locate the library. Another option is to add it to a file in 
> /etc/ld.so.conf.d/, or using pkg-config to locate the libs may help too. In 
> some cases external packages do not store the libs in a standard place for 
> ldconfig to locate, or the package does not provide ldconfig configuration 
> files.
>> 
>> Thank you,
>> Narcisa V.
> 



Re: [dpdk-users] Running Pktgen with Mellanox CX5

2021-04-08 Thread Wiles, Keith


> On Apr 8, 2021, at 1:47 PM, Narcisa Ana Maria Vasile 
>  wrote:
> 
> Hi,
>  
> I’m trying to run pktgen (latest ‘master’ branch) with a Mellanox CX5 NIC on 
> Ubuntu 20.04.
> I’ve installed the latest Mellanox drivers 
> (MLNX_OFED_LINUX-5.3-1.0.0.1-ubuntu20.04-x86_64.iso).
> I’ve compiled and installed DPDK successfully (latest ‘main’ branch).
>  
> As you can see below, I’m getting an error message saying “libmlx4.so.1: 
> cannot open shared object file: No such file or directory”.
> I am able to run other DPDK applications such as ‘testpmd’.
>  
> Is pktgen supported with the latest Mellanox drivers on CX5? Thank you!
>  
> --
> pktgen -l 1,3,5 -a 04:00.0 -d librte_net_mlx5.so -- -P -m "[3:5].0" -T
>  
> Copyright(c) <2010-2021>, Intel Corporation. All rights reserved. Powered by 
> DPDK
> EAL: Detected 20 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected shared linkage of DPDK
> EAL: libmlx4.so.1: cannot open shared object file: No such file or directory
> EAL: FATAL: Cannot init plugins
> EAL: Cannot init plugins

I have not built anything with mlx in a long time. My guess is that 
libmlx4.so.1 is not located in a place the system can pick up. Maybe you need 
to set LD_LIBRARY_PATH to the path where this library is located. I see you 
included the DPDK PMD, but you still need to tell applications where to locate 
the library. Another option is to add it to a file in /etc/ld.so.conf.d/, or 
using pkg-config to locate the libs may help too. In some cases external 
packages do not store the libs in a standard place for ldconfig to locate, or 
the package does not provide ldconfig configuration files.
>  
> Thank you,
> Narcisa V.



Re: [dpdk-users] Problems building pktgen

2021-04-08 Thread Wiles, Keith


> On Apr 8, 2021, at 11:45 AM, David Aldrich  
> wrote:
> 
> Let me know if you cannot get it working with the steps I provided; the 
> problem will be that I do not build Pktgen on CentOS.
> 
> Thank you for your help, it is building ok now.  Where is the best place to 
> ask for help with using pktgen? 

On DPDK.org email list with ‘[PKTGEN]’ in the subject line or on GitHub 
https://github.com/pktgen/Pktgen-DPDK



Re: [dpdk-users] Problems building pktgen

2021-04-07 Thread Wiles, Keith



> On Apr 7, 2021, at 11:44 AM, David Aldrich  
> wrote:
> 
> Hi
> 
> I am trying to build DPDK PktGen on Centos 7 using gcc 4.8.5, using the
> HEAD revisions of DPDK and pktgen.  The build is failing:
> 
> [76/2153] /usr/bin/meson --internal exe --capture lib/ip_frag.sym_chk --
> /data/daldrich/pktgen/dpdk/buildtools/check-symbols.sh
> /data/daldrich/pktgen/dpdk/lib/librte_ip_frag/version.map
> lib/librte_ip_frag.a
> FAILED: lib/ip_frag.sym_chk
> /usr/bin/meson --internal exe --capture lib/ip_frag.sym_chk --
> /data/daldrich/pktgen/dpdk/buildtools/check-symbols.sh
> /data/daldrich/pktgen/dpdk/lib/librte_ip_frag/version.map
> lib/librte_ip_frag.a
> rte_frag_table_del_expired_entries is flagged as experimental
> but is not listed in version map
> Please add rte_frag_table_del_expired_entries to the version map
> 
> Are the "Please add  to the version map" messages errors or just
> warnings?
> 
> I have put a longer version of this question on StackOverflow, which I
> would be grateful if anyone could take a look at:
> 
> Problems building DPDK PktGen - Stack Overflow
> 
> 
> Best regards
> David

I answered your post on Stack Overflow, but it is not normally where I look for 
Pktgen-related questions.

The short answer is the errors appear to be from DPDK not having experimental 
APIs enabled, but I could be wrong.

Let me know if you cannot get it working with the steps I provided; the problem 
will be that I do not build Pktgen on CentOS.

Re: [dpdk-users] Pktgen port mapping error

2021-04-02 Thread Wiles, Keith
You should pick an lcore that is on the same socket as the NIC (socket 0). It 
appears that lcore 1 is on socket 1. Please use the cpu_layout.py script in the 
DPDK tools directory to determine which lcores to use.

From: users  on behalf of Danushka Menikkumbura 

Date: Friday, April 2, 2021 at 11:37 AM
To: users@dpdk.org 
Subject: [dpdk-users] Pktgen port mapping error
Hello,

When I run pktget using the command line:

sudo pktgen -l 0-1 -n 3 -- -P -m 1.0

I get the following error:

PANIC in pktgen_main_rxtx_loop():

*** port 0 on socket ID 0 has different socket ID on lcore 1 socket ID 1

I have just one port on my machine. Please help me understand what I might
be missing here.

Thanks,
Danushka


Re: [dpdk-users] Pktgen with dynamic rate limiting

2021-03-30 Thread Wiles, Keith



From: users  on behalf of Danushka Menikkumbura 

Date: Tuesday, March 30, 2021 at 10:11 AM
To: users@dpdk.org 
Subject: [dpdk-users] Pktgen with dynamic rate limiting
Hello everyone,

I need to use high-throughput traffic generation, while being able to
control rate based on an input received from another process. Please let me
know how I can achieve this. Does pktgen have provisions for something like
that?
[Keith] Not sure I am fully following you, but Pktgen can be controlled over a 
socket using the -G/-g option. In the readme I have a couple of examples, but 
they may not be exactly what you want.
"  -g address   Optional IP address and port number, default is (localhost:0x5606)\n"
"               If -g is used, it enables socket support as a server application\n"
"  -G           Enable socket support using default server values localhost:0x5606\n"
Hopefully this will work for you.

Thank you!

Best,
Danushka


Re: [dpdk-users] DPDK-PKTGEN crashes when using Jumbo frame in AWS EC2

2021-02-12 Thread Wiles, Keith



> On Feb 11, 2021, at 11:03 PM, Anand Gupta  wrote:
> 
> Hello All,
> 
> I'm trying to run DPDK-PKTGEN in AWS EC2 instances and am able to send 
> packets up to 1.5 KB.
> If we use Jumbo frame packets then we are getting error "requested Tx 
> offloads 0x8000 doesn't match Tx offloads capabilities 0xe".
> 
> Setup:
> Ubuntu: 16.04
> Driver: ENA 2.0.3K
> DPDK: 18.11.1
> PKTGEN: 3.7.1

Please update to the latest Pktgen 21.02.0 with the latest DPDK version. If you 
cannot upgrade, then please look at the latest version, as some work on jumbo 
frames was done a few months ago.
> 
> Can anyone help me with this issue ? Is anything I'm missing ?
> 
> Thanks,
> Anand



Re: [dpdk-users] DPDK-PKTGEN crashes when using Jumbo frame in AWS EC2

2021-02-12 Thread Wiles, Keith




On Feb 11, 2021, at 11:03 PM, Anand Gupta  wrote:

Hello All,

I'm trying to run DPDK-PKTGEN in AWS EC2 instances and am able to send packets 
up to 1.5 KB.
If we use Jumbo frame packets then we are getting error "requested Tx offloads 
0x8000 doesn't match Tx offloads capabilities 0xe".

Setup:
Ubuntu: 16.04
Driver: ENA 2.0.3K
DPDK: 18.11.1
PKTGEN: 3.7.1

[KW] Please update to the latest Pktgen 21.02.0 with the latest DPDK version. 
If you can’t upgrade then please look at the latest version as some work for 
jumbo frames has been done.


Can anyone help me with this issue ? Is anything I'm missing ?

Thanks,
Anand



Re: [dpdk-users] DPDK-pktgen Flows

2021-01-04 Thread Wiles, Keith


> On Jan 3, 2021, at 10:03 AM, Merve Orakcı  wrote:
> 
> Hi everyone, I will get flows using the "range" command. So I am reviewing
> documents and sample codes about this topic. I would like to ask a few
> things that I cannot understand. What is the flow generation procedure of
> range command? Take port 0 as an example. 128 different ip addresses and
> 2000 different port numbers are used for source and destination. As far as
> I understand, these lines are repeated for each package.But how is the flow
> creation procedure?
> My opinions:
> First: Is it a random port number and ip address for each pass over the
> file? So total 128 * 2000= 256000 flows were created here.
> Second: when we think of increment operator, generated packets:
> 1. packet:  dst_ip: 192.168.1.1  ,  src_ip:  192.168.0.1 , dst_port: 2000 ,
> src_port: 5000
> 2. packet   dst_ip: 192.168.1.2  ,  src_ip:  192.168.0.2 , dst_port: 2001 ,
> src_port: 5001
> 
> 128. packet:   dst_ip: 192.168.1.128  ,  src_ip:  192.168.0.128 , dst_port:
> 2128 , src_port: 5128
> 129.packet  dst_ip: 192.168.1.1  ,  src_ip:  192.168.0.1 , dst_port:
> 2129, src_port: 5129  or (???)  dst_ip: 192.168.1.128  ,  src_ip:
> 192.168.0.128 , dst_port: 2129, src_port: 5129
> 
> then it will end when source and destination port numbers reach their
> highest value..
> 

The flow generation in Pktgen is pretty simple. Every time a packet is sent, 
the fields you have set up are updated. This means if you have four fields set 
up to increment, then they are changed for each packet sent. They are not 
random values, as you can tell below.

> 
> Can you help with this issue? I want to measure the performance of the
> system that I have created in different flow numbers with the pktgen tool.
> For this reason, the number of flows is important for me. Thanks for your
> help.
> 
> pktgen.range.dst_mac("0", "start", "3c:fd:fe:9c:5c:b8");
> 
> pktgen.range.src_mac("0", "start", "3c:fd:fe:9c:5c:d8");
> 
> pktgen.range.dst_ip("0", "start", "192.168.1.1");
> 
> pktgen.range.dst_ip("0", "inc", "0.0.0.1");
> 
> pktgen.range.dst_ip("0", "min", "192.168.1.1");
> 
> pktgen.range.dst_ip("0", "max", "192.168.1.128");
> 
> pktgen.range.src_ip("0", "start", "192.168.0.1");
> 
> pktgen.range.src_ip("0", "inc", "0.0.0.1");
> 
> pktgen.range.src_ip("0", "min", "192.168.0.1");
> 
> 
> pktgen.range.src_ip("0", "max", "192.168.0.128");
> 
> pktgen.set_proto("0", "udp");
> 
> pktgen.range.dst_port("0", "start", 2000);
> 
> pktgen.range.dst_port("0", "inc", 1);
> 
> pktgen.range.dst_port("0", "min", 2000);
> 
> pktgen.range.dst_port("0", "max", 4000);
> 
> pktgen.range.src_port("0", "start", 5000);
> 
> pktgen.range.src_port("0", "inc", 1);
> 
> pktgen.range.src_port("0", "min", 5000);
> 
> pktgen.range.src_port("0", "max", 7000);
> 
> pktgen.range.pkt_size("0", "start", 64);
> 
> pktgen.range.pkt_size("0", "inc", 0);
> 
> pktgen.range.pkt_size("0", "min", 64);
> 
> pktgen.range.pkt_size("0", "max", 256);
> 
> -- Set up second port
> 
> pktgen.range.dst_mac("1", "start", "3c:fd:fe:9c:5c:d8");
> 
> pktgen.range.src_mac("1", "start", "3c:fd:fe:9c:5c:b8");
> 
> 
> pktgen.range.dst_ip("1", "start", "192.168.0.1");
> 
> pktgen.range.dst_ip("1", "inc", "0.0.0.1");
> 
> pktgen.range.dst_ip("1", "min", "192.168.0.1");
> 
> pktgen.range.dst_ip("1", "max", "192.168.0.128");
> 
> pktgen.range.src_ip("1", "start", "192.168.1.1");
> 
> pktgen.range.src_ip("1", "inc", "0.0.0.1");
> 
> pktgen.range.src_ip("1", "min", "192.168.1.1");
> 
> pktgen.range.src_ip("1", "max", "192.168.1.128");
> 
> pktgen.set_proto("all", "udp");
> 
> pktgen.range.dst_port("1", "start", 5000);
> 
> pktgen.range.dst_port("1", "inc", 1);
> 
> pktgen.range.dst_port("1", "min", 5000);
> 
> pktgen.range.dst_port("1", "max", 7000);
> 
> pktgen.range.src_port("1", "start", 2000);
> 
> pktgen.range.src_port("1", "inc", 1);
> 
> pktgen.range.src_port("1", "min", 2000);
> 
> pktgen.range.src_port("1", "max", 4000);
> 
> pktgen.range.pkt_size("1", "start", 64);
> 
> pktgen.range.pkt_size("1", "inc", 0);
> 
> pktgen.range.pkt_size("1", "min", 64);
> 
> pktgen.range.pkt_size("1", "max", 256);
> 
> pktgen.set_range("all", "on");
> 
> -- 
> *Merve Orakcı*
> Research Asistant
> Gazi University - Institute of Informatics
> Computer Forensics
> Phone :+90 0312 202 3814



Re: [dpdk-users] DPDK-pktgen Flows

2020-12-22 Thread Wiles, Keith



From: users 
Date: Monday, December 21, 2020 at 2:12 PM
To: users 
Subject: [dpdk-users] DPDK-pktgen Flows
Hi everyone, I want to perform tests by creating different numbers of
flows. Are
there any commands that I can see the number of flows I have created? Also,
where can I find flow samples created in different numbers? I could get
very little information in the document. For example how do I create 1000
or 1 flows?

Look into the range command in Pktgen (‘help range’ on the command line); TRex 
from Cisco could also be an option.

The range command allows you to adjust fields in the packets with 
starting/ending and increment values. The ‘page range’ screen shows the 
configuration per port.

Switch between ports with the ‘port X’ command to see other port configs while 
in the ‘page range’ screen.
Make sure you use the ‘enable  range’ command to enable range sending 
on a port.

One of the limitations of range is that it will slow down the traffic, as 
Pktgen needs to touch every packet.


--
*Merve Orakcı*
Research Assistant
Gazi University - Institute of Informatics
Computer Forensics
Phone :+90 0312 202 3814


Re: [dpdk-users] pktgen-dpdk

2020-12-08 Thread Wiles, Keith



> On Dec 7, 2020, at 11:59 PM, Madi Nuralin  wrote:
> 
> *Hi all,
> **  I have a question about pktgen-dpdk: Does pktgen-dpdk support
> IPv6 addressing, if not will it be implemented in the future?*

Sorry, Pktgen does not support IPv6. It would be nice if someone sent a patch 
to support IPv6. :-)
> 
> *Best regards,*
> 
> *Madi*



Re: [dpdk-users] DPDK-Pktgen big pcap files

2020-12-05 Thread Wiles, Keith


> On Dec 5, 2020, at 4:36 AM, Merve Orakcı  wrote:
> 
> Hi everyboyd, I have 58 GB pcap file. I want to use this pcap file and send
> packet over DPDK port. For this reason, I use Pktgen tool but when i try to
> replay pcap file. pktgen aborted. This is about not enough memory for this
> pcap file? How should I adjust pktgen?
> 
> root@lucky-X10SRA:/home/lucky# grep -i huge /proc/meminfo
> AnonHugePages: 0 kB
> ShmemHugePages:0 kB
> HugePages_Total:  28
> HugePages_Free:   28
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:1048576 kB

Getting a 58GB pcap file to be sent by Pktgen is going to be hard, as the 
number of packet buffers currently allocated by Pktgen is not that large. The 
other problem is that you only have 28 hugepages. For Pktgen to use that large 
a file it needs to load all 58GB into memory, and then we have to adjust the 
number of buffers that Pktgen allocates, which means changing the code. The 
pcap file is somewhat compressed, meaning packets are back to back, and Pktgen 
would (normally) need a 2K buffer for even one 64-byte frame. So multiply the 
number of frames in the 58GB file by 2K, and that is how much memory you would 
need to hold that 58GB file.

Maybe some other tool is needed to send this large a file. It appears I am not 
going to be much help here. :-(

> 
> -- 
> *Merve Orakcı*
> Research Assistant
> Gazi University - Institute of Informatics
> Computer Forensics
> Phone :+90 0312 202 3814



Re: [dpdk-users] question related to build issue with Pktgen-DPDK tool

2020-10-29 Thread Wiles, Keith


> On Oct 28, 2020, at 9:33 PM, Sudharshan Krishnakumar 
>  wrote:
> 
> Hi All,
> 
> This is the first time, I am posting to this group, please excuse if there
> are any errors.
> 
> I have a much older version of DPDK(17.08) and corresponding old version of
> PktGen both built using Makefiles,
> working fine on my system.
> 
> Trying to move to latest DPDK(20.08) and PktGen tool.
> Based on what I read, I am guessing it is NOT possible to build latest
> version of Pktgen using make,
> needs to be built using meson/ninja. So I have tried to build both DPDK and
> Pktgen tool using meson/ninja.
> 
> Running into a build issue with latest Pktgen(using meson/ninja)->seeing
> this error->
> meson.build:58:0: ERROR:  Native dependency 'libdpdk' not found.
> 
> I have the 20.08 DPDK built and installed(in a different prefix path, other
> than the default /usr/local):
> 
> DPDK Build/install steps:
> meson build --prefix /home/my_userid/new_dpdk/dpdk-20.08/install
> ninja -C build
> ninja -C build install
> 
> ~/new_dpdk/dpdk-20.08$ find . -name libdpdk.a
> ./x86_64-native-linux-gcc/lib/libdpdk.a
> ./install/lib/x86_64-linux-gnu/libdpdk.a
> 
> ~/new_dpdk/dpdk-20.08$ find . -name *.pc
> ./build/meson-private/libdpdk.pc
> ./build/meson-private/libdpdk-libs.pc
> ./install/lib/x86_64-linux-gnu/pkgconfig/libdpdk.pc
> ./install/lib/x86_64-linux-gnu/pkgconfig/libdpdk-libs.pc
> 
> For building Pktgen-DPDK, I am setting the path PKG_CONFIG_PATH to point to
> the libdpdk.pc file,
> but it still doesn’t find the libdpdk library(not sure why, that is the
> case).
> 
> ~/new_pktgen_dpdk/Pktgen-DPDK$
> export
> PKG_CONFIG_PATH=/home/my_userid/new_dpdk/dpdk-20.08/install/lib/x86_64-linux-gnu/pkgconfig/libdpdk.pc
> 
> When building PktGen seeing this error-> meson.build:58:0: ERROR:  Native
> dependency 'libdpdk' not found.
> 
> Have the build logs below, can you please let me know.
> 
> ~/new_pktgen_dpdk/Pktgen-DPDK$ make
> Use 'make help' for more commands
> 
> ./tools/pktgen-build.sh build
> lua_enabled  : '-Denable_lua=false'
> gui_enabled  : '-Denable_gui=false'
> SDK Directory: '/home/my_userid/new_pktgen_dpdk/Pktgen-DPDK'
> Build Directory  :
> '/home/my_userid/new_pktgen_dpdk/Pktgen-DPDK/Builddir'
> Target Directory : '/home/my_userid/new_pktgen_dpdk/Pktgen-DPDK/usr'
> 
> Ninja build in '/home/my_userid/new_pktgen_dpdk/Pktgen-DPDK/Builddir'
> buildtype='release'
> meson -Dbuildtype=release -Denable_lua=false -Denable_gui=false Builddir
> The Meson build system
> Version: 0.47.1
> Source dir: /home/my_userid/new_pktgen_dpdk/Pktgen-DPDK
> Build dir: /home/my_userid/new_pktgen_dpdk/Pktgen-DPDK/Builddir
> Build type: native build
> Program cat found: YES (/bin/cat)
> Project name: pktgen
> Project version: 20.10.0
> Native C compiler: cc (gcc 7.5.0 "cc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0")
> Build machine cpu family: x86_64
> Build machine cpu: x86_64
> Compiler for C supports arguments -mavx2: YES
> Compiler for C supports arguments -Wno-pedantic -Wpedantic: YES
> Compiler for C supports arguments -Wno-format-truncation
> -Wformat-truncation: YES
> Found pkg-config: /usr/bin/pkg-config (0.29.1)
> 
> meson.build:58:0: ERROR:  Native dependency 'libdpdk' not found

The latest Pktgen uses meson/ninja, and DPDK must be built and installed, which 
appears to be the case. The ‘find’ commands you ran were only looking in the 
local build directories and not in the DPDK installed location.

The normal installed location of DPDK is here from the ‘pkg-config’ command:

rkwiles@purley (main):.../intel/dpdk$ pkg-config --libs libdpdk
-L/usr/local/lib/x86_64-linux-gnu -Wl,--as-needed -lrte_node -lrte_graph 
-lrte_bpf -lrte_flow_classify -lrte_pipeline -lrte_table -lrte_port -lrte_fib 
-lrte_ipsec -lrte_vhost -lrte_stack -lrte_security -lrte_sched -lrte_reorder 
-lrte_rib -lrte_regexdev -lrte_rawdev -lrte_pdump -lrte_power -lrte_member 
-lrte_lpm -lrte_latencystats -lrte_kni -lrte_jobstats -lrte_ip_frag -lrte_gso 
-lrte_gro -lrte_eventdev -lrte_efd -lrte_distributor -lrte_cryptodev 
-lrte_compressdev -lrte_cfgfile -lrte_bitratestats -lrte_bbdev -lrte_acl 
-lrte_timer -lrte_hash -lrte_metrics -lrte_cmdline -lrte_pci -lrte_ethdev 
-lrte_meter -lrte_net -lrte_mbuf -lrte_mempool -lrte_rcu -lrte_ring -lrte_eal 
-lrte_telemetry -lrte_kvargs -lbsd

Please note the location is the /usr/local/lib/x86_64-linux-gnu directory for 
libs and /usr/local/lib/x86_64-linux-gnu/pkgconfig for the libdpdk.pc file.

The PKG_CONFIG_PATH should be set to /usr/local/lib/x86_64-linux-gnu/pkgconfig 
if needed.

I am using Ubuntu 20.04.1 and I do not have PKG_CONFIG_PATH set. It is possible 
the /etc/ld.so.conf.d/x86_64-linux-gnu.conf contains the 
/usr/local/lib/x86_64-linux-gnu directory as a search path for DPDK libs and 
libdpdk.pc file.

Because you don’t have DPDK installed in the default location, you will need to 
install it in the default location or find the magic path to the libdpdk.pc 
file and set PKG_CONFIG_PATH. Make sure you 

Re: [dpdk-users] Replicating PCAP packets read by pktgen for multiple source addresses

2020-09-14 Thread Wiles, Keith
You can edit the packet in the function below, but you have to do that action 
every time. Freeing the mbuf is OK; you can look into the refcnt value in the 
mbuf if you want tx_burst to not free the packet completely.

From: users 
Date: Sunday, September 13, 2020 at 5:00 PM
To: users@dpdk.org 
Subject: [dpdk-users] Replicating PCAP packets read by pktgen for multiple 
source addresses
Hi,

I want to use pktgen to read a pcap file and play that pcap file say 10
times by editing the source IP address for each run, please let me know
what's the best way of doing this.

I first thought of editing the source IP in the rte_mbuf pointed by pkts
before calling the rte_eth_tx_burst(info->pid, qid, pkts, cnt) function but
it seems that rte_eth_tx_burst function frees up the rte_mbuf pointed by
pkts after sending the packet once so I cannot edit the pkts buffer again.


static __inline__ void
trafficgen_send_burst(port_info_t *info, uint16_t qid)
{
    struct mbuf_table *mtab = &info->q[qid].tx_mbufs;
    struct rte_mbuf **pkts;
    struct qstats_s *qstats;
    uint32_t ret, cnt, tap, rnd, tstamp, i;
    int32_t seq_idx;

    if ((cnt = mtab->len) == 0)
        return;

    mtab->len = 0;
    pkts = mtab->m_table;

    if (trafficgen_tst_port_flags(info, SEND_RANGE_PKTS))
        seq_idx = RANGE_PKT;
    else if (trafficgen_tst_port_flags(info, SEND_RATE_PACKETS))
        seq_idx = RATE_PKT;
    else
        seq_idx = SINGLE_PKT;

    tap = trafficgen_tst_port_flags(info, PROCESS_TX_TAP_PKTS);
    rnd = trafficgen_tst_port_flags(info, SEND_RANDOM_PKTS);
    tstamp = trafficgen_tst_port_flags(info,
                                       (SEND_LATENCY_PKTS | SEND_RATE_PACKETS));

    qstats = &info->qstats[qid];
    qstats->txpkts += cnt;
    for (i = 0; i < cnt; i++)
        qstats->txbytes += rte_pktmbuf_data_len(pkts[i]);

    /* Inserting a for loop here doesn't help, as the rte_mbuf is freed by
     * rte_eth_tx_burst */
    /* Send all of the packets before we can exit this function */
    while (cnt) {
        if (rnd)
            trafficgen_rnd_bits_apply(info, pkts, cnt, NULL);

        if (tstamp)
            trafficgen_tstamp_apply(info, pkts, cnt, seq_idx);

        ret = rte_eth_tx_burst(info->pid, qid, pkts, cnt);

        if (tap)
            trafficgen_do_tx_tap(info, pkts, ret);

        pkts += ret;
        cnt -= ret;
    }
}

Basically, I want to resend buffers stored in rte_mbuf again and again,
after modifying source IP in each run.

Thanks
Ravi


Re: [dpdk-users] Regarding pktgen-dpdk

2020-08-28 Thread Wiles, Keith



-Original Message-
From: users 
Date: Friday, August 28, 2020 at 9:14 AM
To: users@dpdk.org 
Subject: [dpdk-users] Regarding pktgen-dpdk
Greetings of the day,
I am new to pktgen and dpdk.
I am using dpdk-19.11.3.
I downloaded pktgen-dpdk and when I try to run the command make, it gives the error
meson.build:31:0: error: dependency "libbsd" not found, tried pkgconfig
and cmake.

Unless you need to use 19.11.3 for Pktgen, please move to the latest versions of 
DPDK and Pktgen. You can still use DPDK 19.11.3 for your application, as Pktgen 
does not need to use the same version.

I use Ubuntu 20.04, and the command to install libbsd is ‘sudo apt-get install 
libbsd-dev’; you did not tell me your OS distro and version. I am surprised 
DPDK would build, as it also requires libbsd.

With the latest DPDK and Pktgen versions here are the commands to build pktgen 
if you have all of the system dependencies addressed.

# cd DPDK
# meson build
# ninja -C build
# sudo ninja -C build install

# cd pktgen-dpdk
# make rebuild

The above will build DPDK and install it into your system, then build Pktgen. 
Let me know if you have more Pktgen build issues.

Please help me

Thank you
Regards
Utkarsh




Re: [dpdk-users] pktgen can't send packet forever

2020-07-13 Thread Wiles, Keith



> On Jul 13, 2020, at 3:32 AM, Xu, Chenjie  wrote:
> 
> Dear all,
> I'm using pktgen 17.11.3 and pktgen can only send packets for a few seconds 
> even though pktgen is configured to send packets forever as below:
> Tx Count/% Rate   :   Forever /100%
> 
> Does anyone know how to fix this issue? By the way, I'm following the below 
> guide to use pktgen:
> https://github.com/intel/userspace-cni-network-plugin#testing-with-dpdk-l3fwd-application

You must mean DPDK 17.11.3 and Pktgen as Pktgen does not have a version number 
of 17.11.3. What is the version of Pktgen you are using?

The next thing I would like to ask is whether you can move to using the latest 
Pktgen and latest DPDK?
With my limited time I try to only debug the latest DPDK and latest Pktgen. Using 
the latest versions, Pktgen should work, and you can still use DPDK 17.11.3 for 
your application if you need. The only reason to use an old version of 
DPDK/Pktgen is if some driver version or OS version requires you to use these 
older versions.
> 
> Best Regards,
> Xu, Chenjie
> 



Re: [dpdk-users] Does 20.05 version has process_info lib?

2020-06-30 Thread Wiles, Keith


> On Jun 25, 2020, at 2:30 AM, 张斌  wrote:
> 
> Dear:
>   I am a newbie to DPDK. I want to compile pktgen-dpdk for testing, which 
> needs the process_info lib. But I cannot find the lib source in the DPDK 20.05 
> version; where can I get the source code?
> 

DPDK version 20.05 contains the redesign of telemetry, which contains the old 
process_info support. Pktgen-dpdk does not require process_info support; 
which version of pktgen are you using?

> 
> Thanks!



Re: [dpdk-users] the latest version of pktgen-dpdk does not work well

2020-05-25 Thread Wiles, Keith


> On May 25, 2020, at 10:42 AM, tang wei  wrote:
> 
> Hi all,
>  I have a question about pktgen-dpdk, please help !
> The application has bound the vfio-pci driver and can bring up normally,
> but cannot send packets, and there is no response after receiving no more than 
> 1023 packets.
> Are there some logs for the application that I can check, or some debug 
> methods?
> Best regards,
> wilton
> 
> 
> Sent from the Mail app for Windows 10
> 

Please include the DPDK version, Pktgen version and the pktgen command line you 
are using.

Re: [dpdk-users] pktgen: Invalid link properties for slave 0 in bonding mode 4

2019-06-11 Thread Wiles, Keith



> On Jun 11, 2019, at 11:06 AM, Vincent Li  wrote:
> 
>>> thanks Keith for looking into it, I guess I can try some older version
>>> of DPDK and pktgen, what version that might be?
>> It would be pretty old and I have no idea which version would work :-(
> 
> FYI, I tested DPDK 18.11.1 testpmd txonly mode with bonding, it
> appears working ok, I guess the problem is in pktgen, not in DPDK
> then, correct?

Yes, I had assumed it was in Pktgen. I am working on getting some help from the 
bonding PMD folks to see what the problem is with the configuration in Pktgen.

Regards,
Keith



Re: [dpdk-users] pktgen - options for TX per range of flows

2019-06-05 Thread Wiles, Keith
Sorry, none of the questions below are supported. You can have multiple ports, 
each sending at a different rate. The mice/elephant and entropy options are not 
supported.

> On Jun 5, 2019, at 10:02 AM, Sara Gittlin  wrote:
> 
> Hi  Guys
> is there an option to :
> - set high/low  tx  rate for a range of flows ?
> - set mice/elephant tx per range of flows ?
> - set high/low tx entropy per range of flows ?
> Appreciate your help
> -Sara

Regards,
Keith



Re: [dpdk-users] pktgen: Invalid link properties for slave 0 in bonding mode 4

2019-06-04 Thread Wiles, Keith
> On Jun 4, 2019, at 10:06 AM, Vincent Li  wrote:
> 
>> I am looking at the problem, it does appear something changed.
>> 
> 
> thanks Keith for looking into it, I guess I can try some older version
> of DPDK and pktgen, what version that might be?
It would be pretty old and I have no idea which version would work :-(

Regards,
Keith



Re: [dpdk-users] pktgen: Invalid link properties for slave 0 in bonding mode 4

2019-06-03 Thread Wiles, Keith



> On May 31, 2019, at 3:54 PM, Vincent Li  wrote:
> 
> Hi,
> 
> I am running Pktgen 3.6.6 + DPDK 18.11.1. when I ran Pktgen with
> bonding as  ( Pktgen runs fine without bonding and pass packet ok):
> 
> root@compute1:/home/dpdk/pktgen-3.6.6#
> ./app/x86_64-native-linuxapp-gcc/pktgen -c ff
> --vdev='net_bonding0,mode=4,xmit_policy=l34,slave=:01:00.0,slave=:01:00.1'
> -- -P -m [1:2-7].2

I am looking at the problem, it does appear something changed.


Regards,
Keith



Re: [dpdk-users] pktgen - transmitting multiple sessions

2019-06-03 Thread Wiles, Keith


> On Jun 3, 2019, at 6:28 AM, Sara Gittlin  wrote:
> 
> Hello all,
> can someone refer me to a script or commands to transmit multiple sessions
> i.e   to all destinations in  subnet  10.0.0.0/8 ?

Look at the range command ‘page range’ and the other commands to setup range, 
i.e. range …
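As a sketch (range-command syntax varies between Pktgen releases, so check `help range` on your build; the addresses and increment below are illustrative), sweeping destination IPs across 10.0.0.0/8 on port 0 might look like:

```
page range
range 0 dst ip start 10.0.0.1
range 0 dst ip min 10.0.0.1
range 0 dst ip max 10.255.255.254
range 0 dst ip inc 0.0.0.1
enable 0 range
start 0
```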

> Thank you
> -Sara

Regards,
Keith



Re: [dpdk-users] pktgen compilation error

2019-05-29 Thread Wiles, Keith


> On May 29, 2019, at 8:25 AM, Wiles, Keith  wrote:
> 
> 
> 
>> On May 29, 2019, at 4:09 AM, Sara Gittlin  wrote:
>> 
>> Hi
>> I git clone git://dpdk.org/apps/pktgen-dpdk
>> i have ubuntu 16.04 w  4.15.0-50-generic kernel
>> i've install libpcap-dev
> 
> I am going to guess you are using DPDK master HEAD and Pktgen master HEAD; a 
> number of changes are going on in DPDK after the 19.05 release and they most 
> likely have broken Pktgen builds. Please check out the 19.05 release tag of 
> DPDK and that should work.
> 
> I hope to sync back up with DPDK soon.

I just got back to DPDK 19.05 (been busy) and pktgen does not build with DPDK 
19.05, so use 19.02 please. I will see if I can get this fixed soon.
> 
>> 
>> i get this error when make pktgen
>> /lib/common/pg_inet.h:154:17: error: field ‘udp’ has incomplete type
>> struct udp_hdr udp; /* UDP header for protocol */
>> 
>> Appreciate your help
>> Regards
>> -Sara
> 
> Regards,
> Keith
> 

Regards,
Keith



Re: [dpdk-users] pktgen compilation error

2019-05-29 Thread Wiles, Keith


> On May 29, 2019, at 4:09 AM, Sara Gittlin  wrote:
> 
> Hi
> I git clone git://dpdk.org/apps/pktgen-dpdk
> i have ubuntu 16.04 w  4.15.0-50-generic kernel
> i've install libpcap-dev

I am going to guess you are using DPDK master HEAD and Pktgen master HEAD; a 
number of changes are going on in DPDK after the 19.05 release and they most 
likely have broken Pktgen builds. Please check out the 19.05 release tag of DPDK 
and that should work.

I hope to sync back up with DPDK soon.

> 
> i get this error when make pktgen
> /lib/common/pg_inet.h:154:17: error: field ‘udp’ has incomplete type
>  struct udp_hdr udp; /* UDP header for protocol */
> 
> Appreciate your help
> Regards
> -Sara

Regards,
Keith



Re: [dpdk-users] UDP packet transmission issue - DPDK 18.08

2019-05-08 Thread Wiles, Keith
Hi Hirok,

> On May 8, 2019, at 9:28 AM, hirok borah  wrote:
> 
> Thanks Keith for sharing the info.
> 
> But as also stated by Anupama, the problem here is with KNI. 
> 
> When UDP packet of certain length, say 730, is written to kni, it is getting 
> dropped.
> 
> Whereas UDP of length say 60, does not have any issues, it goes to kernel.
> 
> And this issue was not seen with previous versions, only seen when we 
> upgraded to v18.08

I have not used KNI per se, but it also does not seem to understand UDP or TCP 
or any other protocol; it is only sent mbufs. Unless I am wrong, that means the 
application must construct the UDP frame to be sent to the kernel and set up the 
mbuf headers correctly, right? Even the KNI test code does not seem to 
understand UDP frames, just looking at the code anyway.
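A sketch of what setting up the headers means for a UDP frame handed to rte_kni_tx_burst() (struct names per DPDK 18.08; fill_udp_lengths and payload_len are illustrative, and the layout is assumed to be Ethernet/IPv4/UDP with no IP options):

```c
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>
#include <rte_byteorder.h>

/* All on-wire length fields must be in network byte order, and the mbuf's
 * data_len/pkt_len must cover the whole frame. */
static void
fill_udp_lengths(struct rte_mbuf *m, uint16_t payload_len)
{
	struct ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
						      sizeof(struct ether_hdr));
	struct udp_hdr *udp = (struct udp_hdr *)((char *)ip +
						 sizeof(struct ipv4_hdr));

	udp->dgram_len   = rte_cpu_to_be_16(sizeof(struct udp_hdr) + payload_len);
	ip->total_length = rte_cpu_to_be_16(sizeof(struct ipv4_hdr) +
					    sizeof(struct udp_hdr) + payload_len);
	ip->hdr_checksum = 0;
	ip->hdr_checksum = rte_ipv4_cksum(ip);
	udp->dgram_cksum = 0;	/* a zero checksum is legal for UDP over IPv4 */

	m->data_len = m->pkt_len = sizeof(struct ether_hdr) +
		sizeof(struct ipv4_hdr) + sizeof(struct udp_hdr) + payload_len;
}
```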

Using testpmd or traffic generators to send a UDP frame to KNI interface and 
see if the problem still exists does seem like a reasonable direction to 
eliminate possible problems.

Beyond that I can not help, take my suggestions as you see fit.
Thanks
> 
> Regards
> -Hirok
> 
> 
> On Wed, May 8, 2019, 7:28 PM Wiles, Keith  wrote:
> 
> 
> > On May 8, 2019, at 1:41 AM, hirok borah  wrote:
> > 
> > Hi All,
> > 
> > Do we have any resolution for this issue yet ?
> > This looks like a defect in DPDK 18.08.
> 
> DPDK does not understand UDP/TCP or any L4 packet per se; it only sends the 
> packets constructed in the application. It will send the packets as defined 
> and the only place DPDK tries to determine the packet type is when it 
> receives the packet, which in this case is the kernel.
> 
> I do not use testpmd much, but use pktgen-dpdk for my work. Please try with 
> testpmd or pktgen or TRex or moongen and see if the problem still exists.
> > 
> > Plz share if anyone have any inputs on this issue.
> > 
> > Regards
> > Hirok
> > 
> > On Sat, Apr 20, 2019 at 9:41 AM Anupama Laxmi 
> > wrote:
> > 
> >> We have recently upgraded to DPDK 18.08 version. After upgrading to the
> >> latest version observing issue with UDP packets transmission errors for few
> >> packets ( UDP size 736 bytes) .With older version of DPDK  we never faced
> >> this issue.
> >> 
> >> Attaching the filtered tcpdump which shows "Bad UDP length > IP payload
> >> length" for one such packet.
> >> 
> >> No issue observed while transferring UDP packets with size 28 bytes and 48
> >> bytes. I tried to print the packet length calculation in my program just
> >> before sending it out to the Kernel using rte_kni_tx_burst. The packet
> >> length calculation seems correct to me.
> >> 
> >> 1.)
> >> size_udp:48
> >> sizeof(struct udp_hdr):8
> >> size_ApplMsg:40
> >> udphdr->dgram_len:12288
> >> m->data_len:82
> >> size_ip:68
> >> l2_data_shift:14
> >> 
> >> 2.)
> >> size_udp:28
> >> sizeof(struct udp_hdr):8
> >> size_ApplMsg:20
> >> udphdr->dgram_len:7168
> >> m->data_len:62
> >> ip->total_length:12288
> >> size_ip:48
> >> l2_data_shift:14
> >> 
> >> *** Packets with UDP size 736 are not getting transmitted to the
> >> receiving end and getting dropped.
> >> 
> >> 3.)
> >> size_udp:736
> >> sizeof(struct udp_hdr):8
> >> size_ApplMsg:728
> >> udphdr->dgram_len:57346
> >> m->data_len:770
> >> size_ip:756
> >> l2_data_shift:14
> >> 
> >> Also MTU is set to 1500 in my program. So it shouldn't be an issue to
> >> transfer 736 bytes UDP data which is less than 1500 bytes MTU.
> >> 
> >> Looks like the kernel is dropping this packet.
> >> 
> >> So I tried to increase the kernel buffer size but that didn't help.
> >> 
> >> netstat -su -> output shows 0 send/receive buffer errors.
> >> 
> >> 
> >> What has changed in DPDK 18.08 with respect to UDP packets? Please suggest
> >> if I need to consider tuning udp ( using new API ) , offloading udp
> >> traffic(new offload flags)  to resolve this issue.
> >> 
> >> Thanks,
> >> -- next part --
> >> A non-text attachment was scrubbed...
> >> Name: bad_udp_length.png
> >> Type: image/png
> >> Size: 56015 bytes
> >> Desc: not available
> >> URL: <
> >> http://mails.dpdk.org/archives/users/attachments/20190420/f5dd6779/attachment.png
> >>> 
> >> 
> > 
> > 
> > -- 
> > Regards,
> > -Hirok
> 
> Regards,
> Keith
> 

Regards,
Keith



Re: [dpdk-users] UDP packet transmission issue - DPDK 18.08

2019-05-08 Thread Wiles, Keith



> On May 8, 2019, at 1:41 AM, hirok borah  wrote:
> 
> Hi All,
> 
> Do we have any resolution for this issue yet ?
> This looks like a defect in DPDK 18.08.

DPDK does not understand UDP/TCP or any L4 packet per se; it only sends the 
packets constructed in the application. It will send the packets as defined and 
the only place DPDK tries to determine the packet type is when it receives the 
packet, which in this case is the kernel.

I do not use testpmd much, but use pktgen-dpdk for my work. Please try with 
testpmd or pktgen or TRex or moongen and see if the problem still exists.
> 
> Plz share if anyone have any inputs on this issue.
> 
> Regards
> Hirok
> 
> On Sat, Apr 20, 2019 at 9:41 AM Anupama Laxmi 
> wrote:
> 
>> We have recently upgraded to DPDK 18.08 version. After upgrading to the
>> latest version observing issue with UDP packets transmission errors for few
>> packets ( UDP size 736 bytes) .With older version of DPDK  we never faced
>> this issue.
>> 
>> Attaching the filtered tcpdump which shows "Bad UDP length > IP payload
>> length" for one such packet.
>> 
>> No issue observed while transferring UDP packets with size 28 bytes and 48
>> bytes. I tried to print the packet length calculation in my program just
>> before sending it out to the Kernel using rte_kni_tx_burst. The packet
>> length calculation seems correct to me.
>> 
>> 1.)
>> size_udp:48
>> sizeof(struct udp_hdr):8
>> size_ApplMsg:40
>> udphdr->dgram_len:12288
>> m->data_len:82
>> size_ip:68
>> l2_data_shift:14
>> 
>> 2.)
>> size_udp:28
>> sizeof(struct udp_hdr):8
>> size_ApplMsg:20
>> udphdr->dgram_len:7168
>> m->data_len:62
>> ip->total_length:12288
>> size_ip:48
>> l2_data_shift:14
>> 
>> *** Packets with UDP size 736 are not getting transmitted to the
>> receiving end and getting dropped.
>> 
>> 3.)
>> size_udp:736
>> sizeof(struct udp_hdr):8
>> size_ApplMsg:728
>> udphdr->dgram_len:57346
>> m->data_len:770
>> size_ip:756
>> l2_data_shift:14
>> 
>> Also MTU is set to 1500 in my program. So it shouldn't be an issue to
>> transfer 736 bytes UDP data which is less than 1500 bytes MTU.
>> 
>> Looks like the kernel is dropping this packet.
>> 
>> So I tried to increase the kernel buffer size but that didn't help.
>> 
>> netstat -su -> output shows 0 send/receive buffer errors.
>> 
>> 
>> What has changed in DPDK 18.08 with respect to UDP packets? Please suggest
>> if I need to consider tuning udp ( using new API ) , offloading udp
>> traffic(new offload flags)  to resolve this issue.
>> 
>> Thanks,
>> -- next part --
>> A non-text attachment was scrubbed...
>> Name: bad_udp_length.png
>> Type: image/png
>> Size: 56015 bytes
>> Desc: not available
>> URL: <
>> http://mails.dpdk.org/archives/users/attachments/20190420/f5dd6779/attachment.png
>>> 
>> 
> 
> 
> -- 
> Regards,
> -Hirok

Regards,
Keith



Re: [dpdk-users] segmentation fault after using rte_malloc()

2019-04-24 Thread Wiles, Keith



Sent from my iPhone

On Apr 24, 2019, at 10:55 PM, 曾懷恩  wrote:

Hi Keith,

On Apr 24, 2019, at 10:38 PM, Wiles, Keith  wrote:



On Apr 24, 2019, at 9:22 AM, 曾懷恩  wrote:

Hi Keith,

I have tried DPDK 19.05-rc2, 19.02, 18.11 on VMware e1000 driver, Dell R630 
with Mellanox Connectx-3 and Intel X520

However I still got segmentation fault with all above setting

So you are using the simple example and you get an invalid rte_malloc memory?
Yes

I do not know how to debug this problem as it sounds like a race condition or 
memory corruption.
Do you mean that there is another process using this memory space?
As far as I know, while calling rte_malloc(), it will search a free memory 
space and return the address.

Yes, but it pulls the memory from the huge pages. Small memory allocations using 
rte_malloc are not a great use of rte_malloc; it would be better if you used 
malloc. rte_malloc is great for packets or large segments of memory. If you 
need the memory in huge pages then it would be better to allocate a 
large segment and handle it yourself.

The simple example code is doing the right things to use that API, so if you 
are getting the same memory address returned then I would use GDB and set a 
hardware break point to try to see where this is going wrong. Not much help as 
I can not reproduce the problem.
Thank you, I will try GDB later. BTW, I actually got the same memory address 
returned by rte_malloc().

We know that DPDK works; what we need to find out is why it does not work on 
your platform. Try different size mallocs, but I am just shooting in the dark here. 
Now, rte_malloc(2) of two bytes is a real waste of memory, as the overhead for a 
2-byte request is very high.
So rte_malloc() is not recommended for this kind of use?

I saw it described as a replacement for glibc malloc() in the DPDK doc.

It’s not a replacement for malloc for small allocations like the ones you were 
doing. You can use rte_malloc but you need to be careful how you use it.

Or should I allocate a larger size so that the memory space does not become 
fragmented?

Thanks a lot.

Best Regards,


here are my settings :

With CX3

modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib
/etc/init.d/openibd restart
ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
{
  for intf in ens3 ens8;
  do
  (cd "/sys/class/net/${intf}/device/" && pwd -P);
  done;
} |
sed -n 's,.*/\(.*\),-w \1,p'
mount -t hugetlbfs nodev /mnt/huge

With X520 and e1000:

mount -t hugetlbfs nodev /mnt/huge
modprobe uio
insmod dpdk-18.11/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
/root/dpdk-18.11//usertools/dpdk-devbind.py --bind=igb_uio 00:0a.0
/root/dpdk-18.11//usertools/dpdk-devbind.py --bind=igb_uio 00:08.0

My OS is CentOS 7.5 in KVM with SRIOV enable

hugepage size is set to 2MB

Thanks for reply

Best Regard,

On Apr 24, 2019, at 1:34 AM, 曾懷恩  wrote:

Hi Keith,

Yes I ran this program as root

However I ran it with DPDK 18.11 release.

I will try 19.05 later.

Besides, my cpu is E5-2650 v4.
NICs are Intel x520 DA2 and Mellanox connectx-3

thank you for reply

Best Regards,




On Apr 22, 2019, at 9:09 PM, Wiles, Keith  wrote:



On Apr 22, 2019, at 1:43 AM, 曾懷恩  wrote:

Hi Wiles,

here is my sample code with just doing rte_eal_init() and rte_malloc() .




I tried the attached code and it works on my machine with something close to 
DPDK 19.05 release.

I only use 2 Meg pages, but I assumed it would not make any difference.

Did you run this example as root?

And my start eal cmdline option is ./build/test -l 0-1 -n 4

Thank you very much for your reply
On Apr 21, 2019, at 4:29 AM, Wiles, Keith  wrote:



Sent from my iPhone

On Apr 18, 2019, at 11:31 PM, 曾懷恩  wrote:

HI, Stephen,

Yes, I set huge page in  default_hugepagesz=1G hugepagesz=1G hugepages=4

and also did rte_eal_init at the beginning of my program.

thanks for reply.

Is the core doing the rte_malloc one of the cores listed in the core list on 
the command line? In other words, the pthread doing the allocation should be 
the master lcore or one of the slave lcores.

Also, it seems like a very simple test case; can you do the rte_eal_init() and 
then do the allocation as your sample code shows and then exit? Does this cause 
a segfault?


On Apr 19, 2019, at 10:59 AM, Stephen Hemminger  wrote:

On Fri, 19 Apr 2019 09:11:05 +0800
曾懷恩  wrote:

Hi all,

i have 1 problem while using rte_malloc

Every time I use this function and use the memory it returns, it shows 
segmentation fault(core dump)

Is something wrong?

thanks.


rte init …
………...
unsigned char *str1;
printf("str1 addr = %x\n", str1);
str1 = rte_malloc(NULL,2,RTE_CACHE_LINE_SIZE);
printf("str1 addr = %x\n", str1);
str1[0] = 'a'; //segmentation fault here
str1[1] = '\0';
Do you have huge pages?
Did you do eal_init?




Regards,
Keith



Regards,
Keith



Re: [dpdk-users] segmentation fault after using rte_malloc()

2019-04-24 Thread Wiles, Keith


> On Apr 24, 2019, at 9:22 AM, 曾懷恩  wrote:
> 
> Hi Keith,
> 
> I have tried DPDK 19.05-rc2, 19.02, 18.11 on VMware e1000 driver, Dell R630 
> with Mellanox Connectx-3 and Intel X520
> 
> However I still got segmentation fault with all above setting

So you are using the simple example and you get an invalid rte_malloc memory?

I do not know how to debug this problem as it sounds like a race condition or 
memory corruption.

The simple example code is doing the right things to use that API, so if you 
are getting the same memory address returned then I would use GDB and set a 
hardware break point to try to see where this is going wrong. Not much help as 
I can not reproduce the problem.

We know that DPDK works; what we need to find out is why it does not work on 
your platform. Try different size mallocs, but I am just shooting in the dark here. 
Now, rte_malloc(2) of two bytes is a real waste of memory, as the overhead for a 
2-byte request is very high.
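For completeness, a version of the failing snippet that checks the return value and prints the pointer with %p instead of %x (a sketch requiring a DPDK build environment; the error-message text is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

int
main(int argc, char **argv)
{
	unsigned char *str1;

	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "rte_eal_init failed\n");

	/* rte_malloc returns NULL when no huge-page memory can satisfy
	 * the request - always check before dereferencing. */
	str1 = rte_malloc(NULL, 2, RTE_CACHE_LINE_SIZE);
	if (str1 == NULL)
		rte_exit(EXIT_FAILURE, "rte_malloc failed\n");

	printf("str1 addr = %p\n", (void *)str1);	/* %p, not %x */
	str1[0] = 'a';
	str1[1] = '\0';
	rte_free(str1);
	return 0;
}
```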

> 
> here are my settings : 
> 
> With CX3 
> 
> modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib
> /etc/init.d/openibd restart
> ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
> {
>for intf in ens3 ens8;
>do
>(cd "/sys/class/net/${intf}/device/" && pwd -P);
>done;
> } |
> sed -n 's,.*/\(.*\),-w \1,p'
> mount -t hugetlbfs nodev /mnt/huge
> 
> With X520 and e1000:
> 
> mount -t hugetlbfs nodev /mnt/huge
> modprobe uio
> insmod dpdk-18.11/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> /root/dpdk-18.11//usertools/dpdk-devbind.py --bind=igb_uio 00:0a.0
> /root/dpdk-18.11//usertools/dpdk-devbind.py --bind=igb_uio 00:08.0
> 
> My OS is CentOS 7.5 in KVM with SRIOV enable
> 
> hugepage size is set to 2MB
> 
> Thanks for reply
> 
> Best Regard,
> 
>> On Apr 24, 2019, at 1:34 AM, 曾懷恩  wrote:
>> 
>> Hi Keith,
>> 
>> Yes I ran this program as root 
>> 
>> However I ran it with DPDK 18.11 release.
>> 
>> I will try 19.05 later.
>> 
>> Besides, my cpu is E5-2650 v4.
>> NICs are Intel x520 DA2 and Mellanox connectx-3
>> 
>> thank you for reply
>> 
>> Best Regards,
>> 
>> 
>> 
>> 
>>> On Apr 22, 2019, at 9:09 PM, Wiles, Keith  wrote:
>>> 
>>> 
>>> 
>>>> On Apr 22, 2019, at 1:43 AM, 曾懷恩  wrote:
>>>> 
>>>> Hi Wiles,
>>>> 
>>>> here is my sample code with just doing rte_eal_init() and rte_malloc() .
>>>> 
>>>> 
>>>> 
>>> 
>>> I tried the attached code and it works on my machine with something close 
>>> to DPDK 19.05 release.
>>> 
>>> I only use 2 Meg pages, but I assumed it would not make any difference.
>>> 
>>> Did you run this example as root?
>>>> 
>>>> And my start eal cmdline option is ./build/test -l 0-1 -n 4
>>>> 
>>>> Thank you very much for your reply
>>>>> On Apr 21, 2019, at 4:29 AM, Wiles, Keith  wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> Sent from my iPhone
>>>>> 
>>>>>> On Apr 18, 2019, at 11:31 PM, 曾懷恩  wrote:
>>>>>> 
>>>>>> HI, Stephen,
>>>>>> 
>>>>>> Yes, I set huge page in  default_hugepagesz=1G hugepagesz=1G hugepages=4
>>>>>> 
>>>>>> and also did rte_eal_init at the beginning of my program.
>>>>>> 
>>>>>> thanks for reply.
>>>>> 
>>>>> Is the core doing the rte_malloc one of the cores listed in the core list 
>>>>> on the command line? In other words, the pthread doing the allocation 
>>>>> should be the master lcore or one of the slave lcores.
>>>>> 
>>>>> Also, it seems like a very simple test case; can you do the rte_eal_init() 
>>>>> and then do the allocation as your sample code shows and then exit? Does 
>>>>> this cause a segfault?
>>>>>> 
>>>>>> 
>>>>>>> On Apr 19, 2019, at 10:59 AM, Stephen Hemminger  wrote:
>>>>>>> 
>>>>>>> On Fri, 19 Apr 2019 09:11:05 +0800
>>>>>>> 曾懷恩  wrote:
>>>>>>> 
>>>>>>>> Hi all, 
>>>>>>>> 
>>>>>>>> i have 1 problem while using rte_malloc
>>>>>>>> 
>>>>>>>> Every time I use this function and use the memory it returns, it shows 
>>>>>>>> segmentation fault(core dump)
>>>>>>>> 
>>>>>>>> Is something wrong?
>>>>>>>> 
>>>>>>>> thanks.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> rte init …
>>>>>>>> ………...
>>>>>>>> unsigned char *str1;
>>>>>>>> printf("str1 addr = %x\n", str1);
>>>>>>>> str1 = rte_malloc(NULL,2,RTE_CACHE_LINE_SIZE);
>>>>>>>> printf("str1 addr = %x\n", str1);
>>>>>>>> str1[0] = 'a'; //segmentation fault here
>>>>>>>> str1[1] = '\0';
>>>>>>> Do you have huge pages?
>>>>>>> Did you do eal_init?
>>>>>> 
>>>> 
>>>> 
>>> 
>>> Regards,
>>> Keith
>>> 
> 

Regards,
Keith



Re: [dpdk-users] segmentation fault after using rte_malloc()

2019-04-22 Thread Wiles, Keith


> On Apr 22, 2019, at 1:43 AM, 曾懷恩  wrote:
> 
> Hi Wiles,
> 
> here is my sample code with just doing rte_eal_init() and rte_malloc() .
> 
>  
> 

I tried the attached code and it works on my machine with something close to 
DPDK 19.05 release.

I only use 2 Meg pages, but I assumed it would not make any difference.

Did you run this example as root?
> 
> And my start eal cmdline option is ./build/test -l 0-1 -n 4
> 
> Thank you very much for your reply
> > On Apr 21, 2019, at 4:29 AM, Wiles, Keith  wrote:
> > 
> > 
> > 
> > Sent from my iPhone
> > 
> >> On Apr 18, 2019, at 11:31 PM, 曾懷恩  wrote:
> >> 
> >> HI, Stephen,
> >> 
> >> Yes, I set huge page in  default_hugepagesz=1G hugepagesz=1G hugepages=4
> >> 
> >> and also did rte_eal_init at the beginning of my program.
> >> 
> >> thanks for reply.
> > 
> > Is the core doing the rte_malloc one of the cores listed in the core list 
> > on the command line? In other words, the pthread doing the allocation 
> > should be the master lcore or one of the slave lcores.
> > 
> > Also, it seems like a very simple test case; can you do the rte_eal_init() 
> > and then do the allocation as your sample code shows and then exit? Does 
> > this cause a segfault?
> >> 
> >> 
> >>> On Apr 19, 2019, at 10:59 AM, Stephen Hemminger  wrote:
> >>> 
> >>> On Fri, 19 Apr 2019 09:11:05 +0800
> >>> 曾懷恩  wrote:
> >>> 
> >>>>   Hi all, 
> >>>> 
> >>>>   i have 1 problem while using rte_malloc
> >>>> 
> >>>>   Every time I use this function and use the memory it returns, it shows 
> >>>> segmentation fault(core dump)
> >>>> 
> >>>>   Is something wrong?
> >>>> 
> >>>>   thanks.
> >>>> 
> >>>> 
> >>>>   rte init …
> >>>>   ………...
> >>>>   unsigned char *str1;
> >>>>   printf("str1 addr = %x\n", str1);
> >>>>   str1 = rte_malloc(NULL,2,RTE_CACHE_LINE_SIZE);
> >>>>   printf("str1 addr = %x\n", str1);
> >>>>   str1[0] = 'a'; //segmentation fault here
> >>>>   str1[1] = '\0';
> >>> Do you have huge pages?
> >>> Did you do eal_init?
> >> 
> 
> 

Regards,
Keith



Re: [dpdk-users] segmentation fault after using rte_malloc()

2019-04-20 Thread Wiles, Keith



Sent from my iPhone

> On Apr 18, 2019, at 11:31 PM, 曾懷恩  wrote:
> 
> HI, Stephen,
> 
> Yes, I set huge page in  default_hugepagesz=1G hugepagesz=1G hugepages=4
> 
> and also did rte_eal_init at the beginning of my program.
> 
> thanks for reply.

Is the core doing the rte_malloc one of the cores listed in the core list on 
the command line? In other words, the pthread doing the allocation should be 
the master lcore or one of the slave lcores.

Also, it seems like a very simple test case; can you do the rte_eal_init() and 
then do the allocation as your sample code shows and then exit? Does this cause 
a segfault?
> 
> 
>> On Apr 19, 2019, at 10:59 AM, Stephen Hemminger  wrote:
>> 
>> On Fri, 19 Apr 2019 09:11:05 +0800
>> 曾懷恩  wrote:
>> 
>>>Hi all, 
>>> 
>>>i have 1 problem while using rte_malloc
>>> 
>>>Every time I use this function and use the memory it returns, it shows 
>>> segmentation fault(core dump)
>>> 
>>>Is something wrong?
>>> 
>>>thanks.
>>> 
>>> 
>>>rte init …
>>>………...
>>>unsigned char *str1;
>>>printf("str1 addr = %x\n", str1);
>>>str1 = rte_malloc(NULL,2,RTE_CACHE_LINE_SIZE);
>>>printf("str1 addr = %x\n", str1);
>>>str1[0] = 'a'; //segmentation fault here
>>>str1[1] = '\0';
>> Do you have huge pages?
>> Did you do eal_init?
> 


Re: [dpdk-users] Reg: Performance degradation in DPDK-18.11

2019-04-09 Thread Wiles, Keith


> On Apr 8, 2019, at 7:01 AM, Gokilavani A  
> wrote:
> 
> Hi,
> 
> We have written simple Traffic Generator using DPDK.
> 
> We used a DPDK version of 17.05.2 then 17.08.2.
> 
> When we want to move that code to the latest LTS version 18.11, performance
> seems to be degraded in terms of rate.
> 
> Even for 512B, got this rate reduction issue in transmission. Is it because
> of any timing APIs changed in 18.11?
> 
> Tried with DPDK 17.11, got the same problem. When I went through the
> documents, I was not able to find any timer-related API change or bug fix.
> 
> Can any one help me in this regard?

You may want to give more details, NICs used, setup of hardware, what is the 
host system used, how is the generator sending packets (fixed packets or 
modified on the fly, …)

What was the performance difference with the DPDK versions you tried?

Have you tried Pktgen, TRex or testpmd to see if they have the same performance 
problem?

> 
> 
> Thanks
> 
> A.Gokilavani

Regards,
Keith



Re: [dpdk-users] pktgen-dpdk failed build "│/usr/bin/ld: cannot find -llua'

2019-04-04 Thread Wiles, Keith



> On Apr 4, 2019, at 1:57 AM, ..  wrote:
> 
> Hi Keith,
> 
> Ok will upgrade to latest version.
> 
> Is there much documentation of creating custom payloads with dpdk-pktgen?

You have three modes: sequence, range and pcap.
Sequence mode allows changing the L2/L3 and some L4 fields; 16 packets per 
port are available.
Range mode allows changing L2/L3 fields and creating ranges for these packets.
Pcap mode allows you to send any type of packet via a pcap file.
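For example (paths, cores and the pcap filename below are illustrative), pcap mode is driven from the command line with the -s option, which maps a pcap file to a port:

```
# send the contents of custom_payload.pcap on port 0
./app/x86_64-native-linuxapp-gcc/pktgen -l 0-2 -n 4 -- -P -m "1.0" -s 0:custom_payload.pcap
```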

> 
> Do you have any opinions on Warp17 from Juniper, the DPDK stateful packet 
> generator?

I have not looked at the Warp17
> 
> Regards
> 
> Roland
> 

Regards,
Keith



Re: [dpdk-users] pktgen-dpdk failed build "│/usr/bin/ld: cannot find -llua'

2019-04-03 Thread Wiles, Keith
Please update to the latest Pktgen, I believe I was able to fix the build 
system for CentOS.

> On Apr 3, 2019, at 1:54 AM, ..  wrote:
> 
> Hi dpdk-users!
> 
> OS: CentOS Linux release 7.0.1406 (Core)
> Kernel : 3.10.0-123.el7.x86_64
> 
> I am trying to build dpdk-pktgen (I chose version 3.6.1 with dpdk 18.08).
> Whenever I build I get this error:
> 
> [root@ipservera pktgen-dpdk-pktgen-3.6.1]# make -j
> == lib
> == common
> == utils
> == vec
> == lua
> == cli
> == app
>  LD pktgen
> /usr/bin/ld: cannot find -llua
> collect2: error: ld returned 1 exit status
> make[2]: *** [pktgen] Error 1
> make[1]: *** [all] Error 2
> make: *** [app] Error 2
> 
> > As it is CentOS, I followed the instructions for building lua 5.3.2 via the
> INSTALL.md notes with the pktgen install notes and also copied the
> generated libraries as per documentation:
> 
> # ls -l /usr/lib64/ | grep liblua
> -rwxr-xr-x   1 root root  193864 Nov  6  2016 liblua-5.1.so
> -rw-r--r--   1 root root  442084 Apr  1 12:46 liblua-5.3.a
> -rwxr-xr-x   1 root root  260977 Apr  1 12:46 liblua-5.3.so
> # lua -v
> Lua 5.3.2  Copyright (C) 1994-2015 Lua.org, PUC-Rio
> 
> # luac -v
> Lua 5.3.2  Copyright (C) 1994-2015 Lua.org, PUC-Rio
> 
> Dpdk install details:
> 
> # echo $RTE_SDK
> /usr/local/share/dpdk
> 
> # echo $RTE_TARGET
> x86_64-native-linuxapp-gcc
> 
> # ls -l /usr/local/share/dpdk
> total 24
> drwxr-xr-x  3 root root 4096 Aug  9  2018 buildtools
> drwxr-xr-x 50 root root 4096 Aug  9  2018 examples
> drwxr-xr-x  8 root root 4096 Aug  9  2018 mk
> drwxrwxr-x 14 root root 4096 Apr  3 06:39 pktgen-dpdk-pktgen-3.6.1
> drwxr-xr-x  2 root root 4096 Aug  9  2018 usertools
> drwxr-xr-x  3 root root 4096 Oct 10 18:25 x86_64-native-linuxapp-gcc
> 
> > I have never used pktgen-dpdk before against a DPDK application target.  I
> have another dpdk server running a reflector application we developed, but
> I used the standard Linux kernel pktgen and commercial software fpga nics
> to fire traffic at it, but I would now like to try with pktgen-dpdk.
> 
> Thanks
> 
> Roland

Regards,
Keith



Re: [dpdk-users] RX of multi-segment jumbo frames

2019-02-15 Thread Wiles, Keith



> On Feb 14, 2019, at 11:59 PM, Filip Janiszewski 
>  wrote:
> 
> Unfortunately I didn't get much help from the maintainers at Mellanox,
> but I discovered that with DPDK 18.05 there's the flag
> ignore_offload_bitfield which once toggled to 1 along with the offloads
> set to DEV_RX_OFFLOAD_JUMBO_FRAME|DEV_RX_OFFLOAD_SCATTER allows DPDK to
> capture Jumbo on Mellanox:
> 
> https://doc.dpdk.org/api-18.05/structrte__eth__rxmode.html
> 
> In DPDK 19.02 this flag is missing and I can't capture Jumbos with my
> current configuration.
> 
> Sadly, even if setting ignore_offload_bitfield to 1 fix my problem it
> creates a bunch more, the packets coming in are not timestamped for
> example (setting hw_timestamp to 1 does not fix the issue as the
> timestamp are still EPOCH + some ms.).
> 
> Not sure if this can trigger any idea, for me it is not completely clear
> what was the purpose of ignore_offload_bitfield (removed later) and how
> to enable Jumbos properly.
> 
> What I've attempted so far (apart from the ignore_offload_bitfield):
> 
> 1) Set mtu to 9600 (rte_eth_dev_set_mtu)
> 2) Configure port with offloads DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_JUMBO_FRAME, max_rx_pkt_len set to 9600
> 3) Configure RX queue with default_rxconf (from rte_eth_dev_info) adding
> the offloads from the port configuration (DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> 
> The JF are reported as ierror in rte_eth_stats.

Sorry, the last time I had any dealings with Mellanox I was not able to get it 
to work, so I'm not going to be much help here.
> 
> Thanks
> 
> Il 09/02/19 16:36, Wiles, Keith ha scritto:
>> 
>> 
>>> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski 
>>>  wrote:
>>> 
>>> 
>>> 
>>> Il 09/02/19 14:51, Wiles, Keith ha scritto:
>>>> 
>>>> 
>>>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski 
>>>>>  wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>>>> using DPDK, I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>>>> rte_eth_conf and rte_eth_rxconf (per RX Queue), but I can capture jumbo
>>>>> frames only if the mbuf is large enough to contain the whole packet, is
>>>>> there a way to enable DPDK to chain the incoming data in mbufs smaller
>>>>> than the actual packet?
>>>>> 
>>>>> We don't have many of those big packets coming in, so would be optimal
>>>>> to leave the mbuf size to RTE_MBUF_DEFAULT_BUF_SIZE and then configure
>>>>> the RX device to chain those bufs for larger packets, but can't find a
>>>>> way to do it, any suggestion?
>>>>> 
>>>> 
>>>> The best I understand is the NIC or PMD needs to be configured to split up 
>>>> packets between mbufs in the RX ring. I would look in the docs for the NIC 
>>>> and see if it supports splitting up packets, or ask the maintainer from the 
>>>> MAINTAINERS file.
>>> 
>>> I can capture jumbo packets with Wireshark on the same card (same port,
>>> same setup), which let me think the problem is purely on my DPDK card
>>> configuration.
>>> 
>>> According to ethtool, the jumbo packet (from now on JF, Jumbo Frame) is
>>> detected at the phy level; the counters rx_packets_phy, rx_bytes_phy and
>>> rx_8192_to_10239_bytes_phy are properly increased.
>>> 
>>> There was an option to manually set up JF support, but it was removed
>>> from DPDK after version 16.07: CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N.
>>> According to the release note:
>>> 
>>> .
>>> Improved jumbo frames support, by dynamically setting RX scatter gather
>>> elements according to the MTU and mbuf size, no need for compilation
>>> parameter ``MLX5_PMD_SGE_WR_N``
>>> .
>>> 
>>> Not quite sure where to look.
>>> 
>> 
>> The maintainer is your best bet now.
>>>>> Thanks
>>>>> 
>>>>> -- 
>>>>> BR, Filip
>>>>> +48 666 369 823
>>>> 
>>>> Regards,
>>>> Keith
>>>> 
>>> 
>>> -- 
>>> BR, Filip
>>> +48 666 369 823
>> 
>> Regards,
>> Keith
>> 
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith



Re: [dpdk-users] Segfault while running on older CPU

2019-02-14 Thread Wiles, Keith


> On Feb 14, 2019, at 7:04 AM, Filip Janiszewski  
> wrote:
> 
> Hi All,
> 
> We've just encountered the same issue on a new server with a couple of
> Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz, stack trace the same:
> 

What version of DPDK?
> .
> Program received signal SIGILL, Illegal instruction.
> 
> 0x00557ce9 in rte_cpu_get_flag_enabled ()
> 
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-260.el7.x86_64 libaio-0.3.109-13.el7.x86_64
> libgcc-4.8.5-36.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64
> numactl-libs-2.0.9-7.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64
> zlib-1.2.7-18.el7.x86_64
> 
> (gdb) bt
> 
> #0  0x00557ce9 in rte_cpu_get_flag_enabled ()
> #1  0x0046076e in rte_acl_init ()
> #2  0x0085f56d in __libc_csu_init ()
> #3  0x76636365 in __libc_start_main () from /lib64/libc.so.6
> #4  0x004650de in _start ()
> .
> 
> I'm building DPDK with x86_64-native-linuxapp-gcc, nothing else, plain
> config from DPDK. I've attempted to recompile with CONFIG_RTE_EXEC_ENV
> set to 'native' instead of 'linuxapp' but nothing changes.

Not sure what this means; it looks like you already compiled it as native.
> 
> Did anybody had a similar issue? Any suggestion?

Maybe compile with -g for debug symbols. I use `make install T=$RTE_TARGET 
EXTRA_CFLAGS="-g"`. You can also try reducing the optimization, but I do not 
think that is the problem.

Does this happen every time? What OS version is it, and is the application yours 
or one of the examples?
If testpmd works then it may be your application. I cannot tell for sure, but 
could this be the first time this routine is called?
Are you calling something in DPDK before rte_eal_init() is called?

With -g you should then be able to locate the line in DPDK where it is 
failing.
> 
> Thanks
> 
> Il 06/02/19 11:47, Filip Janiszewski ha scritto:
>> Hi Everybody,
>> 
>> We have one 'slightly' older machine (well, very old CPU.) in our Lab
>> that seems to crash DPDK on every execution attempt, I was wondering if
>> anybody encountered a similar issue and if there's a change in the DPDK
>> config that might remedy the problem, this is the stack trace of the fault:
>> 
>> .
>> #0 0x0057bd19 in rte_cpu_get_flag_enabled ()
>> #1 0x0046067e in rte_acl_init ()
>> #2 0x0088359d in __libc_csu_init ()
>> #3 0x76642cf5 in __libc_start_main () from /lib64/libc.so.6
>> #4 0x00464fee in _start ()
>> .
>> 
>> The CPU on this machine is: "Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
>> (fam: 06, model: 3a, stepping: 09)" running with "3.10.0-862.el7.x86_64".
>> 
>> Our builds are running fine on newest hardware, nevertheless the
>> segfault seems a bit weird even for an unsupported CPU (some error
>> message would be more friendly); any suggestion on what the problem might be?
>> 
>> Thanks
>> 
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith
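One hedged reading of the trace above: `rte_cpu_get_flag_enabled()` faulting before `main()` (it is reached from a constructor via `__libc_csu_init`) often means the binary was built for a newer instruction set than the CPU it runs on, for example DPDK compiled with `-march=native` on a newer build host. A quick check follows; the rebuild flags in the comment are assumptions to adapt to your tree.

```shell
# List the SIMD-related ISA extensions this CPU actually reports; if the
# build host shows extensions the target machine lacks (e.g. avx2), that
# mismatch can produce exactly this kind of illegal-instruction crash.
awk '/^flags/ { for (i = 1; i <= NF; i++)
                  if ($i ~ /^(sse4_1|sse4_2|avx|avx2|bmi2)$/) print $i
                exit }' /proc/cpuinfo | sort -u

# Possible rebuild with a conservative baseline and debug symbols
# (legacy make build, flags assumed):
#   make install T=x86_64-native-linuxapp-gcc EXTRA_CFLAGS="-march=corei7 -g"
```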



Re: [dpdk-users] RX of multi-segment jumbo frames

2019-02-09 Thread Wiles, Keith



> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski  
> wrote:
> 
> 
> 
> Il 09/02/19 14:51, Wiles, Keith ha scritto:
>> 
>> 
>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski 
>>>  wrote:
>>> 
>>> Hi,
>>> 
>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>> using DPDK, I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>> rte_eth_conf and rte_eth_rxconf (per RX Queue), but I can capture jumbo
>>> frames only if the mbuf is large enough to contain the whole packet, is
>>> there a way to enable DPDK to chain the incoming data in mbufs smaller
>>> than the actual packet?
>>> 
>>> We don't have many of those big packets coming in, so would be optimal
>>> to leave the mbuf size to RTE_MBUF_DEFAULT_BUF_SIZE and then configure
>>> the RX device to chain those bufs for larger packets, but can't find a
>>> way to do it, any suggestion?
>>> 
>> 
>> The best I understand is the NIC or PMD needs to be configured to split up 
>> packets between mbufs in the RX ring. I would look in the docs for the NIC and 
>> see if it supports splitting up packets, or ask the maintainer from the 
>> MAINTAINERS file.
> 
> I can capture jumbo packets with Wireshark on the same card (same port,
> same setup), which let me think the problem is purely on my DPDK card
> configuration.
> 
> According to ethtool, the jumbo packet (from now on JF, Jumbo Frame) is
> detected at the phy level; the counters rx_packets_phy, rx_bytes_phy and
> rx_8192_to_10239_bytes_phy are properly increased.
> 
> There was an option to manually set up JF support, but it was removed
> from DPDK after version 16.07: CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N.
> According to the release note:
> 
> .
> Improved jumbo frames support, by dynamically setting RX scatter gather
> elements according to the MTU and mbuf size, no need for compilation
> parameter ``MLX5_PMD_SGE_WR_N``
> .
> 
> Not quite sure where to look.
> 

The maintainer is your best bet now.
>>> Thanks
>>> 
>>> -- 
>>> BR, Filip
>>> +48 666 369 823
>> 
>> Regards,
>> Keith
>> 
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith



Re: [dpdk-users] RX of multi-segment jumbo frames

2019-02-09 Thread Wiles, Keith



> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski  
> wrote:
> 
> Hi,
> 
> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
> using DPDK, I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
> rte_eth_conf and rte_eth_rxconf (per RX Queue), but I can capture jumbo
> frames only if the mbuf is large enough to contain the whole packet, is
> there a way to enable DPDK to chain the incoming data in mbufs smaller
> than the actual packet?
> 
> We don't have many of those big packets coming in, so would be optimal
> to leave the mbuf size to RTE_MBUF_DEFAULT_BUF_SIZE and then configure
> the RX device to chain those bufs for larger packets, but can't find a
> way to do it, any suggestion?
> 

The best I understand is the NIC or PMD needs to be configured to split up 
packets between mbufs in the RX ring. I would look in the docs for the NIC and see 
if it supports splitting up packets, or ask the maintainer from the MAINTAINERS 
file.
> Thanks
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith



Re: [dpdk-users] i40e : Transmit rate limit control

2019-02-08 Thread Wiles, Keith



> On Feb 8, 2019, at 6:02 AM, terry.montague.1...@btinternet.com wrote:
> 
> 
> 
> 
> Hi there.
> 
> Please can I ask this question again , as no-one replied.
> 
> Having transmit rate limiting at 25Gbe/40Gbe for the Intel i40e driver (710 
> based cards) would be extremely useful for some applications.
> 
> Does anyone have any thoughts on achieving this or is it on the roadmap for 
> i40e ?
> 
> Many thanks

Maybe include the i40e maintainer in the email addresses; then he may see the 
question. His email is in the MAINTAINERS file.
> 
> Terry
> 
> 
> -- Original Message --
> From: "terry.montague.1...@btinternet.com" 
> 
> To: users@dpdk.org
> Sent: Monday, 28 Jan, 19 At 11:19
> Subject: [SPAM] [dpdk-users] i40e : Transmit rate limit control
> Hi All,
> Is anyone working on some way of implementing transmit rate limit control in 
> the i40e driver? I realise the X**710 Intel chip implements queue sets, where 
> the tx queues can be assigned to a queue set and the set given an overall 
> transmit rate - so it's not as simple as the ixgbe PMD driver. Is there a 
> simplistic way of assigning just a single tx queue to a single queue set and 
> setting its transmit bandwidth through set_queue_rate_limit?
> Many thanks
> Terry

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2019-02-05 Thread Wiles, Keith


> On Feb 5, 2019, at 8:22 AM, Harsh Patel  wrote:
> 
> Can you help us with those questions we asked you? We need them as parameters 
> for our testing.

I would love to, but I do not know much about what you are asking, sorry.

I hope someone else steps in; maybe the PMD maintainer could help. Look in the 
MAINTAINERS file and message him directly.
> 
> Thanks, 
> Harsh & Hrishikesh 
> 
> On Tue, Feb 5, 2019, 19:42 Wiles, Keith  wrote:
> 
> 
> > On Feb 5, 2019, at 8:00 AM, Harsh Patel  wrote:
> > 
> > Hi, 
> > One of the mistakes was as follows. ns-3 frees the packet buffer just as 
> > it writes to the socket, and thus we thought that we should do the same. 
> > But DPDK, while writing, places the packet buffer in the tx descriptor 
> > ring and performs the transmission after that on its own. We were 
> > freeing early, so sometimes the packets were lost, i.e. freed before 
> > transmission. 
> > 
> > Another thing was that as you suggested earlier we compiled the whole ns-3 
> > in optimized mode. That improved the performance. 
> > 
> > These 2 things combined got us the desired results. 
> 
> Excellent, thanks.
> > 
> > Regards, 
> > Harsh & Hrishikesh 
> > 
> > On Tue, Feb 5, 2019, 18:33 Wiles, Keith  wrote:
> > 
> > 
> > > On Feb 5, 2019, at 12:37 AM, Harsh Patel  wrote:
> > > 
> > > Hi, 
> > > 
> > > We would like to inform you that our code is working as expected and we 
> > > are able to obtain 95-98 Mbps data rate for a 100Mbps application rate. 
> > > We are now working on the testing of the code. Thanks a lot, especially 
> > > to Keith for all the help you provided.
> > > 
> > > We have 2 main queries :-
> > > 1) We wanted to calculate Backlog at the NIC Tx Descriptors but were not 
> > > able to find anything in the documentation. Can you help us in how to 
> > > calculate the backlog?
> > > 2) We searched on how to use Byte Queue Limit (BQL) on the NIC queue but 
> > > couldn't find anything like that in DPDK. Does DPDK support BQL? If so, 
> > > can you help us on how to use it for our project?
> > 
> > What was the last set of problems, if I may ask?
> > > 
> > > Thanks & Regards
> > > Harsh & Hrishikesh
> > > 
> > > On Thu, 31 Jan 2019 at 22:28, Wiles, Keith  wrote:
> > > 
> > > 
> > > Sent from my iPhone
> > > 
> > > On Jan 30, 2019, at 5:36 PM, Harsh Patel  wrote:
> > > 
> > >> Hello, 
> > >> 
> > >> This mail is to inform you that the integration of DPDK is working with 
> > >> ns-3 on a basic level. The model is running. 
> > >> For UDP traffic we are getting throughput same or better than raw 
> > >> socket. (Around 100Mbps)
> > >> But unfortunately for TCP, there are burst packet losses due to which 
> > >> the throughput is drastically affected after some point of time. The 
> > >> bandwidth of the link used was 100Mbps. 
> > >> We have obtained cwnd and ssthresh graphs which show that once the flow 
> > >> gets out from Slow Start mode, there are so many packet losses that the 
> > >> congestion window & the slow start threshold is not able to go above 4-5 
> > >> packets. 
> > > 
> > > Can you determine where the packets are being dropped?
> > >> We have attached the graphs with this mail.
> > >> 
> > > 
> > > I do not see the graphs attached but that’s OK. 
> > >> We would like to know if there is any reason to this or how can we fix 
> > >> this. 
> > > 
> > > I think we have to find out where the packets are being dropped this is 
> > > the only reason for the case to your referring to. 
> > >> 
> > >> Thanks & Regards
> > >> Harsh & Hrishikesh
> > >> 
> > >> On Wed, 16 Jan 2019 at 19:25, Harsh Patel  
> > >> wrote:
> > >> Hi
> > >> 
> > >> We were able to optimise the DPDK version. There were couple of things 
> > >> we needed to do.
> > >> 
> > >> We were using tx timeout as 1s/2048, which we found out to be very less. 
> > >> Then we increased the timeout, but we were getting lot of 
> > >> retransmissions.
> > >> 
> > >> So we removed the timeout and sent single packet as soon as we get it. 
> > >> This increased the throughput.
> > >> 

Re: [dpdk-users] Query on handling packets

2019-02-05 Thread Wiles, Keith


> On Feb 5, 2019, at 8:00 AM, Harsh Patel  wrote:
> 
> Hi, 
> One of the mistakes was as follows. ns-3 frees the packet buffer just as it 
> writes to the socket, and thus we thought that we should do the same. But 
> DPDK, while writing, places the packet buffer in the tx descriptor ring and 
> performs the transmission after that on its own. We were freeing early, so 
> sometimes the packets were lost, i.e. freed before transmission. 
> 
> Another thing was that as you suggested earlier we compiled the whole ns-3 in 
> optimized mode. That improved the performance. 
> 
> These 2 things combined got us the desired results. 

Excellent, thanks.
> 
> Regards, 
> Harsh & Hrishikesh 
> 
> On Tue, Feb 5, 2019, 18:33 Wiles, Keith  wrote:
> 
> 
> > On Feb 5, 2019, at 12:37 AM, Harsh Patel  wrote:
> > 
> > Hi, 
> > 
> > We would like to inform you that our code is working as expected and we are 
> > able to obtain 95-98 Mbps data rate for a 100Mbps application rate. We are 
> > now working on the testing of the code. Thanks a lot, especially to Keith 
> > for all the help you provided.
> > 
> > We have 2 main queries :-
> > 1) We wanted to calculate Backlog at the NIC Tx Descriptors but were not 
> > able to find anything in the documentation. Can you help us in how to 
> > calculate the backlog?
> > 2) We searched on how to use Byte Queue Limit (BQL) on the NIC queue but 
> > couldn't find anything like that in DPDK. Does DPDK support BQL? If so, can 
> > you help us on how to use it for our project?
> 
> What was the last set of problems, if I may ask?
> > 
> > Thanks & Regards
> > Harsh & Hrishikesh
> > 
> > On Thu, 31 Jan 2019 at 22:28, Wiles, Keith  wrote:
> > 
> > 
> > Sent from my iPhone
> > 
> > On Jan 30, 2019, at 5:36 PM, Harsh Patel  wrote:
> > 
> >> Hello, 
> >> 
> >> This mail is to inform you that the integration of DPDK is working with 
> >> ns-3 on a basic level. The model is running. 
> >> For UDP traffic we are getting throughput same or better than raw socket. 
> >> (Around 100Mbps)
> >> But unfortunately for TCP, there are burst packet losses due to which the 
> >> throughput is drastically affected after some point of time. The bandwidth 
> >> of the link used was 100Mbps. 
> >> We have obtained cwnd and ssthresh graphs which show that once the flow 
> >> gets out from Slow Start mode, there are so many packet losses that the 
> >> congestion window & the slow start threshold is not able to go above 4-5 
> >> packets. 
> > 
> > Can you determine where the packets are being dropped?
> >> We have attached the graphs with this mail.
> >> 
> > 
> > I do not see the graphs attached but that’s OK. 
> >> We would like to know if there is any reason to this or how can we fix 
> >> this. 
> > 
> > I think we have to find out where the packets are being dropped this is the 
> > only reason for the case to your referring to. 
> >> 
> >> Thanks & Regards
> >> Harsh & Hrishikesh
> >> 
> >> On Wed, 16 Jan 2019 at 19:25, Harsh Patel  wrote:
> >> Hi
> >> 
> >> We were able to optimise the DPDK version. There were couple of things we 
> >> needed to do.
> >> 
> >> We were using tx timeout as 1s/2048, which we found out to be very less. 
> >> Then we increased the timeout, but we were getting lot of retransmissions.
> >> 
> >> So we removed the timeout and sent single packet as soon as we get it. 
> >> This increased the throughput.
> >> 
> >> Then we used DPDK feature to launch function on core, and gave a dedicated 
> >> core for Rx. This increased the throughput further.
> >> 
> >> The code is working really well for low bandwidth (<~50Mbps) and is 
> >> outperforming raw socket version.
> >> But for high bandwidth, we are getting packet length mismatches for some 
> >> reason. We are investigating it.
> >> 
> >> We really thank you for the suggestions given by you and also for keeping 
> >> the patience for last couple of months. 
> >> 
> >> Thank you
> >> 
> >> Regards, 
> >> Harsh & Hrishikesh 
> >> 
> >> On Fri, Jan 4, 2019, 11:27 Harsh Patel  wrote:
> >> Yes that would be helpful. 
> >> It'd be ok for now to use the same dpdk version to overcome the build 
> >> issues. 
> >> We will look into updating the code for latest versions once we get past 
> >> this problem. 

Re: [dpdk-users] Query on handling packets

2019-02-05 Thread Wiles, Keith


> On Feb 5, 2019, at 12:37 AM, Harsh Patel  wrote:
> 
> Hi, 
> 
> We would like to inform you that our code is working as expected and we are 
> able to obtain 95-98 Mbps data rate for a 100Mbps application rate. We are 
> now working on the testing of the code. Thanks a lot, especially to Keith for 
> all the help you provided.
> 
> We have 2 main queries :-
> 1) We wanted to calculate Backlog at the NIC Tx Descriptors but were not able 
> to find anything in the documentation. Can you help us in how to calculate 
> the backlog?
> 2) We searched on how to use Byte Queue Limit (BQL) on the NIC queue but 
> couldn't find anything like that in DPDK. Does DPDK support BQL? If so, can 
> you help us on how to use it for our project?

What was the last set of problems, if I may ask?
> 
> Thanks & Regards
> Harsh & Hrishikesh
> 
> On Thu, 31 Jan 2019 at 22:28, Wiles, Keith  wrote:
> 
> 
> Sent from my iPhone
> 
> On Jan 30, 2019, at 5:36 PM, Harsh Patel  wrote:
> 
>> Hello, 
>> 
>> This mail is to inform you that the integration of DPDK is working with ns-3 
>> on a basic level. The model is running. 
>> For UDP traffic we are getting throughput same or better than raw socket. 
>> (Around 100Mbps)
>> But unfortunately for TCP, there are burst packet losses due to which the 
>> throughput is drastically affected after some point of time. The bandwidth 
>> of the link used was 100Mbps. 
>> We have obtained cwnd and ssthresh graphs which show that once the flow gets 
>> out from Slow Start mode, there are so many packet losses that the 
>> congestion window & the slow start threshold is not able to go above 4-5 
>> packets. 
> 
> Can you determine where the packets are being dropped?
>> We have attached the graphs with this mail.
>> 
> 
> I do not see the graphs attached but that’s OK. 
>> We would like to know if there is any reason to this or how can we fix this. 
> 
> I think we have to find out where the packets are being dropped this is the 
> only reason for the case to your referring to. 
>> 
>> Thanks & Regards
>> Harsh & Hrishikesh
>> 
>> On Wed, 16 Jan 2019 at 19:25, Harsh Patel  wrote:
>> Hi
>> 
>> We were able to optimise the DPDK version. There were couple of things we 
>> needed to do.
>> 
>> We were using tx timeout as 1s/2048, which we found out to be very less. 
>> Then we increased the timeout, but we were getting lot of retransmissions.
>> 
>> So we removed the timeout and sent single packet as soon as we get it. This 
>> increased the throughput.
>> 
>> Then we used DPDK feature to launch function on core, and gave a dedicated 
>> core for Rx. This increased the throughput further.
>> 
>> The code is working really well for low bandwidth (<~50Mbps) and is 
>> outperforming raw socket version.
>> But for high bandwidth, we are getting packet length mismatches for some 
>> reason. We are investigating it.
>> 
>> We really thank you for the suggestions given by you and also for keeping 
>> the patience for last couple of months. 
>> 
>> Thank you
>> 
>> Regards, 
>> Harsh & Hrishikesh 
>> 
>> On Fri, Jan 4, 2019, 11:27 Harsh Patel  wrote:
>> Yes that would be helpful. 
>> It'd be ok for now to use the same dpdk version to overcome the build 
>> issues. 
>> We will look into updating the code for latest versions once we get past 
>> this problem. 
>> 
>> Thank you very much. 
>> 
>> Regards, 
>> Harsh & Hrishikesh
>> 
>> On Fri, Jan 4, 2019, 04:13 Wiles, Keith  wrote:
>> 
>> 
>> > On Jan 3, 2019, at 12:12 PM, Harsh Patel  wrote:
>> > 
>> > Hi
>> > 
>> > We applied your suggestion of removing the `IsLinkUp()` call. But the 
>> > performace is even worse. We could only get around 340kbits/s.
>> > 
>> > The Top Hotspots are:
>> > 
>> > Function                          Module                                CPU Time
>> > eth_em_recv_pkts                  librte_pmd_e1000.so                   15.106s
>> > rte_delay_us_block                librte_eal.so.6.1                     7.372s
>> > ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so    5.080s
>> > rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so    3.558s
>> > ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so    3.364s
>> > [Others]                                                                4.760s
>> 
>> Performance reduced by removing that link status check, that is weird.
>> > 
>> > Upon checking the callers of `rte_delay_us_block`, we got to know that 
>> > most of the time (92%) spent in this function is during initialization.

Re: [dpdk-users] Create new driver error

2019-01-31 Thread Wiles, Keith



Sent from my iPhone

> On Jan 31, 2019, at 2:44 AM, Oscar Pap  wrote:
> 
> Hello!
> 
> I´m trying to create a new driver called "baseband_ldpc" for baseband but I
> get this error:
> 
> EAL: failed to parse device "baseband_ldpc"
> EAL: Unable to parse device 'baseband_ldpc'
> 
> 
> I created the driver in the /dpdk/drivers/baseband/ldpc folder.
> The file is called ldpc.c
> I created a copy of the Makefile from the other drivers but changed these
> lines:
> 
> LIB =librte_pmd_bbdev_ldpc.a
> ...
> EXPORT_MAP :=rte_pmd_bbdev_ldpc_version.map
> ...
> SRCS-$(CONFIG_RTELIBRTE_PMD_BBDEV_LDPC) += ldpc.c
> 
> 
> I added this line in the Makefile in the folder /dpdk/drivers/baseband
> DIRS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_LDPC) += ldpc
> DEPDIRS-ldpc = $(core-libs)
> 
> I also added these lines in the common_base file:
> CONFIG_RTE_LIBRTE_PMD_BBDEV_LDPC=y
> 
> My guess is that the error has something to do with vdev and that my
> baseband_ldpc doesn't get registered as a valid vdev.
> 
> My command for running the app is:
> ./build/bbdev --vdev='baseband_ldpc' --vdev='net_tap0' --no-pci
> 

Did you add your driver to the mk/rte.app.mk file?

> I hope that you can help with this problem.
> 
> Best regards,
> Oscar
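Following up on Keith's question about `mk/rte.app.mk`: with the legacy make build, a statically linked PMD must also appear in the application link list there, otherwise its registration constructor never runs and EAL cannot parse the vdev name. A hedged sketch of the kind of line needed, with the config symbol and library name assumed from the Makefile quoted above:

```makefile
# Assumed addition to mk/rte.app.mk, near the other PMD _LDLIBS entries;
# names follow the Makefile lines shown in the email above.
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_LDPC) += -lrte_pmd_bbdev_ldpc
```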


Re: [dpdk-users] Query on handling packets

2019-01-31 Thread Wiles, Keith



Sent from my iPhone

On Jan 30, 2019, at 5:36 PM, Harsh Patel  wrote:

Hello,

This mail is to inform you that the integration of DPDK is working with ns-3 on 
a basic level. The model is running.
For UDP traffic we are getting throughput same or better than raw socket. 
(Around 100Mbps)
But unfortunately for TCP, there are burst packet losses due to which the 
throughput is drastically affected after some point of time. The bandwidth of 
the link used was 100Mbps.
We have obtained cwnd and ssthresh graphs which show that once the flow gets 
out from Slow Start mode, there are so many packet losses that the congestion 
window & the slow start threshold is not able to go above 4-5 packets.

Can you determine where the packets are being dropped?
We have attached the graphs with this mail.


I do not see the graphs attached but that’s OK.
We would like to know if there is any reason to this or how can we fix this.

I think we have to find out where the packets are being dropped this is the 
only reason for the case to your referring to.

Thanks & Regards
Harsh & Hrishikesh

On Wed, 16 Jan 2019 at 19:25, Harsh Patel  wrote:
Hi

We were able to optimise the DPDK version. There were couple of things we 
needed to do.

We were using tx timeout as 1s/2048, which we found out to be very less. Then 
we increased the timeout, but we were getting lot of retransmissions.

So we removed the timeout and sent single packet as soon as we get it. This 
increased the throughput.

Then we used DPDK feature to launch function on core, and gave a dedicated core 
for Rx. This increased the throughput further.

The code is working really well for low bandwidth (<~50Mbps) and is 
outperforming raw socket version.
But for high bandwidth, we are getting packet length mismatches for some 
reason. We are investigating it.

We really thank you for the suggestions given by you and also for keeping the 
patience for last couple of months.

Thank you

Regards,
Harsh & Hrishikesh

On Fri, Jan 4, 2019, 11:27 Harsh Patel  wrote:
Yes that would be helpful.
It'd be ok for now to use the same dpdk version to overcome the build issues.
We will look into updating the code for latest versions once we get past this 
problem.

Thank you very much.

Regards,
Harsh & Hrishikesh

On Fri, Jan 4, 2019, 04:13 Wiles, Keith  wrote:


> On Jan 3, 2019, at 12:12 PM, Harsh Patel  wrote:
>
> Hi
>
> We applied your suggestion of removing the `IsLinkUp()` call. But the 
> performace is even worse. We could only get around 340kbits/s.
>
> The Top Hotspots are:
>
> Function                          Module                                CPU Time
> eth_em_recv_pkts                  librte_pmd_e1000.so                   15.106s
> rte_delay_us_block                librte_eal.so.6.1                     7.372s
> ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so    5.080s
> rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so    3.558s
> ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so    3.364s
> [Others]                                                                4.760s

Performance reduced by removing that link status check, that is weird.
>
> Upon checking the callers of `rte_delay_us_block`, we got to know that most 
> of the time (92%) spent in this function is during initialization.
> This does not waste our processing time during communication. So, it's a good 
> start to our optimization.
>
> Callers                                 CPU Time: Total   CPU Time: Self
> rte_delay_us_block                      100.0%            7.372s
>   e1000_enable_ulp_lpt_lp               92.3%             6.804s
>   e1000_write_phy_reg_mdic              1.8%              0.136s
>   e1000_reset_hw_ich8lan                1.7%              0.128s
>   e1000_read_phy_reg_mdic               1.4%              0.104s
>   eth_em_link_update                    1.4%              0.100s
>   e1000_get_cfg_done_generic            0.7%              0.052s
>   e1000_post_phy_reset_ich8lan.part.18  0.7%              0.048s

I guess you are having VTune start your application, and that is why you have 
init-time items in your log. I normally start my application and then attach 
VTune to it; one of the options in the VTune project configuration is to attach 
to the application. Maybe it would help here.

Looking at the data you provided, it was OK. The problem is it would not load 
the source files, as I did not have the same build or executable. I tried to 
build the code, but it failed to build and I did not go further. I guess I 
would need the full source tree and the executable you used to really look at 
the problem. I have limited time, but I can try if you like.
>
>
> Effective CPU Utilization: 21.4% (0.856 out of 4)
>
> Here is the link to vtune

Re: [dpdk-users] Not able to compile dpdk-18.02.2 in gcc version 4.4

2019-01-14 Thread Wiles, Keith


> On Jan 14, 2019, at 3:12 AM, Surendran Dhuruvan  
> wrote:
> 
> Hi,
> 
> I am trying to compile dpdk-18.02.2 on RHEL 6.10, which has gcc version 4.4.
> 
> While compiling I am facing the following error:
> 
> In file included from
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_bus.h:53,
> from /root/dpdk-stable-18.02.2/lib/librte_pci/rte_pci.c:17:
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_dev.h:189:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_dev.h:205:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> In file included from /root/dpdk-stable-18.02.2/lib/librte_pci/rte_pci.c:20:
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_eal.h:188:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_eal.h:253:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_eal.h:270:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_eal.h:289:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_eal.h:318:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_eal.h:340:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> /root/dpdk-stable-18.02.2/x86_64-native-linuxapp-gcc/include/rte_eal.h:464:
> error: wrong number of arguments specified for ‘deprecated’ attribute
> make[5]: *** [rte_pci.o] Error 1
> make[4]: *** [librte_pci] Error 2
> make[3]: *** [lib] Error 2
> make[2]: *** [all] Error 2
> make[1]: *** [pre_install] Error 2
> make: *** [install] Error 2

This normally means that ALLOW_EXPERIMENTAL_API is not defined; it is usually 
added to the make command line with the -D option.
> 
> It seems I need gcc 4.5 or above to overcome this error, but my internal
> tools run on gcc 4.4, so I am not able to upgrade the compiler.
> 
> is there any other way to overcome this issue ?
> 
> Thanks in advance !
> 
> -- 
> Surendran
> software engineer,
> HCL-CISCO,
> chennai.

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2019-01-03 Thread Wiles, Keith


> On Jan 3, 2019, at 12:12 PM, Harsh Patel  wrote:
> 
> Hi
> 
> We applied your suggestion of removing the `IsLinkUp()` call. But the 
> performance is even worse. We could only get around 340 kbit/s.
> 
> The Top Hotspots are:
> 
> Function                          Module                              CPU Time
> eth_em_recv_pkts                  librte_pmd_e1000.so                 15.106s
> rte_delay_us_block                librte_eal.so.6.1                   7.372s
> ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so  5.080s
> rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so  3.558s
> ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so  3.364s
> [Others]                                                              4.760s

Performance got worse after removing that link status check? That is weird.
> 
> Upon checking the callers of `rte_delay_us_block`, we got to know that most 
> of the time (92%) spent in this function is during initialization.
> This does not waste our processing time during communication. So, it's a good 
> start to our optimization.
> 
> Callers                                 CPU Time: Total   CPU Time: Self
> rte_delay_us_block                      100.0%            7.372s
>   e1000_enable_ulp_lpt_lp               92.3%             6.804s
>   e1000_write_phy_reg_mdic              1.8%              0.136s
>   e1000_reset_hw_ich8lan                1.7%              0.128s
>   e1000_read_phy_reg_mdic              1.4%              0.104s
>   eth_em_link_update                    1.4%              0.100s
>   e1000_get_cfg_done_generic            0.7%              0.052s
>   e1000_post_phy_reset_ich8lan.part.18  0.7%              0.048s

I guess you are having VTune start your application, and that is why you have 
init-time items in your log. I normally start my application and then attach 
VTune to it; one of the options in the VTune project configuration is to 
attach to a running application. Maybe it would help here.

Looking at the data you provided, it was OK. The problem is it would not load 
the source files, as I did not have the same build or executable. I tried to 
build the code, but it failed to build and I did not go further. I guess I 
would need the full source tree and the executable you used to really look at 
the problem. I have limited time, but I can try if you like.
> 
> 
> Effective CPU Utilization: 21.4% (0.856 out of 4)
> 
> Here is the link to vtune profiling results. 
> https://drive.google.com/open?id=1M6g2iRZq2JGPoDVPwZCxWBo7qzUhvWi5
> 
> Thank you
> 
> Regards
> 
> On Sun, Dec 30, 2018, 06:00 Wiles, Keith  wrote:
> 
> 
> > On Dec 29, 2018, at 4:03 PM, Harsh Patel  wrote:
> > 
> > Hello,
> > As suggested, we tried profiling the application using Intel VTune 
> > Amplifier. We aren't sure how to use these results, so we are attaching 
> > them to this email.
> > 
> > The things we understood were 'Top Hotspots' and 'Effective CPU 
> > utilization'. Following are some of our understandings:
> > 
> > Top Hotspots
> > 
> > Function                          Module                              CPU Time
> > rte_delay_us_block                librte_eal.so.6.1                   15.042s
> > eth_em_recv_pkts                  librte_pmd_e1000.so                 9.544s
> > ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so  3.522s
> > ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so  2.470s
> > rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so  2.456s
> > [Others]                                                              6.656s
> > 
> > We knew about other methods except `rte_delay_us_block`. So we investigated 
> > the callers of this method:
> > 
> > Callers                               Eff. Time %  Spin %  Ovhd %  Eff. Time  Wait: Total  Wait: Self
> > e1000_enable_ulp_lpt_lp               45.6%        0.0%    0.0%    6.860s     0usec        0usec
> > e1000_write_phy_reg_mdic              32.7%        0.0%    0.0%    4.916s     0usec        0usec
> > e1000_read_phy_reg_mdic               19.4%        0.0%    0.0%    2.922s     0usec        0usec
> > e1000_reset_hw_ich8lan                1.0%         0.0%    0.0%    0.143s     0usec        0usec
> > eth_em_link_update                    0.7%         0.0%    0.0%    0.100s     0usec        0usec
> > e1000_post_phy_reset_ich8lan.part.18  0.4%         0.0%    0.0%    0.064s     0usec        0usec
> > e1000_get_cfg_done_generic            0.2%         0.0%    0.0%    0.037s     0usec        0usec
> > 
> > We lack sufficient knowledge to investigate more than this.
> > 
> > Effective CPU utilization
> > 
> > Interestingly, the effective CPU utilization was 20.8% (0.832 out of 4 
> > logical CPUs). We thought this was low. So we compared it with the 
> > raw-socket version of the code, which was even lower at 8.0% (0.318 out 
> > of 4 logical CPUs), and even then it performs much better.
> > 
> > It would be helpful if you give us insights on how to use these results or 
> > point us to some resources to do so. 
> 

Re: [dpdk-users] Query on handling packets

2018-12-29 Thread Wiles, Keith


> On Dec 29, 2018, at 4:03 PM, Harsh Patel  wrote:
> 
> Hello,
> As suggested, we tried profiling the application using Intel VTune Amplifier. 
> We aren't sure how to use these results, so we are attaching them to this 
> email.
> 
> The things we understood were 'Top Hotspots' and 'Effective CPU utilization'. 
> Following are some of our understandings:
> 
> Top Hotspots
> 
> Function                          Module                              CPU Time
> rte_delay_us_block                librte_eal.so.6.1                   15.042s
> eth_em_recv_pkts                  librte_pmd_e1000.so                 9.544s
> ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so  3.522s
> ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so  2.470s
> rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so  2.456s
> [Others]                                                              6.656s
> 
> We knew about other methods except `rte_delay_us_block`. So we investigated 
> the callers of this method:
> 
> Callers                               Eff. Time %  Spin %  Ovhd %  Eff. Time  Wait: Total  Wait: Self
> e1000_enable_ulp_lpt_lp               45.6%        0.0%    0.0%    6.860s     0usec        0usec
> e1000_write_phy_reg_mdic              32.7%        0.0%    0.0%    4.916s     0usec        0usec
> e1000_read_phy_reg_mdic               19.4%        0.0%    0.0%    2.922s     0usec        0usec
> e1000_reset_hw_ich8lan                1.0%         0.0%    0.0%    0.143s     0usec        0usec
> eth_em_link_update                    0.7%         0.0%    0.0%    0.100s     0usec        0usec
> e1000_post_phy_reset_ich8lan.part.18  0.4%         0.0%    0.0%    0.064s     0usec        0usec
> e1000_get_cfg_done_generic            0.2%         0.0%    0.0%    0.037s     0usec        0usec
> 
> We lack sufficient knowledge to investigate more than this.
> 
> Effective CPU utilization
> 
> Interestingly, the effective CPU utilization was 20.8% (0.832 out of 4 
> logical CPUs). We thought this was low. So we compared it with the 
> raw-socket version of the code, which was even lower at 8.0% (0.318 out of 
> 4 logical CPUs), and even then it performs much better.
> 
> It would be helpful if you give us insights on how to use these results or 
> point us to some resources to do so. 
> 
> Thank you 
> 

BTW, I was able to build ns3 with DPDK 18.11. It required a couple of changes 
in the DPDK init code in ns3, plus one hack in the rte_mbuf.h file.

I did have a problem including the rte_mbuf.h file from your code. It appears 
the g++ compiler did not like referencing struct rte_mbuf_sched inside the 
rte_mbuf structure. rte_mbuf_sched is declared inside the big union, so as a 
hack I moved the struct definition outside of the rte_mbuf structure and 
replaced the inline struct in the union with 'struct rte_mbuf_sched sched;'. 
I am guessing you are missing some compiler options in your build system, as 
DPDK itself builds just fine without that hack.

The next issue was rxmode and txq_flags. The rxmode structure has changed, so 
I commented out those inits in ns3, and I also commented out the txq_flags 
init code, as those values are now the defaults.

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2018-12-29 Thread Wiles, Keith



> On Dec 29, 2018, at 4:03 PM, Harsh Patel  wrote:
> 
> Hello,
> As suggested, we tried profiling the application using Intel VTune Amplifier. 
> We aren't sure how to use these results, so we are attaching them to this 
> email.
> 
> The things we understood were 'Top Hotspots' and 'Effective CPU utilization'. 
> Following are some of our understandings:
> 
> Top Hotspots
> 
> Function                          Module                              CPU Time
> rte_delay_us_block                librte_eal.so.6.1                   15.042s
> eth_em_recv_pkts                  librte_pmd_e1000.so                 9.544s
> ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so  3.522s
> ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so  2.470s
> rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so  2.456s
> [Others]                                                              6.656s
> 
> We knew about other methods except `rte_delay_us_block`. So we investigated 
> the callers of this method:
> 
> Callers                               Eff. Time %  Spin %  Ovhd %  Eff. Time  Wait: Total  Wait: Self
> e1000_enable_ulp_lpt_lp               45.6%        0.0%    0.0%    6.860s     0usec        0usec
> e1000_write_phy_reg_mdic              32.7%        0.0%    0.0%    4.916s     0usec        0usec
> e1000_read_phy_reg_mdic               19.4%        0.0%    0.0%    2.922s     0usec        0usec
> e1000_reset_hw_ich8lan                1.0%         0.0%    0.0%    0.143s     0usec        0usec
> eth_em_link_update                    0.7%         0.0%    0.0%    0.100s     0usec        0usec
> e1000_post_phy_reset_ich8lan.part.18  0.4%         0.0%    0.0%    0.064s     0usec        0usec
> e1000_get_cfg_done_generic            0.2%         0.0%    0.0%    0.037s     0usec        0usec
> 
> We lack sufficient knowledge to investigate more than this.
> 
> Effective CPU utilization
> 
> Interestingly, the effective CPU utilization was 20.8% (0.832 out of 4 
> logical CPUs). We thought this was low. So we compared it with the 
> raw-socket version of the code, which was even lower at 8.0% (0.318 out of 
> 4 logical CPUs), and even then it performs much better.
> 
> It would be helpful if you give us insights on how to use these results or 
> point us to some resources to do so. 

I tracked the rte_delay_us_block down to the SendFrom() function calling the 
IsLinkUp() function, and it appears to call that routine on every SendFrom() 
call, which for the e1000 must be a very expensive operation. So rework your 
code to not call IsLinkUp() except every so often. I believe you can enable 
the link status interrupt in DPDK to take an interrupt on a link status 
change, which would be better than polling this routine. How you do that I am 
not sure, but it should be in the docs someplace.

For now I would remove the IsLinkUp() call and just assume the link is up 
after you hit it the first time in the Setup call function.

> 
> Thank you 
> 
> Regards
> Harsh & Hrishikesh
> 

Regards,
Keith



Re: [dpdk-users] it seems something wrong in pktgen .

2018-12-29 Thread Wiles, Keith


> On Dec 29, 2018, at 1:14 AM, Vic Wang(BJ-RD)  wrote:
> 
> Hi,
> 

You did not tell me which DPDK and pktgen version you are using.

> I use dpdk and pktgen-dpdk to test the performance of the platform.
> The system configuration is as follows:
> Sender: Intel i5-6600K with two Samsung DDR4 DIMMs (dual channel), one 
> Intel XL710 40G NIC.
> Command: ./pktgen -C 0xf -n 2 -- -P -m "2.0,3.1"
> The packet size is set to 64 bytes.

This is not the problem, but see the answer below. You are using four cores 
and three is the number you need, so change the option from -C 0xf to -l 1-3. 
The -l option is easier for most people to read and avoids mistakes with core 
counts and numbers. Always use -l for all DPDK programs.

> 
> Receiver: Intel i5-6600K with two Samsung DDR4 DIMMs (dual channel), one 
> Intel XL710 40G NIC.
> Command: ./testpmd -C 0xf -n 2 -- -i --portmask=0x3 --coremask=0xc
> However, when it starts to forward packets, pktgen stops after sending 
> several thousand packets.

I do not use testpmd much, so 0xf may be overkill here as well; you may only 
need -l 2-3, but I could be wrong. I normally use pktgen on both sides 
instead of l3fwd or testpmd.

> 
> I traced the issue, and I found something that seems wrong in the pktgen 
> program.
> In pktgen_send_pkts, if pg_pktmbuf_alloc_bulk fails because there are no 
> buffers, it can never reach pktgen_send_burst. So it can't free mbufs in 
> pktgen_send_burst, and pg_pktmbuf_alloc_bulk will then fail on every loop.
>> static __inline__ void
>> pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>> {
>>uint32_t flags;
>>int rc = 0;
>> 
>>flags = rte_atomic32_read(&info->port_flags);
>> 
>>if (flags & SEND_FOREVER) {
>>   rc = pg_pktmbuf_alloc_bulk(mp,
>>   info->q[qid].tx_mbufs.m_table,
>>   info->tx_burst);
>>   if (rc == 0) {
>>  info->q[qid].tx_mbufs.len = info->tx_burst;
>>  info->q[qid].tx_cnt += info->tx_burst;
>> 
>>  pktgen_send_burst(info, qid);
>>   }
>>} else {
>>   int64_t txCnt;
> 
> This is my thought, please comment me.

Someone already reported this problem and I have it fixed in the 'dev' branch 
of pktgen, but they have not replied to confirm that it fixed the problem.

Please grab the 'dev' branch from the git repo and try it to see if that 
fixes the problem.

> 
> Best Regards!
> VicWang
> 
> 
> 
> CONFIDENTIAL NOTE:
> This email contains confidential or legally privileged information and is for 
> the sole use of its intended recipient. Any unauthorized review, use, copying 
> or forwarding of this email or the content of this email is strictly 
> prohibited.

Regards,
Keith



[dpdk-users] uncrustify configuration file.

2018-12-28 Thread Wiles, Keith
I have been playing with uncrustify http://uncrustify.sourceforge.net/ and 
working on a configuration file for DPDK coding style.

The configuration file located here 
https://github.com/pktgen/uncrustify-dpdk.git seems to get close the right 
style. Maybe it could be a good start on making sure submitted code is close to 
the right style. I give no guarantees and make sure you review the changes to 
your code after running this config.

If you want to make it better then you can fork it on GitHub and send me a pull 
request.

Regards,
Keith



Re: [dpdk-users] error while compiling pktgen-dpdk on RHEL/Centos 7.5

2018-12-05 Thread Wiles, Keith


> On Dec 5, 2018, at 12:57 PM, Murali Krishna  
> wrote:
> 
> Hi Keith,
> 
> Thanks for pointing me to the docs and the .pc file example. I tried to
> build the lua package manually using the steps below:
> 
> Step 1. updated kernel
> 
> Step 2. removed existing lua-devel package
> 
> Step 3. downloaded lua-5.3.2.tar.gz and made the below changes:
> In /usr/local/src/lua-5.3.2/Makefile, changed "TO_LIB= liblua.a" to "TO_LIB=
> liblua.a liblua.so"
> 
> In /usr/local/src/lua-5.3.2/src/Makefile
> Changed "CFLAGS= -O2 -Wall -Wextra -DLUA_COMPAT_5_2 $(SYSCFLAGS)
> $(MYCFLAGS)" to " CFLAGS= -O2 -Wall -Wextra -DLUA_COMPAT_5_2 $(SYSCFLAGS)
> $(MYCFLAGS) -fPIC"
> Added:
> LUA_A=  liblua.a
> LUA_SO= liblua.so
> Changed "ALL_T= $(LUA_A) $(LUA_T) $(LUAC_T)" to "ALL_T= $(LUA_A) $(LUA_T)
> $(LUAC_T) $(LUA_SO)"
> Added:
> $(LUA_T): $(LUA_O) $(LUA_A)
>$(CC) -o $@ $(LDFLAGS) $(LUA_O) $(LUA_A) $(LIBS)
> $(LUAC_T): $(LUAC_O) $(LUA_A)
>$(CC) -o $@ $(LDFLAGS) $(LUAC_O) $(LUA_A) $(LIBS)
> $(LUA_SO): $(CORE_O) $(LIB_O)
>$(CC) -o $@ -shared $?
> 
> 
> Step 4: exported C_INCLUDE_PATH variable
> export C_INCLUDE_PATH=/usr/local/src/lua-5.3.2/src
> 
> Step 5: manually added below package configuration file.
> # cat /usr/lib64/pkgconfig/lua-5.3.pc
> V= 5.3
> R= 5.3.2
> prefix= /usr
> exec_prefix=${prefix}
> libdir= /usr/lib64
> includedir=${prefix}/include
> 
> Name: Lua
> Description: An Extensible Extension Language
> Version: ${R}
> Requires:
> Libs: -llua-${V} -lm -ldl
> Cflags: -I${includedir}/lua-${V}
> 
> Step 6: added below configuration file
> # cat /usr/lib64/pkgconfig/lua5.3.pc
> V=5.3
> R=5.3.2
> 
> prefix=/usr
> INSTALL_BIN=${prefix}/bin
> INSTALL_INC=${prefix}/include
> INSTALL_LIB=${prefix}/lib
> INSTALL_MAN=${prefix}/share/man/man1
> INSTALL_LMOD=${prefix}/share/lua/${V}
> INSTALL_CMOD=${prefix}/lib/lua/${V}
> exec_prefix=${prefix}
> libdir=${exec_prefix}/lib
> includedir=${prefix}/include
> 
> Name: Lua
> Description: An Extensible Extension Language
> Version: ${R}
> Requires:
> Libs: -L${libdir} -llua -lm -ldl
> Cflags: -I${includedir}
> 
> 
> Step 7: Moved to the lua-5.3.2/src directory and ran `make linux`
> Step 8: Copied the newly built libraries to the lib64 folder
> # cp /usr/local/src/lua-5.3.2/src/liblua.so /usr/lib64/liblua-5.3.so
> # cp /usr/local/src/lua-5.3.2/src/liblua.a /usr/lib64/liblua-5.3.a
> 
> Now when I try to build latest pktgen-dpdk-3.5.9 or pktgen-dpdk from dev
> branch, I get the below error.
> # make -j40
> == lib
> == common
>  CC copyright_info.o
>  CC port_config.o
>  CC core_info.o
>  CC cmdline_parse_args.o
>  CC lscpu.o
>  CC utils.o
>  CC coremap.o
>  CC _pcap.o
>  CC cksum.o
>  CC l2p.o
>  AR libcommon.a
>  INSTALL-LIB libcommon.a
> == utils
>  CC rte_strings.o
>  CC rte_link.o
>  CC parson_json.o
>  CC rte_atoip.o
>  CC rte_portlist.o
>  CC inet_pton.o
>  CC rte_heap.o
>  AR libutils.a
>  INSTALL-LIB libutils.a
> == vec
>  CC rte_vec.o
>  AR libvec.a
>  INSTALL-LIB libvec.a
> == lua
>  CC rte_lua.o
>  CC rte_lua_stdio.o
>  CC rte_lua_utils.o
>  CC rte_lua_socket.o
>  CC rte_lua_dpdk.o
>  CC rte_lua_pktmbuf.o
>  CC rte_lua_vec.o
>  CC rte_lua_dapi.o
>  AR libpktgen_lua.a
>  INSTALL-LIB libpktgen_lua.a
> == cli
>  CC cli.o
>  CC cli_input.o
>  CC cli_cmds.o
>  CC cli_map.o
>  CC cli_gapbuf.o
>  CC cli_file.o
>  CC cli_env.o
>  CC cli_auto_complete.o
>  CC cli_help.o
>  CC cli_history.o
>  CC cli_search.o
>  CC cli_cmap.o
>  CC cli_vt100.o
>  CC cli_scrn.o
>  AR libcli.a
>  INSTALL-LIB libcli.a
> == app
>  CC cli-functions.o
>  CC lpktgenlib.o
>  CC pktgen-cmds.o
>  CC pktgen.o
>  CC pktgen-cfg.o
>  CC pktgen-main.o
>  CC pktgen-pcap.o
>  CC pktgen-range.o
>  CC pktgen-cpu.o
>  CC pktgen-seq.o
>  CC pktgen-dump.o
>  CC pktgen-capture.o
>  CC pktgen-stats.o
>  CC pktgen-port-cfg.o
>  CC pktgen-ipv6.o
>  CC pktgen-ipv4.o
>  CC pktgen-arp.o
>  CC pktgen-gre.o
>  CC pktgen-ether.o
>  CC pktgen-tcp.o
>  CC pktgen-udp.o
>  CC pktgen-vlan.o
>  CC pktgen-random.o
>  CC pktgen-display.o
>  CC pktgen-log.o
>  CC pktgen-gtpu.o
>  CC pktgen-latency.o
>  LD pktgen
> /usr/bin/ld: cannot find -lpktgen_lua
> collect2: error: ld returned 1 exit status
> make[2]: *** [pktgen] Error 1
> make[1]: *** [all] Error 2
> make: *** [app] Error 2

I did not see it build the common/lua directory as that is where the library 
comes from. I am out of town and it will be hard for me to debug this one 
remotely. It may take a few days s

Re: [dpdk-users] error while compiling pktgen-dpdk on RHEL/Centos 7.5

2018-12-05 Thread Wiles, Keith


> On Dec 5, 2018, at 7:56 AM, Wiles, Keith  wrote:
> 
> 
> 
>> On Dec 5, 2018, at 2:23 AM, Murali Krishna  
>> wrote:
>> 
>> Hi,
>> 
>> 
>> 
>> I am trying to compile pktgen-3.5.8 on RHEL/Centos 7.5 kernel. I see below
>> error while compiling it from source.
>> 
>> 
>> 
>> # cd pktgen-3.5.8/
>> 
>> # make
>> 
>> == lib
>> 
>> == common
>> 
>> == utils
>> 
>> == vec
>> 
>> == lua
>> 
>> Package lua5.3 was not found in the pkg-config search path.
>> 
>> Perhaps you should add the directory containing `lua5.3.pc'
>> 
>> to the PKG_CONFIG_PATH environment variable
>> 
>> No package 'lua5.3' found
>> 
>> Package lua5.3 was not found in the pkg-config search path.
>> 
>> Perhaps you should add the directory containing `lua5.3.pc'
>> 
>> to the PKG_CONFIG_PATH environment variable
>> 
>> No package 'lua5.3' found
>> 
>> CC rte_lua.o
>> 
>> In file included from /root/pkt/pktgen-3.5.8/lib/lua/rte_lua.c:27:0:
>> 
>> /root/pkt/pktgen-3.5.8/lib/lua/rte_lua.h:25:17: fatal error: lua.h: No such
>> file or directory
>> 
>> #include 
>> 
>>^
>> 
>> compilation terminated.
>> 
>> make[3]: *** [rte_lua.o] Error 1
>> 
>> make[2]: *** [all] Error 2
>> 
>> make[1]: *** [lua] Error 2
>> 
>> make: *** [lib] Error 2
>> 
>> 
>> 
>> I see the same error even after installing the lua53 packages.
>> 
>> 
>> 
>> # rpm -qa | grep lua5
>> 
>> lua53u-libs-5.3.4-1.ius.centos7.x86_64
>> 
>> lua53u-5.3.4-1.ius.centos7.x86_64
>> 
>> lua53u-devel-5.3.4-1.ius.centos7.x86_64
>> 
>> 
>> 
>> Is there any workaround for this error?
> 
> In the dev branch of Pktgen there is an example pkg-config file to use, and 
> more docs on how to get Lua/Pktgen running on these distros that do not 
> supply a .pc file.
> 
>> 
> 
> https://git.dpdk.org/apps/pktgen-dpdk/?h=dev

Also just updating to the latest Pktgen 3.5.9 will have the updated docs and 
.pc file example.
> 
> 
>> 
>> 
>> 
>> Thanks,
>> 
>> Murali
> 
> Regards,
> Keith

Regards,
Keith



Re: [dpdk-users] error while compiling pktgen-dpdk on RHEL/Centos 7.5

2018-12-05 Thread Wiles, Keith


> On Dec 5, 2018, at 2:23 AM, Murali Krishna  
> wrote:
> 
> Hi,
> 
> 
> 
> I am trying to compile pktgen-3.5.8 on RHEL/Centos 7.5 kernel. I see below
> error while compiling it from source.
> 
> 
> 
> # cd pktgen-3.5.8/
> 
> # make
> 
> == lib
> 
> == common
> 
> == utils
> 
> == vec
> 
> == lua
> 
> Package lua5.3 was not found in the pkg-config search path.
> 
> Perhaps you should add the directory containing `lua5.3.pc'
> 
> to the PKG_CONFIG_PATH environment variable
> 
> No package 'lua5.3' found
> 
> Package lua5.3 was not found in the pkg-config search path.
> 
> Perhaps you should add the directory containing `lua5.3.pc'
> 
> to the PKG_CONFIG_PATH environment variable
> 
> No package 'lua5.3' found
> 
>  CC rte_lua.o
> 
> In file included from /root/pkt/pktgen-3.5.8/lib/lua/rte_lua.c:27:0:
> 
> /root/pkt/pktgen-3.5.8/lib/lua/rte_lua.h:25:17: fatal error: lua.h: No such
> file or directory
> 
> #include 
> 
> ^
> 
> compilation terminated.
> 
> make[3]: *** [rte_lua.o] Error 1
> 
> make[2]: *** [all] Error 2
> 
> make[1]: *** [lua] Error 2
> 
> make: *** [lib] Error 2
> 
> 
> 
> I see the same error even after installing the lua53 packages.
> 
> 
> 
> # rpm -qa | grep lua5
> 
> lua53u-libs-5.3.4-1.ius.centos7.x86_64
> 
> lua53u-5.3.4-1.ius.centos7.x86_64
> 
> lua53u-devel-5.3.4-1.ius.centos7.x86_64
> 
> 
> 
> Is there any workaround for this error?

In the dev branch of Pktgen there is an example pkg-config file to use, and 
more docs on how to get Lua/Pktgen running on these distros that do not 
supply a .pc file.

> 

https://git.dpdk.org/apps/pktgen-dpdk/?h=dev


> 
> 
> 
> Thanks,
> 
> Murali

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2018-11-30 Thread Wiles, Keith



> On Nov 30, 2018, at 3:02 AM, Harsh Patel  wrote:
> 
> Hello,
> Sorry for the long delay, we were busy with some exams.
> 
> 1) About the NUMA sockets
> This is the result of the command you mentioned :-
> ==
> Core and Socket Information (as reported by '/sys/devices/system/cpu')
> ==
> 
> cores =  [0, 1, 2, 3]
> sockets =  [0]
> 
>Socket 0  
>  
> Core 0 [0]   
> Core 1 [1]   
> Core 2 [2]   
> Core 3 [3]
> 
> We don't know much about this and would like your input on what else to be 
> checked or what do we need to do.
> 
> 2) The part where you asked for a graph 
> We used `ps` to analyse which CPU cores are being utilized.
> The raw socket version had two logical threads which used cores 0 and 1.
> The DPDK version had 6 logical threads, which also used cores 0 and 1. This 
> is the case for which we showed you the results.
> As the previous case had 2 cores and was not giving desired results, we tried 
> to give more cores to see if the DPDK in ns-3 code can achieve the desired 
> throughput and pps. (We thought giving more cores might improve the 
> performance.)
> For this new case, we provided 4 total cores using  EAL arguments, upon 
> which, it used cores 0-3. And still we got the same results as the one sent 
> earlier.
> We think this means that the bottleneck is a different problem unrelated to 
> number of cores as of now. (This whole section is an answer to the question 
> in the last paragraph raised by Kyle to which Keith asked for a graph)

In the CPU output above you are running a four-core system with no 
hyper-threads. This means you only have four cores and four threads in DPDK 
terms. Using 6 logical threads will not improve performance in the DPDK case. 
DPDK normally uses a single thread per core. You can have more than one 
pthread per core, but doing so requires the software to switch between 
threads, and context switching is not a performance win in most cases.

I am not sure how your system is set up, and a picture could help.

I will be traveling all next week and responses will be slow.

> 
> 3) About updating the TX_TIMEOUT and storing rte_get_timer_hz()  
> We have not tried this and will try it by today and will send you the status 
> after that in some time. 
> 
> 4) For the suggestion by Stephen
> We are not clear on what you suggested and it would be nice if you elaborate 
> your suggestion.
> 
> Thanks and Regards, 
> Harsh and Hrishikesh
> 
> PS :- We are done with our exams and would be working now on this regularly. 
> 
> On Sun, 25 Nov 2018 at 10:05, Stephen Hemminger  
> wrote:
> On Sat, 24 Nov 2018 16:01:04 +
> "Wiles, Keith"  wrote:
> 
> > > On Nov 22, 2018, at 9:54 AM, Harsh Patel  wrote:
> > > 
> > > Hi
> > > 
> > > Thank you so much for the reply and for the solution.
> > > 
> > > We used the given code. We were amazed by the pointer arithmetic you 
> > > used, got to learn something new.
> > > 
> > > But still we are under performing.The same bottleneck of ~2.5Mbps is seen.
> > > 
> > > We also checked if the raw socket was using any extra (logical) cores 
> > > than the DPDK. We found that raw socket has 2 logical threads running on 
> > > 2 logical CPUs. Whereas, the DPDK version has 6 logical threads on 2 
> > > logical CPUs. We also ran the 6 threads on 4 logical CPUs, still we see 
> > > the same bottleneck.
> > > 
> > > We have updated our code (you can use the same links from previous mail). 
> > > It would be helpful if you could help us in finding what causes the 
> > > bottleneck.  
> > 
> > I looked at the code for a few seconds and noticed your TX_TIMEOUT is a 
> > macro that calls (rte_get_timer_hz()/2014). Just to be safe I would not 
> > call rte_get_timer_hz() each time; grab the value once, store the hz 
> > locally, and use that variable instead. This will not improve 
> > performance, is my guess, and I would have to look at the code of that 
> > routine to see if storing the value locally buys you anything. If getting 
> > the hz is just a simple read of a variable then fine, but you should 
> > still keep a local variable within the object to hold 
> > (rte_get_timer_hz()/2048) instead of doing the call and divide each time.
> > 
> > > 
> > > Thanks and Regards, 
> > > Harsh and Hrishikesh 
> > >  
> > > 
> > > On Mon, Nov 19, 2018, 19:19 Wiles, Keith  wrot

Re: [dpdk-users] Query on handling packets

2018-11-24 Thread Wiles, Keith



> On Nov 22, 2018, at 9:54 AM, Harsh Patel  wrote:
> 
> Hi
> 
> Thank you so much for the reply and for the solution.
> 
> We used the given code. We were amazed by the pointer arithmetic you used, 
> got to learn something new.
> 
> But still we are under performing.The same bottleneck of ~2.5Mbps is seen.
> 
> We also checked if the raw socket was using any extra (logical) cores than 
> the DPDK. We found that raw socket has 2 logical threads running on 2 logical 
> CPUs. Whereas, the DPDK version has 6 logical threads on 2 logical CPUs. We 
> also ran the 6 threads on 4 logical CPUs, still we see the same bottleneck.
> 
> We have updated our code (you can use the same links from previous mail). It 
> would be helpful if you could help us in finding what causes the bottleneck.

I looked at the code for a few seconds and noticed your TX_TIMEOUT is a macro 
that calls (rte_get_timer_hz()/2014). Just to be safe I would not call 
rte_get_timer_hz() each time; grab the value once, store the hz locally, and 
use that variable instead. This will not improve performance, is my guess, 
and I would have to look at the code of that routine to see if storing the 
value locally buys you anything. If getting the hz is just a simple read of a 
variable then fine, but you should still keep a local variable within the 
object to hold (rte_get_timer_hz()/2048) instead of doing the call and divide 
each time.

> 
> Thanks and Regards, 
> Harsh and Hrishikesh 
> 
> 
> On Mon, Nov 19, 2018, 19:19 Wiles, Keith  wrote:
> 
> 
> > On Nov 17, 2018, at 4:05 PM, Kyle Larose  wrote:
> > 
> > On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel  
> > wrote:
> >> 
> >> Hello,
> >> Thanks a lot for going through the code and providing us with so much
> >> information.
> >> We removed all the memcpy/malloc from the data path as you suggested and
> > ...
> >> After removing this, we are able to see a performance gain but not as good
> >> as raw socket.
> >> 
> > 
> > You're using an unordered_map to map your buffer pointers back to the
> > mbufs. While it may not do a memcpy all the time, It will likely end
> > up doing a malloc arbitrarily when you insert or remove entries from
> > the map. If it needs to resize the table, it'll be even worse. You may
> > want to consider using librte_hash:
> > https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> > see if you can design the system to avoid needing to do a lookup like
> > this. Can you return a handle with the mbuf pointer and the data
> > together?
> > 
> > You're also using floating point math where it's unnecessary (the
> > timing check). Just multiply the numerator by 100 prior to doing
> > the division. I doubt you'll overflow a uint64_t with that. It's not
> > as efficient as integer math, though I'm not sure offhand it'd cause a
> > major perf problem.
> > 
> > One final thing: using a raw socket, the kernel will take over
> > transmitting and receiving to the NIC itself. that means it is free to
> > use multiple CPUs for the rx and tx. I notice that you only have one
> > rx/tx queue, meaning at most one CPU can send and receive packets.
> > When running your performance test with the raw socket, you may want
> > to see how busy the system is doing packet sends and receives. Is it
> > using more than one CPU's worth of processing? Is it using less, but
> > when combined with your main application's usage, the overall system
> > is still using more than one?
> 
> Along with the floating point math, I would remove all floating point math 
> and use the rte_rdtsc() function to use cycles. Using something like:
> 
> uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16);   /* One 16th 
> of a second; use 2/4/8/16/32 power-of-two numbers to make the divide simple */
> 
> cur_tsc = rte_rdtsc();
> 
> next_tsc = cur_tsc + timo; /* Now next_tsc the next time to flush */
> 
> while(1) {
> cur_tsc = rte_rdtsc();
> if (cur_tsc >= next_tsc) {
> flush();
> next_tsc += timo;
> }
> /* Do other stuff */
> }
> 
> For the m_bufPktMap I would use the rte_hash or do not use a hash at all by 
> grabbing the buffer address and subtract the
> mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + 
> RTE_MAX_HEADROOM);
> 
> 
> DpdkNetDevice:Write(uint8_t *buffer, size_t length)
> {
> struct rte_mbuf *pkt;
> uint64_t cur_tsc;
> 
> pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) 
> + RTE_MAX_HEADROOM);
> 
>

Re: [dpdk-users] Query on handling packets

2018-11-24 Thread Wiles, Keith



> On Nov 24, 2018, at 9:43 AM, Wiles, Keith  wrote:
> 
> 
> 
>> On Nov 22, 2018, at 9:54 AM, Harsh Patel  wrote:
>> 
>> Hi
>> 
>> Thank you so much for the reply and for the solution.
>> 
>> We used the given code. We were amazed by the pointer arithmetic you used, 
>> got to learn something new.
>> 
>> But still we are underperforming. The same bottleneck of ~2.5Mbps is seen.
> 
> Make sure the cores you are using are on the same NUMA or socket the PCI 
> devices are located.
> 
> If you have two CPUs or sockets in your system. The cpu_layout.py script will 
> help you understand the layout of the cores and/or lcores in the system.
> 
> On my machine the PCI bus is connected to socket 1 and not socket 0, this 
> means I have to use lcores only on socket 1. Some systems have two PCI buses 
> one on each socket. Accessing data from one NUMA zone or socket to another 
> can effect performance and should be avoided.
> 
> HTH
>> 
>> We also checked if the raw socket was using any extra (logical) cores than 
>> the DPDK. We found that raw socket has 2 logical threads running on 2 
>> logical CPUs. Whereas, the DPDK version has 6 logical threads on 2 logical 
>> CPUs. We also ran the 6 threads on 4 logical CPUs, still we see the same 
>> bottleneck.

Not sure what you are trying to tell me here, but a picture could help me a lot.

>> 
>> We have updated our code (you can use the same links from previous mail). It 
>> would be helpful if you could help us in finding what causes the bottleneck.
>> 
>> Thanks and Regards, 
>> Harsh and Hrishikesh 
>> 
>> 
>> On Mon, Nov 19, 2018, 19:19 Wiles, Keith  wrote:
>> 
>> 
>>> On Nov 17, 2018, at 4:05 PM, Kyle Larose  wrote:
>>> 
>>> On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel  
>>> wrote:
>>>> 
>>>> Hello,
>>>> Thanks a lot for going through the code and providing us with so much
>>>> information.
>>>> We removed all the memcpy/malloc from the data path as you suggested and
>>> ...
>>>> After removing this, we are able to see a performance gain but not as good
>>>> as raw socket.
>>>> 
>>> 
>>> You're using an unordered_map to map your buffer pointers back to the
>>> mbufs. While it may not do a memcpy all the time, It will likely end
>>> up doing a malloc arbitrarily when you insert or remove entries from
>>> the map. If it needs to resize the table, it'll be even worse. You may
>>> want to consider using librte_hash:
>>> https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
>>> see if you can design the system to avoid needing to do a lookup like
>>> this. Can you return a handle with the mbuf pointer and the data
>>> together?
>>> 
>>> You're also using floating point math where it's unnecessary (the
>>> timing check). Just multiply the numerator by 100 prior to doing
>>> the division. I doubt you'll overflow a uint64_t with that. It's not
>>> as efficient as integer math, though I'm not sure offhand it'd cause a
>>> major perf problem.
>>> 
>>> One final thing: using a raw socket, the kernel will take over
>>> transmitting and receiving to the NIC itself. that means it is free to
>>> use multiple CPUs for the rx and tx. I notice that you only have one
>>> rx/tx queue, meaning at most one CPU can send and receive packets.
>>> When running your performance test with the raw socket, you may want
>>> to see how busy the system is doing packet sends and receives. Is it
>>> using more than one CPU's worth of processing? Is it using less, but
>>> when combined with your main application's usage, the overall system
>>> is still using more than one?
>> 
>> Along with the floating point math, I would remove all floating point math 
>> and use the rte_rdtsc() function to use cycles. Using something like:
>> 
>> uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16);   /* One 16th 
>> of a second; use 2/4/8/16/32 power-of-two numbers to make the divide simple */
>> 
>> cur_tsc = rte_rdtsc();
>> 
>> next_tsc = cur_tsc + timo; /* Now next_tsc the next time to flush */
>> 
>> while(1) {
>>cur_tsc = rte_rdtsc();
>>if (cur_tsc >= next_tsc) {
>>flush();
>>next_tsc += timo;
>>}
>>/* Do other stuff */
>> }
>> 
>> For the m_bufPktMap I would use the rte_hash or do not use a hash at

Re: [dpdk-users] Query on handling packets

2018-11-24 Thread Wiles, Keith



> On Nov 22, 2018, at 9:54 AM, Harsh Patel  wrote:
> 
> Hi
> 
> Thank you so much for the reply and for the solution.
> 
> We used the given code. We were amazed by the pointer arithmetic you used, 
> got to learn something new.
> 
> But still we are underperforming. The same bottleneck of ~2.5Mbps is seen.

Make sure the cores you are using are on the same NUMA node or socket where the 
PCI devices are located.

If you have two CPUs or sockets in your system, the cpu_layout.py script will 
help you understand the layout of the cores and/or lcores in the system.

On my machine the PCI bus is connected to socket 1 and not socket 0, which means 
I have to use lcores only on socket 1. Some systems have two PCI buses, one on 
each socket. Accessing data from one NUMA zone or socket to another can affect 
performance and should be avoided.

HTH
> 
> We also checked if the raw socket was using any extra (logical) cores than 
> the DPDK. We found that raw socket has 2 logical threads running on 2 logical 
> CPUs. Whereas, the DPDK version has 6 logical threads on 2 logical CPUs. We 
> also ran the 6 threads on 4 logical CPUs, still we see the same bottleneck.
> 
> We have updated our code (you can use the same links from previous mail). It 
> would be helpful if you could help us in finding what causes the bottleneck.
> 
> Thanks and Regards, 
> Harsh and Hrishikesh 
> 
> 
> On Mon, Nov 19, 2018, 19:19 Wiles, Keith  wrote:
> 
> 
> > On Nov 17, 2018, at 4:05 PM, Kyle Larose  wrote:
> > 
> > On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel  
> > wrote:
> >> 
> >> Hello,
> >> Thanks a lot for going through the code and providing us with so much
> >> information.
> >> We removed all the memcpy/malloc from the data path as you suggested and
> > ...
> >> After removing this, we are able to see a performance gain but not as good
> >> as raw socket.
> >> 
> > 
> > You're using an unordered_map to map your buffer pointers back to the
> > mbufs. While it may not do a memcpy all the time, It will likely end
> > up doing a malloc arbitrarily when you insert or remove entries from
> > the map. If it needs to resize the table, it'll be even worse. You may
> > want to consider using librte_hash:
> > https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> > see if you can design the system to avoid needing to do a lookup like
> > this. Can you return a handle with the mbuf pointer and the data
> > together?
> > 
> > You're also using floating point math where it's unnecessary (the
> > timing check). Just multiply the numerator by 100 prior to doing
> > the division. I doubt you'll overflow a uint64_t with that. It's not
> > as efficient as integer math, though I'm not sure offhand it'd cause a
> > major perf problem.
> > 
> > One final thing: using a raw socket, the kernel will take over
> > transmitting and receiving to the NIC itself. that means it is free to
> > use multiple CPUs for the rx and tx. I notice that you only have one
> > rx/tx queue, meaning at most one CPU can send and receive packets.
> > When running your performance test with the raw socket, you may want
> > to see how busy the system is doing packet sends and receives. Is it
> > using more than one CPU's worth of processing? Is it using less, but
> > when combined with your main application's usage, the overall system
> > is still using more than one?
> 
> Along with the floating point math, I would remove all floating point math 
> and use the rte_rdtsc() function to use cycles. Using something like:
> 
> uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16);   /* One 16th 
> of a second; use 2/4/8/16/32 power-of-two numbers to make the divide simple */
> 
> cur_tsc = rte_rdtsc();
> 
> next_tsc = cur_tsc + timo; /* Now next_tsc the next time to flush */
> 
> while(1) {
> cur_tsc = rte_rdtsc();
> if (cur_tsc >= next_tsc) {
> flush();
> next_tsc += timo;
> }
> /* Do other stuff */
> }
> 
> For the m_bufPktMap I would use the rte_hash or do not use a hash at all by 
> grabbing the buffer address and subtract the
> mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + 
> RTE_MAX_HEADROOM);
> 
> 
> DpdkNetDevice:Write(uint8_t *buffer, size_t length)
> {
> struct rte_mbuf *pkt;
> uint64_t cur_tsc;
> 
> pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) 
> + RTE_MAX_HEADROOM);
> 
> /* No need to test pkt, but buffer maybe tested to make sure it is 
> not null above the math

Re: [dpdk-users] Query on handling packets

2018-11-19 Thread Wiles, Keith



> On Nov 17, 2018, at 4:05 PM, Kyle Larose  wrote:
> 
> On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel  wrote:
>> 
>> Hello,
>> Thanks a lot for going through the code and providing us with so much
>> information.
>> We removed all the memcpy/malloc from the data path as you suggested and
> ...
>> After removing this, we are able to see a performance gain but not as good
>> as raw socket.
>> 
> 
> You're using an unordered_map to map your buffer pointers back to the
> mbufs. While it may not do a memcpy all the time, it will likely end
> up doing a malloc arbitrarily when you insert or remove entries from
> the map. If it needs to resize the table, it'll be even worse. You may
> want to consider using librte_hash:
> https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> see if you can design the system to avoid needing to do a lookup like
> this. Can you return a handle with the mbuf pointer and the data
> together?
> 
> You're also using floating point math where it's unnecessary (the
> timing check). Just multiply the numerator by 100 prior to doing
> the division. I doubt you'll overflow a uint64_t with that. It's not
> as efficient as integer math, though I'm not sure offhand it'd cause a
> major perf problem.
> 
> One final thing: using a raw socket, the kernel will take over
> transmitting and receiving to the NIC itself. That means it is free to
> use multiple CPUs for the rx and tx. I notice that you only have one
> rx/tx queue, meaning at most one CPU can send and receive packets.
> When running your performance test with the raw socket, you may want
> to see how busy the system is doing packet sends and receives. Is it
> using more than one CPU's worth of processing? Is it using less, but
> when combined with your main application's usage, the overall system
> is still using more than one?

Along with the floating point math, I would remove all floating point math and 
use the rte_rdtsc() function to work in cycles. Using something like:

uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16); /* One 16th of
a second; use 2/4/8/16/32 power-of-two numbers to make the divide simple */

cur_tsc = rte_rdtsc();

next_tsc = cur_tsc + timo; /* next_tsc is the next time to flush */

while (1) {
    cur_tsc = rte_rdtsc();
    if (cur_tsc >= next_tsc) {
        flush();
        next_tsc += timo;
    }
    /* Do other stuff */
}

For the m_bufPktMap I would use the rte_hash, or do not use a hash at all: grab 
the buffer address and subtract back to the mbuf:

mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_MAX_HEADROOM);


DpdkNetDevice::Write(uint8_t *buffer, size_t length)
{
    struct rte_mbuf *pkt;
    uint64_t cur_tsc;

    pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) + RTE_MAX_HEADROOM);

    /* No need to test pkt, but buffer may be tested above the math to make
       sure it is not NULL */

    pkt->pkt_len = length;
    pkt->data_len = length;

    rte_eth_tx_buffer(m_portId, 0, m_txBuffer, pkt);

    cur_tsc = rte_rdtsc();

    /* next_tsc is a private variable */
    if (cur_tsc >= next_tsc) {
        rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer); /* hardcoded the
            queue id, should be fixed */
        next_tsc = cur_tsc + timo; /* timo is a fixed number of cycles to wait */
    }
    return length;
}

DpdkNetDevice::Read()
{
    struct rte_mbuf *pkt;

    if (m_rxBuffer->length == 0) {
        m_rxBuffer->next = 0;
        m_rxBuffer->length = rte_eth_rx_burst(m_portId, 0, m_rxBuffer->pkts, MAX_PKT_BURST);

        if (m_rxBuffer->length == 0)
            return std::make_pair(NULL, -1);
    }

    pkt = m_rxBuffer->pkts[m_rxBuffer->next++];

    /* do not use rte_pktmbuf_read() as it does a copy of the complete packet */

    return std::make_pair(rte_pktmbuf_mtod(pkt, char *), pkt->pkt_len);
}

void
DpdkNetDevice::FreeBuf(uint8_t *buf)
{
    struct rte_mbuf *pkt;

    if (!buf)
        return;
    pkt = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_MAX_HEADROOM);

    rte_pktmbuf_free(pkt);
}

When your code is done with the buffer, convert the buffer address back to a 
rte_mbuf pointer and call rte_pktmbuf_free(pkt). This should eliminate the copy 
and floating point code. Converting my C code to C++: priceless :-)

Hopefully the buffer address passed in is the original buffer address and has 
not been adjusted.


Regards,
Keith



Re: [dpdk-users] How to receive packets from non-DPDK requests?

2018-11-14 Thread Wiles, Keith


> On Nov 14, 2018, at 1:05 PM, Sungho Hong  wrote:
> 
> Thanks for the reply, but excluding  the ARP, 
> Is it possible to receive POSIX TCP or UDP packets using DPDK-application? 
> 
> Because DPDK application for me right now, only recognizes the packets sent 
> from DPDK only. But by the sound of it, and based on your reply 
> 
> DPDK can receive packets from non-DPDK correct (POSIX, UDP, TCP)? 
> How can we receive any packets at all from non-DPDK? 

DPDK is just a high-speed packet I/O system on top of normal NICs in most cases.

DPDK can receive any packet that is on the wire if you enable promiscuous mode 
for that port. If you do not have promiscuous mode enabled, then the destination 
MAC address in the L2 (Ethernet) header must match the NIC’s MAC address.
> 
> 

Regards,
Keith



Re: [dpdk-users] How to receive packets from non-DPDK requests?

2018-11-14 Thread Wiles, Keith



> On Nov 14, 2018, at 11:35 AM, Sungho Hong  wrote:
> 
> Hello DPDK users,
> 
> I am trying to receive the packets from non-DPDK requests,
> Currently, I am trying to send ARP requests, but it seems that DPDK
> application only understands the packets that are sent by DPDK.
> 
> Is there some configuration that needs to be done in order to receive
> non-DPDK requests?

Not sure what you are asking here, but DPDK does not have an ARP or network 
stack of any kind. There are a few network stacks for DPDK; just google them.

If you want to detect ARP packets then you need to look at the L2 header and 
then you would need to write the ARP protocol handling code.

HTH

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2018-11-14 Thread Wiles, Keith



> On Nov 14, 2018, at 7:54 AM, Harsh Patel  wrote:
> 
> Hello,
> This is a link to the complete source code of our project :- 
> https://github.com/ns-3-dpdk-integration/ns-3-dpdk
> For the description of the project, look through this :- 
> https://ns-3-dpdk-integration.github.io/
> Once you go through it, you will have a basic understanding of the project.
> Installation instructions link are provided in the github.io page.
> 
> In the code we mentioned above, the master branch contains the implementation 
> of the logic using rte_rings which we mentioned at the very beginning of the 
> discussion. There is a branch named "newrxtx" which contains the 
> implementation according to the logic you provided.
> 
> We would like you to take a look at the code in newrxtx branch. 
> (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/tree/newrxtx)
> In the code in this branch, go to 
> ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/ directory. Here we have 
> implemented the DpdkNetDevice model. This model contains the code which 
> implements the whole model providing interaction between ns-3 and DPDK. We 
> would like you take a look at our Read function 
> (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L626)
>  and Write function 
> (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L576).
>  These contains the logic you suggested.

A couple of points for performance with DPDK:
 - Never use memcpy in the data path unless it is absolutely required, and 
always try to avoid copying all of the data. In some cases you may want to use 
memcpy or rte_memcpy to replace only a small amount of data or to grab a copy of 
some small amount of data.
 - Never use malloc in the data path, meaning never call malloc on every packet; 
use a list of buffers allocated up front if you need buffers of some type.
 - DPDK mempools are highly tuned; use them if you can for fixed-size buffers.

I believe the DPDK docs include a performance white paper or some information 
about optimizing packet processing in DPDK. If you have not read it, you may 
want to do so.

> 
> Can you go through this and suggest us some changes or find some mistake in 
> our code? If you need any help or have any doubt, ping us.
> 
> Thanks and Regards,
> Harsh & Hrishikesh
> 
> On Tue, 13 Nov 2018 at 19:17, Wiles, Keith  wrote:
> 
> 
> > On Nov 12, 2018, at 8:25 PM, Harsh Patel  wrote:
> > 
> > Hello,
> > It would be really helpful if you can provide us a link (for both Tx and 
> > Rx) to the project you mentioned earlier where you worked on a similar 
> > problem, if possible. 
> > 
> 
> At this time I can not provide a link. I will try and see what I can do, but 
> do not hold your breath it could be awhile as we have to go thru a lot of 
> legal stuff. If you can try vtune tool from Intel for x86 systems if you can 
> get a copy for your platform as it can tell you a lot about the code and 
> where the performance issues are located. If you are not running Intel x86 
> then my code may not work for you, I do not remember if you told me which 
> platform.
> 
> 
> > Thanks and Regards, 
> > Harsh & Hrishikesh.
> > 
> > On Mon, 12 Nov 2018 at 01:15, Harsh Patel  wrote:
> > Thanks a lot for all the support. We are looking into our work as of now 
> > and will contact you once we are done checking it completely from our side. 
> > Thanks for the help.
> > 
> > Regards,
> > Harsh and Hrishikesh
> > 
> > On Sat, 10 Nov 2018 at 11:47, Wiles, Keith  wrote:
> > Please make sure to send your emails in plain text format. The Mac mail 
> > program loves to use rich-text format is the original email use it and I 
> > have told it not only send plain text :-(
> > 
> > > On Nov 9, 2018, at 4:09 AM, Harsh Patel  wrote:
> > > 
> > > We have implemented the logic for Tx/Rx as you suggested. We compared the 
> > > obtained throughput with another version of same application that uses 
> > > Linux raw sockets. 
> > > Unfortunately, the throughput we receive in our DPDK application is less 
> > > by a good margin. Is this any way we can optimize our implementation or 
> > > anything that we are missing?
> > > 
> > 
> > The PoC code I was developing for DAPI did not have any performance 
> > issues; it ran just as fast in my limited testing. I converted the l3fwd 
> > code and I saw 10G 64-byte wire rate, as I remember, using pktgen to 
> > generate the traffic.
> > 
> > Not sure why yo

Re: [dpdk-users] Query on handling packets

2018-11-14 Thread Wiles, Keith
Sorry, I did not send a plain-text email again. 

> On Nov 14, 2018, at 7:54 AM, Harsh Patel  wrote:
> 
> Hello,
> This is a link to the complete source code of our project :- 
> https://github.com/ns-3-dpdk-integration/ns-3-dpdk
> For the description of the project, look through this :- 
> https://ns-3-dpdk-integration.github.io/
> Once you go through it, you will have a basic understanding of the project.
> Installation instructions link are provided in the github.io page.
> 
> In the code we mentioned above, the master branch contains the implementation 
> of the logic using rte_rings which we mentioned at the very beginning of the 
> discussion. There is a branch named "newrxtx" which contains the 
> implementation according to the logic you provided.
> 
> We would like you to take a look at the code in newrxtx branch. 
> (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/tree/newrxtx)
> In the code in this branch, go to 
> ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/ directory. Here we have 
> implemented the DpdkNetDevice model. This model contains the code which 
> implements the whole model providing interaction between ns-3 and DPDK. We 
> would like you take a look at our Read function 
> (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L626)
>  and Write function 
> (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L576).
>  These contains the logic you suggested.
> 

I looked at the read and write routines briefly. The one thing that jumped out 
at me is that you copy the packet from an internal data buffer to the mbuf, or 
from the mbuf to the data buffer. You should try your hardest to remove these 
memcpy calls from the data path as they will kill your performance. If you have 
to use memcpy, look at the rte_memcpy() routine, as it is highly optimized for 
DPDK. Even using DPDK's rte_memcpy() you will still see a big performance hit.

I did not look at where the buffer came from, but maybe you could allocate a 
pktmbuf pool (as you did) and, when your main code asks for a buffer, grab an 
mbuf, point to the start of the mbuf data area, and return that pointer instead. 
Then when you get to the write or read routine you find the start of the mbuf 
header based on the buffer address, or even some metadata attached to the 
buffer. Then you can call the rte_eth_tx_buffer() routine with that mbuf 
pointer. On the TX side the mbuf is freed by the driver, but it could sit on the 
TX done queue, so just make sure you have enough buffers.

On the read side you also need to find the place the buffer is allocated, 
allocate an mbuf, and save the mbuf pointer in the metadata of the buffer (if 
you have metadata per buffer); then you can free the mbuf at some point after 
you have processed the data buffer.

I hope that is clear; I have meetings I must attend.

> Can you go through this and suggest us some changes or find some mistake in 
> our code? If you need any help or have any doubt, ping us.
> 
> Thanks and Regards,
> Harsh & Hrishikesh
> 
> On Tue, 13 Nov 2018 at 19:17, Wiles, Keith  wrote:
> 
> 
> > On Nov 12, 2018, at 8:25 PM, Harsh Patel  wrote:
> > 
> > Hello,
> > It would be really helpful if you can provide us a link (for both Tx and 
> > Rx) to the project you mentioned earlier where you worked on a similar 
> > problem, if possible. 
> > 
> 
> At this time I can not provide a link. I will try and see what I can do, but 
> do not hold your breath it could be awhile as we have to go thru a lot of 
> legal stuff. If you can try vtune tool from Intel for x86 systems if you can 
> get a copy for your platform as it can tell you a lot about the code and 
> where the performance issues are located. If you are not running Intel x86 
> then my code may not work for you, I do not remember if you told me which 
> platform.
> 
> 
> > Thanks and Regards, 
> > Harsh & Hrishikesh.
> > 
> > On Mon, 12 Nov 2018 at 01:15, Harsh Patel  wrote:
> > Thanks a lot for all the support. We are looking into our work as of now 
> > and will contact you once we are done checking it completely from our side. 
> > Thanks for the help.
> > 
> > Regards,
> > Harsh and Hrishikesh
> > 
> > On Sat, 10 Nov 2018 at 11:47, Wiles, Keith  wrote:
> > Please make sure to send your emails in plain text format. The Mac mail 
> > program loves to use rich-text format is the original email use it and I 
> > have told it not only send plain text :-(
> > 
> > > On Nov 9, 2018, at 4:09 AM, Harsh Patel  wrote:
> > > 
> > > We have implemented the logic for Tx/Rx

Re: [dpdk-users] Query on handling packets

2018-11-14 Thread Wiles, Keith



On Nov 14, 2018, at 7:54 AM, Harsh Patel wrote:

Hello,
This is a link to the complete source code of our project :- 
https://github.com/ns-3-dpdk-integration/ns-3-dpdk
For the description of the project, look through this :- 
https://ns-3-dpdk-integration.github.io/
Once you go through it, you will have a basic understanding of the project.
Installation instructions are provided on the github.io page.

In the code we mentioned above, the master branch contains the implementation 
of the logic using rte_rings which we mentioned at the very beginning of the 
discussion. There is a branch named "newrxtx" which contains the implementation 
according to the logic you provided.

We would like you to take a look at the code in newrxtx branch. 
(https://github.com/ns-3-dpdk-integration/ns-3-dpdk/tree/newrxtx)
In the code in this branch, go to 
ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/ directory. Here we have 
implemented the DpdkNetDevice model. This model contains the code which 
implements the whole model providing interaction between ns-3 and DPDK. We 
would like you take a look at our Read function 
(https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L626)
 and Write function 
(https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L576).
 These contains the logic you suggested.

I looked at the read and write routines briefly. The one thing that jumped out 
at me is that you copy the packet from an internal data buffer to the mbuf, or 
from the mbuf to the data buffer. You should try your hardest to remove these 
memcpy calls from the data path as they will kill your performance. If you have 
to use memcpy, look at the rte_memcpy() routine, as it is highly optimized for 
DPDK. Even using DPDK's rte_memcpy() you will still see a big performance hit.

I did not look at where the buffer came from, but maybe you could allocate a 
pktmbuf pool (as you did) and, when your main code asks for a buffer, grab an 
mbuf, point to the start of the mbuf data area, and return that pointer instead. 
Then when you get to the write or read routine you find the start of the mbuf 
header based on the buffer address, or even some metadata attached to the 
buffer. Then you can call the rte_eth_tx_buffer() routine with that mbuf 
pointer. On the TX side the mbuf is freed by the driver, but it could sit on the 
TX done queue, so just make sure you have enough buffers.

On the read side you also need to find the place the buffer is allocated, 
allocate an mbuf, and save the mbuf pointer in the metadata of the buffer (if 
you have metadata per buffer); then you can free the mbuf at some point after 
you have processed the data buffer.

I hope that is clear; I have meetings I must attend.


Can you go through this and suggest us some changes or find some mistake in our 
code? If you need any help or have any doubt, ping us.

Thanks and Regards,
Harsh & Hrishikesh

On Tue, 13 Nov 2018 at 19:17, Wiles, Keith wrote:


> On Nov 12, 2018, at 8:25 PM, Harsh Patel wrote:
>
> Hello,
> It would be really helpful if you can provide us a link (for both Tx and Rx) 
> to the project you mentioned earlier where you worked on a similar problem, 
> if possible.
>

At this time I can not provide a link. I will try and see what I can do, but do 
not hold your breath; it could be awhile as we have to go thru a lot of legal 
stuff. If you can, try the vtune tool from Intel for x86 systems, if you can get 
a copy for your platform, as it can tell you a lot about the code and where the 
performance issues are located. If you are not running Intel x86 then my code 
may not work for you; I do not remember if you told me which platform.


> Thanks and Regards,
> Harsh & Hrishikesh.
>
> On Mon, 12 Nov 2018 at 01:15, Harsh Patel wrote:
> Thanks a lot for all the support. We are looking into our work as of now and 
> will contact you once we are done checking it completely from our side. 
> Thanks for the help.
>
> Regards,
> Harsh and Hrishikesh
>
> On Sat, 10 Nov 2018 at 11:47, Wiles, Keith wrote:
> Please make sure to send your emails in plain text format. The Mac mail 
> program loves to use rich-text format if the original email uses it, and I 
> have told it to only send plain text :-(
>
> > On Nov 9, 2018, at 4:09 AM, Harsh Patel wrote:
> >
> > We have implemented the logic for Tx/Rx as you suggested. We compared the 
> > obtained throughput with another version of same application that uses 
> 

Re: [dpdk-users] Query on handling packets

2018-11-13 Thread Wiles, Keith



> On Nov 12, 2018, at 8:25 PM, Harsh Patel  wrote:
> 
> Hello,
> It would be really helpful if you can provide us a link (for both Tx and Rx) 
> to the project you mentioned earlier where you worked on a similar problem, 
> if possible. 
> 

At this time I can not provide a link. I will try and see what I can do, but do 
not hold your breath; it could be awhile as we have to go thru a lot of legal 
stuff. If you can, try the vtune tool from Intel for x86 systems, if you can get 
a copy for your platform, as it can tell you a lot about the code and where the 
performance issues are located. If you are not running Intel x86 then my code 
may not work for you; I do not remember if you told me which platform.


> Thanks and Regards, 
> Harsh & Hrishikesh.
> 
> On Mon, 12 Nov 2018 at 01:15, Harsh Patel  wrote:
> Thanks a lot for all the support. We are looking into our work as of now and 
> will contact you once we are done checking it completely from our side. 
> Thanks for the help.
> 
> Regards,
> Harsh and Hrishikesh
> 
> On Sat, 10 Nov 2018 at 11:47, Wiles, Keith  wrote:
> Please make sure to send your emails in plain text format. The Mac mail 
> program loves to use rich-text format if the original email uses it, and I 
> have told it to only send plain text :-(
> 
> > On Nov 9, 2018, at 4:09 AM, Harsh Patel  wrote:
> > 
> > We have implemented the logic for Tx/Rx as you suggested. We compared the 
> > obtained throughput with another version of the same application that uses 
> > Linux raw sockets. 
> > Unfortunately, the throughput we receive in our DPDK application is lower 
> > by a good margin. Is there any way we can optimize our implementation, or 
> > anything that we are missing?
> > 
> 
> The PoC code I was developing for DAPI did not have any performance issues; 
> it ran just as fast in my limited testing. I converted the l3fwd code and, as 
> I remember, saw 10G 64-byte wire rate using pktgen to generate the traffic.
> 
> Not sure why you would see a big performance drop, but I do not know your 
> application or code.
> 
> > Thanks and regards
> > Harsh & Hrishikesh
> > 
> > On Thu, 8 Nov 2018 at 23:14, Wiles, Keith  wrote:
> > 
> > 
> >> On Nov 8, 2018, at 4:58 PM, Harsh Patel  wrote:
> >> 
> >> Thanks
> >>  for your insight on the topic. Transmission is working with the functions 
> >> you mentioned. We tried to search for some similar functions for handling 
> >> incoming packets but could not find anything. Can you help us on that as 
> >> well?
> >> 
> > 
> > I do not know of a DPDK API set for the RX side. But in the DAPI (DPDK API) 
> > PoC I was working on and presented at the DPDK Summit last Sept., I did 
> > create an RX-side version. The issue is that it is a bit tangled up in the 
> > DAPI PoC.
> > 
> > The basic concept is that a call to RX a single packet does an rx_burst of 
> > N packets, keeping them in an mbuf list. The code would spin waiting for 
> > mbufs to arrive, or return quickly if a flag was set. When it did find RX 
> > mbufs it would return a single mbuf and keep the list of mbufs for later 
> > requests until the list is empty, then do another rx_burst call.
> > 
> > Sorry this is a really quick note on how it works. If you need more details 
> > we can talk more later.
> >> 
> >> Regards,
> >> Harsh
> >>  and Hrishikesh.
> >> 
> >> 
> >> On Thu, 8 Nov 2018 at 14:26, Wiles, Keith  wrote:
> >> 
> >> 
> >> > On Nov 8, 2018, at 8:24 AM, Harsh Patel  wrote:
> >> > 
> >> > Hi,
> >> > We are working on a project where we are trying to integrate DPDK with
> >> > another software. We are able to obtain packets from the other 
> >> > environment
> >> > to DPDK environment in one-by-one fashion. On the other hand DPDK allows 
> >> > to
> >> > send/receive burst of data packets. We want to know if there is any
> >> > functionality in DPDK to achieve this conversion of single incoming 
> >> > packet
> >> > to a burst of packets sent on NIC and similarly, conversion of burst read
> >> > packets from NIC to send it to other environment sequentially?
> >> 
> >> 
> >> Search in the docs or lib/librte_ethdev directory on 
> >> rte_eth_tx_buffer_init, rte_eth_tx_buffer, ...
> >> 
> >> 
> >> 
> >> > Thanks and regards
> >> > Harsh Patel, Hrishikesh Hiraskar
> >> > NITK Surathkal
> >> 
> >> Regards,
> >> Keith
> >> 
> > 
> > Regards,
> > Keith
> > 
> 
> Regards,
> Keith
> 

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2018-11-09 Thread Wiles, Keith
Please make sure to send your emails in plain text format. The Mac mail program 
loves to use rich-text format if the original email used it, and I have told it 
to send only plain text :-(

> On Nov 9, 2018, at 4:09 AM, Harsh Patel  wrote:
> 
> We have implemented the logic for Tx/Rx as you suggested. We compared the 
> obtained throughput with another version of the same application that uses 
> Linux raw sockets. 
> Unfortunately, the throughput we receive in our DPDK application is lower by 
> a good margin. Is there any way we can optimize our implementation, or 
> anything that we are missing?
> 

The PoC code I was developing for DAPI did not have any performance issues; it 
ran just as fast in my limited testing. I converted the l3fwd code and, as I 
remember, saw 10G 64-byte wire rate using pktgen to generate the traffic.

Not sure why you would see a big performance drop, but I do not know your 
application or code.

> Thanks and regards
> Harsh & Hrishikesh
> 
> On Thu, 8 Nov 2018 at 23:14, Wiles, Keith  wrote:
> 
> 
>> On Nov 8, 2018, at 4:58 PM, Harsh Patel  wrote:
>> 
>> Thanks
>>  for your insight on the topic. Transmission is working with the functions 
>> you mentioned. We tried to search for some similar functions for handling 
>> incoming packets but could not find anything. Can you help us on that as 
>> well?
>> 
> 
> I do not know of a DPDK API set for the RX side. But in the DAPI (DPDK API) 
> PoC I was working on and presented at the DPDK Summit last Sept., I did 
> create an RX-side version. The issue is that it is a bit tangled up in the 
> DAPI PoC.
> 
> The basic concept is that a call to RX a single packet does an rx_burst of N 
> packets, keeping them in an mbuf list. The code would spin waiting for mbufs 
> to arrive, or return quickly if a flag was set. When it did find RX mbufs it 
> would return a single mbuf and keep the list of mbufs for later requests 
> until the list is empty, then do another rx_burst call.
> 
> Sorry this is a really quick note on how it works. If you need more details 
> we can talk more later.
>> 
>> Regards,
>> Harsh
>>  and Hrishikesh.
>> 
>> 
>> On Thu, 8 Nov 2018 at 14:26, Wiles, Keith  wrote:
>> 
>> 
>> > On Nov 8, 2018, at 8:24 AM, Harsh Patel  wrote:
>> > 
>> > Hi,
>> > We are working on a project where we are trying to integrate DPDK with
>> > another software. We are able to obtain packets from the other environment
>> > to DPDK environment in one-by-one fashion. On the other hand DPDK allows to
>> > send/receive burst of data packets. We want to know if there is any
>> > functionality in DPDK to achieve this conversion of single incoming packet
>> > to a burst of packets sent on NIC and similarly, conversion of burst read
>> > packets from NIC to send it to other environment sequentially?
>> 
>> 
>> Search in the docs or lib/librte_ethdev directory on rte_eth_tx_buffer_init, 
>> rte_eth_tx_buffer, ...
>> 
>> 
>> 
>> > Thanks and regards
>> > Harsh Patel, Hrishikesh Hiraskar
>> > NITK Surathkal
>> 
>> Regards,
>> Keith
>> 
> 
> Regards,
> Keith
> 

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2018-11-09 Thread Wiles, Keith



Sent from my iPhone

On Nov 9, 2018, at 5:09 AM, Harsh Patel  wrote:

We have implemented the logic for Tx/Rx as you suggested. We compared the 
obtained throughput with another version of the same application that uses 
Linux raw sockets.
Unfortunately, the throughput we receive in our DPDK application is lower by a 
good margin. Is there any way we can optimize our implementation, or anything 
that we are missing?

The PoC code I was developing for DAPI did not have any performance issues; it 
ran just as fast in my limited testing. I converted the l3fwd code and, as I 
remember, saw 10G 64-byte wire rate using pktgen to generate the traffic.

Not sure why you would see a big performance drop, but I do not know your 
application or code.


Thanks and regards
Harsh & Hrishikesh

On Thu, 8 Nov 2018 at 23:14, Wiles, Keith  wrote:


On Nov 8, 2018, at 4:58 PM, Harsh Patel  wrote:

Thanks for your insight on the topic. Transmission is working with the 
functions you mentioned. We tried to search for some similar functions for 
handling incoming packets but could not find anything. Can you help us on that 
as well?

I do not know of a DPDK API set for the RX side. But in the DAPI (DPDK API) PoC 
I was working on and presented at the DPDK Summit last Sept., I did create an 
RX-side version. The issue is that it is a bit tangled up in the DAPI PoC.

The basic concept is that a call to RX a single packet does an rx_burst of N 
packets, keeping them in an mbuf list. The code would spin waiting for mbufs to 
arrive, or return quickly if a flag was set. When it did find RX mbufs it would 
return a single mbuf and keep the list of mbufs for later requests until the 
list is empty, then do another rx_burst call.

Sorry this is a really quick note on how it works. If you need more details we 
can talk more later.

Regards,
Harsh and Hrishikesh.

On Thu, 8 Nov 2018 at 14:26, Wiles, Keith  wrote:


> On Nov 8, 2018, at 8:24 AM, Harsh Patel  wrote:
>
> Hi,
> We are working on a project where we are trying to integrate DPDK with
> another software. We are able to obtain packets from the other environment
> to DPDK environment in one-by-one fashion. On the other hand DPDK allows to
> send/receive burst of data packets. We want to know if there is any
> functionality in DPDK to achieve this conversion of single incoming packet
> to a burst of packets sent on NIC and similarly, conversion of burst read
> packets from NIC to send it to other environment sequentially?


Search in the docs or lib/librte_ethdev directory on rte_eth_tx_buffer_init, 
rte_eth_tx_buffer, ...



> Thanks and regards
> Harsh Patel, Hrishikesh Hiraskar
> NITK Surathkal

Regards,
Keith


Regards,
Keith



Re: [dpdk-users] Query on handling packets

2018-11-08 Thread Wiles, Keith



On Nov 8, 2018, at 4:58 PM, Harsh Patel  wrote:

Thanks for your insight on the topic. Transmission is working with the 
functions you mentioned. We tried to search for some similar functions for 
handling incoming packets but could not find anything. Can you help us on that 
as well?

I do not know of a DPDK API set for the RX side. But in the DAPI (DPDK API) PoC 
I was working on and presented at the DPDK Summit last Sept., I did create an 
RX-side version. The issue is that it is a bit tangled up in the DAPI PoC.

The basic concept is that a call to RX a single packet does an rx_burst of N 
packets, keeping them in an mbuf list. The code would spin waiting for mbufs to 
arrive, or return quickly if a flag was set. When it did find RX mbufs it would 
return a single mbuf and keep the list of mbufs for later requests until the 
list is empty, then do another rx_burst call.

Sorry this is a really quick note on how it works. If you need more details we 
can talk more later.

Regards,
Harsh and Hrishikesh.

On Thu, 8 Nov 2018 at 14:26, Wiles, Keith  wrote:


> On Nov 8, 2018, at 8:24 AM, Harsh Patel  wrote:
>
> Hi,
> We are working on a project where we are trying to integrate DPDK with
> another software. We are able to obtain packets from the other environment
> to DPDK environment in one-by-one fashion. On the other hand DPDK allows to
> send/receive burst of data packets. We want to know if there is any
> functionality in DPDK to achieve this conversion of single incoming packet
> to a burst of packets sent on NIC and similarly, conversion of burst read
> packets from NIC to send it to other environment sequentially?


Search in the docs or lib/librte_ethdev directory on rte_eth_tx_buffer_init, 
rte_eth_tx_buffer, ...



> Thanks and regards
> Harsh Patel, Hrishikesh Hiraskar
> NITK Surathkal

Regards,
Keith


Regards,
Keith



Re: [dpdk-users] pktgen-dpdk runts when start size is 64

2018-11-08 Thread Wiles, Keith



> On Nov 8, 2018, at 2:12 AM, Bichan.Lu  wrote:
> 
> Hi,
> 
> OS: CentOS 7.5.1804
> Pktgen version: pktgen-3.5.8
> DPDK verison: dpdk-18.08
> 
> I run pktgen with the following command. When I start pktgen, the default 
> PktSize is 64, as in the commands below:
> 
> ./pktgen -l 0-4 -n 3 -- -P -m "[1:3].0,[2:4].1"
> Pktgen:/> start all
> 
> Link State:TotalRate
> Pkts/s Max/Rx :   1487853/1487844   1491609/1491602   2979462/2979446
>        Max/Tx :   1491607/1491606   1487855/1487841   2979459/2979447
> MBits/s Rx/Tx :   999/1002          1002/999          2002/2002
> Broadcast     :   0                 0
> Multicast     :   0                 0
>  64 Bytes     :   0                 0
>  65-127       :   0                 0
>  128-255      :   0                 0
>  256-511      :   0                 0
>  512-1023     :   0                 0
>  1024-1518    :   0                 0
> Runts/Jumbos  :   8889515/0         8913610/0
> Errors Rx/Tx  :   0/0               0/0
> ..< shown parts of pktgen >.
> 
> Then I get runts.
> When I tried size 68, the packets fall in the [64 Bytes] range and there are 
> no runts:
> Pktgen:/> set 0 size 68
> Pktgen:/> set 1 size 68
> Pktgen:/> start all
> 

Which hardware NIC is this? Most hardware appends the FCS.

Pktgen expects the NIC to append the FCS; is there perhaps a flag set that 
disables that feature?

> I guess it may be related to the range settings; could you help check?
> Thanks.
> 
> Best regards,
> Bichan.Lu

Regards,
Keith



Re: [dpdk-users] Query on handling packets

2018-11-08 Thread Wiles, Keith



> On Nov 8, 2018, at 8:24 AM, Harsh Patel  wrote:
> 
> Hi,
> We are working on a project where we are trying to integrate DPDK with
> another software. We are able to obtain packets from the other environment
> to DPDK environment in one-by-one fashion. On the other hand DPDK allows to
> send/receive burst of data packets. We want to know if there is any
> functionality in DPDK to achieve this conversion of single incoming packet
> to a burst of packets sent on NIC and similarly, conversion of burst read
> packets from NIC to send it to other environment sequentially?


Search in the docs or lib/librte_ethdev directory on rte_eth_tx_buffer_init, 
rte_eth_tx_buffer, ...



> Thanks and regards
> Harsh Patel, Hrishikesh Hiraskar
> NITK Surathkal

Regards,
Keith



Re: [dpdk-users] Memory allocation in dpdk

2018-10-23 Thread Wiles, Keith



> On Oct 23, 2018, at 1:44 AM, Avinash Chaurasia  
> wrote:
> 
> Hello,
> I am trying to understand how DPDK allocates memory. I tried digging into the
> code to understand DPDK's memory allocation. So far I understand that memory
> is allocated from a heap that DPDK maintains. However, this heap must be
> allocated at some place. I failed to trace back to any function (called from
> heap_alloc()) that calls mmap to allocate memory. Please let me know where
> this heap is created and which function call does that.

Memory allocation is tricky in DPDK, but are you asking about rte_malloc or 
rte_mempool allocation? Each has a different way to get memory. Look at the 
rte_memzone code: it does the lowest-level allocation of memory and uses huge 
pages. Also look at the email list for patches submitted by Anatoly Burakov; he 
just re-wrote the memory subsystem and has some good explanations in the patches 
and in the DPDK docs.

> Thanks
> Avinash Kumar Chaurasia

Regards,
Keith



Re: [dpdk-users] users Digest, Vol 155, Issue 7

2018-10-23 Thread Wiles, Keith


> On Oct 22, 2018, at 11:17 PM, Wajeeha Javed  
> wrote:
> 
> Hi Keith, 

Please try to reply inline to the text and do not top-post; it makes it hard to 
follow so many email threads.

> 
> Thanks for your reply. Please find below my comments
> 
> >> You're right, in my application all the packets are stored inside mbuf. 
> >> The reason for not using the next pointer of mbuf is that it might get 
> >> used by the fragmented packets having size greater than MTU. 
> 
> >> I have tried using a small STAILQ linked-list buffer for each port, with a 
> >> STAILQ entry and a pointer to the mbuf packet burst. I allocate the stailq 
> >> entry, set the mbuf pointer in the stailq entry, then link the stailq entry 
> >> to the stailq list using the stailq macros. I observe millions of packets 
> >> lost; the stailq linked list could only hold less than 1 million packets 
> >> per second at a line rate of 10 Gbits/sec.
> 
> >> I would like to prevent data loss, could you please guide me what is the 
> >> best optimal solution for increasing the number of mbufs without freeing 
> >> or overwriting them for a delay of 2 secs.

Using the stailq method is my best guess at solving your problem. If you are 
calling malloc on each packet you want to save, at the time you need to link 
the packets, that would be the reason you cannot hold the packets without 
dropping some at the wire.

Allocate all of the stailq blocks at startup and keep them in some type of 
array or free list, to avoid doing an allocation call per packet. Other than 
this type of help, short of writing the code myself, this is all I have for 
you, sorry. The amount of memory allocated for the stailq structures is going 
to be more than 28M blocks; all sorts of cache issues could be causing the 
problem.

> 
> Thanks & Best Regards,
> 
> Wajeeha Javed
> 
> 
> 
> On Tue, Oct 16, 2018 at 3:02 PM Wiles, Keith  wrote:
> Sorry, you must have replied to my screwup of not sending the reply in plain 
> text format. I did send an updated reply to hopefully fix that problem. More 
> comments inline below. All emails to the list must be in 'text' format, not 
> 'Rich Text' format :-(
> 
> > On Oct 15, 2018, at 11:42 PM, Wajeeha Javed  
> > wrote:
> > 
> > Hi,
> > 
> > Thanks, everyone for your reply. Please find below my comments.
> > 
> > *I've failed to find explicit limitations from the first glance.*
> > * NB_MBUF define is typically internal to examples/apps.*
> > * The question I'd like to double-check if the host has enought*
> > * RAM and hugepages allocated? 5 million mbufs already require about*
> > * 10G.*
> > 
> > Total Ram = 128 GB
> > Available Memory = 23GB free
> > 
> > Total Huge Pages = 80
> > 
> > Free Huge Page = 38
> > Huge Page Size = 1GB
> > 
> > *The mempool uses uint32_t for most sizes and the number of mempool items
> > is uint32_t, so the number of entries can be ~4G as stated, but make sure
> > you have enough *
> > 
> > *memory, as the overhead for mbufs is not just the header + the packet size*
> > 
> > Right. Currently, there are total of 80 huge pages, 40 for each numa node
> > (Numa node 0 and Numa node 1). I observed that I was using only 16 huge
> > pages while the other 16
> > 
> > huge pages were used by other dpdk  application. By running only my dpdk
> > application on numa node 0, I was able to increase the mempool size to 14M
> > that uses all the
> > 
> > huge pages of Numa node 0.
> > 
> > *My question is why are you copying the mbuf and not just linking the mbufs
> > into a link list? Maybe I do not understand the reason. I would try to make
> > sure you do not do a copy of the *
> > 
> > *data and just link the mbufs together using the next pointer in the mbuf
> > header unless you have chained mbufs already.*
> > 
> > The reason for copying the Mbuf is due to the NIC limitations, I cannot
> > have more than 16384 Rx descriptors, whereas  I want to withhold all the
> > packets coming at a line rate of 10GBits/sec for each port. I created a
> > circular queue running on a FIFO basis. Initially, I thought of using
> > rte_mbuf* packet burst for a delay of 2 secs. Now at line rate, we receive
> > 14Million
> 
> I assume in your driver a mbuf is used to receive the packet data, which 
> means the packet is inside an mbuf (if not then why not?). The mbuf data does 
> not need to be copied you can use the ’next’ pointer in the mbuf to create a 
> single link list. If you use fragmented packets in your design, which means 
> you are using the ’next’ pointer in the mb

Re: [dpdk-users] unable to compile dpdk

2018-10-21 Thread Wiles, Keith


> On Oct 20, 2018, at 9:32 AM, venkataprasad k  wrote:
> 
> 2nd Try.
> Any idea on this?
> Not clear what i am missing here.

Does the build work without the sed command? If it does, then something is 
wrong with the sed command. In your case, just edit the new config file and 
enable the PCAP PMD, if that style will work for your needs.

Do not know if this will help at all. I sometimes copy 
config/defconfig_x86_64-native-linuxapp-gcc to 
config/defconfig_x86_64-mine-linuxapp-gcc, then modify the new file to contain 
the changes you want in the config file. This way you do not have to be 
concerned about how you do the 'make … config' step, or you can do it the way 
I do:

cd dpdk
export RTE_SDK=`pwd`
export RTE_TARGET=x86_64-mine-linuxapp-gcc

make install T=$RTE_TARGET -j

You will get a warning at the end, but you can ignore it as you were not 
installing the results anyway. Here is a bash script I use being lazy.

function _rte() {
    if [ "$1" != "" ]; then
        export RTE_SDK=`pwd`
        export RTE_TARGET=`basename $1`
        echo "RTE_SDK: "$RTE_SDK " RTE_TARGET: "$RTE_TARGET
    else
        echo "Currently RTE_SDK: "$RTE_SDK " RTE_TARGET: "$RTE_TARGET
    fi
}

function _bld() {
    echo make -C ${RTE_SDK} install T=${RTE_TARGET} "$@" -j
    make -C ${RTE_SDK} install T=${RTE_TARGET} "$@" -j
}

function _dbld() {
    echo make -C ${RTE_SDK} install T=${RTE_TARGET} EXTRA_CFLAGS="-g -O0" "$@" -j
    make -C ${RTE_SDK} install T=${RTE_TARGET} EXTRA_CFLAGS="-g -O0" "$@" -j
}

# bash aliases do not take arguments; the functions above receive them instead
alias rte=_rte
alias bld=_bld
alias dbld=_dbld


Then I just cd into the DPDK directory and type 'rte 
x86_64-native-linuxapp-gcc'; after that I can just type 'bld' to build DPDK, 
and it does not matter which directory I am in when I do the build, it will 
build the tree where I executed 'rte'. The dbld command builds DPDK with 
EXTRA_CFLAGS="-g -O0".

The rte command without args will print the current values, and I normally cd 
to the DPDK directory, type 'rte x8', and let the shell complete the command.

Anyway I hope that helps.

> 
> 
> -Original Message-
> From: Trahe, Fiona 
> Sent: Friday, October 19, 2018 5:40 PM
> To: Pathak, Pravin ; users@dpdk.org
> Cc: Trahe, Fiona 
> Subject: RE: Crypto QAT device not found
> 
> Hi Pravin,
> 
> Good that it's working now.
> Be careful of the order on changing config, this works:
> 1. make T=x86_64-native-linuxapp-gcc config 2. change build/.config (if you 
> do make T=xx config after this it overwrites your changes and reverts to the 
> default again) 3. make
> 
> Fiona
> 
>> -Original Message-
>> From: Pathak, Pravin
>> Sent: Friday, October 19, 2018 2:00 PM
>> To: Trahe, Fiona ; users@dpdk.org
>> Subject: RE: Crypto QAT device not found
>> 
>> Hi Fiona -
>> 
>> Thanks for the help.  I was using 18.05 and then moved to 18.08.
>> For configuration changes, there is  build/.config,config/common_base 
>> and x86_64-native-linuxapp-
>> gcc/.config.
>> I was changing build/.config and building but some reason it was not picking 
>> the new options set.
>> Now I changed common_base, regenerated config and build again. It worked 
>> after that.
>> I think I am not following correct build procedure.
>> There is make,   make T= x86_64-native-linuxapp-gcc, make install...
>> Each seems to work differently.
>> 
>> I am able to use HW crypto device now.
>> 
>> Regards
>> Pravin
>> 
>> 
>> -Original Message-
>> From: Trahe, Fiona
>> Sent: Friday, October 19, 2018 4:25 PM
>> To: Pathak, Pravin ; users@dpdk.org
>> Cc: Trahe, Fiona 
>> Subject: RE: Crypto QAT device not found
>> 
>> Hi Pravin,
>> 
>> As your VFs are bound to igb_uio this looks fine.
>> DPDK QAT PMD does support 37c9
>> 
>> Can you confirm which DPDK version you're using? You mentioned 18.04 but 
>> there's no such release.
>> If it's 18.08 then you also need CONFIG_RTE_LIBRTE_PMD_QAT_SYM=y but not in 
>> earlier releases.
>> 
>> Does the test code run for you?
>> run "make test-build" in the top-level directory 
>> ./build/build/test/test/test -l1 -n1 -w 
>>> cryptodev_qat_autotest
>> 
>> Fiona
>>> -Original Message-
>>> From: Pathak, Pravin
>>> Sent: Friday, October 19, 2018 11:26 AM
>>> To: Trahe, Fiona ; users@dpdk.org
>>> Subject: RE: Crypto QAT device not found
>>> 
>>> Hi Fiona -
>>> Thanks for the reply. I tried -cdev_type HW  but it did not help.  I 
>>> am not sure of DPDK supports the device on our board.
>>> Device is with ID 37c8/c9
>>> 
>>> 3d:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
>>> 3f:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
>>> da:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
>>> 3d:01.0 Co-processor: Intel Corporation Device 37c9 (rev 04)
>>> 3d:01.1 Co-processor: Intel Corporation Device 37c9 (rev 04)
>>> 3d:01.2 Co-processor: Intel Corporation Device 37c9 (rev 04)
>>> 3d:01.3 Co-processor: Intel Corporation Device 37c9 (rev 04) 
>>> 
>>> Everything looks correct except DPDK does not see these crypto 
>>> devices. It seems virtual device if I add one.
>>> Is there any 

Re: [dpdk-users] Crypto QAT device not found

2018-10-19 Thread Wiles, Keith


> On Oct 19, 2018, at 5:01 PM, Pathak, Pravin  wrote:
> 
> Hi Fiona -
> That explains it. I was using make T=... again at last.
> Pravin

Do not know if this will help at all. I sometimes copy 
config/defconfig_x86_64-native-linuxapp-gcc to 
config/defconfig_x86_64-mine-linuxapp-gcc, then modify the new file to contain 
the changes you want in the config file. This way you do not have to be 
concerned about how you do the 'make … config' step, or you can do it the way 
I do:

cd dpdk
export RTE_SDK=`pwd`
export RTE_TARGET=x86_64-mine-linuxapp-gcc

make install T=$RTE_TARGET -j

You will get a warning at the end, but you can ignore it as you were not 
installing the results anyway. Here is a bash script I use being lazy.

function _rte() {
    if [ "$1" != "" ]; then
        export RTE_SDK=`pwd`
        export RTE_TARGET=`basename $1`
        echo "RTE_SDK: "$RTE_SDK " RTE_TARGET: "$RTE_TARGET
    else
        echo "Currently RTE_SDK: "$RTE_SDK " RTE_TARGET: "$RTE_TARGET
    fi
}

function _bld() {
    echo make -C ${RTE_SDK} install T=${RTE_TARGET} "$@" -j
    make -C ${RTE_SDK} install T=${RTE_TARGET} "$@" -j
}

function _dbld() {
    echo make -C ${RTE_SDK} install T=${RTE_TARGET} EXTRA_CFLAGS="-g -O0" "$@" -j
    make -C ${RTE_SDK} install T=${RTE_TARGET} EXTRA_CFLAGS="-g -O0" "$@" -j
}

# bash aliases do not take arguments; the functions above receive them instead
alias rte=_rte
alias bld=_bld
alias dbld=_dbld


Then I just cd into the DPDK directory and type 'rte 
x86_64-native-linuxapp-gcc'; after that I can just type 'bld' to build DPDK, 
and it does not matter which directory I am in when I do the build, it will 
build the tree where I executed 'rte'. The dbld command builds DPDK with 
EXTRA_CFLAGS="-g -O0".

The rte command without args will print the current values, and I normally cd 
to the DPDK directory, type 'rte x8', and let the shell complete the command.

Anyway I hope that helps.

> 
> 
> -Original Message-
> From: Trahe, Fiona 
> Sent: Friday, October 19, 2018 5:40 PM
> To: Pathak, Pravin ; users@dpdk.org
> Cc: Trahe, Fiona 
> Subject: RE: Crypto QAT device not found
> 
> Hi Pravin,
> 
> Good that it's working now.
> Be careful of the order on changing config, this works:
> 1. make T=x86_64-native-linuxapp-gcc config 2. change build/.config (if you 
> do make T=xx config after this it overwrites your changes and reverts to the 
> default again) 3. make
> 
> Fiona
> 
>> -Original Message-
>> From: Pathak, Pravin
>> Sent: Friday, October 19, 2018 2:00 PM
>> To: Trahe, Fiona ; users@dpdk.org
>> Subject: RE: Crypto QAT device not found
>> 
>> Hi Fiona -
>> 
>> Thanks for the help.  I was using 18.05 and then moved to 18.08.
>> For configuration changes, there is  build/.config,config/common_base 
>> and x86_64-native-linuxapp-
>> gcc/.config.
>> I was changing build/.config and building but some reason it was not picking 
>> the new options set.
>> Now I changed common_base, regenerated config and build again. It worked 
>> after that.
>> I think I am not following correct build procedure.
>> There is make,   make T= x86_64-native-linuxapp-gcc, make install...
>> Each seems to work differently.
>> 
>> I am able to use HW crypto device now.
>> 
>> Regards
>> Pravin
>> 
>> 
>> -Original Message-
>> From: Trahe, Fiona
>> Sent: Friday, October 19, 2018 4:25 PM
>> To: Pathak, Pravin ; users@dpdk.org
>> Cc: Trahe, Fiona 
>> Subject: RE: Crypto QAT device not found
>> 
>> Hi Pravin,
>> 
>> As your VFs are bound to igb_uio this looks fine.
>> DPDK QAT PMD does support 37c9
>> 
>> Can you confirm which DPDK version you're using? You mentioned 18.04 but 
>> there's no such release.
>> If it's 18.08 then you also need CONFIG_RTE_LIBRTE_PMD_QAT_SYM=y but not in 
>> earlier releases.
>> 
>> Does the test code run for you?
>> run "make test-build" in the top-level directory 
>> ./build/build/test/test/test -l1 -n1 -w 
>>> cryptodev_qat_autotest
>> 
>> Fiona
>>> -Original Message-
>>> From: Pathak, Pravin
>>> Sent: Friday, October 19, 2018 11:26 AM
>>> To: Trahe, Fiona ; users@dpdk.org
>>> Subject: RE: Crypto QAT device not found
>>> 
>>> Hi Fiona -
>>> Thanks for the reply. I tried -cdev_type HW  but it did not help.  I 
>>> am not sure of DPDK supports the device on our board.
>>> Device is with ID 37c8/c9
>>> 
>>> 3d:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
>>> 3f:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
>>> da:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
>>> 3d:01.0 Co-processor: Intel Corporation Device 37c9 (rev 04)
>>> 3d:01.1 Co-processor: Intel Corporation Device 37c9 (rev 04)
>>> 3d:01.2 Co-processor: Intel Corporation Device 37c9 (rev 04)
>>> 3d:01.3 Co-processor: Intel Corporation Device 37c9 (rev 04) 
>>> 
>>> Everything looks correct except DPDK does not see these crypto 
>>> devices. It seems virtual device if I add one.
>>> Is there any command like argument I need to pass or build option 
>>> other than CONFIG_RTE_LIBRTE_PMD_QAT=y
>>> 
>>> PFs are bound to Kernel and VFs are bound to DPDK.
>>> 
>>> Crypto devices 

Re: [dpdk-users] users Digest, Vol 155, Issue 7

2018-10-16 Thread Wiles, Keith
Sorry, you must have replied to my screwup of not sending the reply in plain 
text format. I did send an updated reply to hopefully fix that problem. More 
comments inline below. All emails to the list must be in 'text' format, not 
'Rich Text' format :-(

> On Oct 15, 2018, at 11:42 PM, Wajeeha Javed  
> wrote:
> 
> Hi,
> 
> Thanks, everyone for your reply. Please find below my comments.
> 
> *I've failed to find explicit limitations from the first glance.*
> * NB_MBUF define is typically internal to examples/apps.*
> * The question I'd like to double-check if the host has enought*
> * RAM and hugepages allocated? 5 million mbufs already require about*
> * 10G.*
> 
> Total Ram = 128 GB
> Available Memory = 23GB free
> 
> Total Huge Pages = 80
> 
> Free Huge Page = 38
> Huge Page Size = 1GB
> 
> *The mempool uses uint32_t for most sizes and the number of mempool items
> is uint32_t, so the number of entries can be ~4G as stated, but make sure
> you have enough *
> 
> *memory, as the overhead for mbufs is not just the header + the packet size*
> 
> Right. Currently, there are total of 80 huge pages, 40 for each numa node
> (Numa node 0 and Numa node 1). I observed that I was using only 16 huge
> pages while the other 16
> 
> huge pages were used by other dpdk  application. By running only my dpdk
> application on numa node 0, I was able to increase the mempool size to 14M
> that uses all the
> 
> huge pages of Numa node 0.
> 
> *My question is why are you copying the mbuf and not just linking the mbufs
> into a link list? Maybe I do not understand the reason. I would try to make
> sure you do not do a copy of the *
> 
> *data and just link the mbufs together using the next pointer in the mbuf
> header unless you have chained mbufs already.*
> 
> The reason for copying the Mbuf is due to the NIC limitations, I cannot
> have more than 16384 Rx descriptors, whereas  I want to withhold all the
> packets coming at a line rate of 10GBits/sec for each port. I created a
> circular queue running on a FIFO basis. Initially, I thought of using
> rte_mbuf* packet burst for a delay of 2 secs. Now at line rate, we receive
> 14Million

I assume in your driver an mbuf is used to receive the packet data, which means 
the packet is inside an mbuf (if not, then why not?). The mbuf data does not 
need to be copied; you can use the 'next' pointer in the mbuf to create a 
single linked list. If you use fragmented packets in your design, which means 
you are using the 'next' pointer in the mbuf to chain the frame fragments into 
a single packet, then using 'next' will not work. Plus, when you call 
rte_pktmbuf_free() you need to make sure the next pointer is NULL, or it will 
free the complete chain of mbufs (not what you want here).

In the case where you are using chained mbufs for a single packet, you can 
create a set of small buffers to hold the STAILQ pointers and the pointer to 
the mbuf, then add the small structure onto a linked list. This method may be 
the best solution in the long run instead of trying to use the mbuf->next 
pointer.

Have a look at the rte_tailq.h and eal_common_tailqs.c files and rte_mempool.c 
(plus many other libs in DPDK). Use the rte_tailq_entry structure to create a 
linked list of mempool structures for searching and debugging mempools in the 
system. The 'struct rte_tailq_entry' is just a simple structure that points to 
the mempool structure and allows building a linked list with the correct 
pointer types.

You can create a mempool of rte_tailq_entry structures if you want a fast and 
clean way to allocate/free the tailq entry structures.

Then you do not need to copy the packet memory anyplace: just allocate a tailq 
entry structure, set the mbuf pointer in the tailq entry, then link the tailq 
entry to the tailq list. These macros for tailq support are not the easiest to 
understand :-(, but once you understand the idea it becomes clearer.

I hope that helps.

> 
> packets/s, so the descriptors get full and I have no option left other than
> copying the mbuf to the circular queue rather than using an rte_mbuf*
> pointer. I know I have to make a compromise on performance to achieve a
> delay for packets. So for copying mbufs, I allocate memory from the mempool
> to copy the received mbuf and then free it. Please find the code snippet
> below.
> 
> How can we chain different mbufs together? According to my understanding,
> chained mbufs in the API are used for storing segments of fragmented
> packets that are greater than the MTU. Even if we chain the mbufs together
> using the next pointer, we need to free the mbufs received; otherwise we
> will not be able to get free Rx descriptors at a line rate of 10 Gbit/s,
> and eventually all the Rx descriptors will be filled and the NIC will not
> receive any more packets.
> 
> 
> 
> for( j = 0; j < nb_rx; j++) {
> m = pkts_burst[j];
> struct rte_mbuf* copy_mbuf = pktmbuf_copy(m, pktmbuf_pool[sockid]);
> 
> rte_pktmbuf_free(m);
> }
> 
> 

Re: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool

2018-10-12 Thread Wiles, Keith
Stupid email program I tell it to reply in the format as received (text format) 
and it still sends in rich text format :-(
hope this is more readable.


> On Oct 11, 2018, at 11:48 PM, Wajeeha Javed  
> wrote:
> 
> Hi,
> 
> I am in the process of developing a DPDK-based application where I would
> like to delay the packets for about 2 secs. There are two ports connected
> to the DPDK app, sending traffic of 64-byte packets at a line rate of
> 10 Gbit/s. Within 2 secs, I will have 28 million packets for each of the
> ports in the delay application. The maximum Rx descriptor count is 16384;
> I am unable to increase the number of Rx descriptors beyond that value. Is
> it possible to increase the number of Rx descriptors to a larger value,
> e.g. 65536?

This is most likely a limitation of the NIC being used, and increasing beyond 
that value will not be possible; please check the programmer's guide for the 
NIC being used.

> Therefore I copied the mbufs using the pktmbuf copy code (shown
> below) and freed the packets received. Now the issue is that I cannot copy
> more than 5 million packets because the nb_mbufs of the mempool can't be
> more than 5 million (#define NB_MBUF 500). If I increase the NB_MBUF
> macro beyond 5 million, an error is returned: unable to init mbuf
> pool. Is there a possible way to increase the mempool size?

The mempool uses uint32_t for most sizes, and the number of mempool items is 
uint32_t, so the number of entries can be ~4G. As stated, be sure you have 
enough memory, as the overhead for mbufs is not just the header + the packet 
size.

My question is: why are you copying the mbuf and not just linking the mbufs 
into a linked list? Maybe I do not understand the reason. I would try to make 
sure you do not copy the data and just link the mbufs together using the next 
pointer in the mbuf header, unless you have chained mbufs already.

The other question is: can you drop any packets? If not, then you only have the 
linking option IMO. If you can drop packets, then you can just start dropping 
them when the ring is getting full. Holding onto 28M packets for two seconds 
can cause other protocol-related problems: TCP could be sending retransmitted 
packets, and now you have caused a bunch of extra work on the RX side at the 
end point.

> 
> Furthermore, kindly guide me if this is the appropriate mailing list for
> asking this type of questions.

You are on the correct email list; d...@dpdk.org is normally for 
DPDK developers.

Hope this helps.

> 
> 
> 
> static inline struct rte_mbuf *
> 
> pktmbuf_copy(struct rte_mbuf *md, struct rte_mempool *mp)
> {
> struct rte_mbuf *mc = NULL;
> struct rte_mbuf **prev = &mc;
> 
> do {
>struct rte_mbuf *mi;
> 
>mi = rte_pktmbuf_alloc(mp);
>if (unlikely(mi == NULL)) {
>rte_pktmbuf_free(mc);
> 
>rte_exit(EXIT_FAILURE, "Unable to Allocate Memory. Memory
> Failure.\n");
>return NULL;
>}
> 
>mi->data_off = md->data_off;
>mi->data_len = md->data_len;
>mi->port = md->port;
>mi->vlan_tci = md->vlan_tci;
>mi->tx_offload = md->tx_offload;
>mi->hash = md->hash;
> 
>mi->next = NULL;
>mi->pkt_len = md->pkt_len;
>mi->nb_segs = md->nb_segs;
>mi->ol_flags = md->ol_flags;
>mi->packet_type = md->packet_type;
> 
>   rte_memcpy(rte_pktmbuf_mtod(mi, char *), rte_pktmbuf_mtod(md, char *),
> md->data_len);
>   *prev = mi;
>   prev = &mi->next;
> } while ((md = md->next) != NULL);
> 
> *prev = NULL;
> return mc;
> 
> }
> 
> 
> 
> *Reference:*  http://patchwork.dpdk.org/patch/6289/
> 
> Thanks & Best Regards,
> 
> Wajeeha Javed

Regards,
Keith




Re: [dpdk-users] RTE_MACHINE_TYPE Error

2018-10-09 Thread Wiles, Keith


> On Oct 9, 2018, at 9:56 AM, Cliff Burdick  wrote:
> 
> I think I answered my own question -- the motherboards we're using had AES-NI 
> disabled in the BIOS, so DPDK was correctly not seeing it enabled even though 
> the processor supports it. I enabled it in the BIOS and it's working properly 
> now. Thanks again Keith!

Ok great.

> 
> On Tue, Oct 9, 2018 at 7:37 AM Cliff Burdick  wrote:
> Thanks Keith. You are right that /proc/cpuinfo on a E5-2680 v3 does not have 
> AES listed. I was incorrect assuming this was a broadwell system, but it's 
> Haswell. Either way, I'm still not quite clear what's going on since the gcc 
> manual here (https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html) specifies 
> this:
> 
> ‘haswell’
> Intel Haswell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, 
> SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, 
> BMI2 and F16C instruction set support.
> 
> Is the gcc manual specifying some other AES feature that's not what DPDK is 
> listing?
> 
> 
> 
> 
> On Tue, Oct 9, 2018 at 6:54 AM Wiles, Keith  wrote:
> 
> 
> > On Oct 8, 2018, at 11:10 PM, Cliff Burdick  wrote:
> > 
> > Hi, I'm trying to compile on a machine with an older-generation xeon than
> > the target, so I'm using CONFIG_RTE_MACHINE="broadwell" in the config.
> > gcc's options show that broadwell supports the AES flag, and I verified
> > that the build shows -march=broadwell. However, when I run my application
> > it prints immediately:
> > 
> > ERROR: This system does not support "AES".
> > Please check that RTE_MACHINE is set correctly.
> > EAL: FATAL: unsupported cpu type.
> > EAL: unsupported cpu type.
> > EAL: Error - exiting with code: 1
> >  Cause: Error with EAL initialization
> > 
> > This is gcc 7, so it supports that flag. Does anyone know how I can compile
> > for a later architecture on an older machine?
> 
> Have you checked to make sure the CPU does support the feature by looking 
> at the CPU flags in /proc/cpuinfo?
> 
> Normally the reason the code will not run is that the CPU does not support 
> it.
> 
> Regards,
> Keith
> 

Regards,
Keith



Re: [dpdk-users] RTE_MACHINE_TYPE Error

2018-10-09 Thread Wiles, Keith


> On Oct 9, 2018, at 9:37 AM, Cliff Burdick  wrote:
> 
> Thanks Keith. You are right that /proc/cpuinfo on a E5-2680 v3 does not have 
> AES listed. I was incorrect assuming this was a broadwell system, but it's 
> Haswell. Either way, I'm still not quite clear what's going on since the gcc 
> manual here (https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html) specifies 
> this:
> 
> ‘haswell’
> Intel Haswell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, 
> SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, 
> BMI2 and F16C instruction set support.
> 
> Is the gcc manual specifying some other AES feature that's not what DPDK is 
> listing?

Not all Haswell CPUs are the same, and some SKUs do not have some features.

Did you try with the haswell machine type instead of broadwell? If that does 
not work, I am going to need to talk to someone who knows; I assume Bruce or 
Konstantin know the answer. Do not expect a reply until they get to the office 
in Ireland.

> 
> 
> 
> 
> On Tue, Oct 9, 2018 at 6:54 AM Wiles, Keith  wrote:
> 
> 
> > On Oct 8, 2018, at 11:10 PM, Cliff Burdick  wrote:
> > 
> > Hi, I'm trying to compile on a machine with an older-generation xeon than
> > the target, so I'm using CONFIG_RTE_MACHINE="broadwell" in the config.
> > gcc's options show that broadwell supports the AES flag, and I verified
> > that the build shows -march=broadwell. However, when I run my application
> > it prints immediately:
> > 
> > ERROR: This system does not support "AES".
> > Please check that RTE_MACHINE is set correctly.
> > EAL: FATAL: unsupported cpu type.
> > EAL: unsupported cpu type.
> > EAL: Error - exiting with code: 1
> >  Cause: Error with EAL initialization
> > 
> > This is gcc 7, so it supports that flag. Does anyone know how I can compile
> > for a later architecture on an older machine?
> 
> Have you checked to make sure the CPU does support the feature by looking 
> at the CPU flags in /proc/cpuinfo?
> 
> Normally the reason the code will not run is that the CPU does not support 
> it.
> 
> Regards,
> Keith
> 

Regards,
Keith



Re: [dpdk-users] RTE_MACHINE_TYPE Error

2018-10-09 Thread Wiles, Keith



> On Oct 8, 2018, at 11:10 PM, Cliff Burdick  wrote:
> 
> Hi, I'm trying to compile on a machine with an older-generation xeon than
> the target, so I'm using CONFIG_RTE_MACHINE="broadwell" in the config.
> gcc's options show that broadwell supports the AES flag, and I verified
> that the build shows -march=broadwell. However, when I run my application
> it prints immediately:
> 
> ERROR: This system does not support "AES".
> Please check that RTE_MACHINE is set correctly.
> EAL: FATAL: unsupported cpu type.
> EAL: unsupported cpu type.
> EAL: Error - exiting with code: 1
>  Cause: Error with EAL initialization
> 
> This is gcc 7, so it supports that flag. Does anyone know how I can compile
> for a later architecture on an older machine?

Have you checked to make sure the CPU does support the feature by looking at 
the CPU flags in /proc/cpuinfo?

Normally the reason the code will not run is that the CPU does not support 
it.

Regards,
Keith



Re: [dpdk-users] Calculating Packet Length

2018-09-29 Thread Wiles, Keith



> On Sep 29, 2018, at 5:19 AM, Michael Barker  wrote:
> 
> Hi,
> 
> I've new to DPDK and have been started by sending ARP packets.  I have a
> question around how to set the mbuf data_len and pkt_size.  I Initially did
> the following:
> 
>struct rte_mbuf* arp_pkt = rte_pktmbuf_alloc(mbuf_pool);
>const size_t pkt_size = sizeof(struct ether_addr) + sizeof(struct
> arp_hdr);

This does seem to be wrong, and sizeof(struct ether_hdr) should have been used, 
as the L2 header is normally 14 bytes in size.

A packet in DPDK must be at least 60 bytes in length, as the hardware appends 
the frame checksum (4 bytes), because all ethernet frames must be at least 64 
bytes on the wire. Some hardware will pad the frame out to the correct length 
and then add the FCS (Frame Checksum) bytes. Some hardware will discard the 
frame and count it as a runt or fragment. Just to be safe, I always make sure 
the length of the frame is at least 60 bytes using a software check.

Hope that helps you. 

Also it looks like the ptpclient.c file may need to be fixed.

> 
>arp_pkt->data_len = pkt_size;
>arp_pkt->pkt_len = pkt_size;
> 
> Which is based on ptpclient.c sample code.  However after setting all of
> the fields, the packet either doesn't get sent or has some of the data
> truncated from the end of the packet when viewed in Wireshark.  If I modify
> the size to be the following:
> 
>const size_t pkt_size = sizeof(struct ether_addr) + sizeof(struct
> arp_hdr) + 8;
> 
> It works as expected.  I'm wondering where the extra 8 bytes come from?  Is
> there a better way to calculate the packet length?
> 
> Using dpdk 18.08, Linux - kernel 4.15.0-33.
> 
> Mike.

Regards,
Keith



Re: [dpdk-users] Using dpdk libraries without EAL

2018-09-13 Thread Wiles, Keith



> On Sep 10, 2018, at 3:05 AM, Charles Ju  wrote:
> 
> Hi,
> 
> I have developed my own packet capture code and would like to just use the
> dpdk libraries such as the ACL Library and mempool libraries. In this case,
> does these libraries require the EAL?

It depends on the code, but I assume it is using the DPDK memory system, 
rte_malloc() and the like.

You will have to replace these calls, as the memory subsystem is inited from 
EAL. Also, I assume it may be calling into other library components, and they 
will have to be modified as well.

DPDK is not a collection of functions like libc. The libc library is a 
collection of functions that are pretty much independent from each other and 
easy to use in a standalone fashion. DPDK libraries are not typically written 
to be standalone, as they use other highly optimized routines in DPDK.

That is not to say we should not look at making some parts of DPDK replaceable 
or able to be swapped out with other user components. We have done a lot of 
work to allow external memory managers or hardware-based memory in the case of 
SoCs.


Regards,
Keith



Re: [dpdk-users] How to use software prefetching for custom structures to increase throughput on the fast path

2018-09-11 Thread Wiles, Keith



> On Sep 11, 2018, at 10:42 AM, Arvind Narayanan  wrote:
> 
> Keith, thanks!
> 
> My structure's size is 24 bytes, and for that particular for-loop, I do not 
> dereference the rte_mbuf pointer, hence my understanding is it wouldn't 
> require loading 4 cache lines, correct?
> I am only looking at the tags to make a decision and then simply move ahead 
> on the fast-path.

The mbufs do get accessed by the Rx path, so a cacheline is pulled. If you 
are not accessing the mbuf structure or data, then I am not sure what the 
problem is. Is the my_packet structure starting on a cacheline, and have you 
tried putting each structure on a cacheline using __rte_cache_aligned?

Have you used VTune or some of the other tools on the Intel site?
https://software.intel.com/en-us/intel-vtune-amplifier-xe

Not sure about cost or anything. VTune is a great tool, but for me it does have 
some learning curve to understand the output.

A Xeon core of this type should be able to forward packets nicely at 10G with 
64-byte frames. Maybe first just do the normal Rx and then send it back out 
like a dumb forwarder, without doing all of the processing. Are the NIC(s) and 
cores on the same socket, if you have a multi-socket system? Just shooting in 
the dark here.

Also, did you try the l2fwd or l3fwd example to see if that app can get to 10G?

> 
> I tried the method suggested in ip_fragmentation example. I tried several 
> values of PREFETCH_OFFSET -- 3 to 16, but none helped boost throughput.
> 
> Here is my CPU info:
> 
> Model name:Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
> Architecture:  x86_64
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  256K
> L3 cache:  15360K
> Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
> nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx 
> est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt 
> tsc_deadline_timer aes xsave avx lahf_lm tpr_shadow vnmi flexpriority ept 
> vpid xsaveopt dtherm ida arat pln pts
> 
> Just to provide some more context, I isolate the CPU core used from the 
> kernel for fast-path, hence this core is fully dedicated to the fast-path 
> pipeline.
> 
> The only time when the performance bumps from 7.7G to ~8.4G (still not close 
> to 10G/100%) is when I add flags such as MEMPOOL_F_NO_SPREAD or 
> MEMPOOL_F_NO_CACHE_ALIGN.
> 
> Thanks,
> Arvind
> 
> -- Forwarded message -
> From: Wiles, Keith 
> Date: Tue, Sep 11, 2018 at 9:20 AM
> Subject: Re: [dpdk-users] How to use software prefetching for custom 
> structures to increase throughput on the fast path
> To: Arvind Narayanan 
> Cc: users@dpdk.org 
> 
> 
> 
> 
> > On Sep 11, 2018, at 3:15 AM, Arvind Narayanan  wrote:
> > 
> > Hi,
> > 
> > I am trying to write a DPDK application and finding it difficult to achieve
> > line rate on a 10G NIC. I feel this has something to do with CPU caches and
> > related optimizations, and would be grateful if someone can point me to the
> > right direction.
> > 
> > I wrap every rte_mbuf into my own structure say, my_packet. Here is
> > my_packet's structure declaration:
> > 
> > ```
> > struct my_packet {
> > struct rte_mbuf * m;
> > uint16_t tag1;
> > uint16_t tag2;
> > }
> > ```
> 
> The only problem you have created is having to pull in another cache line by 
> having to access the my_packet structure. The mbuf is highly optimized to limit 
> the number of cache lines required to be pulled into cache for an mbuf. The 
> mbuf structure is split between RX and TX: when doing TX you touch one of the 
> two cache lines the mbuf is contained in, and on RX you touch the other cache 
> line; at least that is the reason for the order of the members in the mbuf.
> 
> For the most part, accessing a packet of data takes about 2-3 cache lines to 
> load into memory. Getting the prefetches far enough in advance to get the 
> cache lines into top-level cache is hard to do. In one case, if I removed the 
> prefetches the performance increased, not decreased. :-(
> 
> Sounds like you are hitting this problem of now loading 4 cache lines, and this 
> causes the CPU to stall. One method is to prefetch the packets in a list, then 
> prefetch a number of cache lines in advance, then start processing the 
> first packet of data. In some cases I have seen prefetching 3 packets' worth of 
> cache lines helps. YMMV
> 
> You did not list processor you are using, but Intel Xeon processors have a 

Re: [dpdk-users] How to use software prefetching for custom structures to increase throughput on the fast path

2018-09-11 Thread Wiles, Keith



> On Sep 11, 2018, at 3:15 AM, Arvind Narayanan  wrote:
> 
> Hi,
> 
> I am trying to write a DPDK application and finding it difficult to achieve
> line rate on a 10G NIC. I feel this has something to do with CPU caches and
> related optimizations, and would be grateful if someone can point me to the
> right direction.
> 
> I wrap every rte_mbuf into my own structure say, my_packet. Here is
> my_packet's structure declaration:
> 
> ```
> struct my_packet {
> struct rte_mbuf * m;
> uint16_t tag1;
> uint16_t tag2;
> }
> ```

The only problem you have created is having to pull in another cache line by 
having to access the my_packet structure. The mbuf is highly optimized to limit 
the number of cache lines required to be pulled into cache for an mbuf. The 
mbuf structure is split between RX and TX: when doing TX you touch one of the 
two cache lines the mbuf is contained in, and on RX you touch the other cache 
line; at least that is the reason for the order of the members in the mbuf.

For the most part, accessing a packet of data takes about 2-3 cache lines to 
load into memory. Getting the prefetches far enough in advance to get the cache 
lines into top-level cache is hard to do. In one case, if I removed the 
prefetches the performance increased, not decreased. :-(

Sounds like you are hitting this problem of now loading 4 cache lines, and this 
causes the CPU to stall. One method is to prefetch the packets in a list, then 
prefetch a number of cache lines in advance, then start processing the 
first packet of data. In some cases I have seen prefetching 3 packets' worth of 
cache lines helps. YMMV

You did not list the processor you are using, but Intel Xeon processors have a 
limit on the number of outstanding prefetches you can have at a time; I think 8 
is the number. Also, VPP at fd.io does use this method to prefetch the data and 
not allow the CPU to stall.

Look in examples/ip_fragmentation/main.c at the code that prefetches mbufs and 
data structures. I hope that one helps.

> 
> During initialization, I reserve a mempool of type struct my_packet with
> 8192 elements. Whenever I form my_packet, I get them in bursts, similarly
> for freeing I put them back into pool as bursts.
> 
> So there is a loop in the datapath which touches each of these my_packet's
> tag to make a decision.
> 
> ```
> for (i = 0; i < pkt_count; i++) {
>if (rte_hash_lookup_data(rx_table, &(my_packet[i]->tag1), (void
> **)[i]) < 0) {
>}
> }
> ```
> 
> Based on my tests, &(my_packet->tag1) is the cause for not letting me
> achieve line rate in the fast path. I say this because if I hardcode the
> tag1's value, I am able to achieve line rate. As a workaround, I tried to
> use rte_prefetch0() and rte_prefetch_non_temporal() to prefetch 2 to 8
> my_packet(s) from my_packet[] array, but nothing seems to boost the
> throughput.
> 
> I tried to play with the flags in rte_mempool_create() function call:
> -- MEMPOOL_F_NO_SPREAD gives me 8.4GB throughput out of 10G
> -- MEMPOOL_F_NO_CACHE_ALIGN initially gives ~9.4G but then gradually
> settles to ~8.5GB after 20 or 30 seconds.
> -- NO FLAG gives 7.7G
> 
> I am running DPDK 18.05 on Ubuntu 16.04.3 LTS.
> 
> Any help or pointers are highly appreciated.
> 
> Thanks,
> Arvind

Regards,
Keith



Re: [dpdk-users] DPDK application as a library

2018-09-11 Thread Wiles, Keith



> On Sep 11, 2018, at 5:49 AM, Amedeo Sapio  wrote:
> 
> Dear all,
> I am writing a program that uses dpdk. I wrote the program based on the
> dpdk examples, in particular using the Makefile provided in the examples.
> If I compile the program as an APP (as described here), all goes
> well.

Sounds like you and this person need to get together here :-)
https://mails.dpdk.org/archives/users/2018-September/003439.html

> However, my code is part of a larger project, for which the use of a
> separate makefile causes a lot of troubles.
> So I compiled my code as a library, as described in the same page.
> Now, the program that calls the functions in the library (to initialize the
> EAL) is getting this error:
> 

Did you call rte_eal_init() first? Most of the routines depend on calling this 
function before you call the others.

As I explained in the other thread, DPDK is not libc, and for the most part you 
cannot just pick the functions you want to use. Some functions are fairly 
independent, but not really, as work is needed to make sure all of the other 
inits are called.

If you want to use DPDK today, the best method is to make sure you initialize 
DPDK correctly and then you can use parts of it for your needs.

As for the code that interacts with your application, it is best to have a 
directory with just the functions you need to build with DPDK and use these 
functions as a bridge to/from your code. Using your header files and DPDK 
header files in the same directory or file can cause a number of build 
problems, which is why I use this type of method. Some routines in DPDK you can 
call without any real problems, but sometimes it is easier to write inline 
functions, or functions that build mostly independent of DPDK and your 
routines. Not sure I explained it clearly, but I hope it helps.

BTW, please remove any disclaimers in your email as they do not apply to a 
public email list.

> MBUF: error setting mempool handler
> Cannot init mbuf pool
> 
> I also made an experiment with the l2fwd example. The example compiled as
> an app works correctly. But if I compile it as a library and then I call
> the functions in this library from another program, I get:
> 
> EAL: Error - exiting with code: 1
>  Cause: No Ethernet ports - bye
> 
> I have one ethernet port using the igb_uio driver (seen from
> dpdk-devbind.py). When I compile my program, I link the following
> libraries: dpdk, pthread, dl, numa. DPDK is compiled from source as
> described here .
> 
> Thanks for your help,
> 
> ---
> Amedeo
> 
> -- 
> 
> This message and its contents, including attachments are intended solely 
> for the original recipient. If you are not the intended recipient or have 
> received this message in error, please notify me immediately and delete 
> this message from your computer system. Any unauthorized use or 
> distribution is prohibited. Please consider the environment before printing 
> this email.

Regards,
Keith



Re: [dpdk-users] Too much data being sent with small packet and IXGBE

2018-09-10 Thread Wiles, Keith



> On Sep 10, 2018, at 1:51 PM, terry.montague.1...@btinternet.com wrote:
> 
> Hi there,
> I'm trying to send a single IGMP control packet out of an Intel X550 to its 
> connected router.
> I've looked at the packet in wireshark and its absolutely fine, the ethernet 
> header, the ip header, the ipchecksum offload and the IGMP payload & checksum 
> - there's nothing wrong with it.
> It is 46 bytes formatted and the pkt and data lengths of the allocated buffer 
> are all set to 46 bytes. Its a 46 byte packet - end of!
> However, what comes out of the interface is 60 bytes of data - the 46 
> bytes I want to send + 14 bytes of zeros.
> The 14 bytes is curiously equivalent to the size of an ethernet header, but I 
> just cannot see how it is just being added on to the end of the packet data.
> Has anyone else observed this effect , or indeed know what's causing it ?

Not sure what you are doing, but the smallest packet on ethernet is 60 bytes + 
4 bytes CRC for 64 bytes total. Normally anything less than 60 bytes + CRC is 
called a runt packet and is discarded. The NIC is padding the data to 60 bytes 
and then adding the 4-byte CRC.

Hope that helps.

> Many thanks
> Terry.

Regards,
Keith



Re: [dpdk-users] dpdk and bulk data (video/audio)

2018-09-10 Thread Wiles, Keith



> On Sep 10, 2018, at 6:28 AM, Sofia Baran  wrote:
> 
> 
> Hi All,
> 
> I want/try to us DPDK for transferring larger amount of data, e.g. video 
> frames which usually are stored within memory buffers with sizes of several 
> MB (remark: by using huges pages, these buffers could be physically 
> contiguous).
> 
> When looking at the DPDK documentation, library APIs and examples, I can't 
> find a way/hint how to transfer larger buffers using DPDK without copying the 
> video buffer fragments to the payload sections of the mbufs - which results 
> in high CPU loads.
> 
> Within the ip_fragmentation example indirect mbufs are used, pointing to the 
> payload section of a direct mbuf (holding the header). But in my 
> understanding the maximum size of a mbuf payload is 65KB (uint16_t)!?

It is true that an mbuf only holds (64K - 1) bytes. The concept of an mbuf is 
normally an ethernet packet, and they are limited to 64K.

You can create a small mbuf (128 bytes) and then set the offset/data in the 
mbuf to point to the video buffer, but only if you can find the physical memory 
address for the data. The mbuf normally holds the physical address of 
mbuf->data, not the attached buffer in this case. This of course means you have 
to manage the mbuf internal structure members yourself and be very careful you 
do not rearrange the mbuf members, as that can cause a performance problem.

> 
> I'm pretty new to DPDK so maybe I missed something. I hope that someone can 
> provide me some hits how to avoid copying the entire payload.
> 
> Thanks
> Sofia Baran
> 
> 

Regards,
Keith



Re: [dpdk-users] Does ring library and mempool library require HUGEPAGE?

2018-09-10 Thread Wiles, Keith



> On Sep 10, 2018, at 4:40 AM, Charles Ju  wrote:
> 
> I want to use dpdk ring library and mempool library in my application
> without EAL. Does the ring library allocate memory from user space? Does
> the kernel need HUGETLBFS etc. if I just want to use the ring library and mempool
> to allocate memory from user space?

You may want to go to readthedocs.org/projects/dpdk and read up on the command-line 
options, as one of them lets you run without hugepages.

Regards,
Keith



Re: [dpdk-users] ring vdev and secondary process

2018-09-03 Thread Wiles, Keith



> On Sep 3, 2018, at 3:40 PM, Tom Barbette  wrote:
> 
> Hi all,
> 
> 
> I'm trying to use virtual devices (ring-based PMD, but the underlying system 
> does not matter) between two DPDK processes.
> 
> 
> But when I launch the secondary process, I get "RING: Cannot reserve memory". 
> I modified the message to get the rte_errno, which is 17, File exists.  This 
> also happens with testpmd.

Memory cannot be allocated in the secondary process; it must be requested from 
the primary. Is this the problem?

> 
> 
> I'm using DPDK 18.08. Using the ring API directly works without any problem. 
> But I'd like to use the vdev one to build functional tests.
> 
> 
> I tried with the TAP pmd, the device is not available in the secondary 
> process (rte_eth_dev_count_avail() is 0).
> 
> 
> Thanks,
> 
> Tom

Regards,
Keith



Re: [dpdk-users] Invalid NUMA socket

2018-08-26 Thread Wiles, Keith



> On Aug 26, 2018, at 12:38 PM, waqas ahmed  wrote:
> 
> many thanks keith for help
> here are the details
> dpdk-18.02.2
> $> lsb_release -a
> Distributor ID: Ubuntu
> Description:Ubuntu 17.04
> Release:17.04
> Codename:   zesty
> $> uname -r
> 4.10.0-19-generic
> 
> following is the output from calling rte_socket_id(), which associates lcores with 
> their numa sockets correctly; i just got curious about why the nic is not 
> associated with its numa socket!
> i am working on a remote server.

Well, I assume it should be working too, but I would start by looking at the 
code that prints out the -1 value.

> 
> EAL: Detected 8 lcore(s)
> EAL: Multi-process socket /var/run/.rte_unix
> EAL: Probing VFIO support...
> EAL: PCI device :01:00.0 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10c9 net_e1000_igb
> EAL: PCI device :01:00.1 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10c9 net_e1000_igb
> EAL: PCI device :05:00.0 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> EAL: PCI device :05:00.1 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> hello from core 1 on socket 0
> hello from core 2 on socket 0
> hello from core 3 on socket 0
> hello from core 4 on socket 1
> hello from core 5 on socket 1
> hello from core 6 on socket 1
> hello from core 7 on socket 1
> hello from core 0 on socket 0
> -
> Regards
> Ahmed  
> 
> 
> On Sun, Aug 26, 2018 at 7:52 PM Wiles, Keith  wrote:
> 
> 
> > On Aug 26, 2018, at 1:04 AM, waqas ahmed  wrote:
> > 
> > Hi everyone,
> > we have dual socket xeon cpu, and we have intel 82599 10g nic having pci
> > address :05:00.0.  while running helloworld app EAL log tells that this
> > device is found on numa socket -1 ! it should be found on either numa node
> > 0 or 1?
> > 
> > numactl --hardware gives following output
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3
> > node 0 size: 16036 MB
> > node 0 free: 10268 MB
> > node 1 cpus: 4 5 6 7
> > node 1 size: 16125 MB
> > node 1 free: 11977 MB
> > node distances:
> > node   0   1
> >  0:  10  21
> >  1:  21  10
> 
> Can you tell us the version of DPDK and the OS/version.
> 
> I assume you are not running inside a VM, right?
> 
> > 
> > DPDK helloworld app log
> > ---
> > EAL: Detected 8 lcore(s)
> > EAL: Multi-process socket /var/run/.rte_unix
> > EAL: Probing VFIO support...
> > EAL: PCI device :01:00.0 on NUMA socket -1
> > EAL:   Invalid NUMA socket, default to 0
> > EAL:   probe driver: 8086:10c9 net_e1000_igb
> > EAL: PCI device :01:00.1 on NUMA socket -1
> > EAL:   Invalid NUMA socket, default to 0
> > EAL:   probe driver: 8086:10c9 net_e1000_igb
> > EAL: PCI device :05:00.0 on NUMA socket -1
> > EAL:   Invalid NUMA socket, default to 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device :05:00.1 on NUMA socket -1
> > EAL:   Invalid NUMA socket, default to 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > hello from core 2
> > hello from core 3
> > hello from core 4
> > hello from core 5
> > hello from core 6
> > hello from core 7
> > hello from core 1
> 
Can you also change the hello world app to call rte_socket_id() and print 
that value out?
> 
> Regards,
> Keith
> 

Regards,
Keith



Re: [dpdk-users] Invalid NUMA socket

2018-08-26 Thread Wiles, Keith



> On Aug 26, 2018, at 1:04 AM, waqas ahmed  wrote:
> 
> Hi everyone,
> we have dual socket xeon cpu, and we have intel 82599 10g nic having pci
> address :05:00.0.  while running helloworld app EAL log tells that this
> device is found on numa socket -1 ! it should be found on either numa node
> 0 or 1?
> 
> numactl --hardware gives following output
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3
> node 0 size: 16036 MB
> node 0 free: 10268 MB
> node 1 cpus: 4 5 6 7
> node 1 size: 16125 MB
> node 1 free: 11977 MB
> node distances:
> node   0   1
>  0:  10  21
>  1:  21  10

Can you tell us the version of DPDK and the OS/version.

I assume you are not running inside a VM, right?

> 
> DPDK helloworld app log
> ---
> EAL: Detected 8 lcore(s)
> EAL: Multi-process socket /var/run/.rte_unix
> EAL: Probing VFIO support...
> EAL: PCI device :01:00.0 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10c9 net_e1000_igb
> EAL: PCI device :01:00.1 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10c9 net_e1000_igb
> EAL: PCI device :05:00.0 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> EAL: PCI device :05:00.1 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> hello from core 2
> hello from core 3
> hello from core 4
> hello from core 5
> hello from core 6
> hello from core 7
> hello from core 1

Can you also change the hello world app to call rte_socket_id() and print that 
value out?

Regards,
Keith



Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.

2018-08-15 Thread Wiles, Keith


> On Aug 14, 2018, at 9:44 PM, Vic Wang(BJ-RD)  wrote:
> 
> Dear Keith,
>   
>There is no mbuf leak.
>Now I’ve tried the latest 18.08 DPDK release and pktgen 3.5.2 with an Intel 
> 10G NIC. I found an interesting phenomenon.
>It can transfer packets continuously when the pktgen command is 
> “./pktgen –l 0-7 –n2 -- -P –m "2.0,3.1"”.
>But it only transfers packets for about one second with the pktgen command 
> “./pktgen –l 0-7 –n2 -- -P –m "[2:3].0,[4,5].1””.

Is that [4,5].1 correct, or did you mean [4:5].1?

From the last set of emails you were using 2.0,3.1 and it only lasted a few 
seconds, but now you are saying that it works on the current versions?

>In other words, when the tx and rx polling threads are separated, it can’t 
> transfer packets sustainably.
>I don’t know whether this is a real issue, but I found a way to make pktgen 
> work. Thank you very much.
>  
> Best Regards.
> Vic
> From: Wiles, Keith [mailto:keith.wi...@intel.com] 
> Sent: 14 August 2018, 22:46
> To: Vic Wang(BJ-RD) 
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.
>  
>  
> 
> 
> On Aug 14, 2018, at 9:18 AM, Vic Wang(BJ-RD)  wrote:
>  
> Dear Keith,
>  
> The machine I used has 8 cores.
> Today I did another test using an Intel 82599 10G NIC to transfer packets 
> between pktgen and dpdk. 
> I use the same commands as "./pktgen –c 0xf –n2 -- -P –m "2.0,3.1"" on 
> pktgen side and “./test_pmd –c 0xf –n2 -- -i –portmask=0x3 –coremask=0xc” on 
> the dpdk side.
> It is better than the 1G NIC case. It can transfer packets for about 3-5 minutes 
> at random. Sometimes one port can keep transferring, but the other port is 
> stopped.  I don't know why it suddenly stops transferring.
> I add some debug info in function pktgen_send_pkts, it seems that 
> pg_pktmbuf_alloc_bulk return val is not zero, so it can't call 
> pktgen_send_burst.
> >static __inline__ void pktgen_send_pkts(port_info_t *info, uint16_t qid, 
> >struct rte_mempool *mp)
> >{
> > uint32_t flags;
> > int rc = 0;
> > 
> > flags = rte_atomic32_read(&info->port_flags);
> > 
> > if (flags & SEND_FOREVER) {
> > rc = pg_pktmbuf_alloc_bulk(mp,
> >   info->q[qid].tx_mbufs.m_table,
> >   info->tx_burst);
> > if (rc == 0) {
> > info->q[qid].tx_mbufs.len = info->tx_burst;
> > info->q[qid].tx_cnt += info->tx_burst;
> > 
> > pktgen_send_burst(info, qid);
> > }
> > } else {
>  
> All this suggests is that the mempool/pktmbuf_pool is running out of mbufs. Can 
> you try the test without using random, with just a single packet send test? 
> Getting a non-zero return from the call just means there are no mbufs in the pool, 
> but that can happen if the TX ring is larger than the number of mbufs or they are 
> not getting freed in the PMD or in pktgen.
>  
> Could have mbuf leak in Pktgen but I do not think so.
> 
> 
>  
> I will try your suggestion of the latest 18.08 release for pktgen. Thank 
> you very much.  
>  
> Best Regards.
> Vic
> From: Wiles, Keith 
> Sent: 14 August 2018, 21:31
> To: Vic Wang(BJ-RD)
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.
>  
>  
> 
> 
> On Aug 13, 2018, at 10:10 PM, Vic Wang(BJ-RD)  wrote:
>  
> Hi  Keith,
>I looped the cable back to a different port on the same machine, but 
> it doesn’t keep transferring; it transfers for about one second and stops.  
>Then I tried something else. I use two ports on both machine A and B. On 
> machine A, I run pktgen with the command “./pktgen –c 0xf –n2 -- -P –m 
> “2.0,3.1””. And on machine B, I run dpdk with the command “./test_pmd –c 
> 0xf –n2 -- -i –portmask=0x3 –coremask=0xc”. It can also only send a few 
> packets and stop.
>The version of dpdk is v17.11.2, and the version of pktgen is 3.5.2. 
> The NIC I used is intel 82575 Gigabit nic. Is there any problem above?.
> 
>  
> How many cores does this machine have?
>  
> In the pktgen command you should really use -l instead of -c, as -l is 
> easier to read and use.
>  
> pktgen -l 0-3 -n2 — -P -m “2.0, 3.1”
>  
> Also, you are using more cores than you need in the -m command; pktgen needs 
> one extra core for display/timers beyond those used for ports. Using -l 1-3 would 
> give pktgen core 1, with 2/3 used for the ports.
>  
> The only other issue is that the 1G PMD is not used much, I think, and it could 
> have a problem, as it does not work in testpmd or pktgen. To me this is not a 
> pktgen problem but a problem with something else.
>  
> The only other thing I can suggest is trying the latest 18.08 release for 
> pktgen and see if that works.

Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.

2018-08-14 Thread Wiles, Keith


On Aug 14, 2018, at 9:18 AM, Vic Wang(BJ-RD) 
mailto:vicw...@zhaoxin.com>> wrote:

Dear Keith,

The machine I used has 8 cores.
Today I did another test using an Intel 82599 10G NIC to transfer packets 
between pktgen and dpdk.
I use the same commands as "./pktgen –c 0xf –n2 -- -P –m "2.0,3.1"" on 
pktgen side and “./test_pmd –c 0xf –n2 -- -i –portmask=0x3 –coremask=0xc” on 
the dpdk side.
It is better than the 1G NIC case. It can transfer packets for about 3-5 minutes 
at random. Sometimes one port can keep transferring, but the other port is stopped. 
 I don't know why it suddenly stops transferring.
I add some debug info in function pktgen_send_pkts, it seems that 
pg_pktmbuf_alloc_bulk return val is not zero, so it can't call 
pktgen_send_burst.

>static __inline__ void pktgen_send_pkts(port_info_t *info, uint16_t qid, 
>struct rte_mempool *mp)
>{
> uint32_t flags;
> int rc = 0;
>
> flags = rte_atomic32_read(&info->port_flags);
>
> if (flags & SEND_FOREVER) {
> rc = pg_pktmbuf_alloc_bulk(mp,
>   info->q[qid].tx_mbufs.m_table,
>   info->tx_burst);
> if (rc == 0) {
> info->q[qid].tx_mbufs.len = info->tx_burst;
> info->q[qid].tx_cnt += info->tx_burst;
>
> pktgen_send_burst(info, qid);
> }
> } else {

All this suggests is that the mempool/pktmbuf_pool is running out of mbufs. Can you 
try the test without using random, with just a single packet send test? Getting a 
non-zero return from the call just means there are no mbufs in the pool, but that 
can happen if the TX ring is larger than the number of mbufs or they are not 
getting freed in the PMD or in pktgen.

Could have mbuf leak in Pktgen but I do not think so.
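A rough way to reason about this, as a sketch (the helper and numbers are illustrative, not pktgen's actual sizing logic): the pool must cover every descriptor slot that can hold an mbuf at once, across all TX and RX rings, plus one burst per TX queue that is still being assembled; if the pool is smaller than that, the bulk allocation can legitimately come back empty with no leak anywhere.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative lower bound on mempool size: every TX/RX descriptor can
 * pin one mbuf, and each TX queue may hold one more burst being built.
 * If the pool is smaller than this, allocation can starve even with no
 * leak anywhere. */
static uint32_t min_pool_size(uint32_t nb_txq, uint32_t nb_rxq,
                              uint32_t ring_size, uint32_t burst)
{
    return (nb_txq + nb_rxq) * ring_size + nb_txq * burst;
}
```

For one TX and one RX queue with 512-entry rings and 32-packet bursts this gives 1056 mbufs, so a pool of, say, 1024 would stall in exactly the way described above.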


I will try your suggestion of the latest 18.08 release for pktgen. Thank you 
very much.


Best Regards.
Vic

From: Wiles, Keith mailto:keith.wi...@intel.com>>
Sent: 14 August 2018, 21:31
To: Vic Wang(BJ-RD)
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.



On Aug 13, 2018, at 10:10 PM, Vic Wang(BJ-RD) 
mailto:vicw...@zhaoxin.com>> wrote:

Hi  Keith,
   I looped the cable back to a different port on the same machine, but it 
doesn’t keep transferring; it transfers for about one second and stops.
   Then I tried something else. I use two ports on both machine A and B. On 
machine A, I run pktgen with the command “./pktgen –c 0xf –n2 -- -P –m 
“2.0,3.1””. And on machine B, I run dpdk with the command “./test_pmd –c 0xf 
–n2 -- -i –portmask=0x3 –coremask=0xc”. It can also only send a few packets and 
stop.
   The version of dpdk is v17.11.2, and the version of pktgen is 3.5.2. The 
NIC I used is intel 82575 Gigabit nic. Is there any problem above?.


How many cores does this machine have?

In the pktgen command you should really use -l instead of -c, as -l is easier 
to read and use.

pktgen -l 0-3 -n2 — -P -m “2.0, 3.1”

Also, you are using more cores than you need in the -m command; pktgen needs 
one extra core for display/timers beyond those used for ports. Using -l 1-3 would 
give pktgen core 1, with 2/3 used for the ports.

The only other issue is that the 1G PMD is not used much, I think, and it could have 
a problem, as it does not work in testpmd or pktgen. To me this is not a pktgen 
problem but a problem with something else.

The only other thing I can suggest is trying the latest 18.08 release for 
pktgen and see if that works.


Best Regards!
VicWang
From: Wiles, Keith [mailto:keith.wi...@intel.com]
Sent: 14 August 2018, 5:13
To: Vic Wang(BJ-RD) mailto:vicw...@zhaoxin.com>>
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.




On Aug 13, 2018, at 9:46 AM, Vic Wang(BJ-RD) 
mailto:vicw...@zhaoxin.com>> wrote:

Hi Wiles,

The version of pktgen I used is pktgen-3.5.2.
Additionally, I just use one port for receiving and transmitting with "-m [2:3].0".

When I tried tools/pktgen-run.sh with the line "load_file = -f 
test/set_seq.lua" and without starting the dpdk on computer B,  it seems to 
work a while. But with starting the dpdk on computer B, the pktgen on computer 
A can only transfer a few packets.

Pktgen normally works unless the command line is wrong. In your case it seems 
like the link is not up or is going up and down. If you have two ports, try looping 
the cable back to a different port on the same machine.



Best Regards!
VicWang

From: Wiles, Keith mailto:keith.wi...@intel.com>>
Sent: 13 August 2018, 21:18
To: Vic Wang(BJ-RD)
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.



> On Aug 13, 2018, at 6:01 AM, Vic Wang(BJ-RD) 
> mailto:vicw...@zhaoxin.com>> wrote:
>
> Hi,
>   When I run pktgen-dpdk on computer A and run d

Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.

2018-08-14 Thread Wiles, Keith


On Aug 13, 2018, at 10:10 PM, Vic Wang(BJ-RD) 
mailto:vicw...@zhaoxin.com>> wrote:

Hi  Keith,
   I looped the cable back to a different port on the same machine, but it 
doesn’t keep transferring; it transfers for about one second and stops.
   Then I tried something else. I use two ports on both machine A and B. On 
machine A, I run pktgen with the command “./pktgen –c 0xf –n2 -- -P –m 
“2.0,3.1””. And on machine B, I run dpdk with the command “./test_pmd –c 0xf 
–n2 -- -i –portmask=0x3 –coremask=0xc”. It can also only send a few packets and 
stop.
   The version of dpdk is v17.11.2, and the version of pktgen is 3.5.2. The 
NIC I used is intel 82575 Gigabit nic. Is there any problem above?.


How many cores does this machine have?

In the pktgen command you should really use -l instead of -c, as -l is easier 
to read and use.

pktgen -l 0-3 -n2 — -P -m “2.0, 3.1”

Also, you are using more cores than you need in the -m command; pktgen needs 
one extra core for display/timers beyond those used for ports. Using -l 1-3 would 
give pktgen core 1, with 2/3 used for the ports.

The only other issue is that the 1G PMD is not used much, I think, and it could have 
a problem, as it does not work in testpmd or pktgen. To me this is not a pktgen 
problem but a problem with something else.

The only other thing I can suggest is trying the latest 18.08 release for 
pktgen and see if that works.


Best Regards!
VicWang
From: Wiles, Keith [mailto:keith.wi...@intel.com]
Sent: 14 August 2018, 5:13
To: Vic Wang(BJ-RD) mailto:vicw...@zhaoxin.com>>
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.




On Aug 13, 2018, at 9:46 AM, Vic Wang(BJ-RD) 
mailto:vicw...@zhaoxin.com>> wrote:

Hi Wiles,

The version of pktgen I used is pktgen-3.5.2.
Additionally, I just use one port for receiving and transmitting with "-m [2:3].0".

When I tried tools/pktgen-run.sh with the line "load_file = -f 
test/set_seq.lua" and without starting the dpdk on computer B,  it seems to 
work a while. But with starting the dpdk on computer B, the pktgen on computer 
A can only transfer a few packets.

Pktgen normally works unless the command line is wrong. In your case it seems 
like the link is not up or is going up and down. If you have two ports, try looping 
the cable back to a different port on the same machine.



Best Regards!
VicWang
____
From: Wiles, Keith mailto:keith.wi...@intel.com>>
Sent: 13 August 2018, 21:18
To: Vic Wang(BJ-RD)
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.



> On Aug 13, 2018, at 6:01 AM, Vic Wang(BJ-RD) 
> mailto:vicw...@zhaoxin.com>> wrote:
>
> Hi,
>   When I run pktgen-dpdk on computer A and run dpdk on computer B, pktgen 
> can only send a few packets (about one second) and then stops.
>   The command on the A side is ./tools/pktgen-run.sh.
>   I modified some lines in tools/pktgen-run.sh to suit my computer. 
> The critical line is "load_file = "-f test/tx-rx-loopback.lua"". I also tried 
> "load_file = -f test/set_seq.lua". But it also can only send a few packets 
> and then stops.
>   Has anyone done this successfully? I need your help, thanks very much.

Can you please tell me the version of pktgen? I would also like to see the 
command line that is printed when you run the script.

>
> Best Regards!
> VicWang
>
>
>
> CONFIDENTIAL NOTE:
> This email contains confidential or legally privileged information and is for 
> the sole use of its intended recipient. Any unauthorized review, use, copying 
> or forwarding of this email or the content of this email is strictly 
> prohibited.

These notices are not valid on a public email list, please remove them when 
sending to this public list.


Regards,
Keith


Confidentiality Notice:
This email contains confidential or proprietary information and is for the use of 
the designated recipient only. Any unauthorized review, use, copying, or forwarding 
of this email or its contents is strictly prohibited.
CONFIDENTIAL NOTE:
This email contains confidential or legally privileged information and is for 
the sole use of its intended recipient. Any unauthorized review, use, copying 
or forwarding of this email or the content of this email is strictly prohibited.

Regards,
Keith




Confidentiality Notice:
This email contains confidential or proprietary information and is for the use of 
the designated recipient only. Any unauthorized review, use, copying, or forwarding 
of this email or its contents is strictly prohibited.
CONFIDENTIAL NOTE:
This email contains confidential or legally privileged information and is for 
the sole use of its intended recipient. Any unauthorized review, use, copying 
or forwarding of this email or the content of this email is strictly prohibited.

Regards,
Keith



Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.

2018-08-13 Thread Wiles, Keith


On Aug 13, 2018, at 9:46 AM, Vic Wang(BJ-RD) 
mailto:vicw...@zhaoxin.com>> wrote:

Hi Wiles,

The version of pktgen I used is pktgen-3.5.2.
Additionally, I just use one port for receiving and transmitting with "-m [2:3].0".

When I tried tools/pktgen-run.sh with the line "load_file = -f 
test/set_seq.lua" and without starting the dpdk on computer B,  it seems to 
work a while. But with starting the dpdk on computer B, the pktgen on computer 
A can only transfer a few packets.

Pktgen normally works unless the command line is wrong. In your case it seems 
like the link is not up or is going up and down. If you have two ports, try looping 
the cable back to a different port on the same machine.


Best Regards!
VicWang
____
From: Wiles, Keith mailto:keith.wi...@intel.com>>
Sent: 13 August 2018, 21:18
To: Vic Wang(BJ-RD)
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.



> On Aug 13, 2018, at 6:01 AM, Vic Wang(BJ-RD) 
> mailto:vicw...@zhaoxin.com>> wrote:
>
> Hi,
>   When I run pktgen-dpdk on computer A and run dpdk on computer B, pktgen 
> can only send a few packets (about one second) and then stops.
>   The command on the A side is ./tools/pktgen-run.sh.
>   I modified some lines in tools/pktgen-run.sh to suit my computer. 
> The critical line is "load_file = "-f test/tx-rx-loopback.lua"". I also tried 
> "load_file = -f test/set_seq.lua". But it also can only send a few packets 
> and then stops.
>   Has anyone done this successfully? I need your help, thanks very much.

Can you please tell me the version of pktgen? I would also like to see the 
command line that is printed when you run the script.

>
> Best Regards!
> VicWang
>
>
>
> CONFIDENTIAL NOTE:
> This email contains confidential or legally privileged information and is for 
> the sole use of its intended recipient. Any unauthorized review, use, copying 
> or forwarding of this email or the content of this email is strictly 
> prohibited.

These notices are not valid on a public email list, please remove them when 
sending to this public list.


Regards,
Keith



Confidentiality Notice:
This email contains confidential or proprietary information and is for the use of 
the designated recipient only. Any unauthorized review, use, copying, or forwarding 
of this email or its contents is strictly prohibited.
CONFIDENTIAL NOTE:
This email contains confidential or legally privileged information and is for 
the sole use of its intended recipient. Any unauthorized review, use, copying 
or forwarding of this email or the content of this email is strictly prohibited.

Regards,
Keith



Re: [dpdk-users] [ptkgen-dpdk] Only can send a few packets and stopped.

2018-08-13 Thread Wiles, Keith



> On Aug 13, 2018, at 6:01 AM, Vic Wang(BJ-RD)  wrote:
> 
> Hi,
>   When I run pktgen-dpdk on computer A and run dpdk on computer B, pktgen 
> can only send a few packets (about one second) and then stops.
>   The command on the A side is ./tools/pktgen-run.sh.
>   I modified some lines in tools/pktgen-run.sh to suit my computer. 
> The critical line is "load_file = "-f test/tx-rx-loopback.lua"". I also tried 
> "load_file = -f test/set_seq.lua". But it also can only send a few packets 
> and then stops.
>   Has anyone done this successfully? I need your help, thanks very much.

Can you please tell me the version of pktgen? I would also like to see the 
command line that is printed when you run the script.

> 
> Best Regards!
> VicWang
> 
> 
> 
> CONFIDENTIAL NOTE:
> This email contains confidential or legally privileged information and is for 
> the sole use of its intended recipient. Any unauthorized review, use, copying 
> or forwarding of this email or the content of this email is strictly 
> prohibited.

These notices are not valid on a public email list, please remove them when 
sending to this public list.


Regards,
Keith



Re: [dpdk-users] Client-Hello

2018-08-10 Thread Wiles, Keith


> On Aug 10, 2018, at 11:17 AM, Konstantinos Schoinas  wrote:
> 
> Hello everyone,
> 
> I have built an application that does DPI (deep packet inspection) on the 
> Client-Hello message during the TLS session, extracts the SNI from the packet, 
> and then, if the SNI is a forbidden name, blocks the SSL connection and does 
> not forward any packets.
> 
> Does anyone know how to write/send a packet with a fatal-level 
> unrecognized_name(112) alert?
> Or maybe send a deny message through dpdk?

DPDK does not have a networking stack, but there are a number of open-source ones. 
If you can construct the reply without requiring a networking stack, then you can 
just send the packet your code constructs.

I would start by googling ‘networking stack dpdk’ or similar.
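For the specific alert asked about, the TLS part of that constructed reply is small: an alert is a 7-byte record on the wire. A sketch (the function name is made up, and it assumes a TLS 1.2 record version; you would still have to build correct Ethernet/IP/TCP headers for the intercepted flow yourself):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Build a fatal unrecognized_name(112) TLS alert record into out[].
 * Record layout per RFC 5246: ContentType=21 (alert), version, 2-byte
 * length, then the 2-byte alert body (level, description). */
static size_t build_unrecognized_name_alert(uint8_t out[7])
{
    out[0] = 21;    /* ContentType: alert */
    out[1] = 0x03;  /* version major */
    out[2] = 0x03;  /* version minor: TLS 1.2 */
    out[3] = 0x00;  /* length high byte */
    out[4] = 0x02;  /* length low byte: 2-byte alert body */
    out[5] = 2;     /* AlertLevel: fatal */
    out[6] = 112;   /* AlertDescription: unrecognized_name */
    return 7;
}
```

These bytes would go right after the TCP header in the packet you construct and send with rte_eth_tx_burst(); matching the flow's TCP sequence/ack numbers is the hard part.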

> 
> Thanks for your time
> 
> Konstantinos Schoinas

Regards,
Keith



Re: [dpdk-users] pktgen run.py error: NameError: name 'false' is not defined

2018-07-25 Thread Wiles, Keith
forgot to turn text mode back on :-(

> On Jul 25, 2018, at 12:59 AM, Uditha Bandara  wrote:
> 
> Hi,
> 
> Just identified that the problem goes away if the following file within 
>  is deleted.
> 
> /pktgen-dpdk/style/call_Uncrustify.cfg
> 
> Is it safe to delete this file?

Yes, you can delete it. But that does not explain why this is happening; did 
you move any of the files around?

You need to be in the pktgen-dpdk directory and the ‘cfg’ directory needs to be 
at that level.
> 
> Thanks and Regards,
> Uditha
> 
> 
> On Wed, Jul 25, 2018 at 2:45 PM, Uditha Bandara  wrote:
> Hi Keith,
> 
> Thanks for the information.
> 
> I'm using latest version of Pktgen cloned on 24th July 2018 via:
> 
> git clone git://dpdk.org/apps/pktgen-dpdk
> 
> My python version is 2.7.12
> I upgraded to 2.7.15 but observed the same errors.
> 
> RTE_SDK and RTE_TARGET environment variables are set as explained on
> http://pktgen-dpdk.readthedocs.io/en/latest/getting_started.html
> 
> Thanks,
> Uditha
> 
> On Wed, Jul 25, 2018 at 10:21 AM, Wiles, Keith  wrote:
> 
> 
> > On Jul 24, 2018, at 6:25 PM, Uditha Bandara  wrote:
> > 
> > Hi,
> > 
> > I'm trying to install Pktgen on a system running Linux Mint 18.1 Serena.
> > I followed the steps given in:
> 
> Which version of Pktgen are you using? It seems like the latest version.
> 
> > 
> > http://pktgen-dpdk.readthedocs.io/en/latest/getting_started.html
> > 
> > When I executed "./tools/run.py -s default" I am getting the following
> > error.
> > 
> > uditha@udithapc ~/sandbox/pktgen/pktgen-dpdk $ ./tools/run.py -s default
> >>>> sdk '/home/uditha/sandbox/pktgen/dpdk', target
> > 'x86_64-native-linuxapp-gcc'
> > Traceback (most recent call last):
> >  File "./tools/run.py", line 361, in 
> >main()
> >  File "./tools/run.py", line 358, in main
> >setup_cfg(cfg_file)
> >  File "./tools/run.py", line 219, in setup_cfg
> >cfg = load_cfg(cfg_file)
> >  File "./tools/run.py", line 119, in load_cfg
> >cfg = imp.load_source('cfg', '', f)
> >  File "", line 1, in 
> > 
> 
> I normally only run pktgen on Ubuntu systems, currently 18.04, which makes me 
> believe it could be the version of Python you are running.
> 
> I am running python 2.7.15.
> 
> On my machine the same command works, and 'false' is nowhere in the code; it must 
> be a Python variable name issue (Python spells it 'False').
> 
> Do you have your RTE_SDK and RTE_TARGET environment variables set?
> 
> > NameError: name 'false' is not defined
> > 
> > I have tried cleaning up the files and a re-install but getting the same
> > error. Does any of the config files needed to be edited before this step?
> > 
> > Thanks in advance!
> > 
> > Regards,
> > Uditha
> 
> Regards,
> Keith
> 
> 
> 

Regards,
Keith



Re: [dpdk-users] pktgen run.py error: NameError: name 'false' is not defined

2018-07-25 Thread Wiles, Keith



On Jul 25, 2018, at 12:59 AM, Uditha Bandara 
mailto:ubn...@gmail.com>> wrote:

Hi,

Just identified that the problem goes away if the following file within 
 is deleted.

/pktgen-dpdk/style/call_Uncrustify.cfg


Yes, it is safe to delete that file. I do not understand how that file would 
affect anything.

Is it safe to delete this file?

Thanks and Regards,
Uditha


On Wed, Jul 25, 2018 at 2:45 PM, Uditha Bandara 
mailto:ubn...@gmail.com>> wrote:
Hi Keith,

Thanks for the information.

I'm using latest version of Pktgen cloned on 24th July 2018 via:

git clone git://dpdk.org/apps/pktgen-dpdk<http://dpdk.org/apps/pktgen-dpdk>

My python version is 2.7.12
I upgraded to 2.7.15 but observed the same errors.

RTE_SDK and RTE_TARGET environment variables are set as explained on
http://pktgen-dpdk.readthedocs.io/en/latest/getting_started.html

Thanks,
Uditha

On Wed, Jul 25, 2018 at 10:21 AM, Wiles, Keith 
mailto:keith.wi...@intel.com>> wrote:


> On Jul 24, 2018, at 6:25 PM, Uditha Bandara 
> mailto:ubn...@gmail.com>> wrote:
>
> Hi,
>
> I'm trying to install Pktgen on a system running Linux Mint 18.1 Serena.
> I followed the steps given in:

Which version of Pktgen are you using? It seems like the latest version.

>
> http://pktgen-dpdk.readthedocs.io/en/latest/getting_started.html
>
> When I executed "./tools/run.py -s default" I am getting the following
> error.
>
> uditha@udithapc ~/sandbox/pktgen/pktgen-dpdk $ ./tools/run.py -s default
>>>> sdk '/home/uditha/sandbox/pktgen/dpdk', target
> 'x86_64-native-linuxapp-gcc'
> Traceback (most recent call last):
>  File "./tools/run.py", line 361, in 
>main()
>  File "./tools/run.py", line 358, in main
>setup_cfg(cfg_file)
>  File "./tools/run.py", line 219, in setup_cfg
>cfg = load_cfg(cfg_file)
>  File "./tools/run.py", line 119, in load_cfg
>cfg = imp.load_source('cfg', '', f)
>  File "", line 1, in 
>

I normally only run pktgen on Ubuntu systems, currently 18.04, which makes 
me believe it could be the version of Python you are running.

I am running python 2.7.15.

On my machine the same command works, and 'false' is nowhere in the code; it must 
be a Python variable name issue (Python spells it 'False').

Do you have your RTE_SDK and RTE_TARGET environment variables set?

> NameError: name 'false' is not defined
>
> I have tried cleaning up the files and a re-install but getting the same
> error. Does any of the config files needed to be edited before this step?
>
> Thanks in advance!
>
> Regards,
> Uditha

Regards,
Keith




Regards,
Keith


