[dpdk-users] Capture traffic with DPDK-dump

2016-11-10 Thread jose suarez
Hi,

I ran a test capturing packets with the testpmd app through the Linux 
kernel driver (using the pcap vdev). For this I used the following command:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c '0xfc' -n 4 --vdev 
'eth_pcap0,rx_iface=eth0,tx_pcap=/tmp/file.pcap' -- --port-topology=chained


I show below the output in this case:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
PMD: Initializing pmd_pcap for eth_pcap0
PMD: Creating pcap-backed ethdev on numa socket 0
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=187456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support 
disabled, MP over anonymous pages disabled
Logical Core 3 (socket 0) forwards packets on 1 streams:
   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

   io packet forwarding - CRC stripping disabled - packets/burst=32
   nb forwarding cores=1 - nb forwarding ports=1
   RX queues=1 - RX desc=128 - RX free threshold=0
   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX queues=1 - TX desc=512 - TX free threshold=0
   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX RS bit threshold=0 - TXQ flags=0x0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

   ---------------------- Forward statistics for port 0  ----------------------
   RX-packets: 1591270    RX-dropped: 0    RX-total: 1591270
   TX-packets: 1591270    TX-dropped: 0    TX-total: 1591270


   +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
   RX-packets: 1591270    RX-dropped: 0    RX-total: 1591270
   TX-packets: 1591270    TX-dropped: 0    TX-total: 1591270


Done.

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done

Bye...

Once I interrupt the app I can see that packets were recorded and the 
pcap file was generated, so I receive the traffic correctly. It seems 
that the problem happens when I bind the NIC to the uio_pci_generic 
driver.
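
For reference, a minimal sketch of moving the NIC onto igb_uio instead, to compare drivers (assuming the 0000:01:00.0 device and the build/script paths used elsewhere in this thread):

sudo modprobe uio
sudo insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
sudo ./tools/dpdk-devbind.py --bind=igb_uio 0000:01:00.0
sudo ./tools/dpdk-devbind.py --status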


Thanks a lot!

José.


On 10/11/16 at 14:20, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>
>> -----Original Message-----
>> From: jose suarez [mailto:jsuarezv at ac.upc.edu]
>> Sent: Thursday, November 10, 2016 12:32 PM
>> To: Pattan, Reshma 
>> Cc: users at dpdk.org
>> Subject: Re: [dpdk-users] Capture traffic with DPDK-dump
>>
>> Hi,
>>
>> Thank you very much for your response. I followed your comment about the
>> full PCI id and now the PDUMP application is working fine :). It creates the
>> pcap file.
>>
>> My problem now is that I noticed that in the testpmd app I don't receive any
>> packets. I write below the commands that I use to execute both apps:
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type
>> primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
>> topology=chained
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 --proc-
>> type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- --pdump
>> 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'
>>
>> Before I execute these commands, I ensure that all the hugepages are free
>> (sudo rm -R /dev/hugepages/*)
>>
>> In this way I split up the hugepages (I have 2048 in total) between both
>> processes, as Keith Wiles advised me. Also I don't overlap any core with the
>> masks used (0x06 and 0xf8)
>>
>> My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to a 10G
>> link that receives traffic from a mirrored port. I show below the network
>> device settings related to this NIC:
>>
>> Network devices using DPDK-compatible driver
>> ============
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=igb_uio unused=ixgbe
>>
>>
>> When I run the testpmd app and check the port stats, I get the following
>> output:
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-
>> type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
>> topology=chained
>> EAL: Detected 8 lcore(s)
>> EAL: Auto-detected process type: PRIMARY
>> EAL: Probing VFIO support...
>> PMD: bnxt_rte_pmd_init() called for (null)
>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> Interactive-mode selected
>> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
>> Configuring Port 0 (socket 0)
>> Port 0: XX:XX:XX:XX:XX:XX
>> Checking link statuses...
>> Port 0 Link 

[dpdk-users] Compiling DPDK 16.07 with shared option is failing..

2016-11-10 Thread Nagaprabhanjan Bellaru
I am not sure why it complains about "rte_eth_bond_8023ad_conf_get@DPDK_2.0"
when I am compiling for 16.07. Anyway, if I change the symbol names, the
compilation goes through fine.

-nagp

On Thu, Nov 10, 2016 at 1:53 PM, Nagaprabhanjan Bellaru <
nagp.lists at gmail.com> wrote:

> Hi,
>
> I am compiling DPDK as part of VPP (Cisco). It has rte_timer etc. turned
> off. I enabled it and stumbled on this error. DPDK compilation is invoked
> by VPP.
>
> -nagp
>
> On Thu, Nov 10, 2016 at 1:48 PM, Thomas Monjalon <
> thomas.monjalon at 6wind.com> wrote:
>
>> 2016-11-10 10:37, Nagaprabhanjan Bellaru:
>> >  I found that librte_timer was not enabled and so the compilation
>> failed.
>> > When I enabled the same, it went ahead and failed at a different place
>> > while compiling librte_pmd:
>> >
>> > --
>> > gcc -pie -fPIC
>> > -L/home/ubuntu/development/libfwdd/src/platform/vpp/vpp/build-root/install-vpp_debug-native/dpdk/lib
>> > -Wl,-g -shared rte_eth_bond_api.o rte_eth_bond_pmd.o rte_eth_bond_args.o
>> > rte_eth_bond_8023ad.o rte_eth_bond_alb.o -z defs -lrte_mbuf -lethdev
>> > -lrte_eal -lrte_kvargs -lrte_cmdline -lrte_mempool -lrte_ring
>> > -Wl,-soname,librte_pmd_bond.so.1.1 -o librte_pmd_bond.so.1.1
>> > /usr/bin/ld: librte_pmd_bond.so.1.1: version node not found for symbol
>> > rte_eth_bond_8023ad_conf_get@DPDK_2.0
>> > /usr/bin/ld: failed to set dynamic section sizes: Bad value
>> > collect2: error: ld returned 1 exit status
>> > --
>>
>> What are you changing in the default configuration?
>> Which commands are you using to compile?
>>
>
>


[dpdk-users] SRIOV with DPDK-PF: PMD: eth_ixgbevf_dev_init(): VF Initialization Failure

2016-11-10 Thread Eli Britstein
Hi all,

I want to use both PF and VF bound to igb_uio.
However, when doing so, VF init fails:

EAL: PCI device 0000:04:10.1 on NUMA socket 0
EAL:   probe driver: 8086:10ed rte_ixgbevf_pmd
PMD: eth_ixgbevf_dev_init(): VF Initialization Failure: -100
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:04:10.1 cannot be used

The sequence I use is this:

Initial state:
- 04:00.1 is bound to Linux driver (ixgbe)
- no VFs configured

Step 1: configure a VF
echo 1 > /sys/bus/pci/devices/0000:04:00.1/sriov_numvfs

Step 2: set VF's MAC address
ip link set dev enp4s0f1 vf 0 mac 00:bb:dd:04:00:00

Step 3: bind the VF to igb_uio
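(for example, with the devbind script used elsewhere in this digest; the exact command was not given in the original mail:)
./tools/dpdk-devbind.py --bind=igb_uio 0000:04:10.1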

Step 4: execute L2FWD
./dpdk/examples/l2fwd/build/l2fwd -c 1 -n 1 -w 0000:04:00.1 -- -T 0 -p 1
This works as expected.

Step 5: bind the PF to igb_uio
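(again, as an assumed example; the original mail did not show the command:)
./tools/dpdk-devbind.py --bind=igb_uio 0000:04:00.1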

Step 6: execute L2FWD again
./dpdk/examples/l2fwd/build/l2fwd -c 1 -n 1 -w 0000:04:00.1 -- -T 0 -p 1

---
FAILURE
---

I tried several DPDK versions (1.7.1, 2.0.0, 2.2.0, 16.07). All fail in 
eth_ixgbevf_dev_init.
Usually the error code is -100, but in some cases I also saw -15.

Please advise
Thanks,
Eli






[dpdk-users] Compiling DPDK 16.07 with shared option is failing..

2016-11-10 Thread Nagaprabhanjan Bellaru
Hi,

I am compiling DPDK as part of VPP (Cisco). It has rte_timer etc. turned
off. I enabled it and stumbled on this error. DPDK compilation is invoked
by VPP.

-nagp

On Thu, Nov 10, 2016 at 1:48 PM, Thomas Monjalon 
wrote:

> 2016-11-10 10:37, Nagaprabhanjan Bellaru:
> >  I found that librte_timer was not enabled and so the compilation failed.
> > When I enabled the same, it went ahead and failed at a different place
> > while compiling librte_pmd:
> >
> > --
> > gcc -pie -fPIC
> > -L/home/ubuntu/development/libfwdd/src/platform/vpp/vpp/build-root/install-vpp_debug-native/dpdk/lib
> > -Wl,-g -shared rte_eth_bond_api.o rte_eth_bond_pmd.o rte_eth_bond_args.o
> > rte_eth_bond_8023ad.o rte_eth_bond_alb.o -z defs -lrte_mbuf -lethdev
> > -lrte_eal -lrte_kvargs -lrte_cmdline -lrte_mempool -lrte_ring
> > -Wl,-soname,librte_pmd_bond.so.1.1 -o librte_pmd_bond.so.1.1
> > /usr/bin/ld: librte_pmd_bond.so.1.1: version node not found for symbol
> > rte_eth_bond_8023ad_conf_get@DPDK_2.0
> > /usr/bin/ld: failed to set dynamic section sizes: Bad value
> > collect2: error: ld returned 1 exit status
> > --
>
> What are you changing in the default configuration?
> Which commands are you using to compile?
>


[dpdk-users] Capture traffic with DPDK-dump

2016-11-10 Thread jose suarez
Hi,

Thank you very much for your response. I followed your comment about the 
full PCI id and now the PDUMP application is working fine :). It creates 
the pcap file.

My problem now is that I noticed that in the testpmd app I don't receive 
any packets. I write below the commands that I use to execute both apps:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type 
primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i 
--port-topology=chained

# sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 
--proc-type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- 
--pdump 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'

Before I execute these commands, I ensure that all the hugepages are 
free (sudo rm -R /dev/hugepages/*)
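
For reference, a quick check that the pages really are free again (standard Linux interfaces):

grep Huge /proc/meminfo
ls /dev/hugepages/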

In this way I split up the hugepages (I have 2048 in total) between both 
processes, as Keith Wiles advised me. Also, I don't overlap any cores between 
the masks used (0x06 and 0xf8).
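
As a sanity check on those masks, a sketch of the bit layout (assuming the 8-lcore machine from the logs above):

# 0x06 = 0b00000110 -> lcores 1-2 for testpmd (primary)
# 0xf8 = 0b11111000 -> lcores 3-7 for dpdk-pdump (secondary)
# no bit is set in both masks, so the two processes share no lcore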

My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to 
a 10G link that receives traffic from a mirrored port. I show below the 
network device settings related to this NIC:

Network devices using DPDK-compatible driver
============
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' 
drv=igb_uio unused=ixgbe


When I run the testpmd app and check the port stats, I get the following 
output:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 
--proc-type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- 
-i --port-topology=chained
EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
   RX-packets: 0  RX-missed: 0  RX-bytes:  0
   RX-errors: 0
   RX-nombuf:  0
   TX-packets: 0  TX-errors: 0  TX-bytes:  0

   Throughput (since last show)
   Rx-pps:0
   Tx-pps:0


It doesn't receive any packets. Did I miss any step in the 
configuration of the testpmd app?


Thanks!

José.


On 10/11/16 at 11:56, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>
>
>> -----Original Message-----
>> From: users [mailto:users-bounces at dpdk.org] On Behalf Of jose suarez
>> Sent: Monday, November 7, 2016 5:50 PM
>> To: users at dpdk.org
>> Subject: [dpdk-users] Capture traffic with DPDK-dump
>>
>> Hello everybody!
>>
>> I am new in DPDK. I'm trying simply to capture traffic from a 10G physical
>> NIC. I installed the DPDK from source files and activated the following
>> modules in common-base file:
>>
>> CONFIG_RTE_LIBRTE_PMD_PCAP=y
>>
>> CONFIG_RTE_LIBRTE_PDUMP=y
>>
>> CONFIG_RTE_PORT_PCAP=y
>>
>> Then I built the distribution using the dpdk-setup.sh script. Also I added
>> hugepages and checked they are configured successfully:
>>
>> AnonHugePages:  4096 kB
>> HugePages_Total:2048
>> HugePages_Free:0
>> HugePages_Rsvd:0
>> HugePages_Surp:0
>> Hugepagesize:   2048 kB
>>
>> To capture the traffic I guess I can use the dpdk-pdump application, but I
>> don't know how to use it. First of all, does it work if I bind the interfaces
>> using the uio_pci_generic driver? I guess that if I capture the traffic using
>> the Linux kernel driver (ixgbe) I will lose a lot of packets.
>>
>> To bind the NIC I write this command:
>>
>> sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
>>
>>
>> When I check the interfaces I can see that the NIC was bound successfully.
>> Also I checked that my NIC is compatible with DPDK (Intel
>> 82599)
>>
>> Network devices using DPDK-compatible driver
>> ============
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=uio_pci_generic unused=ixgbe,vfio-pci
>>
>>
>> To capture packets, I read in the mailing list that it is necessary to run 
>> the
>> testpmd application and then dpdk-pdump using different cores.
>> So I used the following commands:
>>
>> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
>>
>> sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump
>> 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'
> 1)Please pass on the full PCI id, i.e. "0000:01:00.0", in the command instead 
> of "01:00.0".
> In the latest DPDK 16.11 code the full PCI id is used by the eal layer to identify 
> the device.
>
> 2)Also note that you should not use the same or overlapping core masks for the 
> primary and secondary processes in a multi-process context.

[dpdk-users] Capture traffic with DPDK-dump

2016-11-10 Thread Pattan, Reshma
Hi,

Comments below.

Thanks,
Reshma

> -----Original Message-----
> From: jose suarez [mailto:jsuarezv at ac.upc.edu]
> Sent: Thursday, November 10, 2016 12:32 PM
> To: Pattan, Reshma 
> Cc: users at dpdk.org
> Subject: Re: [dpdk-users] Capture traffic with DPDK-dump
> 
> Hi,
> 
> Thank you very much for your response. I followed your comment about the
> full PCI id and now the PDUMP application is working fine :). It creates the
> pcap file.
> 
> My problem now is that I noticed that in the testpmd app I don't receive any
> packets. I write below the commands that I use to execute both apps:
> 
> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type
> primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
> topology=chained
> 
> # sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 --proc-
> type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- --pdump
> 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'
> 
> Before I execute these commands, I ensure that all the hugepages are free
> (sudo rm -R /dev/hugepages/*)
> 
> In this way I split up the hugepages (I have 2048 in total) between both
> processes, as Keith Wiles advised me. Also I don't overlap any core with the
> masks used (0x06 and 0xf8)
> 
> My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to a 10G
> link that receives traffic from a mirrored port. I show below the network
> device settings related to this NIC:
> 
> Network devices using DPDK-compatible driver
> ============
> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> drv=igb_uio unused=ixgbe
> 
> 
> When I run the testpmd app and check the port stats, I get the following
> output:
> 
> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-
> type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
> topology=chained
> EAL: Detected 8 lcore(s)
> EAL: Auto-detected process type: PRIMARY
> EAL: Probing VFIO support...
> PMD: bnxt_rte_pmd_init() called for (null)
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
> Configuring Port 0 (socket 0)
> Port 0: XX:XX:XX:XX:XX:XX
> Checking link statuses...
> Port 0 Link Up - speed 10000 Mbps - full-duplex
> Done
> testpmd> show port stats 0


After testpmd comes to the prompt, you need to execute the "start" command. This 
will start traffic forwarding.
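
A minimal interactive sequence, for example:

testpmd> start
testpmd> show port stats 0
testpmd> stop

Once forwarding is started, the port statistics and the pdump capture file should begin to fill.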

> ######################## NIC statistics for port 0  ########################
>RX-packets: 0  RX-missed: 0  RX-bytes:  0
>RX-errors: 0
>RX-nombuf:  0
>TX-packets: 0  TX-errors: 0  TX-bytes:  0
> 
>Throughput (since last show)
>Rx-pps:0
>Tx-pps:0
> 
> 
> 
> It doesn't receive any packets. Did I miss any step in the configuration of
> the testpmd app?

I am wondering: you should see all packets hitting Rx-missed. 
So I suggest you just stop everything. Unbind the port back to Linux. Then check if the 
port is receiving packets from the other end using tcpdump. 
If not, then you may need to debug the issue. 
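
For example, a debugging sketch (assuming the kernel driver is ixgbe and the interface comes back up as eth0; the name may differ on your box):

sudo ./tools/dpdk-devbind.py --bind=ixgbe 0000:01:00.0
sudo tcpdump -i eth0 -c 20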

After everything is fine, bind the port back to dpdk, run testpmd and see if 
you can receive packets or not. 
If you are seeing packets against "Rx-missed", then run the start command at the 
testpmd prompt to start packet forwarding. After that you will be able to 
see the packets in the capture file.

Thanks,
Reshma



[dpdk-users] Is DPDK compatible with C++11 threads?

2016-11-10 Thread Pavey, Nicholas
Yes, agreed. We only use the STL in general purpose code, not the DPDK fast 
path.

Nick

From: Anupam Kapoor 
Date: Wednesday, November 9, 2016 at 11:14 PM
To: "Wiles, Keith" 
Cc: David Aldrich , "Pavey, Nicholas" , "users at dpdk.org" 
Subject: Re: [dpdk-users] Is DPDK compatible with C++11 threads?


On Thu, Nov 10, 2016 at 2:30 AM, Wiles, Keith <keith.wiles at intel.com> wrote:
Also look at one of the DPDK examples as it uses a lthread on top of pthreads 
and it may give you some ideas as to how multiple threads can work. I am trying 
to remember which example and my dev machine is down at this time, but just 
search for lthread.

this one: http://dpdk.org/doc/guides/sample_app_ug/performance_thread.html

one more thing that you probably need to watch out for would be libc 
interactions within your C++11 application, e.g. usage of STL containers might 
exact a severe performance penalty...

--
kind regards
anupam


In the beginning was the lambda, and the lambda was with Emacs, and Emacs was 
the lambda.


[dpdk-users] PDUMP: failed to send to server:Connection refused

2016-11-10 Thread Pattan, Reshma
Hi,

I really apologize for not noticing this mail.

comments are below.

> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Sandeep
> Rayapudi
> Sent: Thursday, August 25, 2016 5:01 PM
> To: users at dpdk.org
> Subject: [dpdk-users] PDUMP: failed to send to server:Connection refused
> 
> Hi all,
> 
> I'm trying the following scenario and PDUMP doesn't start up even though
> I'm running the traffic generator. My idea is to generate traffic from one host
> and dump on another host.
> 
> 1. Downloaded the latest DPDK version on two hosts and compiled DPDK with
> CONFIG_RTE_LIBRTE_PMD_PCAP=y
> 2. On both of these hosts, I made one of the NICs DPDK-enabled
> 3. On host 1, I did:
> ./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0x1f -n 3 -- -P -m "[1:3].0"
> The packet generator starts and prints:
> 
>Copyright (c) <2010-2016>, Intel Corporation. All rights reserved.
>Pktgen created by: Keith Wiles -- >>> Powered by Intel® DPDK <<<
> 
> Lua 5.3.2  Copyright (C) 1994-2015 Lua.org, PUC-Rio
> >>> Packet Burst 32, RX Desc 512, TX Desc 512, mbufs/port 4096, mbuf
> >>> cache
> 512
> 
> === port to lcore mapping table (# lcores 5) ===
>lcore: 0 1 2 3 4
> port   0:  D: T  1: 0  0: 0  0: 1  0: 0 =  1: 1
> Total   :  0: 0  1: 0  0: 0  0: 1  0: 0
> Display and Timer on lcore 0, rx:tx counts per port/lcore
> 
> Configuring 2 ports, MBUF Size 1920, MBUF Cache Size 512
> Lcore:
> 1, RX-Only
> RX( 1): ( 0: 0)
> 3, TX-Only
> TX( 1): ( 0: 0)
> 
> Port :
> 0, nb_lcores  2, private 0x8ac490, lcores:  1  3
> 
> 
> 
> ** Dev Info (rte_ixgbe_pmd:0) **
>max_vfs:   0 min_rx_bufsize:1024 max_rx_pktlen : 15872
> max_rx_queues : 128 max_tx_queues:  64
>max_mac_addrs  : 127 max_hash_mac_addrs:4096 max_vmdq_pools:64
>rx_offload_capa:  31 tx_offload_capa   :  63 reta_size :   128
> flow_type_rss_offloads:00038d34
>vmdq_queue_base:   0 vmdq_queue_num: 128 vmdq_pool_base: 0
> ** RX Conf **
>pthreash   :   8 hthresh  :   8 wthresh: 0
>Free Thresh:  32 Drop Enable  :   0 Deferred Start : 0
> ** TX Conf **
>pthreash   :  32 hthresh  :   0 wthresh: 0
>Free Thresh:  32 RS Thresh:  32 Deferred Start : 0 TXQ Flags:0f01
> 
> Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 00:11:0a:67:d7:dc
> Create: Default RX  0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr
> 128)) + 192 =   8193 KB headroom 128 2176
>   Set RX queue stats mapping pid 0, q 0, lcore 1
> 
> 
> Create: Default TX  0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr
> 128)) + 192 =   8193 KB headroom 128 2176
> Create: Range TX0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr
> 128)) + 192 =   8193 KB headroom 128 2176
> Create: Sequence TX 0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr
> 128)) + 192 =   8193 KB headroom 128 2176
> Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 1920 + Hdr
> 128)) + 192 =129 KB headroom 128 2176
> 
>Port memory used =  32897 KB
>   Total memory used =  32897 KB
> Port  0: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
> 
> 
> === Display processing on lcore 0
> WARNING: Nothing to do on lcore 2: exiting
> WARNING: Nothing to do on lcore 4: exiting
>   RX processing lcore:   1 rx:  1 tx:  0
>   TX processing lcore:   3 rx:  0 tx:  1
> 
> 
> 
> 
> 
> 
> / Ports 0-1 of 2 Copyright (c) <2010-2016>, Intel Corporation
>   Flags:Port  :   P--:0
> Link State:TotalRate
> Pkts/s Max/Rx : 0/0   0/0
>Max/Tx : 0/0   0/0
> MBits/s Rx/Tx : 0/0   0/0
> Broadcast :   0
> Multicast :   0
>   64 Bytes:   0
>   65-127  :   0
>   128-255 :   0
>   256-511 :   0
>   512-1023:   0
>   1024-1518   :   0
> Runts/Jumbos  : 0/0
> Errors Rx/Tx  : 0/0
> Total Rx Pkts :   0
>   Tx Pkts :   0
>   Rx MBs  :   0
>   Tx MBs  :   0
> ARP/ICMP Pkts : 0/0
>   :
> Pattern Type  : abcd...
> Tx Count/% Rate   :  Forever / 100%
> PktSize/Tx Burst  :   64 /   32
> Src/Dest Port : 1234 / 5678
> Pkt Type:VLAN ID  : IPv4 / TCP:0001
> Dst  IP Address   : 192.168.1.1
> Src  IP Address   :  192.168.0.1/24
> Dst MAC Address   :   00:00:00:00:00:00
> Src MAC Address   :   

[dpdk-users] Capture traffic with DPDK-dump

2016-11-10 Thread Pattan, Reshma
Hi,

Comments below.

Thanks,
Reshma


> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of jose suarez
> Sent: Monday, November 7, 2016 5:50 PM
> To: users at dpdk.org
> Subject: [dpdk-users] Capture traffic with DPDK-dump
> 
> Hello everybody!
> 
> I am new in DPDK. I'm trying simply to capture traffic from a 10G physical
> NIC. I installed the DPDK from source files and activated the following
> modules in common-base file:
> 
> CONFIG_RTE_LIBRTE_PMD_PCAP=y
> 
> CONFIG_RTE_LIBRTE_PDUMP=y
> 
> CONFIG_RTE_PORT_PCAP=y
> 
> Then I built the distribution using the dpdk-setup.sh script. Also I added
> hugepages and checked they are configured successfully:
> 
> AnonHugePages:  4096 kB
> HugePages_Total:2048
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> 
> To capture the traffic I guess I can use the dpdk-pdump application, but I
> don't know how to use it. First of all, does it work if I bind the interfaces
> using the uio_pci_generic driver? I guess that if I capture the traffic using
> the Linux kernel driver (ixgbe) I will lose a lot of packets.
> 
> To bind the NIC I write this command:
> 
> sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
> 
> 
> When I check the interfaces I can see that the NIC was bound successfully.
> Also I checked that my NIC is compatible with DPDK (Intel
> 82599)
> 
> Network devices using DPDK-compatible driver
> ============
> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> drv=uio_pci_generic unused=ixgbe,vfio-pci
> 
> 
> To capture packets, I read in the mailing list that it is necessary to run the
> testpmd application and then dpdk-pdump using different cores.
> So I used the following commands:
> 
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
> 
> sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump
> 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'

1)Please pass on the full PCI id, i.e. "0000:01:00.0", in the command instead of 
"01:00.0".
In the latest DPDK 16.11 code the full PCI id is used by the eal layer to identify 
the device. 

2)Also note that you should not use the same or overlapping core masks for the 
primary and secondary processes in a multi-process context.
ex: -c0x6 for testpmd and -c0x8 for dpdk-pdump can be used. 
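
A quick shell check that two core masks do not overlap (their bitwise AND must be zero):

printf '0x%x\n' $(( 0x6 & 0x8 ))   # prints 0x0 -> no lcore is shared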

Please let me know how you are proceeding.


Thanks,
Reshma


[dpdk-users] Compiling DPDK 16.07 with shared option is failing..

2016-11-10 Thread Nagaprabhanjan Bellaru
 I found that librte_timer was not enabled and so the compilation failed.
When I enabled the same, it went ahead and failed at a different place
while compiling librte_pmd:

--
gcc -pie -fPIC
-L/home/ubuntu/development/libfwdd/src/platform/vpp/vpp/build-root/install-vpp_debug-native/dpdk/lib
-Wl,-g -shared rte_eth_bond_api.o rte_eth_bond_pmd.o rte_eth_bond_args.o
rte_eth_bond_8023ad.o rte_eth_bond_alb.o -z defs -lrte_mbuf -lethdev
-lrte_eal -lrte_kvargs -lrte_cmdline -lrte_mempool -lrte_ring
-Wl,-soname,librte_pmd_bond.so.1.1 -o librte_pmd_bond.so.1.1
/usr/bin/ld: librte_pmd_bond.so.1.1: version node not found for symbol
rte_eth_bond_8023ad_conf_get@DPDK_2.0
/usr/bin/ld: failed to set dynamic section sizes: Bad value
collect2: error: ld returned 1 exit status
--

I see that the symbol is present in drivers/net/bonding, but I am not sure
how to fix this error. Can someone please help me?
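
As a starting point, a sketch of how the built library's version nodes can be inspected with standard binutils (file name taken from the log above):

readelf -V librte_pmd_bond.so.1.1
objdump -T librte_pmd_bond.so.1.1 | grep rte_eth_bond_8023ad_conf_get

If no DPDK_2.0 version node appears, the rte_eth_bond_version.map file used at link time is the place to look.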

Thanks,
-nagp

On Wed, Nov 9, 2016 at 6:04 PM, Nagaprabhanjan Bellaru  wrote:

> Hi,
>
> I am compiling DPDK with "CONFIG_RTE_BUILD_SHARED_LIB=y". However, when I
> compile, it is failing at:
>
> --
> gcc -pie -fPIC -L/home/ubuntu/development/libfwdd/src/platform/vpp/vpp/build-root/install-vpp_debug-native/dpdk/lib  -Wl,-g -shared rte_sched.o
> rte_red.o rte_approx.o rte_reciprocal.o -z defs -lm -lrt -lrte_eal
> -lrte_mempool -lrte_mbuf -lrte_timer -Wl,-soname,librte_sched.so.1.1 -o
> librte_sched.so.1.1
> /usr/bin/ld: cannot find -lrte_timer
> --
>
> I can see that librte_timer is not compiled yet, however librte_sched
> depends on it - is there any workaround for this issue? Can I change the
> order somewhere?
>
> Thanks,
> -nagp
>


[dpdk-users] Is DPDK compatible with C++11 threads?

2016-11-10 Thread Anupam Kapoor
On Thu, Nov 10, 2016 at 2:30 AM, Wiles, Keith  wrote:

> Also look at one of the DPDK examples as it uses a lthread on top of
> pthreads and it may give you some ideas as to how multiple threads can
> work. I am trying to remember which example and my dev machine is down at
> this time, but just search for lthread.


this one: http://dpdk.org/doc/guides/sample_app_ug/performance_thread.html

one more thing that you probably need to watch out for would be libc
interactions within your C++11 application, e.g. usage of STL containers
might exact a severe performance penalty...

--
kind regards
anupam


In the beginning was the lambda, and the lambda was with Emacs, and Emacs
was the lambda.


[dpdk-users] Compiling DPDK 16.07 with shared option is failing..

2016-11-10 Thread Thomas Monjalon
2016-11-10 10:37, Nagaprabhanjan Bellaru:
>  I found that librte_timer was not enabled and so the compilation failed.
> When I enabled the same, it went ahead and failed at a different place
> while compiling librte_pmd:
> 
> --
> gcc -pie -fPIC
> -L/home/ubuntu/development/libfwdd/src/platform/vpp/vpp/build-root/install-vpp_debug-native/dpdk/lib
> -Wl,-g -shared rte_eth_bond_api.o rte_eth_bond_pmd.o rte_eth_bond_args.o
> rte_eth_bond_8023ad.o rte_eth_bond_alb.o -z defs -lrte_mbuf -lethdev
> -lrte_eal -lrte_kvargs -lrte_cmdline -lrte_mempool -lrte_ring
> -Wl,-soname,librte_pmd_bond.so.1.1 -o librte_pmd_bond.so.1.1
> /usr/bin/ld: librte_pmd_bond.so.1.1: version node not found for symbol
> rte_eth_bond_8023ad_conf_get@DPDK_2.0
> /usr/bin/ld: failed to set dynamic section sizes: Bad value
> collect2: error: ld returned 1 exit status
> --

What are you changing in the default configuration?
Which commands are you using to compile?