[dpdk-dev] DPDK supported processor

2014-08-21 Thread BYEONG-GI KIM
It works fine with DPDK at least.

The problem is that OVDK, especially the ovs-dpdk command provided by
OVDK, doesn't work correctly. I tested the DPDK application l3fwd in
order to find out where the incorrect behavior comes from, DPDK or OVDK
itself, and the issue now seems to belong to OVDK. :) In OVDK, the
physical ports could not even be detected...

Best regards

Byeong-Gi KIM


2014-08-21 19:00 GMT+09:00 Masaru Oki :

> I have a SUPERMICRO A1SRi-2758F mainboard (onboard Atom C2758),
> and the onboard NICs work fine with DPDK applications.
>
> 2014-08-21 18:17 GMT+09:00 BYEONG-GI KIM :
>
> Well, I thought the NIC in my test machine might not support DPDK,
>> rather than that l3fwd was not working.
>>
>> I just tested the l3fwd sample application to identify whether the NIC
>> supports DPDK or not.
>>
>> Best regards
>>
>> Byeong-Gi KIM
>>
>>
>> 2014-08-21 18:14 GMT+09:00 Chae-yong Chong :
>>
>> > Hi
>> >
>> > Could you give details on why you think l3fwd is not working?
>> >
>> > Best regards,
>> > Chae-yong
>> >
>> > On Thursday, August 21, 2014, BYEONG-GI KIM wrote:
>> >
>> > Thank you for the reply.
>> >>
>> >> I tested the l3fwd sample application, and the results were as
>> below:
>> >>
>> >> sudo ./build/l3fwd -c 0x0F -n 4 -- -p 0x03
>> >> --config="(0,0,0),(0,1,1),(1,0,2),(1,1,3)"
>> >> EAL: Detected lcore 0 as core 0 on socket 0
>> >> EAL: Detected lcore 1 as core 1 on socket 0
>> >> EAL: Detected lcore 2 as core 2 on socket 0
>> >> EAL: Detected lcore 3 as core 3 on socket 0
>> >> EAL: Detected lcore 4 as core 4 on socket 0
>> >> EAL: Detected lcore 5 as core 5 on socket 0
>> >> EAL: Detected lcore 6 as core 6 on socket 0
>> >> EAL: Detected lcore 7 as core 7 on socket 0
>> >> EAL: Support maximum 64 logical core(s) by configuration.
>> >> EAL: Detected 8 lcore(s)
>> >> EAL: Searching for IVSHMEM devices...
>> >> EAL: No IVSHMEM configuration found!
>> >> EAL: Setting up memory...
>> >> EAL: Ask a virtual area of 0x5ffc0 bytes
>> >> EAL: Virtual area found at 0x2aa4aae0 (size = 0x5ffc0)
>> >>
>> >> EAL: Ask a virtual area of 0x20 bytes
>> >> EAL: Virtual area found at 0x73e0 (size = 0x20)
>> >> EAL: Ask a virtual area of 0x20 bytes
>> >> EAL: Virtual area found at 0x73a0 (size = 0x20)
>> >> EAL: Requesting 12288 pages of size 2MB from socket 0
>> >> EAL: TSC frequency is ~238 KHz
>> >> EAL: Master core 0 is ready (tid=f7fe9800)
>> >> EAL: Core 3 is ready (tid=f5998700)
>> >> EAL: Core 2 is ready (tid=f6199700)
>> >> EAL: Core 1 is ready (tid=f699a700)
>> >> EAL: PCI device :00:14.0 on NUMA socket -1
>> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> >> EAL:   PCI memory mapped at 0x77f8d000
>> >> EAL:   PCI memory mapped at 0x77f89000
>> >> EAL: PCI device :00:14.1 on NUMA socket -1
>> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> >> EAL:   PCI memory mapped at 0x77f69000
>> >> EAL:   PCI memory mapped at 0x77f65000
>> >> EAL: PCI device :00:14.2 on NUMA socket -1
>> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> >> EAL:   :00:14.2 not managed by UIO driver, skipping
>> >> EAL: PCI device :00:14.3 on NUMA socket -1
>> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> >> EAL:   :00:14.3 not managed by UIO driver, skipping
>> >> EAL: PCI device :00:14.2 on NUMA socket -1
>> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> >> EAL:   :00:14.2 not managed by UIO driver, skipping
>> >> EAL: PCI device :00:14.3 on NUMA socket -1
>> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> >> EAL:   :00:14.3 not managed by UIO driver, skipping
>> >> Initializing port 0 ... Creating queues: nb_rxq=2 nb_txq=4...
>> >>  Address:0C:C4:7A:05:52:7A, Allocated mbuf pool on socket 0
>> >> LPM: Adding route 0x01010100 / 24 (0)
>> >> LPM: Adding route 0x02010100 / 24 (1)
>> >> LPM: Adding route IPV6 / 48 (0)
>> >> LPM: Adding route IPV6 / 48 (1)
>> >> txq=0,0,0 PMD: To improve 1G driver performance, consider setting the
>> TX
>> >> WTHRESH value to 4, 8, or 16.
>> >> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the
>> TX
>> >> WTHRESH value to 4, 8, or 16.
>> >> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the
>> TX
>> >> WTHRESH value to 4, 8, or 16.
>> >> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the
>> TX
>> >> WTHRESH value to 4, 8, or 16.
>> >>
>> >> Initializing port 1 ... Creating queues: nb_rxq=2 nb_txq=4...
>> >>  Address:0C:C4:7A:05:52:7B, txq=0,0,0 PMD: To improve 1G driver
>> >> performance, consider setting the TX WTHRESH value to 4, 8, or 16.
>> >> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the
>> TX
>> >> WTHRESH value to 4, 8, or 16.
>> >> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the
>> TX
>> >> WTHRESH value to 4, 8, or 16.
>> >> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the
>> TX
>> >> WTHRESH value to 4, 8, or 16.
>> >>
>> >>
>> >> 

[dpdk-dev] DPDK supported processor

2014-08-21 Thread Masaru Oki
I have SUPERMICRO A1SRi-2758F mainboard (onboard Atom C2758),
and onboard NICs work fine with DPDK application.

2014-08-21 18:17 GMT+09:00 BYEONG-GI KIM :

> Well, I thought the NIC in my test machine might not support DPDK,
> rather than that l3fwd was not working.
>
> I just tested the l3fwd sample application to identify whether the NIC
> supports DPDK or not.
>
> Best regards
>
> Byeong-Gi KIM
>
>
> 2014-08-21 18:14 GMT+09:00 Chae-yong Chong :
>
> > Hi
> >
> > Could you give details on why you think l3fwd is not working?
> >
> > Best regards,
> > Chae-yong
> >
> > On Thursday, August 21, 2014, BYEONG-GI KIM wrote:
> >
> > Thank you for the reply.
> >>
> >> I tested the l3fwd sample application, and the results were as below:
> >>
> >> sudo ./build/l3fwd -c 0x0F -n 4 -- -p 0x03
> >> --config="(0,0,0),(0,1,1),(1,0,2),(1,1,3)"
> >> EAL: Detected lcore 0 as core 0 on socket 0
> >> EAL: Detected lcore 1 as core 1 on socket 0
> >> EAL: Detected lcore 2 as core 2 on socket 0
> >> EAL: Detected lcore 3 as core 3 on socket 0
> >> EAL: Detected lcore 4 as core 4 on socket 0
> >> EAL: Detected lcore 5 as core 5 on socket 0
> >> EAL: Detected lcore 6 as core 6 on socket 0
> >> EAL: Detected lcore 7 as core 7 on socket 0
> >> EAL: Support maximum 64 logical core(s) by configuration.
> >> EAL: Detected 8 lcore(s)
> >> EAL: Searching for IVSHMEM devices...
> >> EAL: No IVSHMEM configuration found!
> >> EAL: Setting up memory...
> >> EAL: Ask a virtual area of 0x5ffc0 bytes
> >> EAL: Virtual area found at 0x2aa4aae0 (size = 0x5ffc0)
> >>
> >> EAL: Ask a virtual area of 0x20 bytes
> >> EAL: Virtual area found at 0x73e0 (size = 0x20)
> >> EAL: Ask a virtual area of 0x20 bytes
> >> EAL: Virtual area found at 0x73a0 (size = 0x20)
> >> EAL: Requesting 12288 pages of size 2MB from socket 0
> >> EAL: TSC frequency is ~238 KHz
> >> EAL: Master core 0 is ready (tid=f7fe9800)
> >> EAL: Core 3 is ready (tid=f5998700)
> >> EAL: Core 2 is ready (tid=f6199700)
> >> EAL: Core 1 is ready (tid=f699a700)
> >> EAL: PCI device :00:14.0 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   PCI memory mapped at 0x77f8d000
> >> EAL:   PCI memory mapped at 0x77f89000
> >> EAL: PCI device :00:14.1 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   PCI memory mapped at 0x77f69000
> >> EAL:   PCI memory mapped at 0x77f65000
> >> EAL: PCI device :00:14.2 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.2 not managed by UIO driver, skipping
> >> EAL: PCI device :00:14.3 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.3 not managed by UIO driver, skipping
> >> EAL: PCI device :00:14.2 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.2 not managed by UIO driver, skipping
> >> EAL: PCI device :00:14.3 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.3 not managed by UIO driver, skipping
> >> Initializing port 0 ... Creating queues: nb_rxq=2 nb_txq=4...
> >>  Address:0C:C4:7A:05:52:7A, Allocated mbuf pool on socket 0
> >> LPM: Adding route 0x01010100 / 24 (0)
> >> LPM: Adding route 0x02010100 / 24 (1)
> >> LPM: Adding route IPV6 / 48 (0)
> >> LPM: Adding route IPV6 / 48 (1)
> >> txq=0,0,0 PMD: To improve 1G driver performance, consider setting the TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the TX
> >> WTHRESH value to 4, 8, or 16.
> >>
> >> Initializing port 1 ... Creating queues: nb_rxq=2 nb_txq=4...
> >>  Address:0C:C4:7A:05:52:7B, txq=0,0,0 PMD: To improve 1G driver
> >> performance, consider setting the TX WTHRESH value to 4, 8, or 16.
> >> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the TX
> >> WTHRESH value to 4, 8, or 16.
> >>
> >>
> >> Initializing rx queues on lcore 0 ... rxq=0,0,0
> >> Initializing rx queues on lcore 1 ... rxq=0,1,0
> >> Initializing rx queues on lcore 2 ... rxq=1,0,0
> >> Initializing rx queues on lcore 3 ... rxq=1,1,0
> >>
> >> Checking link status.done
> >> Port 0 Link Up - speed 100 Mbps - full-duplex
> >> Port 1 Link Up - speed 100 Mbps - full-duplex
> >> L3FWD: entering main loop on lcore 1
> >> L3FWD: entering main loop on lcore 3
> >> L3FWD:  -- lcoreid=1 portid=0 rxqueueid=1
> >> L3FWD:  -- lcoreid=3 portid=1 rxqueueid=1
> >> L3FWD: entering main loop on lcore 0
> 

[dpdk-dev] DPDK supported processor

2014-08-21 Thread BYEONG-GI KIM
Well, I thought the NIC in my test machine might not support DPDK,
rather than that l3fwd was not working.

I just tested the l3fwd sample application to identify whether the NIC
supports DPDK or not.

Best regards

Byeong-Gi KIM


2014-08-21 18:14 GMT+09:00 Chae-yong Chong :

> Hi
>
> Could you give details on why you think l3fwd is not working?
>
> Best regards,
> Chae-yong
>
> On Thursday, August 21, 2014, BYEONG-GI KIM wrote:
>
> Thank you for the reply.
>>
>> I tested the l3fwd sample application, and the results were as below:
>>
>> sudo ./build/l3fwd -c 0x0F -n 4 -- -p 0x03
>> --config="(0,0,0),(0,1,1),(1,0,2),(1,1,3)"
>> EAL: Detected lcore 0 as core 0 on socket 0
>> EAL: Detected lcore 1 as core 1 on socket 0
>> EAL: Detected lcore 2 as core 2 on socket 0
>> EAL: Detected lcore 3 as core 3 on socket 0
>> EAL: Detected lcore 4 as core 4 on socket 0
>> EAL: Detected lcore 5 as core 5 on socket 0
>> EAL: Detected lcore 6 as core 6 on socket 0
>> EAL: Detected lcore 7 as core 7 on socket 0
>> EAL: Support maximum 64 logical core(s) by configuration.
>> EAL: Detected 8 lcore(s)
>> EAL: Searching for IVSHMEM devices...
>> EAL: No IVSHMEM configuration found!
>> EAL: Setting up memory...
>> EAL: Ask a virtual area of 0x5ffc0 bytes
>> EAL: Virtual area found at 0x2aa4aae0 (size = 0x5ffc0)
>>
>> EAL: Ask a virtual area of 0x20 bytes
>> EAL: Virtual area found at 0x73e0 (size = 0x20)
>> EAL: Ask a virtual area of 0x20 bytes
>> EAL: Virtual area found at 0x73a0 (size = 0x20)
>> EAL: Requesting 12288 pages of size 2MB from socket 0
>> EAL: TSC frequency is ~238 KHz
>> EAL: Master core 0 is ready (tid=f7fe9800)
>> EAL: Core 3 is ready (tid=f5998700)
>> EAL: Core 2 is ready (tid=f6199700)
>> EAL: Core 1 is ready (tid=f699a700)
>> EAL: PCI device :00:14.0 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x77f8d000
>> EAL:   PCI memory mapped at 0x77f89000
>> EAL: PCI device :00:14.1 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x77f69000
>> EAL:   PCI memory mapped at 0x77f65000
>> EAL: PCI device :00:14.2 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.2 not managed by UIO driver, skipping
>> EAL: PCI device :00:14.3 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.3 not managed by UIO driver, skipping
>> EAL: PCI device :00:14.2 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.2 not managed by UIO driver, skipping
>> EAL: PCI device :00:14.3 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.3 not managed by UIO driver, skipping
>> Initializing port 0 ... Creating queues: nb_rxq=2 nb_txq=4...
>>  Address:0C:C4:7A:05:52:7A, Allocated mbuf pool on socket 0
>> LPM: Adding route 0x01010100 / 24 (0)
>> LPM: Adding route 0x02010100 / 24 (1)
>> LPM: Adding route IPV6 / 48 (0)
>> LPM: Adding route IPV6 / 48 (1)
>> txq=0,0,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>>
>> Initializing port 1 ... Creating queues: nb_rxq=2 nb_txq=4...
>>  Address:0C:C4:7A:05:52:7B, txq=0,0,0 PMD: To improve 1G driver
>> performance, consider setting the TX WTHRESH value to 4, 8, or 16.
>> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>>
>>
>> Initializing rx queues on lcore 0 ... rxq=0,0,0
>> Initializing rx queues on lcore 1 ... rxq=0,1,0
>> Initializing rx queues on lcore 2 ... rxq=1,0,0
>> Initializing rx queues on lcore 3 ... rxq=1,1,0
>>
>> Checking link status.done
>> Port 0 Link Up - speed 100 Mbps - full-duplex
>> Port 1 Link Up - speed 100 Mbps - full-duplex
>> L3FWD: entering main loop on lcore 1
>> L3FWD: entering main loop on lcore 3
>> L3FWD:  -- lcoreid=1 portid=0 rxqueueid=1
>> L3FWD:  -- lcoreid=3 portid=1 rxqueueid=1
>> L3FWD: entering main loop on lcore 0
>> L3FWD:  -- lcoreid=0 portid=0 rxqueueid=0
>> L3FWD: entering main loop on lcore 2
>> L3FWD:  -- lcoreid=2 portid=1 rxqueueid=0
>>
>> Anyway, I'll also try to test the testpmd application.
>>
>> Best regards
>>
>> Byeong-Gi KIM
>>
>>
>>
>> 2014-08-21 17:56 GMT+09:00 De Lara Guarch, Pablo <
>> pablo.de.lara.guarch at intel.com>:
>>
>> > > -----Original Message-----
>> > > 

[dpdk-dev] DPDK supported processor

2014-08-21 Thread BYEONG-GI KIM
Hello.

Hmm, the problem is a little complicated. Actually, I'm trying to use
dpdk-ovs with an OpenStack deployment, but it doesn't seem to work correctly.
I've thus been trying to track down the primary cause of this.

I thought my machine's hardware might not satisfy the requirements for
DPDK, but I think that part is resolved now.

Best regards

Byeong-Gi KIM


2014-08-21 18:13 GMT+09:00 Masaru Oki :

> Hi,
> What is the problem in l3fwd? The link speed?
>
>
> 2014-08-21 18:00 GMT+09:00 BYEONG-GI KIM :
>
> Thank you for the reply.
>>
>> I tested the l3fwd sample application, and the results were as below:
>>
>> sudo ./build/l3fwd -c 0x0F -n 4 -- -p 0x03
>> --config="(0,0,0),(0,1,1),(1,0,2),(1,1,3)"
>> EAL: Detected lcore 0 as core 0 on socket 0
>> EAL: Detected lcore 1 as core 1 on socket 0
>> EAL: Detected lcore 2 as core 2 on socket 0
>> EAL: Detected lcore 3 as core 3 on socket 0
>> EAL: Detected lcore 4 as core 4 on socket 0
>> EAL: Detected lcore 5 as core 5 on socket 0
>> EAL: Detected lcore 6 as core 6 on socket 0
>> EAL: Detected lcore 7 as core 7 on socket 0
>> EAL: Support maximum 64 logical core(s) by configuration.
>> EAL: Detected 8 lcore(s)
>> EAL: Searching for IVSHMEM devices...
>> EAL: No IVSHMEM configuration found!
>> EAL: Setting up memory...
>> EAL: Ask a virtual area of 0x5ffc0 bytes
>> EAL: Virtual area found at 0x2aa4aae0 (size = 0x5ffc0)
>>
>> EAL: Ask a virtual area of 0x20 bytes
>> EAL: Virtual area found at 0x73e0 (size = 0x20)
>> EAL: Ask a virtual area of 0x20 bytes
>> EAL: Virtual area found at 0x73a0 (size = 0x20)
>> EAL: Requesting 12288 pages of size 2MB from socket 0
>> EAL: TSC frequency is ~238 KHz
>> EAL: Master core 0 is ready (tid=f7fe9800)
>> EAL: Core 3 is ready (tid=f5998700)
>> EAL: Core 2 is ready (tid=f6199700)
>> EAL: Core 1 is ready (tid=f699a700)
>> EAL: PCI device :00:14.0 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x77f8d000
>> EAL:   PCI memory mapped at 0x77f89000
>> EAL: PCI device :00:14.1 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x77f69000
>> EAL:   PCI memory mapped at 0x77f65000
>> EAL: PCI device :00:14.2 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.2 not managed by UIO driver, skipping
>> EAL: PCI device :00:14.3 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.3 not managed by UIO driver, skipping
>> EAL: PCI device :00:14.2 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.2 not managed by UIO driver, skipping
>> EAL: PCI device :00:14.3 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   :00:14.3 not managed by UIO driver, skipping
>> Initializing port 0 ... Creating queues: nb_rxq=2 nb_txq=4...
>>  Address:0C:C4:7A:05:52:7A, Allocated mbuf pool on socket 0
>> LPM: Adding route 0x01010100 / 24 (0)
>> LPM: Adding route 0x02010100 / 24 (1)
>> LPM: Adding route IPV6 / 48 (0)
>> LPM: Adding route IPV6 / 48 (1)
>> txq=0,0,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>>
>> Initializing port 1 ... Creating queues: nb_rxq=2 nb_txq=4...
>>  Address:0C:C4:7A:05:52:7B, txq=0,0,0 PMD: To improve 1G driver
>> performance, consider setting the TX WTHRESH value to 4, 8, or 16.
>> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the TX
>> WTHRESH value to 4, 8, or 16.
>>
>>
>> Initializing rx queues on lcore 0 ... rxq=0,0,0
>> Initializing rx queues on lcore 1 ... rxq=0,1,0
>> Initializing rx queues on lcore 2 ... rxq=1,0,0
>> Initializing rx queues on lcore 3 ... rxq=1,1,0
>>
>> Checking link status.done
>> Port 0 Link Up - speed 100 Mbps - full-duplex
>> Port 1 Link Up - speed 100 Mbps - full-duplex
>> L3FWD: entering main loop on lcore 1
>> L3FWD: entering main loop on lcore 3
>> L3FWD:  -- lcoreid=1 portid=0 rxqueueid=1
>> L3FWD:  -- lcoreid=3 portid=1 rxqueueid=1
>> L3FWD: entering main loop on lcore 0
>> L3FWD:  -- lcoreid=0 portid=0 rxqueueid=0
>> L3FWD: entering main loop on lcore 2
>> L3FWD:  -- lcoreid=2 portid=1 rxqueueid=0
>>
>> Anyway, I'll also try to test the testpmd application.
>>
>> Best regards
>>
>> Byeong-Gi KIM
>>
>>
>>
>> 2014-08-21 17:56 GMT+09:00 De Lara Guarch, Pablo <
>> 

[dpdk-dev] ixgbe network card has dev_info.max_rx_queues == 0

2014-08-21 Thread Sergey Mironov
Hi. I have faced a strange error on one of my network cards. A call to
rte_eth_dev_configure returns error code -22. Increasing the
verbosity level shows the following:


PMD: rte_eth_dev_configure: ethdev port_id=2 nb_rx_queues=3 > 0
EAL: Error - exiting with code: 1

here is the snippet of code which returns the error


./lib/librte_ether/rte_ethdev.c : 513

(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
if (nb_rx_q > dev_info.max_rx_queues) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
port_id, nb_rx_q, dev_info.max_rx_queues);
return (-EINVAL);
}

What does this error mean (what are the RX queues of an adapter)? What may
cause such a problem? I am using dpdk 1.5.1r1.

Thanks in advance,
Sergey


[dpdk-dev] [PATCHv3] librte_acl make it build/work for 'default' target

2014-08-21 Thread Neil Horman
Make the ACL library build/work on the 'default' architecture:
- make rte_acl_classify_scalar really scalar
 (make sure it doesn't use SSE4 intrinsics through resolve_priority()).
- Provide two versions of the rte_acl_classify code path:
  rte_acl_classify_sse() - can be built and used only on systems with SSE4.2
  and above; returns -ENOTSUP on lower archs.
  rte_acl_classify_scalar() - a slower version, but can be built and used
  on all systems.
- keep common code shared between these two code paths.

v2 changes:
 run-time selection of the most appropriate code path for a given ISA.
 By default the highest supported one is selected.
 The user can still override that selection by manually assigning a new value
 to the global function pointer rte_acl_default_classify.
 rte_acl_classify() becomes a macro calling whatever rte_acl_default_classify
 points to.

v3 changes:
 Updated the classify pointer to be a function so as to better preserve ABI.
 Removed macro definitions for match-check functions to make them static inline.

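For readers skimming the archive, the run-time selection scheme the changelog describes (a global function pointer chosen per ISA, behind a stable entry point) can be sketched in plain C. The names below (`classify`, `select_classify`, `CLASSIFY_SSE`) are illustrative stand-ins, not the actual DPDK symbols:

```c
#include <stddef.h>

/* Illustrative stand-ins, not the actual DPDK symbols. */
enum classify_alg { CLASSIFY_SCALAR, CLASSIFY_SSE };

static int classify_scalar(const unsigned char *data) { (void)data; return 1; }
static int classify_sse(const unsigned char *data)    { (void)data; return 2; }

/* Global pointer holding the currently selected code path. */
static int (*default_classify)(const unsigned char *) = classify_scalar;

/* v3 made the user-facing entry point a real function (not a macro)
 * so the ABI stays stable while the implementation can change. */
int classify(const unsigned char *data)
{
    return default_classify(data);
}

/* Run-time selection, e.g. after probing CPU features at init. */
void select_classify(enum classify_alg alg)
{
    default_classify = (alg == CLASSIFY_SSE) ? classify_sse : classify_scalar;
}
```

At startup the library would probe the CPU and select the highest supported variant; a user can still force the slower path, which is what the test-acl changes in the diff do with rte_acl_select_classify(ACL_CLASSIFY_SCALAR).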
Signed-off-by: Neil Horman 
---
 app/test-acl/main.c  |  13 +-
 app/test/test_acl.c  |  12 +-
 lib/librte_acl/Makefile  |   5 +-
 lib/librte_acl/acl_bld.c |   5 +-
 lib/librte_acl/acl_match_check.h |  83 
 lib/librte_acl/acl_run.c | 944 ---
 lib/librte_acl/acl_run.h | 220 +
 lib/librte_acl/acl_run_scalar.c  | 198 
 lib/librte_acl/acl_run_sse.c | 627 ++
 lib/librte_acl/rte_acl.c |  46 ++
 lib/librte_acl/rte_acl.h |  26 +-
 11 files changed, 1216 insertions(+), 963 deletions(-)
 create mode 100644 lib/librte_acl/acl_match_check.h
 delete mode 100644 lib/librte_acl/acl_run.c
 create mode 100644 lib/librte_acl/acl_run.h
 create mode 100644 lib/librte_acl/acl_run_scalar.c
 create mode 100644 lib/librte_acl/acl_run_sse.c

diff --git a/app/test-acl/main.c b/app/test-acl/main.c
index d654409..a77f47d 100644
--- a/app/test-acl/main.c
+++ b/app/test-acl/main.c
@@ -787,6 +787,10 @@ acx_init(void)
/* perform build. */
ret = rte_acl_build(config.acx, );

+   /* setup default rte_acl_classify */
+   if (config.scalar)
+   rte_acl_select_classify(ACL_CLASSIFY_SCALAR);
+
dump_verbose(DUMP_NONE, stdout,
"rte_acl_build(%u) finished with %d\n",
config.bld_categories, ret);
@@ -815,13 +819,8 @@ search_ip5tuples_once(uint32_t categories, uint32_t step, 
int scalar)
v += config.trace_sz;
}

-   if (scalar != 0)
-   ret = rte_acl_classify_scalar(config.acx, data,
-   results, n, categories);
-
-   else
-   ret = rte_acl_classify(config.acx, data,
-   results, n, categories);
+   ret = rte_acl_classify(config.acx, data, results,
+   n, categories);

if (ret != 0)
rte_exit(ret, "classify for ipv%c_5tuples returns %d\n",
diff --git a/app/test/test_acl.c b/app/test/test_acl.c
index 869f6d3..2fcef6e 100644
--- a/app/test/test_acl.c
+++ b/app/test/test_acl.c
@@ -148,7 +148,8 @@ test_classify_run(struct rte_acl_ctx *acx)
}

/* make a quick check for scalar */
-   ret = rte_acl_classify_scalar(acx, data, results,
+   rte_acl_select_classify(ACL_CLASSIFY_SCALAR);
+   ret = rte_acl_classify(acx, data, results,
RTE_DIM(acl_test_data), RTE_ACL_MAX_CATEGORIES);
if (ret != 0) {
printf("Line %i: SSE classify failed!\n", __LINE__);
@@ -362,7 +363,8 @@ test_invalid_layout(void)
}

/* classify tuples (scalar) */
-   ret = rte_acl_classify_scalar(acx, data, results,
+   rte_acl_select_classify(ACL_CLASSIFY_SCALAR);
+   ret = rte_acl_classify(acx, data, results,
RTE_DIM(results), 1);
if (ret != 0) {
printf("Line %i: Scalar classify failed!\n", __LINE__);
@@ -850,7 +852,8 @@ test_invalid_parameters(void)
/* scalar classify test */

/* cover zero categories in classify (should not fail) */
-   result = rte_acl_classify_scalar(acx, NULL, NULL, 0, 0);
+   rte_acl_select_classify(ACL_CLASSIFY_SCALAR);
+   result = rte_acl_classify(acx, NULL, NULL, 0, 0);
if (result != 0) {
printf("Line %i: Scalar classify with zero categories "
"failed!\n", __LINE__);
@@ -859,7 +862,8 @@ test_invalid_parameters(void)
}

/* cover invalid but positive categories in classify */
-   result = rte_acl_classify_scalar(acx, NULL, NULL, 0, 3);
+   rte_acl_select_classify(ACL_CLASSIFY_SCALAR);
+   result = rte_acl_classify(acx, NULL, NULL, 0, 3);
if (result == 0) {
printf("Line %i: Scalar classify with 3 categories "
   

[dpdk-dev] ixgbe network card has dev_info.max_rx_queues == 0

2014-08-21 Thread Alex Markuze
RX and TX are shorthand for the receive and transmit queues.
These queues store the ingress/egress packets.

Looking at the info you've sent: max_rx_queues for this device is 0
(clearly something is wrong there), so the requested nb_rx_q of 3 is an
invalid value, hence -EINVAL == -22.
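A minimal sketch of the check involved, using stand-in types (not the real DPDK structures — only the comparison logic mirrors the rte_ethdev.c snippet quoted in this thread):

```c
#include <errno.h>

/* Stand-in for the ethdev info structure; only the fields the check
 * uses are modeled here (not the real struct rte_eth_dev_info). */
struct dev_info {
    unsigned int max_rx_queues;
    unsigned int max_tx_queues;
};

/* Mirrors the range check in rte_eth_dev_configure(): requesting more
 * queues than the PMD reports yields -EINVAL, i.e. -22 on Linux. */
int configure_check(const struct dev_info *info,
                    unsigned int nb_rx_q, unsigned int nb_tx_q)
{
    if (nb_rx_q > info->max_rx_queues)
        return -EINVAL;
    if (nb_tx_q > info->max_tx_queues)
        return -EINVAL;
    return 0;
}
```

Note the corollary: if the driver's dev_infos_get reports max_rx_queues == 0, no positive nb_rx_q can ever pass, which points at the device/driver side rather than the caller.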

On Thu, Aug 21, 2014 at 3:26 PM, Sergey Mironov  wrote:
> Hi. I have faced a strange error on one of my network cards. A call to
> rte_eth_dev_configure returns error code -22. Increasing the
> verbosity level shows the following:
>
>
> PMD: rte_eth_dev_configure: ethdev port_id=2 nb_rx_queues=3 > 0
> EAL: Error - exiting with code: 1
>
> here is the snippet of code which returns the error
>
>
> ./lib/librte_ether/rte_ethdev.c : 513
>
> (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
> if (nb_rx_q > dev_info.max_rx_queues) {
> PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
> port_id, nb_rx_q, dev_info.max_rx_queues);
> return (-EINVAL);
> }
>
> What does this error mean (what are the RX queues of an adapter)? What may
> cause such a problem? I am using dpdk 1.5.1r1.
>
> Thanks in advance,
> Sergey


[dpdk-dev] DPDK supported processor

2014-08-21 Thread BYEONG-GI KIM
Hello.

I'm now using an Intel Atom C2758 processor on the DPDK test machine, but
it doesn't seem to work.

Which sample application would be recommended to verify that the NIC
actually works with DPDK?

Best regards

Byeong-Gi KIM


[dpdk-dev] DPDK supported processor

2014-08-21 Thread De Lara Guarch, Pablo

> 
> From: BYEONG-GI KIM [mailto:kimbyeonggi at gmail.com]
> Sent: Thursday, August 21, 2014 11:09 AM
> To: Masaru Oki; De Lara Guarch, Pablo; dev at dpdk.org
> Subject: Re: [dpdk-dev] DPDK supported processor
> 
> It works fine with DPDK at least.
> 
> The problem is that OVDK, especially the ovs-dpdk command provided by
> OVDK, doesn't work correctly. I tested the DPDK application l3fwd in order
> to find out where the incorrect behavior comes from, DPDK or OVDK
> itself, and the issue now seems to belong to OVDK. :) In OVDK, the
> physical ports could not even be detected...

I assume you are using DPDK 1.7. The DPDK-OVS team has been working on a fix
for that, so you should check on their mailing list.

> 
> Best regards
> 
> Byeong-Gi KIM
> 
> 2014-08-21 19:00 GMT+09:00 Masaru Oki :
> I have a SUPERMICRO A1SRi-2758F mainboard (onboard Atom C2758),
> and the onboard NICs work fine with DPDK applications.
> 
> 2014-08-21 18:17 GMT+09:00 BYEONG-GI KIM :
> 
> Well, I thought the NIC in my test machine might not support DPDK,
> rather than that l3fwd was not working.
> 
> I just tested the l3fwd sample application to identify whether the NIC
> supports DPDK or not.
> 
> Best regards
> 
> Byeong-Gi KIM
> 
> 
> 2014-08-21 18:14 GMT+09:00 Chae-yong Chong :
> 
> > Hi
> >
> > Could you give details on why you think l3fwd is not working?
> >
> > Best regards,
> > Chae-yong
> >
> > On Thursday, August 21, 2014, BYEONG-GI KIM wrote:
> >
> > Thank you for the reply.
> >>
> >> I tested the l3fwd sample application, and the results were as below:
> >>
> >> sudo ./build/l3fwd -c 0x0F -n 4 -- -p 0x03
> >> --config="(0,0,0),(0,1,1),(1,0,2),(1,1,3)"
> >> EAL: Detected lcore 0 as core 0 on socket 0
> >> EAL: Detected lcore 1 as core 1 on socket 0
> >> EAL: Detected lcore 2 as core 2 on socket 0
> >> EAL: Detected lcore 3 as core 3 on socket 0
> >> EAL: Detected lcore 4 as core 4 on socket 0
> >> EAL: Detected lcore 5 as core 5 on socket 0
> >> EAL: Detected lcore 6 as core 6 on socket 0
> >> EAL: Detected lcore 7 as core 7 on socket 0
> >> EAL: Support maximum 64 logical core(s) by configuration.
> >> EAL: Detected 8 lcore(s)
> >> EAL: Searching for IVSHMEM devices...
> >> EAL: No IVSHMEM configuration found!
> >> EAL: Setting up memory...
> >> EAL: Ask a virtual area of 0x5ffc0 bytes
> >> EAL: Virtual area found at 0x2aa4aae0 (size = 0x5ffc0)
> >>
> >> EAL: Ask a virtual area of 0x20 bytes
> >> EAL: Virtual area found at 0x73e0 (size = 0x20)
> >> EAL: Ask a virtual area of 0x20 bytes
> >> EAL: Virtual area found at 0x73a0 (size = 0x20)
> >> EAL: Requesting 12288 pages of size 2MB from socket 0
> >> EAL: TSC frequency is ~238 KHz
> >> EAL: Master core 0 is ready (tid=f7fe9800)
> >> EAL: Core 3 is ready (tid=f5998700)
> >> EAL: Core 2 is ready (tid=f6199700)
> >> EAL: Core 1 is ready (tid=f699a700)
> >> EAL: PCI device :00:14.0 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   PCI memory mapped at 0x77f8d000
> >> EAL:   PCI memory mapped at 0x77f89000
> >> EAL: PCI device :00:14.1 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   PCI memory mapped at 0x77f69000
> >> EAL:   PCI memory mapped at 0x77f65000
> >> EAL: PCI device :00:14.2 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.2 not managed by UIO driver, skipping
> >> EAL: PCI device :00:14.3 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.3 not managed by UIO driver, skipping
> >> EAL: PCI device :00:14.2 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.2 not managed by UIO driver, skipping
> >> EAL: PCI device :00:14.3 on NUMA socket -1
> >> EAL:   probe driver: 8086:1f41 rte_igb_pmd
> >> EAL:   :00:14.3 not managed by UIO driver, skipping
> >> Initializing port 0 ... Creating queues: nb_rxq=2 nb_txq=4...
> >>  Address:0C:C4:7A:05:52:7A, Allocated mbuf pool on socket 0
> >> LPM: Adding route 0x01010100 / 24 (0)
> >> LPM: Adding route 0x02010100 / 24 (1)
> >> LPM: Adding route IPV6 / 48 (0)
> >> LPM: Adding route IPV6 / 48 (1)
> >> txq=0,0,0 PMD: To improve 1G driver performance, consider setting the
> TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the
> TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=2,2,0 PMD: To improve 1G driver performance, consider setting the
> TX
> >> WTHRESH value to 4, 8, or 16.
> >> txq=3,3,0 PMD: To improve 1G driver performance, consider setting the
> TX
> >> WTHRESH value to 4, 8, or 16.
> >>
> >> Initializing port 1 ... Creating queues: nb_rxq=2 nb_txq=4...
> >>  Address:0C:C4:7A:05:52:7B, txq=0,0,0 PMD: To improve 1G driver
> >> performance, consider setting the TX WTHRESH value to 4, 8, or 16.
> >> txq=1,1,0 PMD: To improve 1G driver performance, consider setting the
> TX
> >> WTHRESH value 

[dpdk-dev] About RTE_MAX_ETHPORT_QUEUE_STATS_MAPS

2014-08-21 Thread Alejandro Lucero
Hi,

Documentation and header files describe stat_idx parameter for

rte_eth_dev_set_tx_queue_stats_mapping

and

rte_eth_dev_set_rx_queue_stats_mapping

as

The value must be in the range
[0, RTE_MAX_ETHPORT_QUEUE_STATS_MAPS - 1]

I have not found a definition for RTE_MAX_ETHPORT_QUEUE_STATS_MAPS, but the
per-queue counters inside struct rte_eth_stats are arrays of length
RTE_ETHDEV_QUEUE_STAT_CNTRS, which is defined in

config/defconfig_x86_64-default-linuxapp-gcc

CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16

I assume RTE_MAX_ETHPORT_QUEUE_STATS_MAPS is equal to
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS.

Can anyone confirm this?

Thanks


[dpdk-dev] DPDK supported processor

2014-08-21 Thread De Lara Guarch, Pablo
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of BYEONG-GI KIM
> Sent: Thursday, August 21, 2014 7:04 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] DPDK supported processor
> 
> Hello.
> 
> I'm now using an Intel Atom C2758 processor on the DPDK testing machines, but
> it does not seem to be working.
> 
> Which sample application would be recommended to verify that the NIC actually
> works with DPDK?
> 
> Best regards
> 
> Byeong-Gi KIM

Hi,

Testpmd is the best application you can use for that.

Best regards,
Pablo
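[Editor's note] A typical testpmd invocation looks like the sketch below; the paths, core mask, and port mask are illustrative values for a 4-core setup, not taken from this thread.

```shell
# Illustrative only -- paths and masks are assumptions, not from the thread.
# Bind the NICs to the igb_uio driver first, then from the DPDK build tree:
sudo ./build/app/testpmd -c 0x0F -n 4 -- -i --portmask=0x3

# At the testpmd> prompt:
#   start tx_first        # begin forwarding, seeding each port with a burst
#   show port stats all   # moving RX/TX counters confirm the NICs work with DPDK
```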


[dpdk-dev] [PATCH] examples/vhost: Support jumbo frame in user space vhost

2014-08-21 Thread Ouyang, Changchun
Hi all,

Any comments on this patch?
And what's the status of merging it into the mainline?

Thanks in advance
Changchun

> -Original Message-
> From: Ouyang, Changchun
> Sent: Friday, August 15, 2014 12:58 PM
> To: dev at dpdk.org
> Cc: Cao, Waterman; Ouyang, Changchun
> Subject: [PATCH] examples/vhost: Support jumbo frame in user space vhost
> 
> This patch supports the mergeable RX feature and thus supports jumbo frame RX
> and TX in user space vhost (as the virtio backend).
> 
> On RX, it secures enough room in the vring to accommodate one complete
> scattered packet received by the PMD from the physical port, and then copies
> the data from the mbuf into the vring buffers, possibly across several vring
> entries and descriptors.
> 
> On TX, it takes a jumbo frame, possibly described by several vring descriptors
> chained together with the 'NEXT' flag, copies them into one scattered packet,
> and transmits it to the physical port through the PMD.
> 
> Signed-off-by: Changchun Ouyang 
> Acked-by: Huawei Xie 
> ---
>  examples/vhost/main.c   | 726
> 
>  examples/vhost/virtio-net.h |  14 +
>  2 files changed, 687 insertions(+), 53 deletions(-)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 193aa25..7d9e6a2 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -106,6 +106,8 @@
>  #define BURST_RX_WAIT_US 15	/* Defines how long we wait between retries on RX */
>  #define BURST_RX_RETRIES 4	/* Number of retries on RX. */
> 
> +#define JUMBO_FRAME_MAX_SIZE0x2600
> +
>  /* State of virtio device. */
>  #define DEVICE_MAC_LEARNING 0
>  #define DEVICE_RX1
> @@ -676,8 +678,12 @@ us_vhost_parse_args(int argc, char **argv)
>  			us_vhost_usage(prgname);
>  			return -1;
>  		} else {
> -			if (ret)
> +			if (ret) {
>  				vmdq_conf_default.rxmode.jumbo_frame = 1;
> +				vmdq_conf_default.rxmode.max_rx_pkt_len
> +					= JUMBO_FRAME_MAX_SIZE;
> +				VHOST_FEATURES = (1ULL << VIRTIO_NET_F_MRG_RXBUF);
> +			}
>  		}
>  	}
> 
> @@ -797,6 +803,14 @@ us_vhost_parse_args(int argc, char **argv)
>   return -1;
>   }
> 
> +	if ((zero_copy == 1) && (vmdq_conf_default.rxmode.jumbo_frame == 1)) {
> +		RTE_LOG(INFO, VHOST_PORT,
> +			"Vhost zero copy doesn't support jumbo frame,"
> +			"please specify '--mergeable 0' to disable the "
> +			"mergeable feature.\n");
> +		return -1;
> +	}
> +
>   return 0;
>  }
> 
> @@ -916,7 +930,7 @@ gpa_to_hpa(struct virtio_net *dev, uint64_t guest_pa,
>   * This function adds buffers to the virtio devices RX virtqueue. Buffers can
>   * be received from the physical port or from another virtio device. A packet
>   * count is returned to indicate the number of packets that were succesfully
> - * added to the RX queue.
> + * added to the RX queue. This function works when mergeable is disabled.
>   */
>  static inline uint32_t __attribute__((always_inline))
>  virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
> @@ -930,7 +944,6 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
>   uint64_t buff_hdr_addr = 0;
>   uint32_t head[MAX_PKT_BURST], packet_len = 0;
>   uint32_t head_idx, packet_success = 0;
> - uint32_t mergeable, mrg_count = 0;
>   uint32_t retry = 0;
>   uint16_t avail_idx, res_cur_idx;
>   uint16_t res_base_idx, res_end_idx;
> @@ -940,6 +953,7 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf
> **pkts, uint32_t count)
>  	LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_rx()\n", dev->device_fh);
>   vq = dev->virtqueue[VIRTIO_RXQ];
>   count = (count > MAX_PKT_BURST) ? MAX_PKT_BURST : count;
> +
>  	/* As many data cores may want access to available buffers, they need to be reserved. */
>  	do {
>  		res_base_idx = vq->last_used_idx_res;
> @@ -976,9 +990,6 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
>   /* Prefetch available ring to retrieve indexes. */
>  	rte_prefetch0(&vq->avail->ring[res_cur_idx & (vq->size - 1)]);
> 
> - /* Check if the VIRTIO_NET_F_MRG_RXBUF feature is enabled. */
> - mergeable = dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF);
> -
>   /* Retrieve all of the head indexes first to avoid caching issues. */
>  	for (head_idx = 0; head_idx < count; head_idx++)
>  		head[head_idx] = vq->avail->ring[(res_cur_idx + head_idx) & (vq->size - 1)];
> @@ -997,56 +1008,44 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf