>
> On 28/05/2015 17:16, "Gray, Mark D" <[email protected]> wrote:
>
> >>
> >> Non pmd threads have a core_id == UINT32_MAX, while queue ids used by
> >> netdevs range from 0 to the number of CPUs. Therefore core ids cannot
> >> be used directly to select a queue.
> >>
> >> This commit introduces a simple mapping to fix the problem: non pmd
> >> threads use queue 0, pmd threads on cores 0 to N use queues 1 to N+1.
> >>
> >> Fixes: d5c199ea7ff7 ("netdev-dpdk: Properly support non pmd
> >> threads.")
> >>
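For reference, my reading of the mapping described above is roughly the
sketch below (just an illustration, not the actual patch; NON_PMD_CORE_ID
and core_id_to_qid are assumed names for the UINT32_MAX sentinel and the
mapping helper):

    #include <stdint.h>

    /* Assumed sentinel for non pmd threads (core_id == UINT32_MAX). */
    #define NON_PMD_CORE_ID UINT32_MAX

    /* Non pmd threads share queue 0; the pmd thread pinned to core N
     * uses queue N + 1, as the commit message describes. */
    static inline uint32_t
    core_id_to_qid(uint32_t core_id)
    {
        return core_id == NON_PMD_CORE_ID ? 0 : core_id + 1;
    }
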
> >No comments on the code. However, I tested it by adding a veth port and
> >sending a 'ping -I' through the other end of the veth and it segfaults.
>
> Thanks for testing it. From the backtrace it looks like I should also
> update the flushing logic.
>
> How did you add the veth? Did you use a pcap vdev?
I do:
ip link add dev vethtest type veth peer name vethtest2
ip link set up dev vethtest
ovs-ofctl del-flows br0
ovs-vsctl add-port br0 vethtest
ovs-ofctl add-flow br0 in_port=3,actions=output:1
ovs-ofctl add-flow br0 in_port=1,actions=output:3
ping -I vethtest2 1.1.1.1
The port seems to be added correctly, because I can send traffic in the
other direction, but please confirm that you can reproduce this in case I am
doing something wrong.
>
> Also, would you mind posting another backtrace with debug symbols?
> It might help me understand what is going on with the queue ids.
Here you go ..
(gdb) bt
#0 0x0000000000524354 in ixgbe_xmit_pkts_vec ()
#1 0x000000000068c6df in rte_eth_tx_burst (port_id=0 '\000', queue_id=0,
tx_pkts=0x7f51c0a1e420,
nb_pkts=65535) at
/home/mdgray/git/ovs/dpdk//x86_64-ivshmem-linuxapp-gcc/include/rte_ethdev.h:2577
#2 0x000000000068fee8 in dpdk_queue_flush__ (dev=0x7f51c0adc500, qid=0) at
lib/netdev-dpdk.c:808
#3 0x00000000006921ee in dpdk_queue_flush (dev=0x7f51c0adc500, qid=0) at
lib/netdev-dpdk.c:842
#4 0x0000000000692390 in netdev_dpdk_rxq_recv (rxq_=0x7f51c0acf040,
packets=0x7f5407ffe850,
c=0x7f5407ffe84c) at lib/netdev-dpdk.c:897
#5 0x00000000005cc8e5 in netdev_rxq_recv (rx=0x7f51c0acf040,
buffers=0x7f5407ffe850,
cnt=0x7f5407ffe84c) at lib/netdev.c:651
#6 0x00000000005a36fb in dp_netdev_process_rxq_port (pmd=0x21723af0,
port=0x217031d0,
rxq=0x7f51c0acf040) at lib/dpif-netdev.c:2517
#7 0x00000000005a3d9b in pmd_thread_main (f_=0x21723af0) at
lib/dpif-netdev.c:2672
#8 0x000000000061d0a4 in ovsthread_wrapper (aux_=0x216e1690) at
lib/ovs-thread.c:338
#9 0x0000003cc7607ee5 in start_thread () from /lib64/libpthread.so.0
#10 0x0000003cc6ef4d1d in clone () from /lib64/libc.so.6
Let me know how you get on, as I will try to continue on this tomorrow when
I return to work.
I am starting vswitchd with
ovs-vswitchd --dpdk -c 0x4 -n 4 --socket-mem 1024,0 --
--pidfile=/tmp/vswitchd.pid --log-file
>
> Thanks,
>
> Daniele
>
> >
> >(gdb) bt
> >#0 0x0000000000526354 in ixgbe_xmit_pkts_vec ()
> >#1 0x000000000066f473 in dpdk_queue_flush__ ()
> >#2 0x000000000066fd16 in netdev_dpdk_rxq_recv ()
> >#3 0x00000000005b9cd1 in netdev_rxq_recv ()
> >#4 0x00000000005967e9 in dp_netdev_process_rxq_port ()
> >#5 0x0000000000596f24 in pmd_thread_main ()
> >#6 0x0000000000608041 in ovsthread_wrapper ()
> >#7 0x0000003cc7607ee5 in start_thread () from /lib64/libpthread.so.0
> >#8 0x0000003cc6ef4d1d in clone () from /lib64/libc.so.6
> >
> >I also didn't see any perf drop with this patch in the normal dpdk
> >phy-phy path.