Re: [dpdk-users] Mellanox 100G NIC, VF SR-IOV, docker container, EAL could not detect the device
Hi Adrien,

Thanks very much for your reply. However, after the -w option is used, the EAL still could not find the VF devices.

    $dpdk-app -c 0x01 --socket-mem=128,128 --file-prefix="docker1" -w 0000:83:00.1
    EAL: Detected 12 lcore(s)
    EAL: Probing VFIO support...
    EAL: VFIO support initialized
    PMD: bnxt_rte_pmd_init() called for (null)
    Error, rte_eth_dev_configure() returns negative!

Best wishes,
Xiaoban

From: Adrien Mazarguil
Sent: Tuesday, May 23, 2017 4:12 AM
To: Wu, Xiaoban
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Mellanox 100G NIC, VF SR-IOV, docker container, EAL could not detect the device

On Tue, May 23, 2017 at 05:39:36AM +0000, Wu, Xiaoban wrote:
> Dear DPDK users,
>
> I am trying to use the VF of the Mellanox 100G NIC enabled by SR-IOV. The
> following is what I have done.
>
> 1. Add "intel_iommu=on iommu=pt" to the kernel command line, run
>    update-grub, and reboot.
> 2. Install MLNX-OFED, reboot.
> 3. By default the card is in InfiniBand mode, so I switched it to Ethernet
>    mode (in order to run the DPDK application) and rebooted.
> 4. mst start
>    mlxconfig -d /dev/mst/mt4115_pciconf0 q   # query
>    mlxconfig -d /dev/mst/mt4115_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=1
>    reboot
> 5. echo 1 > /sys/bus/pci/devices/0000\:83\:00.0/mlx5_num_vfs
> 6. modprobe vfio-pci
> 7. dpdk-devbind.py --status
> 8. dpdk-devbind.py -b vfio-pci 0000:83:00.1
> 9. ls -al /dev/vfio
> 10. docker run -it --privileged --device=/dev/vfio/54:/dev/vfio/54
>     --device=/dev/vfio/vfio:/dev/vfio/vfio -v /mnt/huge/:/dev/hugepages/ -v
>     /var/run:/var/run $IMAGEID bash
> 11. $dpdkapp -c 0x01 --socket-mem=128,128 --file-prefix="docker1"
>
> However, in the EAL part, it does not list any usable devices:
> EAL: Detected 12 lcore(s)
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> PMD: bnxt_rte_pmd_init() called for (null)
> Error, rte_eth_dev_configure() returns negative!
>
> Can anybody please point out any possible solution? Looking forward to your
> reply. Thanks very much for your help.

Seems like the issue is not related to your mlx5 device. From the above log it
appears that you also have a bnxt device on that system which DPDK detects and
attempts to use as it is running in blacklist mode. Perhaps that device was not
configured properly.

Try to white-list the devices you want to use by explicitly providing their PCI
bus addresses through -w arguments instead.

--
Adrien Mazarguil
6WIND
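For reference, a minimal sanity check before white-listing - this is only a sketch, and it assumes the VF really does show up at 0000:83:00.1 on this system (use whatever address dpdk-devbind.py --status actually reports):

    # confirm the VF is visible and note its full domain:bus:device.function address
    lspci -D | grep -i mellanox
    dpdk-devbind.py --status

    # white-list only that device, passing the address exactly as --status prints it
    $dpdk-app -c 0x01 --socket-mem=128,128 --file-prefix="docker1" -w 0000:83:00.1

If the address given to -w does not match what --status reports, the EAL simply white-lists a device that does not exist and will again find nothing usable.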
Re: [dpdk-users] docker container, EAL: failed to initialize virtio_user0 device
Check /var/run/usvhost. Do you have the socket created there? Is it visible from
docker? Some time back when I used a container, I passed a MAC address in the
DPDK parameters when starting the DPDK testpmd application in the container.

Regards,
Neeraj

On 5/20/17, 8:53 PM, "users on behalf of Wu, Xiaoban" wrote:

    Hi DPDK Users,

    I am trying to use docker containers and ovs(2.6.0)-dpdk(16.07) to set up a
    test. I want to set up two docker containers, each of which will use a
    virtual device (socket) created by ovs-dpdk. The final purpose is to let
    the two containers talk to each other.

    Set up ovs-dpdk
    1. ovsdb-tool create $ovs-dir/etc/openvswitch/conf.db $ovs-dir/share/openvswitch/vswitch.ovsschema
    2. ovsdb-server --remote=punix:$ovs-dir/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
    3. ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
    4. ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="512,512"
    5. ovs-vswitchd unix:$ovs-dir/var/run/openvswitch/db.sock --pidfile --detach --log-file=$ovs-dir/temp.log

    Set up bridge and ports
    1. ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    2. ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
    3. ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser ofport_request=2
    4. ovs-ofctl add-flow br0 priority=1000,in_port=1,actions=output:2
    5. ovs-ofctl add-flow br0 priority=1000,in_port=2,actions=output:1

    Run the docker container
    1. docker run -it --privileged -v $ovs-dir/var/run/openvswitch/vhost-user1:/var/run/usvhost -v /mnt/huge/:/dev/hugepages/ $docker-image bash

    Run the DPDK application in the docker container
    1. $dpdk-app -c 0x01 --socket-mem=128,0 --vdev=virtio_user0,path=/var/run/usvhost --file-prefix="docker1"

    However, I encountered this error:
    PMD: vhost_user_setup(): connect error, Connection refused
    PMD: virtio_user_dev_init(): backend set up fails
    PMD: virtio_user_pmd_devinit(): virtio_user_dev_init fails
    EAL: failed to initialize virtio_user0 device

    It seems like the application in the docker container cannot connect to the
    socket created by ovs-dpdk. Can anybody please help me and point out some
    possible solutions? Looking forward to your reply. Thanks very much for
    your help.

    Best wishes,
    Xiaoban
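A quick way to check this, as a sketch - paths follow the setup above, and <container-id> is just a placeholder for the running container:

    # on the host: ovs-vswitchd should have created the vhost-user socket
    ls -l $ovs-dir/var/run/openvswitch/vhost-user1

    # inside the container: the bind-mounted path should show the same socket file
    docker exec -it <container-id> ls -l /var/run/usvhost

With type=dpdkvhostuser ports, OVS is the vhost-user server side and the virtio_user vdev in the container is the client, so ovs-vswitchd must be running and serving that socket (and the socket must be readable/writable from inside the container) before the DPDK application starts; otherwise the connect attempt fails as shown in the log above.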
Re: [dpdk-users] VHOST-USER interface between ovs-dpdk and a VM
Sorry - step 4 below is incorrect: there is a copy between the guest OS and
guest user-space. Now the question is - what is the difference in the number of
copies (copies of a packet from the host to the guest application) between this
setup (OVS-DPDK) and a setup with standard OVS (no DPDK)?

Best regards

> -----Original Message-----
> From: Avi Cohen (A)
> Sent: Tuesday, 23 May, 2017 4:27 PM
> To: users@dpdk.org
> Subject: VHOST-USER interface between ovs-dpdk and a VM
>
> Hi,
> I'm trying to understand the packet life-cycle in ovs-dpdk (running on host)
> communicating with a VM through a vhost-user interface:
> 1. The packet is received via a physical port on the device.
> 2. DMA transfer to mempools on huge-pages allocated by ovs-dpdk - in
>    user-space.
> 3. ovs-dpdk copies this packet to the shared vring of the associated guest
>    (shared between the ovs-dpdk userspace process and the guest).
> [Avi Cohen (A)]
> 4. No more copies in the guest - i.e. when any application running on the
>    guest wants to consume the packet - there is a zero copy between the
>    shared vring and the guest application.
>
> Is that correct? How is 4 implemented? This is a communication between the OS
> in the guest and an application in the guest, so how is this implemented with
> zero copy?
>
> Best Regards
> avi
[dpdk-users] VHOST-USER interface between ovs-dpdk and a VM
Hi,
I'm trying to understand the packet life-cycle in ovs-dpdk (running on host)
communicating with a VM through a vhost-user interface:

1. The packet is received via a physical port on the device.
2. DMA transfer to mempools on huge-pages allocated by ovs-dpdk - in user-space.
3. ovs-dpdk copies this packet to the shared vring of the associated guest
   (shared between the ovs-dpdk userspace process and the guest).
4. No more copies in the guest - i.e. when any application running on the guest
   wants to consume the packet - there is a zero copy between the shared vring
   and the guest application.

Is that correct? How is 4 implemented? This is a communication between the OS in
the guest and an application in the guest, so how is this implemented with zero
copy?

Best Regards
avi
[dpdk-users] Second attempt for help - VLAN issue
Greetings,

I've tried asking this question earlier but got the following response:

    The message's content type was not explicitly allowed

I don't know what that means. So, I'll try again...

I'm looking for help with using DPDK pktgen to send IPv4 VLAN messages from a
pcap file. I am able to send IPv4 pcap messages, but was only able to do that
after using the 'enable 0 pcap' command - as expected. However, when I try the
same command for VLAN, i.e. 'enable 0 vlan', I get a segmentation violation. Is
this a bug, or do I need other configuration to permit that capability?

Using dpdk-pktgen-3.2.4
Fedora release 25 (Twenty Five)

Thanks for any help,

Alan Reutemann | Software Engineer
Transaction Network Services
3000 Bayport Drive | Suite 900 | Tampa | FL 33607 | USA
+1 813 261 8873 | +1 813 261 8851 | areutem...@tnsi.com | www.tnsi.com
Re: [dpdk-users] pktgen - IP address randomness
> On May 23, 2017, at 12:16 AM, Shyam Shrivastav wrote:
>
> Sometime back I used lua in moongen to continuously generate ip:tcp packets
> with random ip src and random tcp src/dst ports. Here is the corresponding
> lua script fragment; don't know how much useful it can be with pktgen as I
> have not looked at or used pktgen till now, but just in case ...
>
>    x1 = math.random(1,254);
>    x2 = math.random(1,254);
>    x3 = math.random(1,254);
>    x4 = math.random(1,254);
>    p1 = math.random(1025,65534);
>    p2 = math.random(1025,65534);
>    pkt.ip.src:set(x1*256*256*256 + x2*256*256 + x3*256 + x4)
>    pkt.tcp:setSrcPort(p1);
>    pkt.tcp:setDstPort(p2);

You may want to look at the random mode in pktgen, as in that mode packets are
changed on the fly using a mask and random values.

> On Tue, May 23, 2017 at 5:01 AM, Chris Hall wrote:
>
>> Hello,
>>
>> pktgen-3.2.4
>>
>> I'm looking to get as much randomness out of src IPs as possible. Using
>> this config in lua...
>>
>>    pktgen.src_ip('0', 'start', "0.0.0.1");
>>    pktgen.src_ip('0', 'min',   "0.0.0.1");
>>    pktgen.src_ip('0', 'max',   "255.255.255.254");
>>    pktgen.src_ip('0', 'inc',   "1.1.1.1");
>>
>> Running a packet capture of 5 million packets on the receiving host and
>> parsing the pcap file based on source IP, it seems I can only get about
>> 32769 unique IPs (each connected about 150 times).
>>
>> As a comparison, using hping3 with the --rand-source option (5 million
>> packets) I can get about 4942744 unique IPs.
>>
>> Is there a configure option(s) somewhere that could be tuned for more
>> randomness, or are the above parms just wrong?
>>
>> Thanks much.
>>
>> * Chris

Regards,
Keith
Re: [dpdk-users] pktgen - IP address randomness
> On May 22, 2017, at 6:31 PM, Chris Hall wrote:
>
> Hello,
>
> pktgen-3.2.4
>
> I'm looking to get as much randomness out of src IPs as possible. Using this
> config in lua...
>
>    pktgen.src_ip('0', 'start', "0.0.0.1");
>    pktgen.src_ip('0', 'min',   "0.0.0.1");
>    pktgen.src_ip('0', 'max',   "255.255.255.254");
>    pktgen.src_ip('0', 'inc',   "1.1.1.1");

Currently you cannot have a huge number of random values. The reason is that, to
maintain performance levels, I initialize the packets before starting TX and do
not update the packets on the fly in range mode.

The other problem I see is that incrementing the IP address by 1.1.1.1 would
only give about 256 IP addresses, is that right?

One other person used the lua scripts to set up N IP addresses, then stopped,
changed the range to the next N addresses, and so on until he was able to test
all of the ranges or very close to that many.

The number of mbufs allocated to range mode is about 8192 packets, which means
processing the packets in groups of 8192. Not sure how long that will take, but
also not sure how long it would have taken if range mode did support the
complete range.

> Running a packet capture of 5 million packets on the receiving host and
> parsing the pcap file based on source IP, it seems I can only get about
> 32769 unique IPs (each connected about 150 times).
>
> As a comparison, using hping3 with the --rand-source option (5 million
> packets) I can get about 4942744 unique IPs.
>
> Is there a configure option(s) somewhere that could be tuned for more
> randomness, or are the above parms just wrong?
>
> Thanks much.
>
> * Chris

Regards,
Keith
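To make the "step through the range in chunks" workaround above concrete, here is a rough Lua sketch. It is only an illustration under several assumptions: range mode is already enabled on port 0, the pktgen Lua bindings expose pktgen.start/pktgen.stop/pktgen.delay as in the sample scripts shipped with pktgen, and ip_str is just a local helper invented for this sketch.

    -- Sketch: sweep the source-IP space in chunks of ~8192 addresses
    -- (roughly the number of mbufs Keith says range mode pre-builds).
    local chunk   = 8192;
    local send_ms = 10000;          -- how long to transmit each chunk, in ms

    local function ip_str(n)        -- 32-bit integer -> dotted quad
        return string.format("%d.%d.%d.%d",
            math.floor(n / 2^24) % 256, math.floor(n / 2^16) % 256,
            math.floor(n / 2^8)  % 256, n % 256);
    end

    local first = 1;                -- 0.0.0.1
    local last  = 0xFFFFFFFE;       -- 255.255.255.254

    for base = first, last, chunk do
        local top = math.min(base + chunk - 1, last);
        pktgen.src_ip('0', 'start', ip_str(base));
        pktgen.src_ip('0', 'min',   ip_str(base));
        pktgen.src_ip('0', 'max',   ip_str(top));
        pktgen.src_ip('0', 'inc',   "0.0.0.1");
        pktgen.start("0");
        pktgen.delay(send_ms);
        pktgen.stop("0");
    end

Note that covering the whole IPv4 space this way is roughly 524,000 chunks, so in practice you would narrow first/last to the range you actually care about - which is essentially Keith's point about how long a full sweep would take.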
Re: [dpdk-users] Mellanox 100G NIC, VF SR-IOV, docker container, EAL could not detect the device
On Tue, May 23, 2017 at 05:39:36AM +0000, Wu, Xiaoban wrote:
> Dear DPDK users,
>
> I am trying to use the VF of the Mellanox 100G NIC enabled by SR-IOV. The
> following is what I have done.
>
> 1. Add "intel_iommu=on iommu=pt" to the kernel command line, run
>    update-grub, and reboot.
> 2. Install MLNX-OFED, reboot.
> 3. By default the card is in InfiniBand mode, so I switched it to Ethernet
>    mode (in order to run the DPDK application) and rebooted.
> 4. mst start
>    mlxconfig -d /dev/mst/mt4115_pciconf0 q   # query
>    mlxconfig -d /dev/mst/mt4115_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=1
>    reboot
> 5. echo 1 > /sys/bus/pci/devices/0000\:83\:00.0/mlx5_num_vfs
> 6. modprobe vfio-pci
> 7. dpdk-devbind.py --status
> 8. dpdk-devbind.py -b vfio-pci 0000:83:00.1
> 9. ls -al /dev/vfio
> 10. docker run -it --privileged --device=/dev/vfio/54:/dev/vfio/54
>     --device=/dev/vfio/vfio:/dev/vfio/vfio -v /mnt/huge/:/dev/hugepages/ -v
>     /var/run:/var/run $IMAGEID bash
> 11. $dpdkapp -c 0x01 --socket-mem=128,128 --file-prefix="docker1"
>
> However, in the EAL part, it does not list any usable devices:
> EAL: Detected 12 lcore(s)
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> PMD: bnxt_rte_pmd_init() called for (null)
> Error, rte_eth_dev_configure() returns negative!
>
> Can anybody please point out any possible solution? Looking forward to your
> reply. Thanks very much for your help.

Seems like the issue is not related to your mlx5 device. From the above log it
appears that you also have a bnxt device on that system which DPDK detects and
attempts to use as it is running in blacklist mode. Perhaps that device was not
configured properly.

Try to white-list the devices you want to use by explicitly providing their PCI
bus addresses through -w arguments instead.

--
Adrien Mazarguil
6WIND