We'll have to look into the issue; it doesn't seem simple, though. Thomas, can you please revert the patch from master?
Thanks,
Yongseok

> On Jun 7, 2019, at 11:33 AM, Stephen Hemminger <step...@networkplumber.org> wrote:
>
> The Netvsc PMD got broken on DPDK 19.08 by a bad patch in the Mellanox driver.
> On a simple setup with SR-IOV enabled, using the netvsc PMD with testpmd,
> it worked in 19.05 and does not work with current master.
>
> The probe code gets stuck because the netvsc PMD sends a request to the host
> and never sees a response.
>
> # ./build/app/testpmd --log-level='pmd.net.netvsc.*:debug' -l 0-3 -n 4 -- --rxq=4 --txq=4 -i
> EAL: Detected 4 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Probing VFIO support...
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
> EAL: PCI device 935f:00:02.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1014 net_mlx5
> net_mlx5: can not get IB device "mlx5_0" ports number
> eth_hn_probe(): >>
> eth_hn_dev_init(): >>
> hn_nvs_init(): NVS version 0x60001, NDIS version 6.30
> hn_nvs_conn_rxbuf(): connect rxbuff va=0x2200402000 gpad=0xe1e2d
> hn_nvs_conn_rxbuf(): receive buffer size 1728 count 18811
> hn_nvs_conn_chim(): connect send buf va=0x2202302000 gpad=0xe1e2e
> hn_nvs_conn_chim(): send buffer 16777216 section size:6144, count:2730
> (hung)
>
> The problem does not occur without SR-IOV (or if the MLX driver is not
> compiled in).
>
> Bisecting shows the problem is caused by:
>
> commit 69c06d0e357ed0064b498d510d169603cf7308cd
> Author: Yongseok Koh <ys...@mellanox.com>
> Date:   Thu May 2 02:07:54 2019 -0700
>
>     net/mlx: support IOVA VA mode
>
>     Set RTE_PCI_DRV_IOVA_AS_VA to driver's drv_flags as device's IOMMU takes
>     virtual address.
>
>     Cc: sta...@dpdk.org
>
>     Signed-off-by: Yongseok Koh <ys...@mellanox.com>
>     Acked-by: Shahaf Shuler <shah...@mellanox.com>
>
> Please either revert or fix this patch ASAP.
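
For reference, the change that bisect points to only adds a flag to the PCI driver registration. Below is a minimal, illustrative sketch of how a DPDK PMD requests IOVA-as-VA through drv_flags; the driver name, callbacks and ID table here (demo_*) are placeholders for the example, not the actual mlx code from the patch.

/*
 * Illustrative only: a placeholder PCI driver that sets
 * RTE_PCI_DRV_IOVA_AS_VA in drv_flags, as the commit quoted above does
 * for the mlx drivers. All demo_* names are made up; the vendor/device
 * IDs are the ones seen in the probe log above.
 */
#include <rte_common.h>
#include <rte_pci.h>
#include <rte_bus_pci.h>

static int
demo_pci_probe(struct rte_pci_driver *drv __rte_unused,
               struct rte_pci_device *dev __rte_unused)
{
	return 0;	/* placeholder: a real driver sets up its ethdev here */
}

static int
demo_pci_remove(struct rte_pci_device *dev __rte_unused)
{
	return 0;	/* placeholder teardown */
}

static const struct rte_pci_id demo_pci_id_map[] = {
	{ RTE_PCI_DEVICE(0x15b3, 0x1014) },	/* IDs from the log above */
	{ .vendor_id = 0 },			/* end-of-table sentinel */
};

static struct rte_pci_driver demo_pci_driver = {
	.id_table  = demo_pci_id_map,
	.probe     = demo_pci_probe,
	.remove    = demo_pci_remove,
	/* The flag in question: tells the EAL that the device's IOMMU can
	 * take virtual addresses, so IOVA-as-VA mode may be selected. */
	.drv_flags = RTE_PCI_DRV_IOVA_AS_VA,
};

RTE_PMD_REGISTER_PCI(net_demo, demo_pci_driver);
RTE_PMD_REGISTER_PCI_TABLE(net_demo, demo_pci_id_map);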