RE: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
> Hi Krishna,
>
> On 3/15/21 7:04 PM, Krishna Reddy wrote:
>> Tested-by: Krishna Reddy
>>
>>>> 1) pass the guest stage 1 configuration
>>
>> Validated Nested SMMUv3 translations for NVMe PCIe device from Guest VM
>> along with patch series "v11 SMMUv3 Nested Stage Setup (VFIO part)" and
>> QEMU patch series "vSMMUv3/pSMMUv3 2 stage VFIO integration" from
>> v5.2.0-2stage-rfcv8.
>>
>> NVMe PCIe device is functional with 2-stage translations and no issues
>> observed.
>
> Thank you very much for your testing efforts. For your info, there are more
> recent kernel series:
>
> [PATCH v14 00/13] SMMUv3 Nested Stage Setup (IOMMU part) (Feb 23)
> [PATCH v12 00/13] SMMUv3 Nested Stage Setup (VFIO part) (Feb 23)
>
> working along with QEMU RFC
>
> [RFC v8 00/28] vSMMUv3/pSMMUv3 2 stage VFIO integration (Feb 25)
>
> If you have cycles to test with those, this would be highly appreciated.

Thanks Eric for the latest patches. Will validate and update. Feel free to
reach out to me for validating future patch sets as necessary.

-KR

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
RE: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
Tested-by: Krishna Reddy

> 1) pass the guest stage 1 configuration

Validated Nested SMMUv3 translations for NVMe PCIe device from Guest VM
along with patch series "v11 SMMUv3 Nested Stage Setup (VFIO part)" and
QEMU patch series "vSMMUv3/pSMMUv3 2 stage VFIO integration" from
v5.2.0-2stage-rfcv8.

NVMe PCIe device is functional with 2-stage translations and no issues
observed.

-KR
Re: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
Hi Krishna,

On 3/15/21 7:04 PM, Krishna Reddy wrote:
> Tested-by: Krishna Reddy
>
>>> 1) pass the guest stage 1 configuration
>
> Validated Nested SMMUv3 translations for NVMe PCIe device from Guest VM
> along with patch series "v11 SMMUv3 Nested Stage Setup (VFIO part)" and
> QEMU patch series "vSMMUv3/pSMMUv3 2 stage VFIO integration" from
> v5.2.0-2stage-rfcv8.
>
> NVMe PCIe device is functional with 2-stage translations and no issues
> observed.

Thank you very much for your testing efforts. For your info, there are more
recent kernel series:

[PATCH v14 00/13] SMMUv3 Nested Stage Setup (IOMMU part) (Feb 23)
[PATCH v12 00/13] SMMUv3 Nested Stage Setup (VFIO part) (Feb 23)

working along with QEMU RFC

[RFC v8 00/28] vSMMUv3/pSMMUv3 2 stage VFIO integration (Feb 25)

If you have cycles to test with those, this would be highly appreciated.

Thanks

Eric

> -KR
RE: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
> -----Original Message-----
> From: Auger Eric [mailto:eric.au...@redhat.com]
> Sent: 21 February 2021 18:21
> To: Shameerali Kolothum Thodi ; eric.auger@gmail.com;
> io...@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
> k...@vger.kernel.org; kvmarm@lists.cs.columbia.edu; w...@kernel.org;
> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com;
> alex.william...@redhat.com
> Cc: jean-phili...@linaro.org; zhangfei@linaro.org;
> zhangfei@gmail.com; vivek.gau...@arm.com;
> jacob.jun@linux.intel.com; yi.l@intel.com; t...@semihalf.com;
> nicoleots...@gmail.com; yuzenghui ; Zengtao (B)
> ; linux...@openeuler.org
> Subject: Re: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
>
> Hi Shameer,
>
> On 1/8/21 6:05 PM, Shameerali Kolothum Thodi wrote:
>> Hi Eric,
>>
>>> -----Original Message-----
>>> From: Eric Auger [mailto:eric.au...@redhat.com]
>>> Sent: 18 November 2020 11:22
>>> To: eric.auger@gmail.com; eric.au...@redhat.com;
>>> io...@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
>>> k...@vger.kernel.org; kvmarm@lists.cs.columbia.edu; w...@kernel.org;
>>> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com;
>>> alex.william...@redhat.com
>>> Cc: jean-phili...@linaro.org; zhangfei@linaro.org;
>>> zhangfei@gmail.com; vivek.gau...@arm.com; Shameerali Kolothum
>>> Thodi ; jacob.jun@linux.intel.com; yi.l@intel.com;
>>> t...@semihalf.com; nicoleots...@gmail.com; yuzenghui
>>> Subject: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
>>>
>>> This series brings the IOMMU part of HW nested paging support
>>> in the SMMUv3. The VFIO part is submitted separately.
>>>
>>> The IOMMU API is extended to support 2 new API functionalities:
>>> 1) pass the guest stage 1 configuration
>>> 2) pass stage 1 MSI bindings
>>>
>>> Then those capabilities get implemented in the SMMUv3 driver.
>>>
>>> The virtualizer passes information through the VFIO user API
>>> which cascades them to the iommu subsystem. This allows the guest
>>> to own stage 1 tables and context descriptors (so-called PASID
>>> table) while the host owns stage 2 tables and main configuration
>>> structures (STE).
>>
>> I am seeing an issue with Guest testpmd run with this series.
>> I have two different setups and testpmd works fine with the
>> first one but not with the second.
>>
>> 1). Guest doesn't have kernel driver built-in for pass-through dev.
>>
>> root@ubuntu:/# lspci -v
>> ...
>> 00:02.0 Ethernet controller: Huawei Technologies Co., Ltd. Device a22e (rev 21)
>>         Subsystem: Huawei Technologies Co., Ltd. Device
>>         Flags: fast devsel
>>         Memory at 800010 (64-bit, prefetchable) [disabled] [size=64K]
>>         Memory at 80 (64-bit, prefetchable) [disabled] [size=1M]
>>         Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
>>         Capabilities: [a0] MSI-X: Enable- Count=67 Masked-
>>         Capabilities: [b0] Power Management version 3
>>         Capabilities: [100] Access Control Services
>>         Capabilities: [300] Transaction Processing Hints
>>
>> root@ubuntu:/# echo vfio-pci > /sys/bus/pci/devices/:00:02.0/driver_override
>> root@ubuntu:/# echo :00:02.0 > /sys/bus/pci/drivers_probe
>>
>> root@ubuntu:/mnt/dpdk/build/app# ./testpmd -w :00:02.0 --file-prefix socket0 -l 0-1 -n 2 -- -i
>> EAL: Detected 8 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/socket0/mp_socket
>> EAL: Selected IOVA mode 'VA'
>> EAL: No available hugepages reported in hugepages-32768kB
>> EAL: No available hugepages reported in hugepages-64kB
>> EAL: No available hugepages reported in hugepages-1048576kB
>> EAL: Probing VFIO support...
>> EAL: VFIO support initialized
>> EAL: Invalid NUMA socket, default to 0
>> EAL: using IOMMU type 1 (Type 1)
>> EAL: Probe PCI driver: net_hns3_vf (19e5:a22e) device: :00:02.0 (socket 0)
>> EAL: No legacy callbacks, legacy socket not created
>> Interactive-mode selected
>> testpmd: create a new mbuf pool : n=155456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>>
>> Warning! port-topology=paired and odd forward ports number, the last port
>> will pair with itself.
>>
>> Configuring Port 0 (socket 0)
>> Port 0: 8E:A6:8C:43:43:45
>> Checking link statuses...
>> Done
>> te
Re: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
Hi Shameer,

On 1/8/21 6:05 PM, Shameerali Kolothum Thodi wrote:
> Hi Eric,
>
>> -----Original Message-----
>> From: Eric Auger [mailto:eric.au...@redhat.com]
>> Sent: 18 November 2020 11:22
>> To: eric.auger@gmail.com; eric.au...@redhat.com;
>> io...@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
>> k...@vger.kernel.org; kvmarm@lists.cs.columbia.edu; w...@kernel.org;
>> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com;
>> alex.william...@redhat.com
>> Cc: jean-phili...@linaro.org; zhangfei@linaro.org;
>> zhangfei@gmail.com; vivek.gau...@arm.com; Shameerali Kolothum
>> Thodi ; jacob.jun@linux.intel.com; yi.l@intel.com;
>> t...@semihalf.com; nicoleots...@gmail.com; yuzenghui
>> Subject: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
>>
>> This series brings the IOMMU part of HW nested paging support
>> in the SMMUv3. The VFIO part is submitted separately.
>>
>> The IOMMU API is extended to support 2 new API functionalities:
>> 1) pass the guest stage 1 configuration
>> 2) pass stage 1 MSI bindings
>>
>> Then those capabilities get implemented in the SMMUv3 driver.
>>
>> The virtualizer passes information through the VFIO user API
>> which cascades them to the iommu subsystem. This allows the guest
>> to own stage 1 tables and context descriptors (so-called PASID
>> table) while the host owns stage 2 tables and main configuration
>> structures (STE).
>
> I am seeing an issue with Guest testpmd run with this series.
> I have two different setups and testpmd works fine with the
> first one but not with the second.
>
> 1). Guest doesn't have kernel driver built-in for pass-through dev.
>
> root@ubuntu:/# lspci -v
> ...
> 00:02.0 Ethernet controller: Huawei Technologies Co., Ltd. Device a22e (rev 21)
>         Subsystem: Huawei Technologies Co., Ltd. Device
>         Flags: fast devsel
>         Memory at 800010 (64-bit, prefetchable) [disabled] [size=64K]
>         Memory at 80 (64-bit, prefetchable) [disabled] [size=1M]
>         Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
>         Capabilities: [a0] MSI-X: Enable- Count=67 Masked-
>         Capabilities: [b0] Power Management version 3
>         Capabilities: [100] Access Control Services
>         Capabilities: [300] Transaction Processing Hints
>
> root@ubuntu:/# echo vfio-pci > /sys/bus/pci/devices/:00:02.0/driver_override
> root@ubuntu:/# echo :00:02.0 > /sys/bus/pci/drivers_probe
>
> root@ubuntu:/mnt/dpdk/build/app# ./testpmd -w :00:02.0 --file-prefix socket0 -l 0-1 -n 2 -- -i
> EAL: Detected 8 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/socket0/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available hugepages reported in hugepages-32768kB
> EAL: No available hugepages reported in hugepages-64kB
> EAL: No available hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Invalid NUMA socket, default to 0
> EAL: using IOMMU type 1 (Type 1)
> EAL: Probe PCI driver: net_hns3_vf (19e5:a22e) device: :00:02.0 (socket 0)
> EAL: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=155456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
>
> Configuring Port 0 (socket 0)
> Port 0: 8E:A6:8C:43:43:45
> Checking link statuses...
> Done
> testpmd>
>
> 2). Guest has kernel driver built-in for pass-through dev.
>
> root@ubuntu:/# lspci -v
> ...
> 00:02.0 Ethernet controller: Huawei Technologies Co., Ltd. Device a22e (rev 21)
>         Subsystem: Huawei Technologies Co., Ltd. Device
>         Flags: bus master, fast devsel, latency 0
>         Memory at 800010 (64-bit, prefetchable) [size=64K]
>         Memory at 80 (64-bit, prefetchable) [size=1M]
>         Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
>         Capabilities: [a0] MSI-X: Enable+ Count=67 Masked-
>         Capabilities: [b0] Power Management version 3
>         Capabilities: [100] Access Control Services
>         Capabilities: [300] Transaction Processing Hints
>         Kernel driver in use: hns3
>
> root@ubuntu:/# echo vfio-pci > /sys/bus/pci/devices/:00:02.0/driver_override
> root@ubuntu:/# echo :00:02.0 > /sys/bus/pci/drivers/hns3/unbind
> root@ubuntu:/# echo :00:02.0 > /sys/bus/pci/drivers_probe
>
> root@ubuntu:/mnt/dpdk/build/app# ./testpmd -w :00:02.0 --file-prefix socket0 -l 0-1 -n 2 -- -i
> EAL: Detected 8 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/socket0/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available hugepages reported in hugepages-32768kB
> EAL: No available hugepages reported in hugepages-64kB
> EAL: No available hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Invalid NUMA socket, default to 0
> EAL: using IOMMU type 1 (Type 1)
> EAL: Probe
RE: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
Hi Eric,

> -----Original Message-----
> From: Eric Auger [mailto:eric.au...@redhat.com]
> Sent: 18 November 2020 11:22
> To: eric.auger@gmail.com; eric.au...@redhat.com;
> io...@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
> k...@vger.kernel.org; kvmarm@lists.cs.columbia.edu; w...@kernel.org;
> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com;
> alex.william...@redhat.com
> Cc: jean-phili...@linaro.org; zhangfei@linaro.org;
> zhangfei@gmail.com; vivek.gau...@arm.com; Shameerali Kolothum
> Thodi ; jacob.jun@linux.intel.com; yi.l@intel.com;
> t...@semihalf.com; nicoleots...@gmail.com; yuzenghui
> Subject: [PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
>
> This series brings the IOMMU part of HW nested paging support
> in the SMMUv3. The VFIO part is submitted separately.
>
> The IOMMU API is extended to support 2 new API functionalities:
> 1) pass the guest stage 1 configuration
> 2) pass stage 1 MSI bindings
>
> Then those capabilities get implemented in the SMMUv3 driver.
>
> The virtualizer passes information through the VFIO user API
> which cascades them to the iommu subsystem. This allows the guest
> to own stage 1 tables and context descriptors (so-called PASID
> table) while the host owns stage 2 tables and main configuration
> structures (STE).

I am seeing an issue with Guest testpmd run with this series.
I have two different setups and testpmd works fine with the
first one but not with the second.

1). Guest doesn't have kernel driver built-in for pass-through dev.

root@ubuntu:/# lspci -v
...
00:02.0 Ethernet controller: Huawei Technologies Co., Ltd. Device a22e (rev 21)
        Subsystem: Huawei Technologies Co., Ltd. Device
        Flags: fast devsel
        Memory at 800010 (64-bit, prefetchable) [disabled] [size=64K]
        Memory at 80 (64-bit, prefetchable) [disabled] [size=1M]
        Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [a0] MSI-X: Enable- Count=67 Masked-
        Capabilities: [b0] Power Management version 3
        Capabilities: [100] Access Control Services
        Capabilities: [300] Transaction Processing Hints

root@ubuntu:/# echo vfio-pci > /sys/bus/pci/devices/:00:02.0/driver_override
root@ubuntu:/# echo :00:02.0 > /sys/bus/pci/drivers_probe

root@ubuntu:/mnt/dpdk/build/app# ./testpmd -w :00:02.0 --file-prefix socket0 -l 0-1 -n 2 -- -i
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/socket0/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Invalid NUMA socket, default to 0
EAL: using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_hns3_vf (19e5:a22e) device: :00:02.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool : n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 8E:A6:8C:43:43:45
Checking link statuses...
Done
testpmd>

2). Guest has kernel driver built-in for pass-through dev.

root@ubuntu:/# lspci -v
...
00:02.0 Ethernet controller: Huawei Technologies Co., Ltd. Device a22e (rev 21)
        Subsystem: Huawei Technologies Co., Ltd. Device
        Flags: bus master, fast devsel, latency 0
        Memory at 800010 (64-bit, prefetchable) [size=64K]
        Memory at 80 (64-bit, prefetchable) [size=1M]
        Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [a0] MSI-X: Enable+ Count=67 Masked-
        Capabilities: [b0] Power Management version 3
        Capabilities: [100] Access Control Services
        Capabilities: [300] Transaction Processing Hints
        Kernel driver in use: hns3

root@ubuntu:/# echo vfio-pci > /sys/bus/pci/devices/:00:02.0/driver_override
root@ubuntu:/# echo :00:02.0 > /sys/bus/pci/drivers/hns3/unbind
root@ubuntu:/# echo :00:02.0 > /sys/bus/pci/drivers_probe

root@ubuntu:/mnt/dpdk/build/app# ./testpmd -w :00:02.0 --file-prefix socket0 -l 0-1 -n 2 -- -i
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/socket0/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Invalid NUMA socket, default to 0
EAL: using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_hns3_vf (19e5:a22e) device: :00:02.0 (socket 0)
:00:02.0 hns3_get_mbx_resp(): VF could not get mbx(11,0) head(1) tail(0) lost(1) from PF in_irq:0
hns3vf_get_queue_info(): Failed to get tqp info from PF: -62
hns3vf_init_vf(): Failed to fetch
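[Editor's note] The sysfs rebind sequence repeated in the logs above (unbind from the in-use driver, set driver_override, reprobe) can be sketched as a small dry-run script. The BDF 0000:00:02.0 and the hns3 driver name are illustrative values taken from the thread; the script only prints the writes it would perform rather than touching sysfs.

```shell
#!/bin/sh
# Dry-run sketch of rebinding a PCI device to vfio-pci via sysfs.
# BDF and driver name are examples from the thread above; adjust as needed.
bdf="0000:00:02.0"
dev="/sys/bus/pci/devices/$bdf"
cmds=""

# Collect each sysfs write as a shell command line (dry run).
queue() {
    cmds="${cmds}echo $1 > $2
"
}

# 1) Unbind from the currently bound kernel driver (hns3 in the logs).
queue "$bdf" "/sys/bus/pci/drivers/hns3/unbind"
# 2) Make vfio-pci claim the device on the next probe attempt.
queue "vfio-pci" "$dev/driver_override"
# 3) Trigger the reprobe so vfio-pci binds.
queue "$bdf" "/sys/bus/pci/drivers_probe"

printf '%s' "$cmds"
```

Run as root with the echoes executed for real (and with driver_override cleared afterwards) to reproduce the binding state shown in the second setup.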