Ben,

Of course there's a PCI switch. My main purpose in leveraging SR-IOV with
VF allocation is to let the internal eswitch on the Intel NIC handle
switching in hardware instead of the vswitch on the ESXi hypervisor. I'm
not particularly concerned about isolation of the PCI devices or the risk
of bad firmware on the NIC; I will control/trust all of the VMs with access
to the VFs as well as the device attached to the PF.

So just to confirm: should I expect 100% CPU utilization with VPP/DPDK +
IOMMU? If so, what's the best way to monitor CPU-related performance impact
if I always see 100%? Also, I want to confirm that
enable_unsafe_noiommu_mode still provides the performance benefits of
SR-IOV, and that the only tradeoff is the aforementioned isolation/security
concern?
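
(For context, my current thought is to gauge actual load from VPP's own
counters rather than OS-level CPU usage, along the lines of:

sudo vppctl show runtime

since, if I understand correctly, the vectors/call column there reflects
how busy the graph nodes really are even while the worker threads poll at
100%. Happy to be corrected if there's a better approach.)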


Thanks for your help,

--Josh

On Tue, Sep 29, 2020 at 5:23 AM Benoit Ganne (bganne) <bga...@cisco.com>
wrote:

> Hi Joshua,
>
> Glad it solved the vfio issue. Looking at the dmesg output, I suspect the
> issue is that the advertised PCIe topology is not fully supported by vfio +
> IOMMU: it looks like your VF is behind a PCIe switch, so the CPU PCIe IOMMU
> root-complex port cannot guarantee full isolation: all devices behind the
> PCIe switch can talk peer-to-peer directly without going through the CPU
> PCIe root-complex port.
> As the CPU IOMMU cannot fully isolate your device, vfio refuses to bind
> unless you allow an unsafe IOMMU config - rightly so, as that seems to be
> your case.
> Anyway, it still means you should benefit from the IOMMU preventing the
> device from reading/writing anywhere in host memory. You might not,
> however, be able to prevent malicious firmware running on the NIC from
> harming other devices behind the same PCIe switch.
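> If you want to double-check that topology from inside the guest, something
> like the following should show where the VF sits and which devices ended
> up in its IOMMU group:
> ~# lspci -tv
> ~# find /sys/kernel/iommu_groups/ -type l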
>
> Regarding the VM crash, note that VPP polls the interfaces, so it will
> always use 100% CPU.
> Does the VM also crash if you stress-test the CPU, e.g.:
> ~# stress-ng --matrix 0 -t 1m
>
> Best
> ben
>
> > -----Original Message-----
> > From: Joshua Moore <j...@jcm.me>
> > Sent: mardi 29 septembre 2020 12:07
> > To: Benoit Ganne (bganne) <bga...@cisco.com>
> > Cc: Damjan Marion <dmar...@me.com>; vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough)
> > Driver
> >
> > Hello Ben,
> >
> > echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > sudo dpdk-devbind --bind=vfio-pci 0000:13:00.0
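> >
> > (For reference, the resulting binding can be double-checked with
> > sudo dpdk-devbind --status.)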
> >
> >
> > The above commands successfully resulted in the vfio-pci driver binding
> > to the NIC. However, as soon as I assigned the NIC to VPP and restarted
> > the service, the VM's CPU shot up and the VM crashed.
> >
> >
> > Regarding IOMMU: I have it enabled in the host's BIOS and via the ESXi
> > "Expose IOMMU to the guest OS" option, and I have set GRUB_CMDLINE_LINUX
> > per the wiki below:
> >
> > https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)
> >
> > root@test:~# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX
> > GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
> > GRUB_CMDLINE_LINUX="intel_iommu=on isolcpus=1-7 nohz_full=1-7
> > hugepagesz=1GB hugepages=16 default_hugepagesz=1GB"
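> >
> > (If I understand the wiki correctly, update-grub and a reboot are needed
> > after editing; the live settings can then be confirmed with
> > cat /proc/cmdline and grep Huge /proc/meminfo.)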
> >
> > Full dmesg output can be found at: http://jcm.me/dmesg.txt
>