GitHub user vishesh92 added a comment to the discussion: Network VF (SR-IOV) 
options / questions (need a few hundred VLANs in guest)

@bhouse-nexthop @bradh352 I missed this one. As @erikbocks said, you can pass 
through any type of PCI device as long as it shows up in the `lspci` output 
and supports passthrough with KVM. You just need to make a change like the one 
in https://github.com/apache/cloudstack/pull/11715 .
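As a quick sanity check before touching CloudStack at all, you can confirm on the host that a device is visible on the PCI bus and assigned to an IOMMU group (a prerequisite for KVM/VFIO passthrough). This is a minimal sketch using standard Linux sysfs paths only; nothing here is CloudStack-specific:

```shell
#!/bin/bash
# Sketch: list PCI devices and their IOMMU groups. A device generally needs
# an IOMMU group (ideally one it does not share) to be a passthrough
# candidate. Uses only standard sysfs paths; no CloudStack assumptions.
for dev in /sys/bus/pci/devices/*; do
  [ -e "$dev" ] || continue              # glob matched nothing
  addr=$(basename "$dev")
  if [ -e "$dev/iommu_group" ]; then
    group=$(basename "$(readlink "$dev/iommu_group")")
    echo "$addr  iommu_group=$group"
  else
    echo "$addr  (no IOMMU group; passthrough unlikely)"
  fi
done
echo "scan complete"
```

If a device shows no IOMMU group, check that the IOMMU is enabled in firmware and on the kernel command line (e.g. `intel_iommu=on` or `amd_iommu=on`) before going further.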

Showing network VFs might require some extra changes in the script that 
discovers the VFs of a network card. 
`scripts/vm/hypervisor/kvm/gpudiscovery.sh` is available at 
`/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/gpudiscovery.sh` on a 
KVM host. You can make this change on a single host and discover GPU devices 
there; the PCI device will then be visible only on that host. After that, you 
will need to create a compute offering with the discovered PCI device.
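If you end up extending the discovery script, the SR-IOV VF topology is already exposed by the kernel in sysfs, so the enumeration part can be quite small. This is a hedged sketch (not the actual `gpudiscovery.sh` logic) that walks each physical function and lists its VFs via the standard `virtfn*` symlinks:

```shell
#!/bin/bash
# Sketch: enumerate SR-IOV VFs per physical function (PF) from sysfs.
# sriov_numvfs and virtfn* are standard kernel attributes; how the output
# should be fed into CloudStack's discovery format is not shown here.
for pf in /sys/bus/pci/devices/*; do
  [ -e "$pf/sriov_numvfs" ] || continue          # skip non-SR-IOV devices
  echo "PF: $(basename "$pf") (VFs enabled: $(cat "$pf/sriov_numvfs"))"
  for vf in "$pf"/virtfn*; do
    [ -e "$vf" ] || continue                     # no VFs currently enabled
    echo "  VF: $(basename "$(readlink "$vf")")" # VF's own PCI address
  done
done
echo "scan complete"
```

Note that VFs only appear after they have been enabled (e.g. by writing a count to `sriov_numvfs` or via the NIC driver), so an empty result does not necessarily mean the card lacks SR-IOV support.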

You can try updating it and see if it works. This shouldn't require a new 
release for you to test it out. Let me know how this goes.

P.S. - I know some people in the community have tested passthrough of NPUs, 
GPUs, processing accelerators, and VGA controllers with CloudStack. I haven't 
tested with network cards and your use case specifically.

GitHub link: 
https://github.com/apache/cloudstack/discussions/11808#discussioncomment-16515618
