On Wednesday 18 February 2009, Rusty Russell wrote:

> 2) Direct NIC attachment
> This is particularly interesting with SR-IOV or other multiqueue nics,
> but for boutique cases or benchmarks, could be for normal NICs.  So
> far I have some very sketched-out patches: for the attached nic 
> dev_alloc_skb() gets an skb from the guest (which supplies them via
> some kind of AIO interface), and a branch in netif_receive_skb()
> which returned it to the guest.  This bypasses all firewalling in
> the host though; we're basically having the guest process drive
> the NIC directly.       

If this is not passing the PCI device directly to the guest, but
uses your concept, wouldn't it still be possible to use the firewalling
in the host? You can always inspect the headers, drop the frame, etc.
at any point without copying the whole frame.

When it gets to the point of actually handing the device (real PF or
SR-IOV VF) to one guest, you really do reach the point where you can't
do local firewalling any more.

> 3) Direct interguest networking
> Anthony has been thinking here: vmsplice has already been mentioned.
> The idea of passing directly from one guest to another is an
> interesting one: using dma engines might be possible too.  Again,
> host can't firewall this traffic.  Simplest as a dedicated "internal
> lan" NIC, but we could theoretically do a fast-path for certain MAC
> addresses on a general guest NIC.     

Another option would be to use an SR-IOV adapter from multiple guests,
with a virtual ethernet bridge in the adapter. This moves the overhead
from the CPU to the bus and/or adapter, so it may or may not be a real
benefit depending on the workload.

        Arnd <><