Anthony Liguori wrote:
Avi Kivity wrote:
Anthony Liguori wrote:
Avi Kivity wrote:
Each guest's host userspace mmaps the other guest's address space.
The userspace then does a copy on both the tx and rx paths.
Well, that's better security-wise (I'd still prefer to avoid it, so
we can run each guest under a separate uid), but then we lose out
performance-wise.
What performance win? I'm not sure the copies can be eliminated in
the case of interguest IO.
I guess not. But at least you can DMA instead of busy-copying.
Fast interguest IO means mmap()'ing the other guest's address space
read-only.
This implies trusting the other userspace, which is not a good thing.
Let the kernel copy, we already trust it, and it has more resources to
do the copy.
If you had a PV DMA registration API you could conceivably allow
only the active DMA entries to be mapped, but my fear would be that
the zapping on unregister would hurt performance.
Yes, mmu games are costly. They also work only at page granularity,
which isn't always possible to guarantee.
Conceivably, this could be done as a read-only mapping so that each
guest userspace copies only the rx packets. That's about as secure
as you're going to get with this approach, I think.
Maybe we can terminate the virtio queue in the host kernel as a pipe,
and splice pipes together.
That gives us guest-guest and guest-process communication, and if
you use AIO, the kernel can use a DMA engine for the copy.
Ah, so you're looking to use a DMA engine for accelerated copy.
Perhaps the answer is to expose the DMA engine via a userspace API?
That's one option, but it still involves sharing all of memory.
Splicing pipes might be better.
--
error compiling committee.c: too many arguments to function
_______________________________________________
Lguest mailing list
Lguest@ozlabs.org
https://ozlabs.org/mailman/listinfo/lguest