>> >Why so many vm switches?  First up, a typical I/O system maxes at
>> >about 1Gb/s, right?  That would be a gigabit NIC, or striped RAID,
>> >or something like that.  This suggests an average of only about 300
>> >bytes/transfer, to get >150k individual transfers per second?  I
>> >thought block I/O usually dealt with 1kbyte or more.  Next, we're
>> >living in a world where most CPUs supporting the required extended
>> >instruction set are multi-core, and you specifically said Core Duo.
>> >Shouldn't an extensive I/O workload tend toward one CPU in each VM,
>> >with contraposed producer-consumer queues, and almost zero context
>> >switches?
>>
>> First, 150k-200k VM switches is the maximum rate we can reach on a
>> single core. The exits are not necessarily related to I/O; for
>> instance, before the new MMU code, page fault exits were the
>> performance bottleneck, and a major part of that cost was the VM
>> switches.
>>
>> Second, we currently use Qemu's device emulation, where the ne2k
>> device does dozens of I/O accesses per packet! The rtl8139 is better
>> and does about 3 I/O (MMIO) accesses per packet. The current maximum
>> throughput using the rtl8139 is ~30Mbps. Very soon we'll have PV
>> drivers that use the P/C queues you were talking about; they will
>> boost performance beyond 1Gbps.
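
(As a rough sketch of the producer/consumer queues mentioned above; this
is not the actual KVM/QEMU code, and all names and sizes are made up for
illustration. A shared single-producer/single-consumer ring between guest
and host could look roughly like this:

    /* Hypothetical SPSC ring shared between guest (producer) and host
     * (consumer).  Each side owns one index, so no locking is needed. */
    #include <stdint.h>

    #define RING_SIZE 256                /* must be a power of two */
    #define RING_MASK (RING_SIZE - 1)

    struct pkt_desc {
        uint64_t addr;                   /* guest-physical buffer address */
        uint32_t len;                    /* packet length in bytes */
    };

    struct pkt_ring {
        volatile uint32_t prod;          /* written by the guest only */
        volatile uint32_t cons;          /* written by the host only */
        struct pkt_desc   desc[RING_SIZE];
    };

    /* Guest side: enqueue one packet; returns -1 if the ring is full. */
    static int ring_put(struct pkt_ring *r, uint64_t addr, uint32_t len)
    {
        uint32_t prod = r->prod;

        if (prod - r->cons == RING_SIZE)
            return -1;
        r->desc[prod & RING_MASK].addr = addr;
        r->desc[prod & RING_MASK].len  = len;
        __sync_synchronize();            /* publish descriptor before index */
        r->prod = prod + 1;
        return 0;
    }

    /* Host side: dequeue one packet; returns -1 if the ring is empty. */
    static int ring_get(struct pkt_ring *r, struct pkt_desc *out)
    {
        uint32_t cons = r->cons;

        if (cons == r->prod)
            return -1;
        *out = r->desc[cons & RING_MASK];
        __sync_synchronize();            /* consume before freeing the slot */
        r->cons = cons + 1;
        return 0;
    }

The host can drain such a ring without any exits at all; the guest only
needs to notify the host when the ring goes from empty to non-empty.)
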
>
>Thanks for the explanation.  Two things had thrown me off: first the
>phrase "extensive I/O" made me think of disk I/O with rather large
>blocks, not a bunch of smallish network packets.  And I'd forgotten
>that we're dealing with full virt, so you can't pipeline requests,
>since the driver expects synchronous memory access.  Is it possible
>for any of qemu's hardware emulation code to run without a VMEXIT,
>more or less like a software interrupt within the VM?  Or is a VMEXIT
>already the equivalent of a software interrupt, or of the trap taken
>when ring 3 code attempts a privileged instruction, but applied to a
>virtual machine instead of ring 3 user mode?

I didn't completely understand. Do you mean that the guest would issue
software interrupts to get a cheaper VM exit?
If that's the case, it isn't possible for fully virtualized devices,
since these devices use I/O ports or MMIO to communicate with software.

We are currently adding PV drivers (even to fully virtualized guests),
and they will queue/coalesce packets together.
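
(To make the queue/coalesce point concrete, here is a rough sketch of what
the guest transmit path could do with a shared ring like the one sketched
earlier, plus a hypothetical MMIO "doorbell" register; the register address
and all names are made up for illustration:

    /* Queue as many pending packets as fit into the shared ring, then do
     * a single MMIO write to notify the host, so one VM exit covers the
     * whole batch instead of one or more exits per packet as with the
     * emulated ne2k/rtl8139. */
    #define TX_DOORBELL ((volatile uint32_t *)0xfe001000)  /* made-up address */

    static void xmit_batch(struct pkt_ring *ring,
                           const struct pkt_desc *pkts, int n)
    {
        int queued = 0;

        for (int i = 0; i < n; i++) {
            if (ring_put(ring, pkts[i].addr, pkts[i].len) < 0)
                break;              /* ring full; send what we have so far */
            queued++;
        }

        if (queued > 0)
            *TX_DOORBELL = queued;  /* one exit for 'queued' packets */
    }

The exit cost is then amortized over the batch rather than paid per packet.)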

