On Wed, 2011-01-26 at 17:17 +0200, Michael S. Tsirkin wrote:
> I am seeing a similar problem, and am trying to fix that.
> My current theory is that this is a variant of a receive livelock:
> if the application isn't fast enough to process
> incoming data, the guest net stack switches
> from prequeue to backlog handling.
> 
> One thing I noticed is that locking the vhost thread
> and the vcpu to the same physical CPU almost doubles the
> bandwidth.  Can you confirm that in your setup?
> 
> My current guess is that when we lock both to
> a single CPU, netperf in guest gets scheduled
> slowing down the vhost thread in the host.
> 
> I also noticed that this specific workload
> performs better with vhost off: presumably
> we are loading the guest less. 

I found a similar issue in the small-message TCP_STREAM test with the guest
as TX. When I slow down the TX path, bandwidth roughly doubles for message
sizes from 1K to 4K.
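For anyone trying to reproduce the pinning experiment Michael describes, a
minimal sketch using taskset(1). The PID lookups below are placeholders (my
assumption of how one would find the threads, shown commented out), not the
exact procedure used in these tests; the last line just demonstrates the
command against the current shell so it runs as-is.

```shell
#!/bin/sh
# Sketch: pin the vhost worker thread and the guest's vcpu thread to the
# same physical CPU, so both run on one core as in the experiment above.
CPU=0   # target physical CPU

# Placeholder lookups -- adjust for your setup:
# The vhost kernel thread is named vhost-<qemu pid>:
#   VHOST_PID=$(pgrep -f '^\[vhost-' || pgrep vhost-)
#   taskset -pc "$CPU" "$VHOST_PID"
# Vcpu thread IDs live under /proc/<qemu pid>/task/; pick the vcpu TID:
#   taskset -pc "$CPU" "$VCPU_TID"

# Runnable demonstration of the command itself, pinning the current shell:
taskset -pc "$CPU" $$
```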

Shirley

