Re: [PATCH net-next 0/3] basic busy polling support for vhost_net

2015-11-30 Thread Michael S. Tsirkin
On Sun, Nov 29, 2015 at 10:31:10PM -0500, David Miller wrote:
> From: Jason Wang 
> Date: Wed, 25 Nov 2015 15:11:26 +0800
> 
> > This series tries to add basic busy polling for vhost net. The idea
> > is simple: at the end of tx/rx processing, busy poll for newly added
> > tx descriptors and for data on the rx socket for a while. The maximum
> > amount of time (in us) that may be spent busy polling is specified
> > via an ioctl.
> > 
> > Tests were done with:
> > 
> > - 50 us as busy loop timeout
> > - Netperf 2.6
> > - Two machines with back to back connected ixgbe
> > - Guest with 1 vcpu and 1 queue
> > 
> > Results:
> > - For stream workloads, ioexits were reduced dramatically for medium
> >   tx sizes (1024-2048, at most -43%) and for almost all rx sizes (at
> >   most -84%) as a result of polling. This more or less compensates
> >   for the cpu cycles possibly wasted on polling, which is probably
> >   why we can still see some increase in normalized throughput in
> >   some cases.
> > - Tx throughput increased (at most +50%) except for the huge write
> >   size (16384), and we can send more packets in those cases (+tpkts
> >   increased).
> > - Very minor rx regression in some cases.
> > - Improvement in TCP_RR (at most +17%).
> 
> Michael, are you going to take this?  It's touching the vhost core as
> much as it is the vhost_net driver.

There's a minor bug there, but once it's fixed - I agree,
it belongs in the vhost tree.
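
For illustration only, the busy-poll loop described in the cover letter
can be sketched in userspace C roughly as below. budget_us corresponds
to the per-poll timeout configured via the ioctl (50 us in the tests
above); have_new_work() is a hypothetical stand-in for "a new tx
descriptor was added or data is ready on the rx socket" and is not an
actual vhost_net symbol.

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* current monotonic time in microseconds */
static uint64_t now_us(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000ULL + (uint64_t)ts.tv_nsec / 1000;
}

/*
 * Spin for at most budget_us microseconds waiting for new work.
 * Returns true if work showed up within the budget (so it can be
 * handled without a notification/ioexit), false if the budget was
 * exhausted (re-enable notifications and go back to sleep).
 */
static bool busy_poll(uint64_t budget_us, bool (*have_new_work)(void))
{
        uint64_t deadline = now_us() + budget_us;

        while (now_us() < deadline) {
                if (have_new_work())
                        return true;
        }
        return false;
}

The bounded budget is what keeps the trade-off sane: cycles burned when
nothing arrives are capped at the configured timeout, which is why the
ioexit savings can outweigh the extra polling cost in the numbers above.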

-- 
MST


Re: [PATCH net-next 0/3] basic busy polling support for vhost_net

2015-11-29 Thread David Miller
From: Jason Wang 
Date: Wed, 25 Nov 2015 15:11:26 +0800

> This series tries to add basic busy polling for vhost net. The idea
> is simple: at the end of tx/rx processing, busy poll for newly added
> tx descriptors and for data on the rx socket for a while. The maximum
> amount of time (in us) that may be spent busy polling is specified
> via an ioctl.
> 
> Tests were done with:
> 
> - 50 us as busy loop timeout
> - Netperf 2.6
> - Two machines with back to back connected ixgbe
> - Guest with 1 vcpu and 1 queue
> 
> Results:
> - For stream workloads, ioexits were reduced dramatically for medium
>   tx sizes (1024-2048, at most -43%) and for almost all rx sizes (at
>   most -84%) as a result of polling. This more or less compensates
>   for the cpu cycles possibly wasted on polling, which is probably
>   why we can still see some increase in normalized throughput in
>   some cases.
> - Tx throughput increased (at most +50%) except for the huge write
>   size (16384), and we can send more packets in those cases (+tpkts
>   increased).
> - Very minor rx regression in some cases.
> - Improvement in TCP_RR (at most +17%).

Michael, are you going to take this?  It's touching the vhost core as
much as it is the vhost_net driver.