On Tue, Aug 11, 2009 at 07:49:37PM -0400, Gregory Haskins wrote:
Michael S. Tsirkin wrote:
This implements vhost: a kernel-level backend for virtio.
The main motivation for this work is to reduce virtualization
overhead for virtio by removing system calls on the data path,
without guest changes. For virtio-net, this removes up to
4 system calls per packet: vm exit for kick, reentry for …
Michael S. Tsirkin wrote:
[snip]
1. use a dedicated network interface with SR-IOV, and program its MAC
to match that of the guest (for testing, you can set promiscuous mode
instead, but that is bad for performance)
On Wednesday 12 August 2009, Gregory Haskins wrote:
Are you saying SR-IOV is a requirement, and I can either program the
SR-IOV adapter with a MAC or use promiscuous mode? Or are you saying I
can use SR-IOV plus a programmed MAC, OR a regular NIC in promiscuous
mode (with a performance penalty)?
SR-IOV is not a requirement.
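The two setups being discussed can be sketched with iproute2. This is only an illustration: the interface name, VF index, and MAC address below are placeholders, and the `vf` sub-command needs an SR-IOV-capable NIC plus a reasonably recent iproute2.

```shell
# Option 1: dedicated SR-IOV interface -- give virtual function 0
# the guest's MAC so its traffic is steered to that VF in hardware.
# (eth0, VF index 0, and the MAC below are placeholder values.)
ip link set eth0 vf 0 mac 52:54:00:12:34:56

# Option 2 (testing only): a regular NIC in promiscuous mode,
# which accepts all frames at a cost in performance.
ip link set eth0 promisc on
```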
On Wed, Aug 12, 2009 at 03:40:44PM +0200, Arnd Bergmann wrote:
On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
If I understand it correctly, you can at least connect a veth pair
to a bridge, right? Something like

veth0 - veth1 - vhost - guest 1
eth0 - br0-|
veth2 - veth3 - vhost - guest 2
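The topology in that diagram can be built with standard bridge tooling. A minimal sketch, using the illustrative interface and bridge names from the diagram; handing veth1/veth3 to the guests as backend devices is left to the vhost setup:

```shell
# Create the two veth pairs, one per guest.
ip link add veth0 type veth peer name veth1
ip link add veth2 type veth peer name veth3

# Create the bridge, then attach the uplink and one end of each pair.
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 veth0
brctl addif br0 veth2

# Bring everything up; veth1 and veth3 are then available to serve
# as the backend interfaces for guest 1 and guest 2.
ip link set br0 up
ip link set veth0 up && ip link set veth1 up
ip link set veth2 up && ip link set veth3 up
```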