Hi all,

Back in May we talked about efficiently connecting a user-space Ethernet
switch to QEMU guests. Stefan Hajnoczi sketched the design of a userspace
version of vhost that uses a Unix socket for its control interface. His
design is in the mail quoted below.

I'd like to ask: if this feature were properly implemented and
maintained, would you accept it into QEMU?

If so, I will work with a good QEMU hacker to develop it.

Also, have there been any new developments in this area (vhost-net and
userspace Ethernet I/O) that we should take into account?

Below Stefan's quoted mail I've sketched roughly how the file descriptor
passing and the guest address translation could look from the switch side.

On 28 May 2013 13:58, Stefan Hajnoczi <stefa...@redhat.com> wrote:

> On Tue, May 28, 2013 at 12:10:50PM +0200, Luke Gorrie wrote:
> > On 27 May 2013 11:34, Stefan Hajnoczi <stefa...@redhat.com> wrote:
> >
> > > vhost_net is about connecting a virtio-net speaking process to a
> > > tun-like device.  The problem you are trying to solve is connecting a
> > > virtio-net speaking process to Snabb Switch.
> > >
> >
> > Yep!
> >
> >
> > > Either you need to replace vhost or you need a tun-like device
> > > interface.
> > >
> > > Replacing vhost would mean that your switch implements virtio-net,
> > > shares guest RAM with the guest, and shares the ioeventfd and irqfd
> > > which are used to signal with the guest.
> >
> >
> > This would be a great solution from my perspective. This is the design
> that
> > I am now struggling to find a good implementation strategy for.
>
> The switch needs 3 resources for direct virtio-net communication with
> the guest:
>
> 1. Shared memory access to guest physical memory for guest physical to
>    host userspace address translation.  vhost and data plane
>    automatically get access to guest memory and they learn about
>    memory layout using the MemoryListener interface in QEMU (see
>    hw/virtio/vhost.c:vhost_region_add() and friends).
>
> 2. Virtqueue kick notifier (ioeventfd) so the switch knows when the
>    guest signals the host.  See virtio_queue_get_host_notifier(vq).
>
> 3. Guest interrupt notifier (irqfd) so the switch can signal the guest.
>    See virtio_queue_get_guest_notifier(vq).
>
> I don't have a detailed suggestion for how to interface the switch and
> QEMU processes.  It may be necessary to communicate back and forth (to
> handle the virtio device lifecycle) so a UNIX domain socket would be
> appropriate for passing file descriptors.  Here is a rough idea:
>
> $ switch --listen-path=/var/run/switch.sock
> $ qemu --device virtio-net-pci,switch=/var/run/switch.sock
>
> On QEMU startup:
>
> (switch socket) add_port --id="qemu-$PID" --session-persistence
>
> (Here --session-persistence means that the port will be automatically
> destroyed if the switch socket session is terminated because the UNIX
> domain socket is closed by QEMU.)
>
> On virtio device status transition to DRIVER_OK:
>
> (switch socket) configure_port --id="qemu-$PID"
>                                --mem=/tmp/shm/qemu-$PID
>                                --ioeventfd=2
>                                --irqfd=3
>
> On virtio device status transition from DRIVER_OK:
>
> (switch socket) deconfigure_port --id="qemu-$PID"
>
> I skipped a bunch of things:
>
> 1. virtio-net has several virtqueues so you need multiple ioeventfds.
>
> 2. QEMU needs to communicate memory mapping information; this gets
>    especially interesting with memory hotplug.  Memory is more
>    complicated than a single shmem blob.
>
> 3. Multiple NICs per guest should be supported.
>
> Stefan
>
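Here's a rough sketch of what the switch side of this could look like, to
make the discussion concrete. Everything below is my guess rather than an
agreed protocol: the function names (recv_port_fds, serve_port), the idea
that the three descriptors arrive in one SCM_RIGHTS message, and the single
guest memory fd are all placeholders. The switch receives the guest memory
fd, the kick ioeventfd and the irqfd over the UNIX domain socket, maps
guest memory, and then services kicks:

/*
 * Sketch of a switch port: receive the guest memory fd, the kick
 * ioeventfd and the irqfd over the control socket, map guest memory,
 * then block on kicks and raise guest interrupts.
 */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Receive one control message carrying three descriptors via SCM_RIGHTS:
 * guest memory fd, ioeventfd (kick), irqfd (call). */
static int recv_port_fds(int sock, int *mem_fd, int *kick_fd, int *call_fd)
{
    char data[64];
    struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
    char ctrl[CMSG_SPACE(3 * sizeof(int))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };
    struct cmsghdr *cmsg;
    int fds[3];

    if (recvmsg(sock, &msg, 0) <= 0) {
        return -1;
    }
    cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_level != SOL_SOCKET ||
        cmsg->cmsg_type != SCM_RIGHTS) {
        return -1;
    }
    memcpy(fds, CMSG_DATA(cmsg), sizeof(fds));
    *mem_fd = fds[0];
    *kick_fd = fds[1];
    *call_fd = fds[2];
    return 0;
}

/* Map guest memory and run the per-port event loop: wait for the guest's
 * kick, process the rings, then signal the guest through the irqfd. */
static void serve_port(int sock, size_t guest_mem_size)
{
    int mem_fd, kick_fd, call_fd;
    uint64_t kicks, one = 1;
    void *guest_mem;

    if (recv_port_fds(sock, &mem_fd, &kick_fd, &call_fd) < 0) {
        return;
    }
    guest_mem = mmap(NULL, guest_mem_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, mem_fd, 0);
    if (guest_mem == MAP_FAILED) {
        return;
    }
    while (read(kick_fd, &kicks, sizeof(kicks)) == sizeof(kicks)) {
        /* ... walk the virtqueues in guest_mem and switch the packets ... */
        if (write(call_fd, &one, sizeof(one)) < 0) {   /* guest interrupt */
            break;
        }
    }
}

The same socket session would also carry the add_port/configure_port style
messages, so lifecycle control and fd passing share one channel, and closing
the socket gives the --session-persistence cleanup for free.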

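Stefan's caveat that memory is more complicated than a single shmem blob
matters for the switch, too: it will need a table of guest memory regions
(similar to what vhost builds from the MemoryListener callbacks) and
translate guest physical addresses through it while walking the virtqueues.
A minimal sketch, again with made-up names:

/*
 * Hypothetical region table: QEMU describes guest RAM as several
 * (guest physical address, size) regions rather than one blob.
 */
#include <stddef.h>
#include <stdint.h>

struct mem_region {
    uint64_t guest_phys_addr;  /* start of the region in guest physical space */
    uint64_t size;             /* length of the region in bytes */
    void    *userspace_addr;   /* where the switch mmap()ed this region */
};

/* Translate a guest physical address to a switch-local pointer, or NULL
 * if the address is not covered by any region. */
static void *gpa_to_va(const struct mem_region *regions, int nregions,
                       uint64_t gpa)
{
    int i;

    for (i = 0; i < nregions; i++) {
        const struct mem_region *r = &regions[i];
        if (gpa >= r->guest_phys_addr && gpa - r->guest_phys_addr < r->size) {
            return (char *)r->userspace_addr + (gpa - r->guest_phys_addr);
        }
    }
    return NULL;
}

With memory hotplug this table changes at runtime, so the control socket
would also need messages to add and remove regions while the port is live.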