On Tue, 2008-06-17 at 09:54 -0500, Anthony Liguori wrote:
> Mark McLoughlin wrote:
> >
> > On Sat, 2008-06-14 at 18:28 -0500, Anthony Liguori wrote:

> >>   We need to make some more pervasive changes to QEMU though to 
> >> take advantage of vringfd upstream.
> >>
> >> Specifically, we need to introduce a RX/TX buffer adding/polling API for 
> >> VLANClientState.  We can then use this within a vringfd VLAN client to 
> >> push the indexes to vringfd.
> >>     
> >
> > I don't think I'm following you fully on this.
> >
> > The TX side is fine - guest adds buffer to ring, virtio VLANClient calls
> > ->add_tx_buffer() on every other VLANClient, waits until all are
> > finished sending and notifies the guest that we're done.
> >
> > But the RX side? The guest allocates the buffers, so does the virtio
> > VLANClient divide those buffers between every other VLANClient?
> 
> This is where things get tricky.  Internally, it will have to copy the 
> TX buffer into each of the clients' RX buffers.  We need to special-case 
> the circumstance where the only other VLANClientState is a vringfd 
> client so that we can pass the RX buffer directly to it.  Haven't come 
> up with a perfect API just yet, but that's what we need to do.

How about if we reversed the tun recv path over vringfd?

i.e. rather than passing the guest's recv buffers down to the tun driver,
where it copies the skbs into those buffers, have the tun driver pass the
skb buffers directly up to qemu via the vringfd, where qemu can then pass
them to each of the other clients. The virtio client would then copy the
data into the guest's buffers.

That would make the internal qemu API pretty simple and eliminate the
need for special cases.

Cheers,
Mark.