On Thu, Aug 06, 2009 at 10:29:08AM -0600, Gregory Haskins wrote:
> >>> On 8/6/2009 at 11:40 AM, in message <[email protected]>,
> >>> Arnd Bergmann <[email protected]> wrote:
> > On Thursday 06 August 2009, Gregory Haskins wrote:

[ big snip ]

> > 
> > 3. The ioq method seems to be the real core of your work that makes
> > venet perform better than virtio-net with its virtqueues. I don't see
> > any reason to doubt that your claim is correct. My conclusion from
> > this would be to add support for ioq to virtio devices, alongside
> > virtqueues, but to leave out the extra bus_type and probing method.
> 
> While I appreciate the sentiment, I doubt that is actually what's helping here.
> 
> There are a variety of factors that I poured into venet/vbus that I think 
> contribute to its superior performance.  However, the difference in the ring 
> design I do not think is one of them.  In fact, in many ways I think Rusty's 
> design might turn out to be faster if put side by side because he was much 
> more careful with cacheline alignment than I was.  Also note that I was 
> careful to not pick one ring vs the other ;)  They both should work.

IMO, the virtio vring design is very well thought out. I found it
relatively easy to port to a host+blade setup, and run virtio-net over a
physical PCI bus, connecting two physical CPUs.
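
For anyone who hasn't stared at it recently, the split-ring layout boils
down to something like this (paraphrased from virtio_ring.h; alignment,
padding and the event-index bits are left out, so treat it as a sketch
rather than the literal header):

#include <stdint.h>

struct vring_desc {             /* descriptor table, one slot per buffer */
        uint64_t addr;          /* guest-physical address of the buffer */
        uint32_t len;           /* length of the buffer in bytes */
        uint16_t flags;         /* NEXT / WRITE / INDIRECT */
        uint16_t next;          /* index of the next descriptor in a chain */
};

struct vring_avail {            /* written by the driver, read by the device */
        uint16_t flags;
        uint16_t idx;           /* producer index */
        uint16_t ring[];        /* heads of available descriptor chains */
};

struct vring_used_elem {
        uint32_t id;            /* head of the completed chain */
        uint32_t len;           /* bytes the device wrote into it */
};

struct vring_used {             /* written by the device, read by the driver */
        uint16_t flags;
        uint16_t idx;           /* producer index for completions */
        struct vring_used_elem ring[];
};

Because the avail and used rings each have a single writer, the hot
indices don't bounce between the two sides' caches, which fits Greg's
point above that careful layout probably matters more than which ring
flavor you pick.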

> 
> IMO, we are only seeing the tip of the iceberg if we look at this purely 
> as the difference between virtio-pci vs virtio-vbus, or venet vs 
> virtio-net.
> 
> Really, the big thing I am working on here is the host side device-model.  
> The idea here was to design a bus model conducive to high-performance, 
> software-to-software I/O that would work in a variety of 
> environments (that may or may not have PCI).  KVM is one such environment, 
> but I also have people looking at building other types of containers, and 
> even physical systems (host+blade kind of setups).
> 
> The idea is that the "connector" is modular, and then something like 
> virtio-net or venet "just works": in KVM, in the userspace container, on the 
> blade system. 
> 
> It provides a management infrastructure that (hopefully) makes sense for 
> these different types of containers, regardless of whether they have PCI, 
> QEMU, etc. (i.e. things that are inherent to KVM, but not to the others).
> 
> I hope this helps to clarify the project :)
> 

I think this is the major benefit of vbus. I've only started studying
the vbus code, so I don't have lots to say yet. The overview of the
management interface makes it look pretty good.
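
To check my own understanding of the connector idea: the device model
talks to a small ops table, and each environment plugs its own transport
in behind it. The sketch below is purely my mental model -- the names
are invented, not the actual vbus API:

/* hypothetical connector ops, NOT taken from the vbus sources */
struct connector_ops_sketch {
        /* map a shared-memory region so both ends can reach the rings */
        int (*map_shm)(void *priv, unsigned long offset, size_t len,
                       void **vaddr);

        /* kick the other side: a hypercall for KVM, an MSI/doorbell
         * over a physical PCI link, an eventfd for a local container */
        int (*signal)(void *priv, unsigned int queue);

        /* surface device add/remove and config changes to the guest */
        int (*notify)(void *priv, unsigned int event);
};

If that is roughly right, then venet or a virtio device really would
"just work" on my host+blade setup once a PCI-based connector exists.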

Getting two virtio-net drivers hooked together in my virtio-over-PCI
patches was nasty. If you read the thread that followed, you'll see that
the lack of a management interface was a concern of mine. It was
basically decided that it could come "later". The configfs interface
vbus provides is pretty nice, IMO.
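
For reference, the generic configfs pattern that this builds on lets
userspace instantiate kernel objects with a plain mkdir. A stripped-down
skeleton looks something like the following -- this is the stock configfs
API from linux/configfs.h, not the vbus code itself, the "demo-bus"
naming is made up, and module_exit/unregister plus most error handling
are trimmed:

#include <linux/configfs.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/slab.h>

struct demo_dev {
        struct config_item item;
        /* per-device state would hang off here */
};

static struct config_item_type demo_dev_type = {
        .ct_owner = THIS_MODULE,
};

/* invoked for "mkdir <configfs>/demo-bus/<name>" */
static struct config_item *demo_make_item(struct config_group *group,
                                          const char *name)
{
        struct demo_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

        if (!dev)
                return ERR_PTR(-ENOMEM);
        config_item_init_type_name(&dev->item, name, &demo_dev_type);
        return &dev->item;
}

static struct configfs_group_operations demo_group_ops = {
        .make_item = demo_make_item,
};

static struct config_item_type demo_subsys_type = {
        .ct_group_ops = &demo_group_ops,
        .ct_owner = THIS_MODULE,
};

static struct configfs_subsystem demo_subsys = {
        .su_group = {
                .cg_item = {
                        .ci_namebuf = "demo-bus",
                        .ci_type = &demo_subsys_type,
                },
        },
};

static int __init demo_init(void)
{
        config_group_init(&demo_subsys.su_group);
        mutex_init(&demo_subsys.su_mutex);
        return configfs_register_subsystem(&demo_subsys);
}
module_init(demo_init);

With something like that loaded, "mkdir /sys/kernel/config/demo-bus/dev0"
creates the object, and attribute files can be added for per-device
configuration. That lifecycle is exactly what my virtio-over-PCI patches
were missing.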

Just my two cents,
Ira