Gregory Haskins wrote:
>
>> If so, be aware that virtio is (a) mostly done (b) very well done.
>>     
>
> I would very much like to help make virtio work, which is really where I
> was going with this.  My design is a little bit different so I was
> submitting it in case there was any ideas worth salvaging to be picked
> up by the official project.  (I tried to explain this in the v1
> announcement so I apologize if anyone, particularly Rusty, felt
> slighted....it was not my intention)
>   

Alternative implementation ideas should of course not slight anyone but
instead be welcomed.

>   
>> Can you describe what you are trying to achieve that virtio doesn't do?
>>
>>     
>
> To be perfectly honest, I have never been able to find an implementation
> of virtio (I looked and even asked Rusty directly via email but never
> found/heard anything back) so I don't know exactly what its capabilities
> are.  My impressions from reading Rusty's email proposals are that there
> are certainly similarities between the virtio interface and the IOQ
> interface in concept.  Where they seem to differ is that the ring is
> more exposed in IOQ via the iterator idiom instead of the sg-buffer
> idiom.
>   

Yes.  virtio adapts the driver interface to an API that can drive a
shared-memory queue more easily, while you actually provide the queue
protocol (and ABI).

> At the time I first saw the virtio proposals, I couldn't quite wrap my
> head around how I could do things like "zero copy" + deferred pointer
> reaping, which was a design goal of mine.  So I kept up with the IOQ
> design for the interim to at least demonstrate where I was trying to go.
> Perhaps it will be a useful innovation and the virtio interface design
> will pick up some of my ideas.  Perhaps virtio deals with it already.
> Or perhaps no one will think it's a good idea and it gets pushed
> to /dev/null ;)  I am not sure what the answer will be.
>
> But in any case, I was also trying to go beyond the shared-ring
> interface design.  For instance, the series also provides:
>
> *) a system for efficiently discovering/communicating with PV backend
> devices that was not laden with legacy interfaces like PCI.  (I am in
> the process of converting over to use bus_register as Dor suggested).
>
> *) generalization of as much of the shared-memory code as possible so it
> could be reused (for instance, both guest and host can use the same IOQ
> interface, and the code itself will largely work with any hypervisor, or
> even non-hypervisor shared-memory systems, like RDMA/AMP).
>
> Again, perhaps virtio will cover all these areas too.  Without having
> seen it I am not really sure where the overlap exists, but perhaps there
> will be at least some aspects of my series that are useful.  That is why
> I submitted it.  Ideally I can hook up with whomever is working on the
> implementation (sounds like Dor?) and we can crank something out
> together.  :)
>   

virtio seems to have more modest goals:

- make it easier to write guest/host (or guest/guest) transports, but
not actually provide them
- limited to guest only (and Linux only)
- no discovery/hotplug (yet?)

since it wants to be hypervisor agnostic, it cannot specify an ABI (as
some already have ABIs, for example Xen).

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.


_______________________________________________
kvm-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/kvm-devel
