On Tue, Jul 17, 2012 at 11:32:45AM +0200, Paolo Bonzini wrote:
> Il 17/07/2012 11:21, Asias He ha scritto:
> >> It depends.  Like vhost-scsi, vhost-blk has the problem of a crippled
> >> feature set: no support for block device formats, non-raw protocols,
> >> etc.  This makes it different from vhost-net.
> > 
> > Data-plane qemu also has this crippled feature set problem, no?
> 
> Yes, but that is just a proof of concept.  We can implement a separate
> I/O thread within the QEMU block layer, and add fast paths that resemble
> data-path QEMU, without limiting the feature set.
> 
> > Does user always choose to use block devices format like qcow2? What
> > if they prefer raw image or raw block device?
> 
> If they do, the code should hit fast paths and be fast.  But it should
> be automatic, without the need for extra knobs.  aio=thread vs.
> aio=native is already one knob too much IMHO.
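
[For context, the knob in question looks roughly like this on the QEMU
command line. This is an illustrative invocation, not taken from the
thread: the disk path and device id are made up, and note that QEMU
spells the thread-pool choice "aio=threads", while "aio=native" (Linux
native AIO) additionally requires cache=none, i.e. O_DIRECT.]

```shell
# Illustrative only: the aio= sub-option of -drive selects the I/O
# backend. aio=threads uses QEMU's thread pool; aio=native uses Linux
# native AIO and requires cache=none. Paths/ids are examples.
qemu-system-x86_64 \
    -drive file=/var/lib/libvirt/images/guest.img,format=raw,if=none,id=drive0,cache=none,aio=native \
    -device virtio-blk-pci,drive=drive0
```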

Well one extra knob at qemu level is harmless IMO since
the complexity can be handled by libvirt. For vhost-net
libvirt already enables vhost automatically depending on the backend
used, and I imagine a similar thing can happen here.
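[As a sketch of the existing vhost-net precedent: in libvirt's domain
XML, a virtio network interface gets the in-kernel vhost backend
automatically when available, and the choice can be made explicit with
the <driver> element. Element names below follow libvirt's domain XML
format; the bridge name is an example.]

```xml
<!-- Sketch of a libvirt interface definition. With model type='virtio',
     libvirt/QEMU pick the in-kernel vhost-net backend when available;
     <driver name='vhost'/> forces that choice explicitly. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost'/>
</interface>
```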


> >> So it begs the question, is it going to be used in production, or just a
> >> useful reference tool?
> > 
> > This should be decided by user, I can not speak for them. What is wrong
> > with adding one option for user which they can decide?
> 
> Having to explain to the user the relative benefits;

This can just be done automatically by libvirt.

> having to
> support the API; having to handle transition from one more thing when
> something better comes out.
> 
> Paolo

Well, this is true for any code. If the limited feature set that
vhost-blk can accelerate is something many people use, then accelerating
it by 5-15% might outweigh the support costs.

-- 
MST