On 03/16/2010 11:54 AM, Kevin Wolf wrote:

Is this with qcow2, raw file, or direct volume access?

I can understand it for qcow2, but for direct volume access this
shouldn't happen.  The guest schedules as many writes as it can,
followed by a sync.  The host (and disk) can then reschedule them
whether they are in the writeback cache or in the block layer, and must
sync in the same way once completed.
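
For what it's worth, a toy sketch of that pattern (the volume path, offsets
and function name are made up): a batch of writes at scattered offsets
followed by one sync; the host and the disk may complete the writes in any
order, and only the fdatasync() acts as a barrier.

#include <fcntl.h>
#include <unistd.h>

/* Toy sketch only -- the volume path and offsets are made up. */
int write_batch(void)
{
    int fd = open("/dev/vg0/guest-disk", O_WRONLY);   /* hypothetical volume */
    if (fd < 0)
        return -1;

    static const off_t offsets[] = { 1048576, 4096, 65536, 8192 };
    char buf[4096] = { 0 };

    for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
        /* These may be reordered or merged by the host and the disk. */
        if (pwrite(fd, buf, sizeof(buf), offsets[i]) != (ssize_t)sizeof(buf)) {
            close(fd);
            return -1;
        }
    }

    fdatasync(fd);   /* ...but all of them must be stable before this returns */
    return close(fd);
}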

Perhaps what we need is bdrv_aio_submit() which can take a number of
requests.  For direct volume access, this allows easier reordering
(io_submit() should plug the queues before it starts processing and
unplug them when done, though I don't see the code for this?).  For
qcow2, we can coalesce metadata updates for multiple requests into one
RMW (for example, a sequential write split into multiple 64K-256K write
requests).
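
Roughly the kind of interface meant here (sketch only -- the struct, field
names and prototype are illustrative, not actual QEMU code):

#include <stdint.h>
#include <sys/uio.h>

struct BlockDriverState;             /* opaque for this sketch */

struct blk_request {
    int64_t       sector;            /* first sector of the request */
    int           nb_sectors;        /* request length in sectors */
    struct iovec *iov;               /* scatter/gather list for the data */
    int           niov;
    void        (*cb)(void *opaque, int ret);   /* completion callback */
    void         *opaque;
};

/* Submit num_reqs requests as a single batch.  A raw backend could turn
 * the whole array into one io_submit() call (plugging the queue around
 * it); qcow2 could scan the batch once and fold the metadata updates for
 * all requests into a single read-modify-write. */
int bdrv_aio_submit(struct BlockDriverState *bs,
                    struct blk_request *reqs, int num_reqs);

The point is that the whole batch reaches the block driver at once, so it
can decide how to reorder or coalesce before anything is issued.
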
We already do merge sequential writes back into one larger request. So
this is in fact a case that wouldn't benefit from such changes.
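
(For reference, merging back-to-back writes within a batch might look
roughly like the following -- an illustrative sketch reusing the
blk_request layout from above, not the actual code; error handling is
trimmed.)

#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

static int merge_sequential(struct blk_request *reqs, int num_reqs)
{
    int out = 0;

    if (num_reqs == 0)
        return 0;

    for (int i = 1; i < num_reqs; i++) {
        struct blk_request *prev = &reqs[out];

        if (reqs[i].sector == prev->sector + prev->nb_sectors) {
            /* Contiguous on disk: append the data and extend the request. */
            prev->iov = realloc(prev->iov,
                                (prev->niov + reqs[i].niov) * sizeof(struct iovec));
            memcpy(prev->iov + prev->niov, reqs[i].iov,
                   reqs[i].niov * sizeof(struct iovec));
            prev->niov       += reqs[i].niov;
            prev->nb_sectors += reqs[i].nb_sectors;
        } else {
            reqs[++out] = reqs[i];   /* not contiguous: keep it separate */
        }
    }
    return out + 1;                  /* requests remaining after merging */
}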

I'm not happy with that. It increases overall latency. With qcow2 it's fine, but I'd let requests to raw volumes flow unaltered.

It may
help for other cases. But even if it did, coalescing metadata writes in
qcow2 sounds like a good way to mess up, so I'd stay with doing it only
for the data itself.

I don't see why.

Apart from that, wouldn't your points apply to writeback as well?

They do, but for writeback the host kernel already does all the coalescing/merging/blah for us.

--
error compiling committee.c: too many arguments to function

