On 02/27/2018 03:34 AM, Fam Zheng wrote:
On Mon, 01/22 23:08, Max Reitz wrote:
@@ -1151,7 +1285,48 @@ static int coroutine_fn bdrv_mirror_top_preadv(BlockDriverState *bs,
 static int coroutine_fn bdrv_mirror_top_pwritev(BlockDriverState *bs,
     uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
 {
-    return bdrv_co_pwritev(bs->backing, offset, bytes, qiov, flags);
+    MirrorOp *op = NULL;
+    MirrorBDSOpaque *s = bs->opaque;
+    QEMUIOVector bounce_qiov;
+    void *bounce_buf;
+    int ret = 0;
+    bool copy_to_target;
+
+    copy_to_target = s->job->ret >= 0 &&
+                     s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
+
+    if (copy_to_target) {
+        /* The guest might concurrently modify the data to write; but
+         * the data on source and destination must match, so we have
+         * to use a bounce buffer if we are going to write to the
+         * target now. */
+        bounce_buf = qemu_blockalign(bs, bytes);
+        iov_to_buf_full(qiov->iov, qiov->niov, 0, bounce_buf, bytes);
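(The hunk is truncated here. For readers following along, the usual QEMU
bounce-buffer pattern would continue roughly as below -- a sketch only, not
the actual patch; the real target-side write goes through the MirrorOp
machinery that "op" above hints at, which is elided:)

        qemu_iovec_init(&bounce_qiov, 1);
        qemu_iovec_add(&bounce_qiov, bounce_buf, bytes);
        qiov = &bounce_qiov;   /* every write below sees the snapshot */
    }

    /* Write to the source; in write-blocking mode the same snapshot is
     * also copied to the target (via the job's MirrorOp logic, not
     * shown here). */
    ret = bdrv_co_pwritev(bs->backing, offset, bytes, qiov, flags);

    if (copy_to_target) {
        qemu_iovec_destroy(&bounce_qiov);
        qemu_vfree(bounce_buf);   /* pairs with qemu_blockalign() */
    }

    return ret;
}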
Quorum doesn't use a bounce buffer, so I think we can get away without one
here too: in practice, a guest concurrently modifying an in-flight buffer
isn't a concern.
Arguably, that's a bug in quorum. We also use a bounce buffer for the
same reason when encrypting. We really do need to make sure that bits
landing in more than one storage location come from the same point in time.
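To make the point-in-time argument concrete, a standalone sketch
(illustration only; write_to(), "source", and "target" are stand-ins for
the bdrv_co_pwritev() calls on the two destinations, not QEMU APIs):

#include <stdlib.h>
#include <string.h>

/* Stand-in for a write to one destination; a no-op here. */
static void write_to(const char *dest, const void *buf, size_t bytes)
{
    (void)dest; (void)buf; (void)bytes;
}

/* Torn: the guest may rewrite guest_buf between the two calls, so the
 * source and target can persist different bytes for the same offset. */
static void mirror_write_torn(void *guest_buf, size_t bytes)
{
    write_to("source", guest_buf, bytes);   /* guest data at time T0 */
    write_to("target", guest_buf, bytes);   /* guest data at time T1 */
}

/* Consistent: snapshot once, so both destinations see the same point
 * in time -- which is exactly what the bounce buffer above buys. */
static void mirror_write_consistent(const void *guest_buf, size_t bytes)
{
    void *snap = malloc(bytes);
    if (!snap) {
        return;
    }
    memcpy(snap, guest_buf, bytes);
    write_to("source", snap, bytes);
    write_to("target", snap, bytes);
    free(snap);
}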
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org