Signed-off-by: Wen Congyang <we...@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Yang Hongyang <yan...@cn.fujitsu.com>
Signed-off-by: zhanghailiang <zhang.zhanghaili...@huawei.com>
Signed-off-by: Gonglei <arei.gong...@huawei.com>
---
 docs/block-replication.txt | 129 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 docs/block-replication.txt
diff --git a/docs/block-replication.txt b/docs/block-replication.txt
new file mode 100644
index 0000000..59150b8
--- /dev/null
+++ b/docs/block-replication.txt
@@ -0,0 +1,129 @@
+Block replication
+----------------------------------------
+Copyright Fujitsu, Corp. 2015
+Copyright (c) 2015 Intel Corporation
+Copyright (c) 2015 HUAWEI TECHNOLOGIES CO.,LTD.
+
+This work is licensed under the terms of the GNU GPL, version 2 or later.
+See the COPYING file in the top-level directory.
+
+Block replication is used for continuous checkpoints. It is designed
+for COLO, in which the Secondary VM is running. It can also be applied
+to FT/HA scenarios in which the Secondary VM is not running.
+
+This document gives an overview of block replication's design.
+
+== Background ==
+High availability solutions such as micro checkpoint and COLO perform
+consecutive checkpoints. The VM state of the Primary VM and the
+Secondary VM is identical right after a VM checkpoint, but diverges as
+the VMs execute until the next checkpoint. To support disk contents
+checkpointing, the modified disk contents of the Secondary VM must be
+buffered, and are only dropped at the next checkpoint. To reduce the
+network traffic at checkpoint time, disk modification operations on
+the Primary disk are asynchronously forwarded to the Secondary node.
+
+== Workflow ==
+The following image shows the block replication workflow:
+
+        +----------------------+            +------------------------+
+        |Primary Write Requests|            |Secondary Write Requests|
+        +----------------------+            +------------------------+
+                  |                                       |
+                  |                                      (4)
+                  |                                       V
+                  |                              /-------------\
+                  |     Copy and Forward         |             |
+                  |---------(1)----------+       | Disk Buffer |
+                  |                      |       |             |
+                  |                     (3)      \-------------/
+                  |                 speculative      ^
+                  |                write through    (2)
+                  |                      |           |
+                  V                      V           |
+           +--------------+          +----------------+
+           | Primary Disk |          | Secondary Disk |
+           +--------------+          +----------------+
+
+    1) Primary write requests are copied and forwarded to the
+       Secondary QEMU.
+    2) Before a Primary write request is written to the Secondary
+       disk, the original sector content is read from the Secondary
+       disk and buffered in the Disk buffer; it does not overwrite
+       existing sector content in the Disk buffer.
+    3) Primary write requests are written to the Secondary disk.
+    4) Secondary write requests are buffered in the Disk buffer and
+       overwrite existing sector content in the buffer.
+
+== Architecture ==
+We are going to implement COLO block replication from many basic
+blocks that are already in QEMU.
+
+                        virtio-blk       ||
+                            ^            ||                  .----------
+                            |            ||                  | Secondary
+                       1 Quorum          ||                  '----------
+                        /      \         ||
+                       /        \        ||
+                  Primary      2 NBD  ------->  2 NBD
+                    disk       client    ||     server          virtio-blk
+                                         ||        ^                ^
+--------.                                ||        |                |
+Primary |                                ||  Secondary disk <--- COLO buffer 3
+--------'                                ||      backing
+
+1) The disk on the primary is represented by a block device with two
+children, providing replication between the primary disk and the host
+that runs the secondary VM. The read pattern for quorum can be
+extended to make the primary always read from the local disk instead
+of going through NBD.
+
+2) The secondary disk receives writes from the primary VM through
+QEMU's embedded NBD server (speculative write-through).
+
+3) The disk on the secondary is represented by a custom block device
+("COLO buffer"). The disk buffer's backing image is the secondary
+disk, and the disk buffer uses bdrv_add_before_write_notifier to
+implement copy-on-write, similar to block/backup.c.
+
+== New block driver interface ==
+We add three block driver interfaces to control block replication:
+a. bdrv_start_replication()
+   Starts block replication; called in the migration/checkpoint
+   thread. bdrv_start_replication() must be called in the secondary
+   QEMU before it is called in the primary QEMU.
+b. bdrv_do_checkpoint()
+   Called after all VM state has been transferred to the Secondary
+   QEMU. The Disk buffer is dropped in this interface.
+c. bdrv_stop_replication()
+   Called at failover time. The Disk buffer is flushed into the
+   Secondary disk and block replication stops.
+
+== Usage ==
+Primary:
+  -drive if=xxx,driver=quorum,read-pattern=first,\
+         children.0.file.filename=1.raw,\
+         children.0.driver=raw,\
+         children.1.file.driver=nbd+colo,\
+         children.1.file.host=xxx,\
+         children.1.file.port=xxx,\
+         children.1.file.export=xxx,\
+         children.1.driver=raw
+  Note:
+  1. The NBD Client should not be the first child of quorum.
+  2. There should be only one NBD Client.
+  3. host is the secondary physical machine's hostname or IP address.
+  4. Each disk must have its own export name.
+
+Secondary:
+  -drive if=xxx,driver=blkcolo,export=xxx,\
+         backing.file.filename=1.raw,\
+         backing.driver=raw
+  Then run the qmp command:
+    nbd_server_start host:port
+  Note:
+  1. The export name for the same disk must be the same in the primary
+     and secondary QEMU command lines.
+  2. The qmp command nbd_server_start must be run before running the
+     qmp command migrate on the primary QEMU.
+  3. Don't use nbd_server_start's other options.
-- 
2.1.0
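The buffering rules of workflow steps 2-4 and the checkpoint/failover
interfaces can be modeled as a small state machine. Below is a minimal
sketch in Python of those semantics only; ColoDiskBuffer and its method
names are illustrative and are not QEMU APIs (the real implementation
lives in the block layer, per the Architecture section).

```python
class ColoDiskBuffer:
    """Toy model of the COLO Disk buffer between checkpoints."""

    def __init__(self, secondary_disk):
        self.disk = secondary_disk   # sector -> content on Secondary disk
        self.buffer = {}             # sector -> buffered content

    def primary_write(self, sector, data):
        # Step 2: copy-on-write -- buffer the original sector content,
        # but never overwrite a sector already present in the buffer.
        if sector not in self.buffer:
            self.buffer[sector] = self.disk.get(sector)
        # Step 3: the forwarded write goes to the Secondary disk.
        self.disk[sector] = data

    def secondary_write(self, sector, data):
        # Step 4: Secondary writes land in the buffer and DO overwrite
        # existing buffered content; the Secondary disk is untouched.
        self.buffer[sector] = data

    def read(self, sector):
        # The Secondary VM sees buffered content first, then the disk,
        # so its view is unaffected by forwarded Primary writes.
        if sector in self.buffer:
            return self.buffer[sector]
        return self.disk.get(sector)

    def do_checkpoint(self):
        # bdrv_do_checkpoint(): drop the buffer; the Secondary VM's
        # view now matches the Primary's forwarded state.
        self.buffer.clear()

    def stop_replication(self):
        # bdrv_stop_replication(): failover -- flush the buffer into
        # the Secondary disk, so the Secondary VM continues from its
        # own view (forwarded-but-uncheckpointed writes are undone).
        self.disk.update(self.buffer)
        self.buffer.clear()
```

For example, a forwarded Primary write reaches the Secondary disk
immediately, yet the Secondary VM keeps reading the pre-write content
until the next checkpoint drops the buffer; at failover the buffered
view wins instead.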