Re: [Qemu-devel] migration: add incremental drive-mirror and blockdev-mirror with dirtymap

2017-05-09 Thread John Snow


On 05/09/2017 03:35 AM, Daniel Kučera wrote:
> 
> 
> Hm, I suppose that's right, pending cache issues, perhaps?
> 
> (1) Write occurs; cached
> (2) Bitmap is added
> (3) Write occurs, cached
> (4) ZFS snapshot is taken
> (5) Data is flushed to backing storage.
> 
> Now, the ZFS snapshot is migrated, but is missing the writes that
> occurred in (1) and (3).
> 
> Next, we mirror the data in the bitmap, but it only includes the data
> from (3).
> 
> The target now appears to be missing the write from (1) -- maybe,
> depending on how the volume snapshot occurs.
> 
> 
> > Yes, that's why I'm using cache=none. Libvirt doesn't allow you to
> > migrate a VM which uses a drive cache anyway (unless you specify the
> > unsafe flag).

It will be important to document the exact use cases in which this mode
is "supported", along with its shortcomings and the cases in which it is
not.

The final version should include a test and some documentation, but I
think the feature is workable once we pay attention to error conditions.

Thanks!

--js



Re: [Qemu-devel] migration: add incremental drive-mirror and blockdev-mirror with dirtymap

2017-05-09 Thread Daniel Kučera
2017-05-08 19:29 GMT+02:00 John Snow:

>
>
> On 05/04/2017 03:45 AM, Daniel Kučera wrote:
> >
> > 2017-05-04 1:44 GMT+02:00 John Snow:
> >
> >
> >
> > On 05/03/2017 03:56 AM, Daniel Kučera wrote:
> > > Hi all,
> > >
> > > this patch adds the possibility to start mirroring from a specific
> > > dirtyblock bitmap.
> > > The use case is live migration with a ZFS volume used as the block
> > > device:
> > > 1. make dirtyblock bitmap in qemu
> >
> > A "block dirty bitmap," I assume you mean. Through which interface?
> > "block dirty bitmap add" via QMP?
> >
> >
> > Yes.
> >
> >
> > > 2. make ZFS volume snapshot
> >
> > ZFS Volume Snapshot is going to be a filesystem-level operation,
> isn't
> > it? That is, creating this snapshot will necessarily create new dirty
> > sectors, yes?
> >
> >
> > No, we are using "zfs volumes" which are block devices (similar to LVM)
> >
> > # blockdev --report /dev/zstore/storage4
> > RO    RA   SSZ   BSZ   StartSec         Size   Device
> > rw    256   512  4096         0  42949672960   /dev/zstore/storage4
> >
> > -drive
> > file=/dev/zstore/storage4,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,discard=unmap
> >
> >
>
> Ah, I see. Clearly I don't know much about ZFS in practice.
>
> > > 3. zfs send/receive the snapshot to target machine
> >
> > Why? Is this an attempt to make the process faster?
> >
> >
> > This preserves and transfers the whole chain of snapshots to the
> > destination host, not only the current state as it would be with
> > drive-mirror sync: full.
> >
> >
> >
> > > 4. start mirroring to destination with block map from step 1
> >
> > This makes me a little nervous: what guarantees do you have that the
> > bitmap and the ZFS snapshot were synchronous?
> >
> >
> > It doesn't have to be synchronous (or atomic). The "block dirty bitmap"
> > just needs to be created prior to the ZFS snapshot. The few writes done
> > in the meantime don't hurt to be copied twice.
> >
>
> Hm, I suppose that's right, pending cache issues, perhaps?
>
> (1) Write occurs; cached
> (2) Bitmap is added
> (3) Write occurs, cached
> (4) ZFS snapshot is taken
> (5) Data is flushed to backing storage.
>
> Now, the ZFS snapshot is migrated, but is missing the writes that
> occurred in (1) and (3).
>
> Next, we mirror the data in the bitmap, but it only includes the data
> from (3).
>
> The target now appears to be missing the write from (1) -- maybe,
> depending on how the volume snapshot occurs.
>

Yes, that's why I'm using cache=none. Libvirt doesn't allow you to migrate
a VM which uses a drive cache anyway (unless you specify the unsafe flag).
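
For illustration, roughly how that looks with virsh (domain and host names
here are made up):

# with cache=none on the disk, a normal live migration is permitted
virsh migrate --live guest4 qemu+ssh://dest-host/system

# with a writeback cache, libvirt refuses unless the safety check is
# explicitly overridden
virsh migrate --live --unsafe guest4 qemu+ssh://dest-host/system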


>
> >
> >
> > > 5. live migrate VM state to destination
> > >
> > > The point is that I'm not able to live-stream ZFS's changed data to
> > > the destination to ensure the same volume state at the moment of
> > > switchover of the migrated VM to the new hypervisor.
> >
> > I'm a little concerned about the mixing of filesystem and block level
> > snapshots...
> >
> >
> > As I explained above, ZFS snapshots are also block level.
> >
> >
> >
> > >
> > >
> > > From 7317d731d51c5d743d7a4081b368f0a6862856d7 Mon Sep 17 00:00:00 2001
> >
> > What happened to your timestamp?
> >
> > > From: Daniel Kucera
> > > Date: Tue, 2 May 2017 15:00:39 +
> > > Subject: [PATCH] migration: add incremental drive-mirror and blockdev-mirror
> >
> > Your actual email subject here, however, is missing the [PATCH] tag,
> > which is useful for it getting picked up by the patchew build bot.
> >
> > >  with dirtymap added parameter bitmap which will be used instead of
> > >  newly created dirtymap in mirror_start_job
> > >
> > > Signed-off-by: Daniel Kucera
> > > ---
> > >  block/mirror.c| 41 -
> > >  blockdev.c|  6 +-
> > >  include/block/block_int.h |  4 +++-
> > >  qapi/block-core.json  | 12 ++--
> > >  4 files changed, 42 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/block/mirror.c b/block/mirror.c
> > > index 9f5eb69..02b2f69 100644
> > > --- a/block/mirror.c
> > > +++ b/block/mirror.c
> > > @@ -49,7 +49,7 @@ typedef struct MirrorBlockJob {
> > >  BlockDriverState *to_replace;
> > >  /* Used to block operations on the drive-mirror-replace target */
> > >  Error *replace_blocker;
> > > -bool is_none_mode;
> > > +MirrorSyncMode sync_mode;
> > >  BlockMirrorBackingMode backing_mode;
> > >  BlockdevOnError on_source_error, on_target_error;
> > >  bool synced;
> > > @@ -523,7 +523,9 @@ static void mirror_exit(BlockJob *job, void
> > 

Re: [Qemu-devel] migration: add incremental drive-mirror and blockdev-mirror with dirtymap

2017-05-08 Thread John Snow


On 05/04/2017 03:45 AM, Daniel Kučera wrote:
> 
> 2017-05-04 1:44 GMT+02:00 John Snow:
> 
> 
> 
> On 05/03/2017 03:56 AM, Daniel Kučera wrote:
> > Hi all,
> >
> > this patch adds the possibility to start mirroring from a specific
> > dirtyblock bitmap.
> > The use case is live migration with a ZFS volume used as the block
> > device:
> > 1. make dirtyblock bitmap in qemu
> 
> A "block dirty bitmap," I assume you mean. Through which interface?
> "block dirty bitmap add" via QMP?
> 
>  
> Yes.
> 
> 
> > 2. make ZFS volume snapshot
> 
> ZFS Volume Snapshot is going to be a filesystem-level operation, isn't
> it? That is, creating this snapshot will necessarily create new dirty
> sectors, yes?
> 
>  
> No, we are using "zfs volumes" which are block devices (similar to LVM)
> 
> # blockdev --report /dev/zstore/storage4
> RO    RA   SSZ   BSZ   StartSec         Size   Device
> rw    256   512  4096         0  42949672960   /dev/zstore/storage4
> 
> -drive
> file=/dev/zstore/storage4,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,discard=unmap
> 
> 

Ah, I see. Clearly I don't know much about ZFS in practice.

> > 3. zfs send/receive the snapshot to target machine
> 
> Why? Is this an attempt to make the process faster?
> 
> 
> This preserves and transfers the whole chain of snapshots to the
> destination host, not only the current state as it would be with
> drive-mirror sync: full.
>  
> 
> 
> > 4. start mirroring to destination with block map from step 1
> 
> This makes me a little nervous: what guarantees do you have that the
> bitmap and the ZFS snapshot were synchronous?
> 
> 
> It doesn't have to be synchronous (or atomic). The "block dirty bitmap"
> just needs to be created prior to the ZFS snapshot. The few writes done
> in the meantime don't hurt to be copied twice.
>  

Hm, I suppose that's right, pending cache issues, perhaps?

(1) Write occurs; cached
(2) Bitmap is added
(3) Write occurs, cached
(4) ZFS snapshot is taken
(5) Data is flushed to backing storage.

Now, the ZFS snapshot is migrated, but is missing the writes that
occurred in (1) and (3).

Next, we mirror the data in the bitmap, but it only includes the data
from (3).

The target now appears to be missing the write from (1) -- maybe,
depending on how the volume snapshot occurs.
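
One way to narrow that window (a sketch, assuming qemu-guest-agent is
running in the guest; domain, node and bitmap names are illustrative) is
to quiesce guest I/O around the bitmap/snapshot pair:

# flush and freeze guest filesystems via the guest agent
virsh domfsfreeze guest4

# create the dirty bitmap while no new writes can be issued
virsh qemu-monitor-command guest4 \
  '{"execute": "block-dirty-bitmap-add",
    "arguments": {"node": "drive-scsi0-0-0-0", "name": "migrate0"}}'

# take the volume snapshot at the same quiesced point
zfs snapshot zstore/storage4@migrate0

virsh domfsthaw guest4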

> 
> 
> > 5. live migrate VM state to destination
> >
> > The point is that I'm not able to live-stream ZFS's changed data to
> > the destination to ensure the same volume state at the moment of
> > switchover of the migrated VM to the new hypervisor.
> 
> I'm a little concerned about the mixing of filesystem and block level
> snapshots...
> 
> 
> As I explained above, ZFS snapshots are also block level.
>  
> 
> 
> >
> >
> > From 7317d731d51c5d743d7a4081b368f0a6862856d7 Mon Sep 17 00:00:00 2001
> 
> What happened to your timestamp?
> 
> > From: Daniel Kucera
> > Date: Tue, 2 May 2017 15:00:39 +
> > Subject: [PATCH] migration: add incremental drive-mirror and blockdev-mirror
> 
> Your actual email subject here, however, is missing the [PATCH] tag,
> which is useful for it getting picked up by the patchew build bot.
> 
> >  with dirtymap added parameter bitmap which will be used instead of
> >  newly created dirtymap in mirror_start_job
> >
> > Signed-off-by: Daniel Kucera
> > ---
> >  block/mirror.c| 41 -
> >  blockdev.c|  6 +-
> >  include/block/block_int.h |  4 +++-
> >  qapi/block-core.json  | 12 ++--
> >  4 files changed, 42 insertions(+), 21 deletions(-)
> >
> > diff --git a/block/mirror.c b/block/mirror.c
> > index 9f5eb69..02b2f69 100644
> > --- a/block/mirror.c
> > +++ b/block/mirror.c
> > @@ -49,7 +49,7 @@ typedef struct MirrorBlockJob {
> >  BlockDriverState *to_replace;
> >  /* Used to block operations on the drive-mirror-replace target */
> >  Error *replace_blocker;
> > -bool is_none_mode;
> > +MirrorSyncMode sync_mode;
> >  BlockMirrorBackingMode backing_mode;
> >  BlockdevOnError on_source_error, on_target_error;
> >  bool synced;
> > @@ -523,7 +523,9 @@ static void mirror_exit(BlockJob *job, void *opaque)
> >  bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
> >  &error_abort);
> >  if (s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
> > -BlockDriverState *backing = s->is_none_mode ? src : s->base;
> > +BlockDriverState *backing =
> > +(s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
> > +(s->sync_mode == 

Re: [Qemu-devel] migration: add incremental drive-mirror and blockdev-mirror with dirtymap

2017-05-04 Thread Daniel Kučera
2017-05-04 1:44 GMT+02:00 John Snow:

>
>
> On 05/03/2017 03:56 AM, Daniel Kučera wrote:
> > Hi all,
> >
> > this patch adds the possibility to start mirroring from a specific
> > dirtyblock bitmap.
> > The use case is live migration with a ZFS volume used as the block
> > device:
> > 1. make dirtyblock bitmap in qemu
>
> A "block dirty bitmap," I assume you mean. Through which interface?
> "block dirty bitmap add" via QMP?
>

Yes.


> > 2. make ZFS volume snapshot
>
> ZFS Volume Snapshot is going to be a filesystem-level operation, isn't
> it? That is, creating this snapshot will necessarily create new dirty
> sectors, yes?
>

No, we are using "zfs volumes" which are block devices (similar to LVM)

# blockdev --report /dev/zstore/storage4
RO    RA   SSZ   BSZ   StartSec         Size   Device
rw    256   512  4096         0  42949672960   /dev/zstore/storage4

-drive
file=/dev/zstore/storage4,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,discard=unmap


> > 3. zfs send/receive the snapshot to target machine
>
> Why? Is this an attempt to make the process faster?
>

This preserves and transfers the whole chain of snapshots to the
destination host, not only the current state as it would be with
drive-mirror sync: full.
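
For example (snapshot and host names are illustrative):

# -R builds a replication stream that preserves the whole chain of
# snapshots up to the named one, not just the current state
zfs send -R zstore/storage4@migrate0 | ssh dest-host zfs receive -F zstore/storage4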


>
> > 4. start mirroring to destination with block map from step 1
>
> This makes me a little nervous: what guarantees do you have that the
> bitmap and the ZFS snapshot were synchronous?
>

It doesn't have to be synchronous (or atomic). The "block dirty bitmap"
just needs to be created prior to the ZFS snapshot. The few writes done in
the meantime don't hurt to be copied twice.


>
> > 5. live migrate VM state to destination
> >
> > The point is that I'm not able to live-stream ZFS's changed data to
> > the destination to ensure the same volume state at the moment of
> > switchover of the migrated VM to the new hypervisor.
>
> I'm a little concerned about the mixing of filesystem and block level
> snapshots...
>

As I explained above, ZFS snapshots are also block level.


>
> >
> >
> > From 7317d731d51c5d743d7a4081b368f0a6862856d7 Mon Sep 17 00:00:00 2001
>
> What happened to your timestamp?
>
> > From: Daniel Kucera 
> > Date: Tue, 2 May 2017 15:00:39 +
> > Subject: [PATCH] migration: add incremental drive-mirror and blockdev-mirror
>
> Your actual email subject here, however, is missing the [PATCH] tag,
> which is useful for it getting picked up by the patchew build bot.
>
> >  with dirtymap added parameter bitmap which will be used instead of newly
> >  created dirtymap in mirror_start_job
> >
> > Signed-off-by: Daniel Kucera 
> > ---
> >  block/mirror.c| 41 -----
> >  blockdev.c|  6 +-
> >  include/block/block_int.h |  4 +++-
> >  qapi/block-core.json  | 12 ++--
> >  4 files changed, 42 insertions(+), 21 deletions(-)
> >
> > diff --git a/block/mirror.c b/block/mirror.c
> > index 9f5eb69..02b2f69 100644
> > --- a/block/mirror.c
> > +++ b/block/mirror.c
> > @@ -49,7 +49,7 @@ typedef struct MirrorBlockJob {
> >  BlockDriverState *to_replace;
> >  /* Used to block operations on the drive-mirror-replace target */
> >  Error *replace_blocker;
> > -bool is_none_mode;
> > +MirrorSyncMode sync_mode;
> >  BlockMirrorBackingMode backing_mode;
> >  BlockdevOnError on_source_error, on_target_error;
> >  bool synced;
> > @@ -523,7 +523,9 @@ static void mirror_exit(BlockJob *job, void *opaque)
> >  bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
> >  &error_abort);
> >  if (s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
> > -BlockDriverState *backing = s->is_none_mode ? src : s->base;
> > +BlockDriverState *backing =
> > +(s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
> > +(s->sync_mode == MIRROR_SYNC_MODE_NONE) ? src : s->base;
> >  if (backing_bs(target_bs) != backing) {
> >  bdrv_set_backing_hd(target_bs, backing, &local_err);
> >  if (local_err) {
> > @@ -771,7 +773,8 @@ static void coroutine_fn mirror_run(void *opaque)
> >  mirror_free_init(s);
> >
> >  s->last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
> > -if (!s->is_none_mode) {
> > +if ((s->sync_mode != MIRROR_SYNC_MODE_INCREMENTAL) &&
> > +  (s->sync_mode != MIRROR_SYNC_MODE_NONE)) {
> >  ret = mirror_dirty_init(s);
> >  if (ret < 0 || block_job_is_cancelled(&s->common)) {
> >  goto immediate_exit;
> > @@ -1114,7 +1117,8 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
>
> Something appears to have corrupted your patch. Did you copy/paste this
> into gmail? I am unable to apply it.
>
> Please use "git send-email" as detailed in the wiki contributors guide.
>

Okay, I'll try to fix these issues and send the patch again, ha


>
> >   BlockCompletionFunc *cb,
> 

Re: [Qemu-devel] migration: add incremental drive-mirror and blockdev-mirror with dirtymap

2017-05-03 Thread John Snow


On 05/03/2017 03:56 AM, Daniel Kučera wrote:
> Hi all,
> 
> this patch adds the possibility to start mirroring from a specific
> dirtyblock bitmap.
> The use case is live migration with a ZFS volume used as the block device:
> 1. make dirtyblock bitmap in qemu

A "block dirty bitmap," I assume you mean. Through which interface?
"block dirty bitmap add" via QMP?

> 2. make ZFS volume snapshot

ZFS Volume Snapshot is going to be a filesystem-level operation, isn't
it? That is, creating this snapshot will necessarily create new dirty
sectors, yes?

> 3. zfs send/receive the snapshot to target machine

Why? Is this an attempt to make the process faster?

> 4. start mirroring to destination with block map from step 1

This makes me a little nervous: what guarantees do you have that the
bitmap and the ZFS snapshot were synchronous?

> 5. live migrate VM state to destination
> 
> The point is that I'm not able to live-stream ZFS's changed data to
> the destination to ensure the same volume state at the moment of
> switchover of the migrated VM to the new hypervisor.

I'm a little concerned about the mixing of filesystem and block level
snapshots...

> 
> 
> From 7317d731d51c5d743d7a4081b368f0a6862856d7 Mon Sep 17 00:00:00 2001

What happened to your timestamp?

> From: Daniel Kucera 
> Date: Tue, 2 May 2017 15:00:39 +
> Subject: [PATCH] migration: add incremental drive-mirror and blockdev-mirror

Your actual email subject here, however, is missing the [PATCH] tag,
which is useful for it getting picked up by the patchew build bot.

>  with dirtymap added parameter bitmap which will be used instead of newly
>  created dirtymap in mirror_start_job
> 
> Signed-off-by: Daniel Kucera 
> ---
>  block/mirror.c| 41 -
>  blockdev.c|  6 +-
>  include/block/block_int.h |  4 +++-
>  qapi/block-core.json  | 12 ++--
>  4 files changed, 42 insertions(+), 21 deletions(-)
> 
> diff --git a/block/mirror.c b/block/mirror.c
> index 9f5eb69..02b2f69 100644
> --- a/block/mirror.c
> +++ b/block/mirror.c
> @@ -49,7 +49,7 @@ typedef struct MirrorBlockJob {
>  BlockDriverState *to_replace;
>  /* Used to block operations on the drive-mirror-replace target */
>  Error *replace_blocker;
> -bool is_none_mode;
> +MirrorSyncMode sync_mode;
>  BlockMirrorBackingMode backing_mode;
>  BlockdevOnError on_source_error, on_target_error;
>  bool synced;
> @@ -523,7 +523,9 @@ static void mirror_exit(BlockJob *job, void *opaque)
>  bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
>  &error_abort);
>  if (s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
> -BlockDriverState *backing = s->is_none_mode ? src : s->base;
> +BlockDriverState *backing =
> +(s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
> +(s->sync_mode == MIRROR_SYNC_MODE_NONE) ? src : s->base;
>  if (backing_bs(target_bs) != backing) {
>  bdrv_set_backing_hd(target_bs, backing, &local_err);
>  if (local_err) {
> @@ -771,7 +773,8 @@ static void coroutine_fn mirror_run(void *opaque)
>  mirror_free_init(s);
> 
>  s->last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
> -if (!s->is_none_mode) {
> +if ((s->sync_mode != MIRROR_SYNC_MODE_INCREMENTAL) &&
> +  (s->sync_mode != MIRROR_SYNC_MODE_NONE)) {
>  ret = mirror_dirty_init(s);
>  if (ret < 0 || block_job_is_cancelled(&s->common)) {
>  goto immediate_exit;
> @@ -1114,7 +1117,8 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,

Something appears to have corrupted your patch. Did you copy/paste this
into gmail? I am unable to apply it.

Please use "git send-email" as detailed in the wiki contributors guide.
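
For example, assuming the patch is the top commit on your branch:

git send-email --to=qemu-devel@nongnu.org -1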

>   BlockCompletionFunc *cb,
>   void *opaque,
>   const BlockJobDriver *driver,
> - bool is_none_mode, BlockDriverState *base,
> + MirrorSyncMode sync_mode, const char *bitmap,
> + BlockDriverState *base,
>   bool auto_complete, const char
> *filter_node_name,
>   Error **errp)
>  {
> @@ -1203,7 +1207,7 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
>  s->replaces = g_strdup(replaces);
>  s->on_source_error = on_source_error;
>  s->on_target_error = on_target_error;
> -s->is_none_mode = is_none_mode;
> +s->sync_mode = sync_mode;
>  s->backing_mode = backing_mode;
>  s->base = base;
>  s->granularity = granularity;
> @@ -1213,9 +1217,16 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
>  s->should_complete = true;
>  }
> 
> -s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL,
> 

[Qemu-devel] migration: add incremental drive-mirror and blockdev-mirror with dirtymap

2017-05-03 Thread Daniel Kučera
Hi all,

this patch adds the possibility to start mirroring from a specific
dirtyblock bitmap.
The use case is live migration with a ZFS volume used as the block device:
1. make dirtyblock bitmap in qemu
2. make ZFS volume snapshot
3. zfs send/receive the snapshot to target machine
4. start mirroring to destination with block map from step 1
5. live migrate VM state to destination

The point is that I'm not able to live-stream ZFS's changed data to the
destination to ensure the same volume state at the moment of switchover of
the migrated VM to the new hypervisor.
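
Roughly, the intended sequence as a sketch, using the bitmap argument this
patch proposes (domain, bitmap, export and host names are illustrative):

# 1. create a dirty bitmap in qemu (QMP via libvirt monitor passthrough)
virsh qemu-monitor-command guest4 \
  '{"execute": "block-dirty-bitmap-add",
    "arguments": {"node": "drive-scsi0-0-0-0", "name": "migrate0"}}'

# 2. snapshot the ZFS volume
zfs snapshot zstore/storage4@migrate0

# 3. send/receive the snapshot (and its chain) to the target machine
zfs send -R zstore/storage4@migrate0 | ssh dest-host zfs receive -F zstore/storage4

# 4. mirror only blocks dirtied since step 1, using the proposed argument
virsh qemu-monitor-command guest4 \
  '{"execute": "drive-mirror",
    "arguments": {"device": "drive-scsi0-0-0-0",
                  "target": "nbd://dest-host:10809/storage4",
                  "mode": "existing",
                  "sync": "incremental", "bitmap": "migrate0"}}'

# 5. live-migrate the VM state once the mirror reaches ready
virsh migrate --live guest4 qemu+ssh://dest-host/system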


From 7317d731d51c5d743d7a4081b368f0a6862856d7 Mon Sep 17 00:00:00 2001
From: Daniel Kucera 
Date: Tue, 2 May 2017 15:00:39 +
Subject: [PATCH] migration: add incremental drive-mirror and blockdev-mirror
 with dirtymap added parameter bitmap which will be used instead of newly
 created dirtymap in mirror_start_job

Signed-off-by: Daniel Kucera 
---
 block/mirror.c| 41 -
 blockdev.c|  6 +-
 include/block/block_int.h |  4 +++-
 qapi/block-core.json  | 12 ++--
 4 files changed, 42 insertions(+), 21 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 9f5eb69..02b2f69 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -49,7 +49,7 @@ typedef struct MirrorBlockJob {
 BlockDriverState *to_replace;
 /* Used to block operations on the drive-mirror-replace target */
 Error *replace_blocker;
-bool is_none_mode;
+MirrorSyncMode sync_mode;
 BlockMirrorBackingMode backing_mode;
 BlockdevOnError on_source_error, on_target_error;
 bool synced;
@@ -523,7 +523,9 @@ static void mirror_exit(BlockJob *job, void *opaque)
 bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
 &error_abort);
 if (s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
-BlockDriverState *backing = s->is_none_mode ? src : s->base;
+BlockDriverState *backing =
+(s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
+(s->sync_mode == MIRROR_SYNC_MODE_NONE) ? src : s->base;
 if (backing_bs(target_bs) != backing) {
 bdrv_set_backing_hd(target_bs, backing, &local_err);
 if (local_err) {
@@ -771,7 +773,8 @@ static void coroutine_fn mirror_run(void *opaque)
 mirror_free_init(s);

 s->last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
-if (!s->is_none_mode) {
+if ((s->sync_mode != MIRROR_SYNC_MODE_INCREMENTAL) &&
+  (s->sync_mode != MIRROR_SYNC_MODE_NONE)) {
 ret = mirror_dirty_init(s);
 if (ret < 0 || block_job_is_cancelled(&s->common)) {
 goto immediate_exit;
@@ -1114,7 +1117,8 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
  BlockCompletionFunc *cb,
  void *opaque,
  const BlockJobDriver *driver,
- bool is_none_mode, BlockDriverState *base,
+ MirrorSyncMode sync_mode, const char *bitmap,
+ BlockDriverState *base,
  bool auto_complete, const char
*filter_node_name,
  Error **errp)
 {
@@ -1203,7 +1207,7 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
 s->replaces = g_strdup(replaces);
 s->on_source_error = on_source_error;
 s->on_target_error = on_target_error;
-s->is_none_mode = is_none_mode;
+s->sync_mode = sync_mode;
 s->backing_mode = backing_mode;
 s->base = base;
 s->granularity = granularity;
@@ -1213,9 +1217,16 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
 s->should_complete = true;
 }

-s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
-if (!s->dirty_bitmap) {
-goto fail;
+if (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+s->dirty_bitmap = bdrv_find_dirty_bitmap(bs, bitmap);
+if (!s->dirty_bitmap) {
+goto fail;
+}
+} else {
+s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
+if (!s->dirty_bitmap) {
+goto fail;
+}
 }

 /* Required permissions are already taken with blk_new() */
@@ -1265,24 +1276,19 @@ fail:
 void mirror_start(const char *job_id, BlockDriverState *bs,
   BlockDriverState *target, const char *replaces,
   int64_t speed, uint32_t granularity, int64_t buf_size,
-  MirrorSyncMode mode, BlockMirrorBackingMode backing_mode,
+  MirrorSyncMode mode, const char *bitmap,
+  BlockMirrorBackingMode backing_mode,
   BlockdevOnError on_source_error,
   BlockdevOnError on_target_error,
   bool unmap, const char *filter_node_name, Error **errp)
 {
-bool