Otherwise, there is nothing enforcing that the drive mirror is ready
when the migration inactivates the block devices, which can lead to a
failing assertion:
> ../block/io.c:2026: bdrv_co_write_req_prepare: Assertion
> `!(bs->open_flags & BDRV_O_INACTIVE)' failed.

QAPI documentation of 'write-blocking' (currently the only alternative
to the default 'background' mode):
> when data is written to the source, write it (synchronously) to the
> target as well. In addition, data is copied in background just like
> in background mode.
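For context, requesting this mode through QMP looks roughly like the following (a sketch only; the device name and NBD target here are illustrative placeholders, not values from this patch):

```json
{ "execute": "drive-mirror",
  "arguments": {
    "device": "drive-scsi0",
    "target": "nbd:127.0.0.1:10809:exportname=drive-scsi0",
    "sync": "full",
    "mode": "existing",
    "copy-mode": "write-blocking" } }
```

With 'write-blocking', once the job reaches ready it can no longer fall behind, because every guest write is mirrored synchronously before it completes.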

Reported in the community forum [0] (and likely [1]).

Reproduced consistently with a 1 core, 1 GiB RAM, 4 GiB disk Debian 11
VM. I added a 5 second sleep before issuing the migrate QMP command
and executed the following in the VM after the drive-mirror first
became ready:
> fio --name=make-mirror-work --size=100M --direct=1 --rw=randwrite \
>     --bs=4k --ioengine=psync --numjobs=5 --runtime=60 --time_based
This ensures that there is a large number of dirty clusters and that
the mirror still has work to do when the block device is inactivated.

[0] https://forum.proxmox.com/threads/111831/
[1] https://forum.proxmox.com/threads/100020/

Signed-off-by: Fiona Ebner <f.eb...@proxmox.com>
---
 PVE/QemuMigrate.pm | 14 +++++++++++++-
 PVE/QemuServer.pm  |  6 +++++-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index d52dc8db..dd6b073e 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -831,7 +831,19 @@ sub phase2 {
            my $bitmap = $target->{bitmap};
 
            $self->log('info', "$drive: start migration to $nbd_uri");
-           PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
+           PVE::QemuServer::qemu_drive_mirror(
+               $vmid,
+               $drive,
+               $nbd_uri,
+               $vmid,
+               undef,
+               $self->{storage_migration_jobs},
+               'skip',
+               undef,
+               $bwlimit,
+               $bitmap,
+               1,
+           );
        }
     }
 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4e85dd02..2901cd83 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7450,7 +7450,7 @@ sub qemu_img_format {
 }
 
 sub qemu_drive_mirror {
-    my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, $completion, $qga, $bwlimit, $src_bitmap) = @_;
+    my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, $completion, $qga, $bwlimit, $src_bitmap, $write_blocking) = @_;
 
     $jobs = {} if !$jobs;
 
@@ -7477,6 +7477,10 @@ sub qemu_drive_mirror {
     my $opts = { timeout => 10, device => "drive-$drive", mode => "existing", sync => "full", target => $qemu_target };
     $opts->{format} = $format if $format;
 
+    # Relevant for migration, to ensure that the mirror will be ready (after being ready once) when
+    # the migration inactivates the block drives.
+    $opts->{'copy-mode'} = 'write-blocking' if $write_blocking;
+
     if (defined($src_bitmap)) {
        $opts->{sync} = 'incremental';
        $opts->{bitmap} = $src_bitmap;
-- 
2.30.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
