If a VM configuration has been moved manually or migrated by HA,
there is no replication job state on the new node.
In this case, the replication snapshots still exist on the remote side.
It must be possible to remove a job without state,
otherwise a new replication job targeting the same remote node will fail
and the disks would have to be removed manually.
By deriving the storage list from the sorted_volumes generated from the
VMID.conf, we can be sure that every disk will be removed in the event
of a full job removal on the remote side.

In the end, remote_prepare_local_job invokes prepare on the remote side.
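
The storage list derivation used in the patch can be sketched in plain
Perl. This is a standalone illustration: the stand-in parse_volume_id
below only mimics the relevant behaviour of
PVE::Storage::parse_volume_id (splitting a "storeid:volname" volume ID
at the first colon), and the volume IDs are hypothetical example values:

```perl
use strict;
use warnings;

# Standalone stand-in for PVE::Storage::parse_volume_id:
# a volume ID has the form "<storeid>:<volname>".
sub parse_volume_id {
    my ($volid) = @_;
    my ($storeid, $volname) = split /:/, $volid, 2;
    return ($storeid, $volname);
}

# Example volume IDs as they would appear in sorted_volids,
# derived from the disk entries in the VMID.conf (made-up values).
my $sorted_volids = [ 'local-zfs:vm-100-disk-1', 'local-zfs:vm-100-disk-2' ];

# Same construction as in the patch: keep only the storage part
# of each volume ID.
my $store_list = [ map { (parse_volume_id($_))[0] } @$sorted_volids ];

print join(',', @$store_list), "\n";   # local-zfs,local-zfs
```

Duplicate storage IDs in the resulting list are harmless for the
remote prepare call, so no deduplication is needed here.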
---
 PVE/Replication.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index 9bc4e61..6a20ba2 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -200,8 +200,10 @@ sub replicate {
 
        if ($remove_job eq 'full' && $jobcfg->{target} ne $local_node) {
            # remove all remote volumes
+           my $store_list = [ map { (PVE::Storage::parse_volume_id($_))[0] } @$sorted_volids ];
+
            my $ssh_info = PVE::Cluster::get_ssh_info($jobcfg->{target});
-           remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $state->{storeid_list}, 0, undef, 1, $logfunc);
+           remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $store_list, 0, undef, 1, $logfunc);
 
        }
        # remove all local replication snapshots (lastsync => 0)
-- 
2.11.0


_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel