Hello,
to pick up this older one:
On 2/16/20 11:28 AM, Roland @web.de wrote:
>> why do i need to have the same local storage name when migrating a vm
>> from node1 to node2 in a dual-node cluster with local disks ?
>>
>> i'm curious that migration is possible in online state (which is a much
>> more complex/challenging task) without a problem, but offline i get
>> "storage is not available on selected target"
>> (because there are different zfs pools on both machines)
> This is because offline and online migration use two very different
> mechanisms.
> AFAIK QEMU NBD is used for online migration and ZFS send->recv is used
> for offline migration.
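(For anyone following along, the offline path for ZFS-backed zvols boils down to something like the following sketch. The pool names, dataset paths and the target hostname here are placeholders I made up for illustration, not what Proxmox uses verbatim:)

```
# take a snapshot and stream it to the other node's pool;
# "rpool", "tank" and "node2" are assumed example names
zfs snapshot rpool/data/vm-100-disk-0@migrate
zfs send -R rpool/data/vm-100-disk-0@migrate | \
    ssh root@node2 zfs recv tank/data/vm-100-disk-0
```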
I had a closer look at offline migration, and apparently zfs send->recv is
only used with ZVOLs, the default for VMs on ZFS.
For regular (qcow2/raw/...) files on any filesystem (even ZFS), pvesm
export/import is used instead.
This works in a straightforward way, but apparently the corresponding logic
is missing inside Proxmox, including the parameterization in the web GUI
(and probably error handling etc.) !?
For example, on the target system I can open a "receiver" like this:

# pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size tcp://10.16.37.0/24 -with-snapshots 1 -allow-rename 1

whereas on the source I can send the data like this:

# /sbin/pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1 | mbuffer -O 10.16.37.55:60000
So apparently what's needed already exists at the base level...
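(For what it's worth, the two halves could presumably also be chained directly over ssh instead of going through mbuffer and a TCP port. This is only a sketch under the same assumptions as above, with ${SOURCEDS}/${TARGETDS} and "node2" as placeholders:)

```
# stream the export on the source straight into an import on the target;
# "-" tells pvesm export to write to stdout
pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1 \
  | ssh root@node2 pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1 -allow-rename 1
```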
>> i guess there is no real technical hurdle, it just needs to get
>> implemented appropriately !?
> There is a patch in the works to make different target storages possible
> for offline migration.
Has there been any progress on this in the meantime?
regards
Roland
_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user