Hi,
I'm beginning to look at a storage migration implementation.
Do you think we need to extend qm migrate ?
something like:
qm migrate vmid target --virtio0 local:raw --virtio1 nfs:qcow2 --virtio3 rbd:
1) if targethost = sourcehost, then we only migrate storage
2) if targethost != sourcehost, we migrate the storage, then the vm
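The proposed option syntax and the two cases could be sketched roughly like this (Python for illustration only; the real code would be Perl in qm/QemuServer, and the helper names here are hypothetical):

```python
# Sketch of parsing the proposed per-disk options ('storeid:fmt', format
# optional as in 'rbd:') and choosing between the two migration cases.
# parse_disk_opt and plan_migration are illustrative names, not PVE code.

def parse_disk_opt(value):
    """Split 'storeid:fmt' into (storeid, fmt); fmt may be absent, e.g. 'rbd:'."""
    storeid, _, fmt = value.partition(":")
    return storeid, fmt or None

def plan_migration(source_host, target_host):
    """Case 1: same host -> storage-only move. Case 2: storage, then the vm."""
    if source_host == target_host:
        return "storage-only"
    return "storage-then-vm"
```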
I'm looking at your drive-mirror code, about this part:
# if writes to disk occurs the disk needs to be freezed
# to be able to complete the migration
Are you sure that drive-mirror cannot handle new writes during the mirroring ?
I think there must be a dirty bitmap of new writes somewhere; maybe such a mechanism already exists.
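For what it's worth, a mirroring job of this kind can be pictured as a bulk copy followed by re-copy passes over blocks a dirty bitmap marks as rewritten. A toy model of that loop (this is an illustration of the idea, not QEMU's drive-mirror code):

```python
# Toy model of mirroring with a dirty bitmap: an initial bulk copy phase,
# then repeated passes over blocks dirtied by concurrent guest writes,
# until no dirty blocks remain.

def mirror(disk, write_log):
    """disk: dict block->data on the source. write_log: list of passes,
    each a dict of block->data the guest writes during that pass."""
    target = {}
    dirty = set(disk)                # bulk phase: every block starts dirty
    passes = iter(write_log)
    while dirty:
        for block in sorted(dirty):  # flush currently dirty blocks
            target[block] = disk[block]
        dirty = set()
        new_writes = next(passes, {})
        for block, data in new_writes.items():
            disk[block] = data
            dirty.add(block)         # the bitmap records blocks to re-copy
    return target
```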
Hi,
I still can't figure out why it should be necessary to migrate the vm if
the storage is migrated to another host.
Especially in the situation where a host has more than one disk and you
then migrate one of the disks. Should the vm still be migrated then? And
what about the remaining disks?
I was not aware of the pause command. I will try using that command
instead of savevm-start/end.
The reason for a possible requirement to freeze the vm is caused by the
fact that you could have continuous writes to the disk being migrated,
in which case, since local writes are obviously faster than remote
writes, the temporary backend will never be completely flushed. I have
tested this.
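Michael's concern can be illustrated with a very rough rate model: if the guest dirties blocks at least as fast as the mirror can flush them, the dirty count never reaches zero, which is exactly why some freeze or throttle is needed. A toy calculation (hypothetical helper, illustration only):

```python
def remaining_dirty(dirty, copy_rate, write_rate, ticks):
    """Rough model: each tick we flush up to copy_rate dirty blocks to the
    remote, while the guest dirties write_rate new ones locally.
    Returns the number of dirty blocks left after `ticks` ticks."""
    for _ in range(ticks):
        dirty = max(0, dirty - copy_rate) + write_rate
    return dirty
```

With write_rate >= copy_rate the backlog never drains no matter how long you wait, matching the observation that the temporary backend is never completely flushed.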
On 01-17-2013 12:00, Alexandre DERUMIER wrote:
How do you think it can work if the storage is attached on the target
host, but not on the source host, and the vm is running on the source
host ?
I see your point. This particular situation is not handled in my code,
since it will only migrate between commonly attached storage backends. The
case of migrating to an unattached storage is, in my opinion, a completely
different use case.
Yes, I know.
But it's not too much different; we can reuse code, just using nbd as
target instead of a volume file/device.
If targethost = sourcehost, then we simply migrate the disks; if
targethost != sourcehost, we mirror the disks with nbd (only if the
storage is non-shared on both hosts), then we migrate the vm.
Maybe Dietmar has an opinion about this ?
What do we present on the GUI to the user?
On 01-17-2013 12:47, Alexandre DERUMIER wrote:
Yes, I know.
But it's not too much different; we can reuse code, just using nbd as
target instead of a volume file/device.
I was looking at nbd before I was aware that qm migrate was able to
do the job. The changes are relatively simple.
What do we present on the GUI to the user?
I was thinking of extending the current migrate panel, maybe adding a
new tab: for each disk of the vm, display the storage list of the target
node; the default is the disk's current storage.
Main tab:
Target Node : node list
Online: checkbox
Storage tab:
virtio0: storage list
What do we present on the GUI to the user?
I was thinking of extending the current migrate panel, maybe adding a
new tab: for each disk of the vm, display the storage list of the target
node; the default is the disk's current storage.
Ah, yes. That sounds reasonable.
I guess we only talk about nodes inside the cluster here.
I agree too.
----- Original Message -----
From: Dietmar Maurer diet...@proxmox.com
To: Michael Rasmussen m...@datanom.net, Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, January 17, 2013 14:49:07
I'll make a proof of concept, to see if it looks good.
----- Original Message -----
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com, Michael Rasmussen m...@datanom.net
Sent: Thursday, January 17, 2013 14:48:20
Subject: RE:
Hi, here is a new version with storage migration (local node only for the moment).
changelog:
- rebase on last git
- corrections on drive-mirror based on Michael's work
- add qm migrate -storage (detail in commit)
___
pve-devel mailing list
return 1 if current is template
return 2 if a snapshot is a template
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 16
1 file changed, 16 insertions(+)
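The return codes described in the commit message (1 for a current-config template, 2 for a snapshot template) could be modelled like this (Python illustration with a hypothetical config shape; the actual patch is Perl in PVE/QemuServer.pm):

```python
def template_status(conf):
    """Mirror the commit message's contract: return 1 if the current config
    is a template, 2 if any snapshot is, 0 otherwise. `conf` is a dict
    loosely shaped like a VM config with an optional 'snapshots' map."""
    if conf.get("template"):
        return 1
    for snap in conf.get("snapshots", {}).values():
        if snap.get("template"):
            return 2
    return 0
```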
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 586fce8..dc0f3ec 100644
---
A template is a protected config + volumes.
It can be the current config or a snapshot.
template_create:
- we lock the volume if the storage needs it for clone (files or rbd)
- then we add template:1 to the config (current or snapshot)
template_delete:
- we need to check whether clones of the volumes exist before removing the template
If a qcow2 current is a template, we can't roll back to a previous snapshot.
(Note that the file readonly protection already does the job, but we need a
clear error message for the user.)
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm |2 ++
1 file changed, 2 insertions(+)
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuMigrate.pm |6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index de2ee57..2b79025 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -245,6 +245,12 @@ sub sync_disks {
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/API2/Qemu.pm |2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 91cf569..63dbd33 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -873,6 +873,8 @@ __PACKAGE__->register_method({
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/API2/Qemu.pm |2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0444fed..633245c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2159,12 +2159,14 @@ __PACKAGE__->register_method({
linked clone
qm create vmid --clonefrom vmidsrc [--snapname snap] [--clonemode clone]
copy clone : dest storage = source storage
--
qm create vmid --clonefrom vmidsrc [--snapname snap] --clonemode copy
copy clone : dest storage != source storage
currently only for local node, works online or offline
syntax:
qm migrate vmid target -storage -[virtiox|idex|scsix|satax] storeid:[fmt]
sample:
qm migrate vmid target -storage -virtio0 local:raw -scsi1 local:qcow2 -sata3 rbdstorage: --ide4 lvmstorage:
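The option names in that syntax (virtioX, ideX, scsiX, sataX, each taking storeid:[fmt]) can be validated with a simple pattern; a hypothetical Python sketch of such a check (the real validation would use PVE's Perl option schema):

```python
import re

# Accept only the drive option names from the proposed syntax, e.g.
# {'virtio0': 'local:raw', 'sata3': 'rbdstorage:'} (empty format allowed).
DRIVE_RE = re.compile(r"^(virtio|ide|scsi|sata)\d+$")

def check_storage_opts(opts):
    """Raise ValueError on an unknown drive name or a missing storage id."""
    for drive, value in opts.items():
        if not DRIVE_RE.match(drive):
            raise ValueError(f"unknown drive option: {drive}")
        storeid, _, fmt = value.partition(":")
        if not storeid:
            raise ValueError(f"missing storage id for {drive}")
    return True
```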
fixme: how to handle snapshots ? delete
If files (raw, qcow2) are a template, we forbid vm_start.
Note: the readonly protection already does the job, but we need a clear
message for users.
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm |2 ++
1 file changed, 2 insertions(+)
diff --git
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 66 +
1 file changed, 66 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6354bc3..30a890a 100644
--- a/PVE/QemuServer.pm
+++
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm |2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1e21bbc..a965136 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1987,6 +1987,8 @@ sub vmstatus {
Also works with a snapshot as source for qcow2, rbd, sheepdog.
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 51 +++
1 file changed, 51 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index
Hi all,
Attached you will find rc3.
Changes:
- Added user option allowing users to instruct not to suspend but
rather wait for disk writes to end
- Added user option allowing users to configure max wait time
- Use vm_suspend, vm_resume instead of savevm-start/end
- Fixed a small bug
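The "wait for disk writes to end, with a max wait time" option from rc3 amounts to a bounded polling loop; a rough sketch of that idea (hypothetical names and shape, not the actual patch, which is Perl):

```python
import time

def wait_for_quiesce(pending_writes, max_wait, poll=0.01):
    """Poll pending_writes() until it reports 0 outstanding disk writes or
    max_wait seconds elapse. Returns True if writes drained in time,
    False if the configured limit was hit (caller may then suspend)."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if pending_writes() == 0:
            return True
        time.sleep(poll)
    return pending_writes() == 0
```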