Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> What do you think about:
> # qm move vmid disk --storage storeid --format [raw|qcow2|vmdk]

But what if you want to move several disks? Or don't we need that?

# qm move vmid -drive storage_and_format_spec

for example:

# qm move 100 -ide0 local -ide1 local:qcow2

I am not sure about that - better ideas?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> But what if you want to move several disks? Or don't we need that?

Not sure we really need to do this for multiple drives in one call. For qm resize, we only do it for one drive. For the GUI, we can simply add a button like resize to move the selected volume.

I already have a patch for:

# qm move vmid drive storage --format qcow2|raw|vmdk

Maybe "qm move vmid -drive storage_and_format_spec" is better for a single drive, or not?
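The single-drive interface proposed above can be sketched as option parsing only. This is a hypothetical illustration (the real qm tool is written in Perl; the positional vmid/drive/storage plus optional --format shape is taken from the proposal in this thread):

```python
import argparse

# Hypothetical sketch of the proposed single-drive "qm move" interface.
# The real qm is a Perl tool; this only models the option shape under
# discussion (positional vmid/drive/storage, optional --format).
def parse_move_args(argv):
    parser = argparse.ArgumentParser(prog="qm move")
    parser.add_argument("vmid", type=int)          # e.g. 100
    parser.add_argument("drive")                   # e.g. virtio0, ide1
    parser.add_argument("storage")                 # target storage id
    parser.add_argument("--format", choices=["raw", "qcow2", "vmdk"],
                        default=None)              # format is optional
    return parser.parse_args(argv)

args = parse_move_args(["100", "virtio0", "local", "--format", "qcow2"])
print(args.vmid, args.drive, args.storage, args.format)  # → 100 virtio0 local qcow2
```

Leaving --format unset would keep the current format, matching "format is optional" below in the thread.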
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> Yes, I think we can have 1 job per drive at the same time (as we can only do 1 block-job-complete).

Sorry, but why do you want to copy one drive multiple times?
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> Not sure we really need to do this for multiple drives in 1 call. For qm resize, we only do this for 1 drive. For the GUI, we can simply add a button like resize to move the selected volume. I already have a patch for:
> # qm move vmid drive storage --format qcow2|raw|vmdk

Yes, I guess that is good enough to start with.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > If such a restriction exists, we need to remove it anyway.
> Out of my competence ;) But maybe you know how to do this in the qemu code.

About qemu-img vs drive-mirror: I see a big speed difference for a 32 GB file, migrated on the same local storage, qcow2 -> raw, both with writeback cache:

qemu-img: 30s
drive-mirror (with paused vm): 5min

So I think the drive-mirror code is more complex than qemu-img.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> I see a big speed difference for a 32 GB file, migrated on the same local storage, qcow2 -> raw, both writeback:
> qemu-img: 30s
> drive-mirror (with paused vm): 5min

This should be as fast as qemu-img. Please can you post your findings on the pve-devel list - this looks like a serious bug to me.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > Sorry, but why do you want to copy one drive multiple times?
> This is about vm cloning (not storage migration, but it is a disk copy in both cases). Multiple users can clone a template vm at the same time. Another thing qemu-img can do is copy a disk snapshot (qcow2, rbd, sheepdog).

OK, I see. But then template copy/clone only works if the template is offline? (yes, I know - I should read your code instead)
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> This should be as fast as qemu-img. Please can you post your findings on the pve-devel list - this looks like a serious bug to me.

I'll send a msg to the qemu mailing list. Maybe with drive-mirror the target file is opened with directsync?
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > This should be as fast as qemu-img.
> I'll send a msg to the qemu mailing list. Maybe with drive-mirror the target file is opened with directsync?

I have no idea, but that does not sound reasonable to me.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > My disk copy code just uses qemu-img if offline, or drive-mirror if online.
> So we have different restrictions (we can run multiple copies with qemu-img, but only one when the VM is online)? And what if we copy from template?

Yes. (But I don't allow a template to be running.)

I think a live vm copy doesn't need to be done multiple times at once. It can be useful to clone a running vm to run tests on it. A template is more useful as a gold image (with some scripts on linux to change the hostname, or sysprep under windows) that we want to clone to deploy new vms. So it makes sense to be able to run multiple copies in parallel.

Maybe you can test my patches to see how I have implemented it?
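The parallel-clone argument can be illustrated with a minimal, self-contained sketch: one template cloned to several targets concurrently, as independent qemu-img runs would allow. Plain file copies stand in for qemu-img here, and all paths and names are hypothetical:

```python
import os
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of cloning one template image to several targets in
# parallel, as multiple independent qemu-img runs would allow. A plain
# file copy stands in for "qemu-img convert"; all names are hypothetical.
def clone_template(template_path, target_paths, workers=4):
    def copy_one(dst):
        shutil.copyfile(template_path, dst)  # stand-in for qemu-img convert
        return dst
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(copy_one, target_paths))

tmpdir = tempfile.mkdtemp()
template = os.path.join(tmpdir, "base-100-disk-0.raw")
with open(template, "wb") as f:
    f.write(b"\0" * 4096)  # pretend 4 KiB "disk image"

targets = [os.path.join(tmpdir, "vm-%d-disk-0.raw" % vmid)
           for vmid in (201, 202, 203)]
cloned = clone_template(template, targets)
print(len(cloned))  # → 3
```

Each clone is an independent read of the template, which is why this only works safely while the template itself is not being written to - matching the "template must not be running" restriction above.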
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> Maybe you can test my patches to see how I have implemented it?

As you know, this is on my TODO list (next task). But I still need to resolve a bug (hopefully the last) in the backup code.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> As you know, this is on my TODO list (next task). But I still need to resolve a bug (hopefully the last) in the backup code.

Sure, no problem - I know you are very busy with the backup code.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
I just found a patch series from last week on the qemu-devel list:

[Qemu-devel] [PATCH v2 00/12] Drive mirroring performance improvements

So I guess they are already aware of that problem.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> So I guess they are already aware of that problem.

Yes, Paolo has also replied to me:

> > Any idea why drive-mirror is so slow? (maybe does it use directsync when mirroring?)
> No, it doesn't. Probably it's because the image is sparse? The current code in git master has a very coarse granularity (1 MB). Please try the blkmirror-job-1.4 branch from my github repo (git://github.com/bonzini/qemu.git). That branch uses the qcow2 file's cluster size as the granularity, and has other optimizations that kick in when the image is sparse.

I think it's related to the sparse file. My 32 GB file has only around 2 GB of data, and qemu-img seems to skip the unallocated parts, but drive-mirror does not.
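A back-of-the-envelope sketch of the two effects described above, using the numbers reported in this thread (32 GB virtual size, roughly 2 GB allocated); the 64 KiB cluster size is an assumed qcow2 default, not something stated in the thread:

```python
# Back-of-the-envelope numbers for the two effects described above,
# based on the reported measurement (32 GB image, ~2 GB allocated).
GiB = 1024 ** 3

# 1) Hole detection: qemu-img convert skips unallocated clusters, so it
#    moves only the allocated data; a mirror that cannot detect holes
#    copies the full virtual size.
copied_by_qemu_img = 2 * GiB
copied_by_naive_mirror = 32 * GiB
print(copied_by_naive_mirror // copied_by_qemu_img)  # → 16

# 2) Granularity: with a 1 MiB dirty-tracking granularity, a single
#    4 KiB guest write forces a 1 MiB copy; at an (assumed) 64 KiB
#    qcow2 cluster size, it forces only 64 KiB.
print((1024 * 1024) // (64 * 1024))  # → 16
```

Either effect alone is enough to explain a 30s-vs-5min gap on a mostly-empty image.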
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> There is no need to change the storage name - we just migrate from 'local:disk1' to 'local:disk1' (the storage/image is not changed, but we copy the data because the storage is not shared).

OK, it will indeed be easier to manage - no need to update the vm config file or to pass new arguments.

So I'm not sure - maybe it doesn't make sense to extend qm migrate? Maybe a new "qm storagemigrate" that only moves between storages on a single node is enough?
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> So I'm not sure - maybe it doesn't make sense to extend qm migrate?

No, I guess it does not really make sense.

> Maybe a new "qm storagemigrate" that only moves between storages on a single node is enough?

Yes. Or maybe qm move?
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > I just think this is enough for remote migration. For local VMs we can have storage migration between any accessible storage.
> I second this. Just a thought: would it not be of interest for the users to be able to change the storage format? e.g. hostX:$vmid/vm-$vmid-disk-N.{raw,qcow2} -> hostY:$vmid/vm-$vmid-disk-N.{raw,qcow2}

Yes, I guess that is easy to implement (for local storage migration). Both drive-mirror and qemu-img convert can do this on the fly, and some format change needs to be supported anyway if the storage changes from a file system to a block device.

Would it be possible to do all the work using 'qmp' commands? We can simply start the VM if the VM is offline (paused state). I currently do that with the new backup code. I guess that would make the whole thing much simpler?
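For the offline path, the on-the-fly format change is a single qemu-img convert invocation (-f names the source format, -O the output format). A small sketch that builds the argv such a call would use; the paths are hypothetical examples:

```python
# Sketch of the qemu-img side of an offline format change: build the
# argv for "qemu-img convert" (-f = source format, -O = output format).
# Paths here are hypothetical examples.
def convert_cmd(src, src_fmt, dst, dst_fmt):
    return ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst]

cmd = convert_cmd("/var/lib/vz/images/100/vm-100-disk-1.raw", "raw",
                  "/var/lib/vz/images/100/vm-100-disk-1.qcow2", "qcow2")
print(" ".join(cmd))
```

The online equivalent is a drive-mirror job with a "format" argument on the target, so both paths can share the same storage-side volume allocation.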
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > hostX:$vmid/vm-$vmid-disk-N.{raw,qcow2} -> hostY:$vmid/vm-$vmid-disk-N.{raw,qcow2}
> Yes, I guess that is easy to implement (for local storage migration).

I'm already working on this in my patches (qemu-img or drive-mirror).

> Would it be possible to do all the work using 'qmp' commands? We can simply start the VM if the VM is offline (paused state). I currently do that with the new backup code. I guess that would make the whole thing much simpler?

Not sure about that. About my template patches: I can see one thing qemu-img does better - we can run multiple copies at once. With drive-mirror, we can only do 1 drive-mirror job at once.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
What do you think about:

# qm move vmid disk --storage storeid --format [raw|qcow2|vmdk]

The format is optional.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
On Sun, 20 Jan 2013 10:15:56 +0000, Dietmar Maurer wrote:

> Would it be possible to do all the work using 'qmp' commands? We can simply start the VM if the VM is offline (paused state). I currently do that with the new backup code.

But what if the VM is offline as in shut down? Would you then bring it up behind the user's back? If the VM is offline as in paused state, could we not use qmp commands without bringing it online anyway?

> I guess that would make the whole thing much simpler?

True, but sometimes live is hard :-)

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > Would it be possible to do all the work using 'qmp' commands? We can simply start the VM if the VM is offline (paused state). I currently do that with the new backup code. I guess that would make the whole thing much simpler?
> Not sure about that. About my template patches: I can see one thing qemu-img does better - we can run multiple copies at once. With drive-mirror, we can only do 1 drive-mirror job at once.

Are you sure? Usually you can have one job per drive? If such a restriction exists, we need to remove it anyway.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> We only want to be able to migrate local disks (changing storage is not really needed?). I guess that would make 'remote' migrate much easier?

I think the main feature is indeed live migration from local non-shared storage to another remote local non-shared storage.

I see another usage (for my own need): migration between 2 remote datacenters (one proxmox cluster across both), with a local san in each datacenter. Each datacenter can use different storages, with different protocols (nfs -> iscsi for example).

What do you mean by "changing storage is not really needed?" Do you mean changing the storage format? (local raw -> remote local lvm for example) Because for online remote storage migration with NBD, it's easy to mirror from any source storage to any remote storage.
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> What do you mean by "changing storage is not really needed?" Do you mean changing the storage format? (local raw -> remote local lvm for example) Because for online remote storage migration with NBD, it's easy to mirror from any source storage to any remote storage.

There is no need to change the storage name - we just migrate from 'local:disk1' to 'local:disk1' (the storage/image is not changed, but we copy the data because the storage is not shared).
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> > What do you mean by "changing storage is not really needed?" Do you mean changing the storage format?
> There is no need to change the storage name - we just migrate from 'local:disk1' to 'local:disk1' (the storage/image is not changed, but we copy the data because the storage is not shared).

I just think this is enough for remote migration. For local VMs we can have storage migration between any accessible storage.
[pve-devel] vm migration + storage migration with ndb workflow notes
Hi,

I have finally understood the whole workflow for migrating a vm + its storage at the same time. (I don't have too much time to work on this - maybe it's better to have stable working disk clone/copy/live mirror code first.)

Here are some notes:

phase1
------
target host: create new volumes if storage != shared
- add a new qm command?

phase2
------
1) target host: send qm start to the target. We need to start the target vm with the new disk locations.
- How to do this? Currently the target vm reads the vm config file. Do we need to update the config file before mirroring, or pass drive parameters in the qm start command?

2) target host: start the nbd server:
nbd_server_start ip:port
and add the drives to mirror to nbd:
nbd_server_add drive-virtio0
nbd_server_add drive-virtio1

3) source host: start mirroring of the drives:
drive-mirror target = nbd:host:port:exportname=drive-virtioX
When drive-mirror is finished (block-job-complete), the source vm will continue to access the volume on the remote host through nbd.

4) start vm migration ... end of vm migration.

phase3
------
1) target host: resume the vm, then nbd_server_stop
2) source host: delete the source mirrored volumes

Other notes: how do we handle "migrate cancel" if the drives are already mirrored and the source vm accesses them through nbd on the target host?
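Phases 2 and 3 above map to a small sequence of QMP commands (nbd-server-start, nbd-server-add, drive-mirror, block-job-complete, nbd-server-stop). A sketch of that sequence as JSON-style payloads - host, port, and device names are hypothetical, and the exact argument spellings should be checked against the qemu version in use:

```python
# Sketch of the QMP payloads behind phases 2 and 3. Host, port, and
# device names are hypothetical; verify argument spellings against the
# qemu version in use.
def target_host_cmds(listen_ip, port, devices):
    cmds = [{"execute": "nbd-server-start",
             "arguments": {"addr": {"type": "inet",
                                    "data": {"host": listen_ip,
                                             "port": str(port)}}}}]
    for dev in devices:  # export each drive that must be mirrored
        cmds.append({"execute": "nbd-server-add",
                     "arguments": {"device": dev, "writable": True}})
    return cmds

def source_host_cmds(target_ip, port, devices):
    cmds = [{"execute": "drive-mirror",
             "arguments": {"device": dev, "sync": "full",
                           "mode": "existing",  # target export already exists
                           "target": "nbd:%s:%d:exportname=%s"
                                     % (target_ip, port, dev)}}
            for dev in devices]
    # once each mirror job reports ready, switch over:
    cmds += [{"execute": "block-job-complete", "arguments": {"device": dev}}
             for dev in devices]
    return cmds

drives = ["drive-virtio0", "drive-virtio1"]
seq = target_host_cmds("10.0.0.2", 10809, drives)
seq += source_host_cmds("10.0.0.2", 10809, drives)
after_migration = {"execute": "nbd-server-stop"}  # phase3, on the target
print(len(seq))  # → 7
```

The block-job-complete calls are what move the source vm's I/O onto the NBD connection, which is exactly the window the "migrate cancel" question at the end is about.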
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
> I have finally understood the whole workflow for migrating a vm + its storage at the same time (I don't have too much time to work on this - maybe it's better to have stable working disk clone/copy/live mirror code first).

Yes, I would also like to do that after copy/clone.

Besides, I think the whole thing gets too complex. We only want to be able to migrate local disks (changing storage is not really needed?). I guess that would make 'remote' migrate much easier?
Re: [pve-devel] vm migration + storage migration with ndb workflow notes
As a user, our main use case for live storage migration would be migrating between local disk/RBD/iSCSI. We would also use the ability to migrate storage across the network. This could for example be used to migrate from a SAN in one datacentre to a SAN in a remote datacentre.

Regards,
Andrew

--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.