Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>Something like that - not sure if that is possible?

I can confirm that it's working!

1) start remote vm + nbd share
2) start disk mirroring to nbd
3) monitor jobs, and wait until all drives are ready (100% mirrored) but do not complete
4) start live migration
5) monitor jobs again; if they are all ready, do the block-job-complete
6) stop the source vm
7) resume the target vm

I'll polish my patches and try to send them tomorrow for review.

----- Original Message -----
From: "dietmar" <diet...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "Wolfgang Bumiller" <w.bumil...@proxmox.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, 19 October 2016 10:18:41
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

> Do you mean, keep the drive-mirror running (no block-job-complete), then do
> the vm live migration, and after vm live migration (just before unpausing the
> target and stopping the source vm), do the block-job-complete on the source vm?

Something like that - not sure if that is possible?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
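The sequence above is driven through QMP. As a rough sketch of the commands involved (the command and argument names follow QEMU's QMP interface; the device name, NBD URI, and helper function names are illustrative, not taken from the actual patches):

```python
def drive_mirror_cmd(device, nbd_uri):
    """Build the QMP 'drive-mirror' command mirroring a drive to an NBD target."""
    return {
        "execute": "drive-mirror",
        "arguments": {
            "device": device,          # e.g. "drive-virtio0"
            "target": nbd_uri,         # e.g. "nbd:10.0.0.2:60000:exportname=drive-virtio0"
            "sync": "full",            # copy the whole disk, then keep it in sync
            "mode": "existing",        # the target volume was pre-created on the remote node
            "format": "raw",
        },
    }

def all_jobs_ready(jobs):
    """Steps 3/5: a mirror job is 'ready' once it has converged (100%);
    completing it is a separate, explicit action (block-job-complete)."""
    return all(j["ready"] for j in jobs)

def block_job_complete_cmd(device):
    """Step 5: switch the device over to the mirror target."""
    return {"execute": "block-job-complete", "arguments": {"device": device}}
```

The key point of the sequence is that nothing is completed until both the mirrors and the live migration have converged, so the switch-over is a single final step.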
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Firstly, I'll improve drive-mirror to handle multiple mirrors in parallel, splitting out the block-job-complete part. This can improve the current live cloning of vms with multiple disks, and it's a good test before implementing multiple disk mirroring through remote nbd.

----- Original Message -----
From: "aderumier" <aderum...@odiso.com>
To: "dietmar" <diet...@proxmox.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, 19 October 2016 10:33:21
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>Something like that - not sure if that is possible?

I don't know (I don't know whether I can launch the vm migration process while a block-job is still running). I'll do tests.

In the future, qemu should also add COLO support (some code is already in qemu git but not complete). This should allow continuous vm memory mirroring + disk mirroring to a remote node, to get some kind of real HA, even without shared storage.

----- Original Message -----
From: "dietmar" <diet...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "Wolfgang Bumiller" <w.bumil...@proxmox.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, 19 October 2016 10:18:41
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
> On October 19, 2016 at 9:26 AM Alexandre DERUMIER wrote:
>
> >>What about the suggestion from Wolfgang: "make drive-mirror write to both
> >>disks" (local and remote)
> >>
> >>I guess that would solve the whole problem?
>
> Do you mean, keep the drive-mirror running (no block-job-complete), then do
> the vm live migration, and after vm live migration (just before unpausing the
> target and stopping the source vm), do the block-job-complete on the source vm?

Something like that - not sure if that is possible?
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>Exactly, the disk should basically be in a RAID-1 mode during the entire
>>migration process. Note that this might be incompatible with post-copy mode,
>>as with that the remote side takes over early and would have to mirror its
>>writes back to the source (iow. the remote side would also have to be in a
>>drive-mirror mode except it only has one disk, so I doubt that's
>>possible/implemented).

Ok, I'll try to test that. I think it should work.

----- Original Message -----
From: "Wolfgang Bumiller" <w.bumil...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>, "dietmar" <diet...@proxmox.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, 19 October 2016 09:41:50
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
> On October 19, 2016 at 9:26 AM Alexandre DERUMIER <aderum...@odiso.com> wrote:
>
> >>What about the suggestion from Wolfgang: "make drive-mirror write to both
> >>disks" (local and remote)
> >>
> >>I guess that would solve the whole problem?
>
> Do you mean, keep the drive-mirror running (no block-job-complete), then do
> the vm live migration, and after vm live migration (just before unpausing the
> target and stopping the source vm), do the block-job-complete on the source vm?

Exactly, the disk should basically be in a RAID-1 mode during the entire
migration process. Note that this might be incompatible with post-copy mode,
as with that the remote side takes over early and would have to mirror its
writes back to the source (iow. the remote side would also have to be in a
drive-mirror mode except it only has one disk, so I doubt that's
possible/implemented).
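To illustrate the RAID-1 analogy, here is a hypothetical model (not a QEMU API) of where guest writes land in each phase of a drive-mirror job; the phase names are illustrative:

```python
# Hypothetical model of the mirror phases discussed above (assumption: a job
# moves from initial sync, to the converged "ready" state, to completed).
MIRROR_PHASES = ["syncing", "ready", "completed"]

def write_targets(phase):
    """Where guest writes land in each phase of a drive-mirror job."""
    if phase == "syncing":
        return {"source"}            # background copy still catching up;
                                     # dirtied source blocks get re-copied
    if phase == "ready":
        return {"source", "target"}  # RAID-1-like: every write goes to both,
                                     # so either side can be detached safely
    if phase == "completed":
        return {"target"}            # after block-job-complete: target only
    raise ValueError(phase)
```

The point of delaying block-job-complete is to stay in the middle phase for the whole live migration, so a failure on either side never loses data.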
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>What about the suggestion from Wolfgang: "make drive-mirror write to both
>>disks" (local and remote)
>>
>>I guess that would solve the whole problem?

Do you mean, keep the drive-mirror running (no block-job-complete), then do the
vm live migration, and after vm live migration (just before unpausing the
target and stopping the source vm), do the block-job-complete on the source vm?

----- Original Message -----
From: "dietmar" <diet...@proxmox.com>
To: "Wolfgang Bumiller" <w.bumil...@proxmox.com>, "aderumier" <aderum...@odiso.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, 18 October 2016 17:40:35
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
> The only thing:
>
> I still don't know how to handle the case where the target node crashes after
> the disks are migrated, but the vm is still on the source node.

What about the suggestion from Wolfgang: "make drive-mirror write to both
disks" (local and remote)?

I guess that would solve the whole problem?
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>That sounds worse ;-)
>>The only real solution is to make drive-mirror write to both disks in a
>>raid-1 fashion once the initial sync is completed, in order to be able to
>>detach the destination in an error case without losing data.

Yes, I think it's possible to use multiple drive-mirrors in parallel, and do
the block-job-complete to do one switch only. I can adapt my patch series to
test this.

The only thing: I still don't know how to handle the case where the target node
crashes after the disks are migrated, but the vm is still on the source node.

----- Original Message -----
From: "Wolfgang Bumiller" <w.bumil...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, 18 October 2016 11:07:35
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
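The "multiple drive-mirrors in parallel, one switch" idea could look roughly like this. This is a sketch, not the patch series: `qmp` stands for whatever function sends a QMP command, and all device names and helpers are illustrative.

```python
def migrate_local_disks(qmp, drives, nbd_uri_for):
    """Start one mirror job per local disk, wait until *all* jobs are ready,
    then complete them together so the switch-over happens in one step."""
    for dev in drives:
        qmp({"execute": "drive-mirror",
             "arguments": {"device": dev, "target": nbd_uri_for(dev),
                           "sync": "full", "mode": "existing", "format": "raw"}})

    # Poll until every job reports ready (mirror converged, writes duplicated).
    while True:
        jobs = qmp({"execute": "query-block-jobs"})
        if len(jobs) == len(drives) and all(j["ready"] for j in jobs):
            break

    # Single switch: complete all jobs only once every one of them is ready.
    for dev in drives:
        qmp({"execute": "block-job-complete", "arguments": {"device": dev}})
```

Since no job is completed before all of them are ready, a failure of any single mirror before the switch leaves all disks still valid on the source.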
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>I don't really understand that. I thought we will only migrate local disks.
>>IMHO it makes no sense to migrate a shared disk to a local storage?

That was in the context of my 2nd proposal, of using a new vmid for the target.
The advantage was that each vm.conf would have its own local disks (no orphaned
disks in case of a node failure during a migration, for example).

----- Original Message -----
From: "dietmar" <diet...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, 18 October 2016 11:07:37
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
On Tue, Oct 18, 2016 at 01:50:37AM +0200, Alexandre DERUMIER wrote:
> >>Another possibility: create a new vmid on the target host, with its own
> >>config. This way the user can manage both the old and the new vm after
> >>migration, and if something crashes during the migration, it is easier.
>
> We could adapt vm_clone to be able to use remote local storage,
> then reuse vm_clone in vm migration.

That sounds worse ;-)

The only real solution is to make drive-mirror write to both disks in a raid-1
fashion once the initial sync is completed, in order to be able to detach the
destination in an error case without losing data.
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
> > The only drawback is that we can't mix local && shared storage in this case.
> > (but I think that if a user has shared storage, he doesn't need local
> > storage migration)
>
> >>Not sure if this is a good restriction (Maybe this is a major use case).
>
> Note that you can have: source vm (local + shared), destination vm: local;
> or: source vm (local + shared), destination vm: local + the same shared
> storage (but with a new volume).

I don't really understand that. I thought we will only migrate local disks.
IMHO it makes no sense to migrate a shared disk to a local storage?
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
> Another possibility: create a new vmid on the target host, with its own
> config. This way the user can manage both the old and the new vm after
> migration, and if something crashes during the migration, it is easier.
>
> The only drawback is that we can't mix local && shared storage in this case.
> (but I think that if a user has shared storage, he doesn't need local
> storage migration)

Not sure if this is a good restriction (Maybe this is a major use case).
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>Another possibility: create a new vmid on the target host, with its own
>>config. This way the user can manage both the old and the new vm after
>>migration, and if something crashes during the migration, it is easier.

We could adapt vm_clone to be able to use remote local storage,
then reuse vm_clone in vm migration.

----- Original Message -----
From: "aderumier" <aderum...@odiso.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, 18 October 2016 01:05:16
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>setting new drive in pending until the whole migration is done, so user can
>>use revert?
>>I think it should be done manually by the user, because maybe the user
>>doesn't want to lose new data written to the target storage.

Another possibility: create a new vmid on the target host, with its own config.
This way the user can manage both the old and the new vm after migration, and
if something crashes during the migration, it is easier.

The only drawback is that we can't mix local && shared storage in this case.
(but I think that if a user has shared storage, he doesn't need local storage
migration)

----- Original Message -----
From: "aderumier" <aderum...@odiso.com>
To: "Wolfgang Bumiller" <w.bumil...@proxmox.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 17 October 2016 15:52:14
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>So we'd need a way to switch back, then again the remote side might be
>>dead at this point... we could try though?

Setting the new drive in pending until the whole migration is done, so the user
can use revert? I think it should be done manually by the user, because maybe
the user doesn't want to lose new data written to the target storage.

About multiple disks: I thought that we couldn't use transactions, but reading
the libvirt mailing list it seems possible to launch multiple drive-mirrors in
parallel:
https://www.redhat.com/archives/libvir-list/2015-April/msg00727.html

----- Original Message -----
From: "Wolfgang Bumiller" <w.bumil...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 17 October 2016 15:41:49
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
On Mon, Oct 17, 2016 at 03:33:38PM +0200, Alexandre DERUMIER wrote:
> >>considering some of the code looks like it's prepared for multiple
> >>disks, I wonder if the remote side should send a mapping containing the
> >>old + new names?
>
> yes, I think it can prepare the output for multiple disks, it'll be easier
> later. Maybe simply send multiple lines, one per disk?
>
> > + PVE::QemuServer::qemu_drive_mirror($vmid, $self->{target_drive}, $nbd_uri, $vmid);
> > + #update config
>
> >>As far as I can see you have qemu running on the remote side already,
> >>since you use hmp/mon commands to export the nbd devices, so it seems
> >>it would be a better choice to update this after the migration has
> >>completed, and change the cleanup code below to detach the nbd drive.
>
> The problem is that when drive-mirror is finished, the source vm writes to
> the remote disk.

So we'd need a way to switch back, then again the remote side might be
dead at this point... we could try though?

> So I think it's better to update the config, to have the new disk in the
> config. If the source host dies before the end of the live migration, the
> user only needs to move the config file to the destination host.
>
> Alternatively, I was thinking to use pending to store the new local disk
> path, and switch at the end of the live migration.
> Maybe add a node=xxx option to the drive during the migration, to know where
> the local disk is located exactly. I'm not sure...
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
>>considering some of the code looks like it's prepared for multiple
>>disks, I wonder if the remote side should send a mapping containing the
>>old + new names?

Yes, I think it can prepare the output for multiple disks, it'll be easier
later. Maybe simply send multiple lines, one per disk?

> + PVE::QemuServer::qemu_drive_mirror($vmid, $self->{target_drive}, $nbd_uri, $vmid);
> + #update config

>>As far as I can see you have qemu running on the remote side already,
>>since you use hmp/mon commands to export the nbd devices, so it seems
>>it would be a better choice to update this after the migration has
>>completed, and change the cleanup code below to detach the nbd drive.

The problem is that when drive-mirror is finished, the source vm writes to the
remote disk. So I think it's better to update the config, to have the new disk
in the config. If the source host dies before the end of the live migration,
the user only needs to move the config file to the destination host.

Alternatively, I was thinking to use pending to store the new local disk path,
and switch at the end of the live migration. Maybe add a node=xxx option to the
drive during the migration, to know where the local disk is located exactly.
I'm not sure...

----- Original Message -----
From: "Wolfgang Bumiller" <w.bumil...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 17 October 2016 15:16:39
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
> > Signed-off-by: Alexandre Derumier <aderum...@odiso.com> > --- > PVE/API2/Qemu.pm | 7 + > PVE/QemuMigrate.pm | 91 > +++--- > 2 files changed, 93 insertions(+), 5 deletions(-) > > diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm > index 21fbebb..acb1412 100644 > --- a/PVE/API2/Qemu.pm > +++ b/PVE/API2/Qemu.pm > @@ -2648,6 +2648,10 @@ __PACKAGE__->register_method({ > description => "Allow to migrate VMs which use local devices. Only root may > use this option.", > optional => 1, > }, > + targetstorage => get_standard_option('pve-storage-id', { > + description => "Target storage.", > + optional => 1, > + }), > }, > }, > returns => { > @@ -2674,6 +2678,9 @@ __PACKAGE__->register_method({ > > my $vmid = extract_param($param, 'vmid'); > > + raise_param_exc({ targetstorage => "Live Storage migration can only be done > online" }) > + if !$param->{online} && $param->{targetstorage}; > + > raise_param_exc({ force => "Only root may use this option." }) > if $param->{force} && $authuser ne 'root@pam'; > > diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm > index 22a49ef..6e90296 100644 > --- a/PVE/QemuMigrate.pm > +++ b/PVE/QemuMigrate.pm > @@ -170,9 +170,11 @@ sub prepare { > $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf); > > } > - > if (my $loc_res = PVE::QemuServer::check_local_resources($conf, 1)) { > - if ($self->{running} || !$self->{opts}->{force}) { > + if ($self->{running} && $self->{opts}->{targetstorage}){ > + $self->log('info', "migrating VM with online storage migration"); > + } > + elsif ($self->{running} || !$self->{opts}->{force} ) { > die "can't migrate VM which uses local devices\n"; > } else { > $self->log('info', "migrating VM which uses local devices"); > @@ -182,12 +184,16 @@ sub prepare { > my $vollist = PVE::QemuServer::get_vm_volumes($conf); > > my $need_activate = []; > + my $unsharedcount = 0; > foreach my $volid (@$vollist) { > my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1); > > # check if storage is 
available on both nodes > my $scfg = PVE::Storage::storage_check_node($self->{storecfg}, $sid); > - PVE::Storage::storage_check_node($self->{storecfg}, $sid, $self->{node}); > + my $targetsid = $sid; > + $targetsid = $self->{opts}->{targetstorage} if > $self->{opts}->{targetstorage}; > + > + PVE::Storage::storage_check_node($self->{storecfg}, $targetsid, > $self-
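The exchange above revolves around mirroring a local disk into an NBD export on the target node. As a rough sketch of the underlying mechanism (not the actual PVE code, which drives this through hmp/mon helpers), here is the QMP command sequence in Python dict form — `nbd-server-start`, `nbd-server-add`, and `drive-mirror` are real QMP commands, but the host, port, and drive names below are made up for illustration:

```python
# Hypothetical QMP command sequence for the flow discussed in this thread:
# the target node exports the freshly created volume over QEMU's embedded
# NBD server, and the source node mirrors its local disk into that export.

def nbd_target_commands(host, port, device):
    """Commands the *target* VM's QMP monitor would receive (sketch)."""
    return [
        {"execute": "nbd-server-start",
         "arguments": {"addr": {"type": "inet",
                                "data": {"host": host, "port": str(port)}}}},
        {"execute": "nbd-server-add",
         "arguments": {"device": device, "writable": True}},
    ]

def mirror_command(device, host, port, export):
    """Command for the *source* VM: mirror the drive into the NBD export.
    mode='existing' because the target volume was already created."""
    target = f"nbd:{host}:{port}:exportname={export}"
    return {"execute": "drive-mirror",
            "arguments": {"device": device, "target": target,
                          "sync": "full", "mode": "existing"}}

cmds = nbd_target_commands("10.0.0.2", 10809, "drive-virtio0")
mirror = mirror_command("drive-virtio0", "10.0.0.2", 10809, "drive-virtio0")
print(mirror["arguments"]["target"])
```

Once the mirror reports ready, writes hit both the local disk and the NBD target, which is exactly why the config-update timing debated above matters.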
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
On Tue, Oct 11, 2016 at 04:45:19PM +0200, Alexandre Derumier wrote:
> This allow to migrate a local storage (only 1 for now) to a remote node
> storage.
>
> When the target node start, a new volume is created and exposed through qemu
> embedded nbd server.
>
> qemu drive-mirror is done on source vm with nbd server as target.
>
> when drive-mirror is done, the source vm is running the disk though nbd.
>
> Then we live migration the vm to destination node.
>
> Signed-off-by: Alexandre Derumier <aderum...@odiso.com>
> ---
>  PVE/API2/Qemu.pm   |  7 +
>  PVE/QemuMigrate.pm | 91 +++---
>  2 files changed, 93 insertions(+), 5 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 21fbebb..acb1412 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -2648,6 +2648,10 @@ __PACKAGE__->register_method({
>           description => "Allow to migrate VMs which use local devices. Only root may use this option.",
>           optional => 1,
>       },
> +     targetstorage => get_standard_option('pve-storage-id', {
> +         description => "Target storage.",
> +         optional => 1,
> +     }),
>     },
>     },
>     returns => {
> @@ -2674,6 +2678,9 @@ __PACKAGE__->register_method({
>
>     my $vmid = extract_param($param, 'vmid');
>
> +   raise_param_exc({ targetstorage => "Live Storage migration can only be done online" })
> +     if !$param->{online} && $param->{targetstorage};
> +
>     raise_param_exc({ force => "Only root may use this option." })
>       if $param->{force} && $authuser ne 'root@pam';
>
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 22a49ef..6e90296 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -170,9 +170,11 @@ sub prepare {
>       $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
>
>     }
> -
>     if (my $loc_res = PVE::QemuServer::check_local_resources($conf, 1)) {
> -     if ($self->{running} || !$self->{opts}->{force}) {
> +     if ($self->{running} && $self->{opts}->{targetstorage}){
> +         $self->log('info', "migrating VM with online storage migration");
> +     }
> +     elsif ($self->{running} || !$self->{opts}->{force} ) {
>           die "can't migrate VM which uses local devices\n";
>       } else {
>           $self->log('info', "migrating VM which uses local devices");
> @@ -182,12 +184,16 @@ sub prepare {
>     my $vollist = PVE::QemuServer::get_vm_volumes($conf);
>
>     my $need_activate = [];
> +   my $unsharedcount = 0;
>     foreach my $volid (@$vollist) {
>       my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
>
>       # check if storage is available on both nodes
>       my $scfg = PVE::Storage::storage_check_node($self->{storecfg}, $sid);
> -     PVE::Storage::storage_check_node($self->{storecfg}, $sid, $self->{node});
> +     my $targetsid = $sid;
> +     $targetsid = $self->{opts}->{targetstorage} if $self->{opts}->{targetstorage};
> +
> +     PVE::Storage::storage_check_node($self->{storecfg}, $targetsid, $self->{node});
>
>       if ($scfg->{shared}) {
>           # PVE::Storage::activate_storage checks this for non-shared storages
> @@ -197,9 +203,12 @@ sub prepare {
>       } else {
>           # only activate if not shared
>           push @$need_activate, $volid;
> +         $unsharedcount++;
>       }
>     }
>
> +   die "online storage migration don't support more than 1 local disk currently" if $unsharedcount > 1;
> +
>     # activate volumes
>     PVE::Storage::activate_volumes($self->{storecfg}, $need_activate);
>
> @@ -407,7 +416,7 @@ sub phase1 {
>     $conf->{lock} = 'migrate';
>     PVE::QemuConfig->write_config($vmid, $conf);
>
> -   sync_disks($self, $vmid);
> +   sync_disks($self, $vmid) if !$self->{opts}->{targetstorage};
>
> };
>
> @@ -452,7 +461,7 @@ sub phase2 {
>       $spice_ticket = $res->{ticket};
>     }
>
> -   push @$cmd , 'qm', 'start', $vmid, '--skiplock', '--migratedfrom', $nodename;
> +   push @$cmd , 'qm', 'start', $vmid, '--skiplock', '--migratedfrom', $nodename, '--targetstorage', $self->{opts}->{targetstorage};
>
>     # we use TCP only for unsecure migrations as TCP ssh forward tunnels often
>     # did appeared to late (they are hard, if not impossible, to check for)
> @@ -472,6 +481,7 @@ sub phase2 {
>     }
>
>     my $spice_port;
> +   my $nbd_uri;
>
>     # Note: We try to keep $spice_ticket secret (do not pass via command line parameter)
>     # instead we pipe it through STDIN
> @@ -496,6 +506,13 @@ sub phase2 {
>       elsif ($line =~ m/^spice listens on port (\d+)$/) {
>           $spice_port = int($1);
>       }
> +     elsif ($line =~ m/^storage migration listens on nbd:(localhost|[\d\.]+|\[[\d\.:a-fA-F]+\]):(\d+):exportname=(\S+) volume:(\S+)$/) {

considering some of the code looks like it's prepared for multiple
disks, I wonder if the remote side should send a mapping containing the
old + new names?
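The last hunk above parses the "storage migration listens on nbd:..." line that the target node's `qm start` prints. A Python rendition of that Perl regex may make the capture groups easier to see — the sample line below is made up for illustration:

```python
import re

# Python translation of the Perl regex in the patch hunk above.
# Captures: host (localhost, IPv4, or bracketed IPv6), port, NBD export
# name, and the target volume id.
NBD_RE = re.compile(
    r'^storage migration listens on '
    r'nbd:(localhost|[\d\.]+|\[[\d\.:a-fA-F]+\]):(\d+):exportname=(\S+) volume:(\S+)$'
)

line = ("storage migration listens on "
        "nbd:10.0.0.2:10809:exportname=drive-virtio0 "
        "volume:local-lvm:vm-100-disk-1")
m = NBD_RE.match(line)
host, port, export, volume = m.group(1), int(m.group(2)), m.group(3), m.group(4)
print(host, port, export, volume)
```

Note that because the export name is captured with `\S+`, a single space cleanly separates it from the `volume:` field — which is also why a multi-disk variant could simply print one such line per disk, as discussed above.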
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
> + #if storage migration is already done, but vm migration crash, we need to move the vm config

>>Here the VM is not always crashed, just the migration failed or?
>>
>>If yes, would it not be better to let the VM continue to run where it
>>was (if it can even run here) and free the migrated volume on the target
>>storage?
>>Stopping the VM here seems not obvious.

The problem is that, if the storage migration has been done and the vm
migration crashes, the vm is still running, but with nbd attached (and the
target vm still running), and new writes are already done on the target
storage. That's why I force a stop of the source vm and migrate the config
file to the target.

- Original message -
From: "Thomas Lamprecht" <t.lampre...@proxmox.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday 12 October 2016 09:44:25
Subject: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

comments inline

On 10/11/2016 04:45 PM, Alexandre Derumier wrote:
> This allow to migrate a local storage (only 1 for now) to a remote node
> storage.
>
> When the target node start, a new volume is created and exposed through qemu
> embedded nbd server.
>
> qemu drive-mirror is done on source vm with nbd server as target.
>
> when drive-mirror is done, the source vm is running the disk though nbd.
>
> Then we live migration the vm to destination node.
>
> Signed-off-by: Alexandre Derumier <aderum...@odiso.com>
> ---
>  PVE/API2/Qemu.pm   |  7 +
>  PVE/QemuMigrate.pm | 91 +++---
>  2 files changed, 93 insertions(+), 5 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 21fbebb..acb1412 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -2648,6 +2648,10 @@ __PACKAGE__->register_method({
>           description => "Allow to migrate VMs which use local devices. Only root may use this option.",
>           optional => 1,
>       },
> +     targetstorage => get_standard_option('pve-storage-id', {
> +         description => "Target storage.",
> +         optional => 1,
> +     }),
>     },
>     },
>     returns => {
> @@ -2674,6 +2678,9 @@ __PACKAGE__->register_method({
>
>     my $vmid = extract_param($param, 'vmid');
>
> +   raise_param_exc({ targetstorage => "Live Storage migration can only be done online" })
> +     if !$param->{online} && $param->{targetstorage};
> +
>     raise_param_exc({ force => "Only root may use this option." })
>       if $param->{force} && $authuser ne 'root@pam';
>
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 22a49ef..6e90296 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -170,9 +170,11 @@ sub prepare {
>       $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
>
>     }
> -

style nitpick: if you do a v2 let this line stay

>     if (my $loc_res = PVE::QemuServer::check_local_resources($conf, 1)) {
> -     if ($self->{running} || !$self->{opts}->{force}) {
> +     if ($self->{running} && $self->{opts}->{targetstorage}){
> +         $self->log('info', "migrating VM with online storage migration");
> +     }
> +     elsif ($self->{running} || !$self->{opts}->{force} ) {

This here is wrong. The method "check_local_resources" checks only for
usb/hostpci/serial/parallel devices, not for local storages. So you
shouldn't need this code part at all?

>           die "can't migrate VM which uses local devices\n";
>       } else {
>           $self->log('info', "migrating VM which uses local devices");
> @@ -182,12 +184,16 @@ sub prepare {
>     my $vollist = PVE::QemuServer::get_vm_volumes($conf);
>
>     my $need_activate = [];
> +   my $unsharedcount = 0;
>     foreach my $volid (@$vollist) {
>       my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
>
>       # check if storage is available on both nodes
>       my $scfg = PVE::Storage::storage_check_node($self->{storecfg}, $sid);
> -     PVE::Storage::storage_check_node($self->{storecfg}, $sid, $self->{node});
> +     my $targetsid = $sid;
> +     $targetsid = $self->{opts}->{targetstorage} if $self->{opts}->{targetstorage};
> +

we often use
my $targetsid = $self->{opts}->{targetstorage} || $sid;
but just nitpicking here :)

> +     PVE::Storage::storage_check_node($self->{storecfg}, $targetsid, $self->{node});
>
>       if ($scfg->{shared}) {
>           # PVE::Storage::activate_storage checks this for non-shared storages
> @@ -197,9 +203,12 @@ sub prepare {
>       } else {
>           # only activate if not shared
>           push @$need_activate, $volid;
> +         $unsharedcount++;
Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration
comments inline

On 10/11/2016 04:45 PM, Alexandre Derumier wrote:

This allow to migrate a local storage (only 1 for now) to a remote node storage.

When the target node start, a new volume is created and exposed through qemu embedded nbd server.

qemu drive-mirror is done on source vm with nbd server as target.

when drive-mirror is done, the source vm is running the disk though nbd.

Then we live migration the vm to destination node.

Signed-off-by: Alexandre Derumier <aderum...@odiso.com>
---
 PVE/API2/Qemu.pm   |  7 +
 PVE/QemuMigrate.pm | 91 +++---
 2 files changed, 93 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 21fbebb..acb1412 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2648,6 +2648,10 @@ __PACKAGE__->register_method({
          description => "Allow to migrate VMs which use local devices. Only root may use this option.",
          optional => 1,
      },
+     targetstorage => get_standard_option('pve-storage-id', {
+         description => "Target storage.",
+         optional => 1,
+     }),
    },
    },
    returns => {
@@ -2674,6 +2678,9 @@ __PACKAGE__->register_method({

    my $vmid = extract_param($param, 'vmid');

+   raise_param_exc({ targetstorage => "Live Storage migration can only be done online" })
+     if !$param->{online} && $param->{targetstorage};
+
    raise_param_exc({ force => "Only root may use this option." })
      if $param->{force} && $authuser ne 'root@pam';

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 22a49ef..6e90296 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -170,9 +170,11 @@ sub prepare {
      $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);

    }
-

style nitpick: if you do a v2 let this line stay

    if (my $loc_res = PVE::QemuServer::check_local_resources($conf, 1)) {
-     if ($self->{running} || !$self->{opts}->{force}) {
+     if ($self->{running} && $self->{opts}->{targetstorage}){
+         $self->log('info', "migrating VM with online storage migration");
+     }
+     elsif ($self->{running} || !$self->{opts}->{force} ) {

This here is wrong. The method "check_local_resources" checks only for
usb/hostpci/serial/parallel devices, not for local storages. So you
shouldn't need this code part at all?

          die "can't migrate VM which uses local devices\n";
      } else {
          $self->log('info', "migrating VM which uses local devices");
@@ -182,12 +184,16 @@ sub prepare {
    my $vollist = PVE::QemuServer::get_vm_volumes($conf);

    my $need_activate = [];
+   my $unsharedcount = 0;
    foreach my $volid (@$vollist) {
      my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);

      # check if storage is available on both nodes
      my $scfg = PVE::Storage::storage_check_node($self->{storecfg}, $sid);
-     PVE::Storage::storage_check_node($self->{storecfg}, $sid, $self->{node});
+     my $targetsid = $sid;
+     $targetsid = $self->{opts}->{targetstorage} if $self->{opts}->{targetstorage};
+

we often use
my $targetsid = $self->{opts}->{targetstorage} || $sid;
but just nitpicking here :)

+     PVE::Storage::storage_check_node($self->{storecfg}, $targetsid, $self->{node});

      if ($scfg->{shared}) {
          # PVE::Storage::activate_storage checks this for non-shared storages
@@ -197,9 +203,12 @@ sub prepare {
      } else {
          # only activate if not shared
          push @$need_activate, $volid;
+         $unsharedcount++;
      }
    }

+   die "online storage migration don't support more than 1 local disk currently" if $unsharedcount > 1;
+
    # activate volumes
    PVE::Storage::activate_volumes($self->{storecfg}, $need_activate);

@@ -407,7 +416,7 @@ sub phase1 {
    $conf->{lock} = 'migrate';
    PVE::QemuConfig->write_config($vmid, $conf);

-   sync_disks($self, $vmid);
+   sync_disks($self, $vmid) if !$self->{opts}->{targetstorage};

};

@@ -452,7 +461,7 @@ sub phase2 {
      $spice_ticket = $res->{ticket};
    }

-   push @$cmd , 'qm', 'start', $vmid, '--skiplock', '--migratedfrom', $nodename;
+   push @$cmd , 'qm', 'start', $vmid, '--skiplock', '--migratedfrom', $nodename, '--targetstorage', $self->{opts}->{targetstorage};

    # we use TCP only for unsecure migrations as TCP ssh forward tunnels often
    # did appeared to late (they are hard, if not impossible, to check for)
@@ -472,6 +481,7 @@ sub phase2 {
    }

    my $spice_port;
+   my $nbd_uri;

    # Note: We try to keep $spice_ticket secret (do not pass via command line parameter)
    # instead we pipe it through STDIN
@@ -496,6 +506,13 @@ sub phase2 {
      elsif ($line =~ m/^spice listens on port (\d+)$/) {
          $spice_port = int($1);
      }
+     elsif ($line =~
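The full sequence worked out in this thread — mirror all drives to NBD, wait until every mirror job is ready, live-migrate, then complete the jobs — hinges on checking readiness across all jobs before issuing any `block-job-complete`. A hedged sketch of that monitoring step, built on the shape of QMP's `query-block-jobs` output (the sample job list is made up):

```python
# Sketch of the job-monitoring step: 'query-block-jobs' returns a list of
# job dicts with a 'ready' flag; only once *every* mirror is ready (and the
# live migration has finished) is one block-job-complete issued per drive.

def all_mirrors_ready(jobs):
    """True only when every mirror job reports ready (100% and converged)."""
    return all(j.get("ready") for j in jobs)

def completion_commands(jobs):
    """QMP commands to pivot each drive onto its target, or [] if not ready."""
    if not all_mirrors_ready(jobs):
        return []
    return [{"execute": "block-job-complete",
             "arguments": {"device": j["device"]}} for j in jobs]

jobs = [{"device": "drive-virtio0", "ready": True},
        {"device": "drive-scsi1", "ready": True}]
print(len(completion_commands(jobs)))
```

Returning an empty list while any job is still converging mirrors the "monitor jobs again, if they are all ready do the complete" step: completing early would pivot one drive while others still lag.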