Sent: Sunday, 20 January 2013 11:53
To: Dietmar Maurer
Cc: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] vm migration + storage migration with ndb workflow notes
What do you think about:
# qm move vmid disk --storage storeid --format [raw|qcow2|vmdk]
Subject: RE: [pve-devel] vm migration + storage migration with ndb workflow notes
But what if you want to move several disks? Or don't we need that?
# qm move vmid -drive storage_and_format_spec
for example:
# qm move 100 -ide0 local -ide1 local:qcow2
I am not sure about
Yes, I think we can have 1 job per drive at the same time (as we can only do
1 block-job-complete at a time).
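For context, that per-drive lifecycle maps onto a short QMP sequence. A minimal sketch (device id and target path are hypothetical examples, not Proxmox's actual naming): drive-mirror starts one job per drive, and block-job-complete pivots that drive to the copy once the job reports ready.

```python
import json

def mirror_sequence(device, target):
    """Return the QMP commands to mirror one drive and pivot to the copy."""
    return [
        # Start a full mirror of this drive into a pre-created target image.
        {"execute": "drive-mirror",
         "arguments": {"device": device, "target": target,
                       "format": "qcow2", "sync": "full", "mode": "existing"}},
        # A real client would wait here for the BLOCK_JOB_READY event,
        # then complete (pivot) the job for this one device.
        {"execute": "block-job-complete", "arguments": {"device": device}},
    ]

# One independent sequence per drive:
for cmd in mirror_sequence("drive-ide0", "/mnt/target/vm-100-disk-1.qcow2"):
    print(json.dumps(cmd))
```

Each drive gets its own job, so moving several disks would simply mean running several such sequences.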
Sorry, but why do you want to copy one drive multiple times?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
Not sure we really need to do this for multiple drives in 1 call.
For qm resize, we only do this for 1 drive.
For the GUI, we can simply add a button, like resize, to move the selected volume.
I already have a patch for
qm move vmid drive storage --format qcow2|raw|vmdk
Yes, I guess that is
From: Alexandre DERUMIER aderum...@odiso.com
To: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday 21 January 2013 09:11:01
Subject: Re: [pve-devel] vm migration + storage migration with ndb workflow notes
Are you sure? Usually you can have one job per drive?
If such a restriction exists
I see a big speed difference:
for a 32 GB file, migrated on the same local storage, qcow2 -> raw, both writeback:
qemu-img: 30s
drive-mirror (with paused VM): 5min
This should be as fast as qemu-img. Please can you post your findings
on the pve-devel list - this looks like a serious bug to me.
Sorry, but why do you want to copy one drive multiple times?
This is about VM cloning (not storage migration, but it is a disk copy in
both cases).
Multiple users can clone a template vm at the same time.
Another thing that qemu-img can do is copy a disk
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday 21 January 2013 10:18:20
Subject: RE: [pve-devel] vm migration + storage migration with ndb workflow notes
I see a big speed difference
for a 32gb file, migrate on same local, qcow2 - raw, both
This should be as fast as qemu-img. Please can you post your findings
on the pve-devel list - this looks like a serious bug to me.
I'll send a message to the qemu mailing list.
Maybe with drive-mirror the target file is opened in directsync mode?
I have no idea, but that does not sound reasonable to me.
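To make the comparison concrete, here is a sketch of the two copy paths being benchmarked above (file names and device id are hypothetical examples): an offline qemu-img convert, where the destination cache mode is set explicitly with -t, versus an online drive-mirror block job, where the thread's open question is exactly which cache mode the target is opened with.

```python
import json

# Offline path: qemu-img convert, destination cache mode chosen via -t.
qemu_img_cmd = ["qemu-img", "convert", "-t", "writeback",
                "-O", "raw", "vm-100-disk-1.qcow2", "vm-100-disk-1.raw"]

# Online path: a drive-mirror block job issued over QMP.
# Note there is no cache-mode argument here; how the target file is
# opened is up to QEMU, which is the suspected cause of the slowdown.
drive_mirror = {
    "execute": "drive-mirror",
    "arguments": {"device": "drive-virtio0",
                  "target": "vm-100-disk-1.raw",
                  "format": "raw", "sync": "full", "mode": "existing"},
}

print(" ".join(qemu_img_cmd))
print(json.dumps(drive_mirror))
```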
----- Original Message -----
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Michael Rasmussen m...@datanom.net, pve-devel@pve.proxmox.com
Sent: Monday 21 January 2013 10:41:54
Subject: RE: [pve-devel] vm migration + storage migration with ndb workflow notes
My disk
Maybe you can test my patches to see how I have implemented it?
As you know, this is on my TODO list (next task). But I still need to
resolve a (hopefully last) bug in the backup code.
Cc: Michael Rasmussen m...@datanom.net, pve-devel@pve.proxmox.com
Sent: Monday 21 January 2013 11:15:57
Subject: RE: [pve-devel] vm migration + storage migration with ndb workflow notes
Maybe you can test my patches to see how I have implemented it?
As you know, this is on my TODO list (next
Sent: Saturday 19 January 2013 17:18:22
Subject: RE: [pve-devel] vm migration + storage migration with ndb workflow notes
We only want to be able to migrate local disks (changing storage is not
really needed?).
I guess that would make 'remote' migrate much easier?
I think the main feature is indeed live migration from a local non-shared storage to another remote local non-shared storage.
There is no need to change the storage name - we just migrate from
'local:disk1' to 'local:disk1' (storage/image is not changed, but we
copy the data because the storage is not shared).
OK, it will indeed be easier to manage: no need to update the VM config file or
to pass new arguments.
So, I'm
I just think this is enough for remote migration.
For local VMs we can have storage migration between any accessible
storage.
I second this.
Just a thought: Would it not be of interest for the users to be able to change
storage format?
e.g.
hostX:$vmid/vm-$vmid-disk-N.{raw,qcow2}
can only do 1 drive-mirror job at once.
----- Original Message -----
From: Dietmar Maurer diet...@proxmox.com
To: Michael Rasmussen m...@datanom.net, pve-devel@pve.proxmox.com
Sent: Sunday 20 January 2013 11:15:56
Subject: Re: [pve-devel] vm migration + storage migration with ndb workflow notes
On Sun, 20 Jan 2013 10:15:56 +
Dietmar Maurer diet...@proxmox.com wrote:
Would it be possible to do all the work using 'qmp' commands? We can simply start
the VM if the VM is offline (paused state). I currently do that with the new backup
code.
But what if the VM is offline as in
I guess that would make the whole thing much simpler?
Not sure about that,
About my template patches, I can see one
, pve-devel@pve.proxmox.com
Sent: Saturday 19 January 2013 07:16:17
Subject: RE: [pve-devel] vm migration + storage migration with ndb workflow notes
I have finally understood the whole workflow for migrating a VM + its storage
at the same time
(I don't have too much time to work
We only want to be able to migrate local disks (changing storage is not
really needed?).
I guess that would make 'remote' migrate much easier?
I think the main feature is indeed live migration from a local non-shared
storage to another remote local non-shared storage.
Yes
I see another
What do you mean by "(changing storage is not really needed?)". Do you
mean changing the storage format? (local raw -> remote local LVM, for
example) Because, for online remote storage migration, with nbd, it's
easy to mirror from any source storage to any remote storage.
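As a sketch of that NBD path (host name, port, and device/export ids below are hypothetical, not the actual Proxmox implementation): the target host exposes the pre-created destination image over a QEMU NBD server, and the source host runs drive-mirror into that remote export.

```python
import json

# Target host: export the pre-created destination drive over NBD.
target_cmds = [
    {"execute": "nbd-server-start",
     "arguments": {"addr": {"type": "inet",
                            "data": {"host": "0.0.0.0",
                                     "port": "10809"}}}},
    {"execute": "nbd-server-add",
     "arguments": {"device": "drive-virtio0", "writable": True}},
]

# Source host: mirror the running drive into the remote NBD export.
source_cmd = {
    "execute": "drive-mirror",
    "arguments": {"device": "drive-virtio0",
                  "target": "nbd:target-host:10809:exportname=drive-virtio0",
                  "sync": "full", "mode": "existing"},
}

for cmd in target_cmds + [source_cmd]:
    print(json.dumps(cmd))
```

Because the target is just an NBD endpoint, the destination storage type and format are independent of the source, which is the point being made above.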
There is no need to
Hi,
I have finally understood the whole workflow for migrating a VM + its storage
at the same time
(I don't have too much time to work on this; maybe it's better to have stable
working disk clone/copy/live mirror code first).
Here are some notes:
phase1
--
target host
---
create
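The notes above are cut off in the archive; the following is my own hedged reconstruction of the intended phases, pieced together from the rest of the thread (NBD export on the target, one drive-mirror job per disk, block-job-complete to pivot). Step names and wording are mine, not the actual implementation.

```python
# Rough outline of the combined VM + storage live-migration workflow
# as discussed in this thread (summary only, not real code paths).
phases = {
    "phase1 (target host)": [
        "create the destination volumes",
        "start the target VM paused, exporting the volumes over NBD",
    ],
    "phase2 (source host)": [
        "run one drive-mirror job per local disk into the NBD exports",
        "once all jobs are ready, run the normal RAM/state migration",
    ],
    "phase3": [
        "block-job-complete each mirror, stop the NBD server, resume on target",
    ],
}

for phase, steps in phases.items():
    print(phase)
    for step in steps:
        print("  -", step)
```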
Yes, I would also like to do that after copy/clone.
Besides, I think
As a user, our main use case for live storage migration would be migrating
between local disk/RBD/iSCSI.
We would also use the ability to migrate storage across the network.
This could, for example, be used to migrate from a SAN in one datacentre to a
SAN in a remote datacentre.
Regards,