I'll make a proof of concept to see if it looks good.
----- Original message -----
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com, "Michael Rasmussen"
Sent: Thursday 17 January 2013 14:48:20
Subject: RE: [pve-devel] storage migration: extend "qm migrate"?
>>I guess we only talk about nodes inside the cluster here.
I agree too.
----- Original message -----
From: "Dietmar Maurer"
To: "Michael Rasmussen", "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday 17 January 2013 14:49:07
Subject: RE: [pve-devel] storage migration: extend "qm migrate"?
> I was looking at nbd before I was aware that qm migrate was able to do
> the job. The changes are relatively simple and can be managed via QMP on
> both sides. Assumption: We are only talking about a node in a cluster.
I guess we only talk about nodes inside the cluster here.
> >>What do we present on the GUI to the user?
>
> I was thinking of extending the current migrate panel, maybe adding a new
> tab: for each disk of the VM, display the storage list of the target node,
> defaulting to the disk's current storage.
Ah, yes. That sounds reasonable.
----- Original message -----
From: "Dietmar Maurer"
To: "Alexandre DERUMIER", "Michael Rasmussen"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday 17 January 2013 13:19:40
Subject: RE: [pve-devel] storage migration: extend "qm migrate"?
On 01-17-2013 12:47, Alexandre DERUMIER wrote:
Yes, I know.
But it's not too different; we can reuse code, just using NBD as the
target instead of a volume file/device.
I was looking at nbd before I was aware that qm migrate was able to
do the job. The changes are relatively
simple and can be managed via QMP on both sides.
> if hosttarget = sourcetarget, then we simply migrate the disks; if hosttarget
> != sourcetarget, we mirror the disks with NBD (only if the storage is non-shared
> on both hosts), then we migrate the VM.
>
> Maybe Dietmar, do you have an opinion about this?
What do we present on the GUI to the user?
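The host/storage branch logic quoted above can be sketched as a tiny helper. All names here are illustrative; this is not the Proxmox implementation.

```python
def migration_plan(source_host, target_host, storage_shared):
    """Pick a migration strategy, following the branch logic discussed
    in this thread. Illustrative sketch only, not actual qm code."""
    if source_host == target_host:
        # same host: only the disks move, no VM migration needed
        return "migrate-disks-only"
    if not storage_shared:
        # different hosts, non-shared storage: mirror disks over NBD,
        # then live-migrate the VM
        return "mirror-disks-via-nbd-then-migrate-vm"
    # different hosts, shared storage: plain VM migration is enough
    return "migrate-vm-only"
```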
----- Original message -----
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday 17 January 2013 12:38:07
Subject: Re: [pve-devel] storage migration: extend "qm migrate"?
On 01-17-2013 12:00, Alexandre DERUMIER wrote:
How do you think it can work if the storage is attached on the target
host but not on the source host,
and the VM is running on the source host?
I see your point. This particular situation is not handled in my code
since it will only migrate between common
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Thursday 17 January 2013 10:42:15
Subject: Re: [pve-devel] storage migration: extend "qm migrate"?
Hi,
I still can't figure out why it should be necessary to migrate the VM if
the storage is migrated to another host?
----- Original message -----
From: "Michael Rasmussen"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday 17 January 2013 10:53:04
Subject: Re: [pve-devel] Storage migration: No on-line migration support
I was not aware of the pause command. I will try using that command.
# fragment: tail of the block-job progress polling loop from the patch
&$activate;
$count = 0;
}
$old_len = $stat->{offset};
sleep 1;
}
};
----- Original message -----
From: "Michael Rasmussen"
To: "Alexandre DERUMIER"
Sent: Tuesday
Hi,
I still can't figure out why it should be necessary to migrate the VM if
the storage is migrated to another host?
Especially in the situation where a VM has more than one disk and you
then migrate one of the disks. Should the VM still be migrated then?
What about the remaining disks?
----- Original message -----
From: "Michael Rasmussen"
To: "Alexandre DERUMIER"
Sent:
Hi,
I'm beginning to look at the storage migration implementation;
do you think we need to extend "qm migrate"?
something like:
qm migrate --virtio0 local:raw --virtio1 nfs:qcow2 --virtio3
rbd:
1) if targethost = sourcehost, then we only migrate storage
2) if targethost != sourcehost, we migr
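The proposed per-disk syntax could be parsed roughly like this. A sketch only: `parse_disk_targets` is a hypothetical helper, not part of qm.

```python
def parse_disk_targets(args):
    """Parse the proposed 'qm migrate --virtio0 local:raw ...' style
    per-disk options into {drive: (storage, format)}.
    Hypothetical helper sketching the proposal; not actual qm code."""
    targets = {}
    it = iter(args)
    for arg in it:
        if arg.startswith("--"):
            drive = arg[2:]        # e.g. "virtio0"
            spec = next(it)        # e.g. "local:raw" or "rbd:"
            storage, _, fmt = spec.partition(":")
            # an empty format (as in "rbd:") means storage default
            targets[drive] = (storage, fmt or None)
    return targets
```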
On 01-15-2013 09:44, Alexandre DERUMIER wrote:
your subs
vol_format
disk_name
vol_name
lock_source
unlock_source
Should go in PVE::Storage::Plugins::
I am aware of this and when the code is stable I intend to split it up
in parts according to the plugin architecture. But as long as it is a
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Tuesday 15 January 2013 03:28:18
Subject: [pve-devel] Storage migration: No on-line migration support
Hi all,
Attached is rc2 which adds support for on-line migration.
Read TODO to find out what is missing - especially the part concerning
support for other storage backends.
> Attached is rc2 which adds support for on-line migration.
> Read TODO to find out what is missing - especially the part concerning
> support for other storage backends.
It would be great if you used 'git format-patch' and 'git send-email' to send
patches.
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Tuesday 15 January 2013 03:28:18
Subject: [pve-devel] Storage migration: No on-line migration support
Hi all,
Attached is rc2 which adds support for on-line migration.
Read TODO to find out what is missing - especially the part concerning
support for other storage backends.
I have also discovered a bug in PVE::QemuServer::vm_mon_cmd which in
rare cases can lead to disaster. You can lose your soc
> ... the new root?
Yes, that was the boot device.
----- Original message -----
From: "Michael Rasmussen"
To: "Alexandre DERUMIER"
Sent: Thursday 10 January 2013 19:35:30
Subject: Re: [pve-devel] Storage migration: online design solution
On Thu, 10 Jan 2013 14:22:02 +0100 (CET)
Alexandre DERUMIER wrote:
----- Original message -----
From: "Michael Rasmussen"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday 10 January 2013 10:23:23
Subject: Re: [pve-devel] Storage migration: online design solution
I more or less have a complete solution. Need some more tests, though. I have
discovered a potential pro
Is the UUID stored on the drive?
Because if we mirror the full drive, it should be OK.
----- Original message -----
From: "datanom.net"
To: pve-devel@pve.proxmox.com
Sent: Thursday 10 January 2013 11:21:01
Subject: Re: [pve-devel] Storage migration: online design solution
On 01-10-2013 11:19, datanom.net wrote:
tune2fs -U random /dev/sdXY
And to use an existing one:
tune2fs -U UUID /dev/sdXY
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bi
On 01-10-2013 11:11, Alexandre DERUMIER wrote:
Hmm, yes, this seems normal because this is a new disk.
I don't know if we can force the UUID of the disk; I'll do some research
about this.
tune2fs -U random /dev/sdXY
----- Original message -----
From: "Michael Rasmussen"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday 10 January 2013 10:23:23
Subject: Re: [pve-devel] Storage migration: online design solution
>"block-job-complete", device =>
>"drive-$drive");
>
>----- Original message -----
>
>From: "Michael Rasmussen"
>To: pve-devel@pve.proxmox.com
>Sent: Wednesday 9 January 2013 22:12:47
>Subject: Re: [pve-devel] Storage migration: online design solution
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 22:12:47
Subject: Re: [pve-devel] Storage migration: online design solution
On Wed, 9 Jan 2013 22:01:42 +0100
Michael Rasmussen wrote:
(sheepdog, other
storage API)
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 22:20:08
Subject: Re: [pve-devel] Storage migration: online design solution
On Wed, 9 Jan 2013 22:12:47 +0100
Michael Rasmussen wrote:
> Found out that it has to be a URI: /dev/pve-storage2_vg/vm-102-disk-1
> works:-)
>
And another observation: For LVM, or maybe any block device, the device
needs to be created before executing drive-mirror
# drive_mirror -f drive-virti
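That observation can be made concrete: for an LVM target, the logical volume has to exist before the monitor's drive_mirror can be pointed at it. A sketch that only assembles the two commands; the volume group, LV name, size, and drive name are illustrative.

```python
def lvm_mirror_commands(vg, lv, size_gb, drive):
    """Commands for mirroring onto an LVM target: the logical volume must
    be created before drive-mirror can open it as a raw block device.
    A sketch; names and size are illustrative placeholders."""
    target = f"/dev/{vg}/{lv}"
    return [
        # 1) pre-create the target LV (block devices are not auto-created)
        f"lvcreate -n {lv} -L {size_gb}G {vg}",
        # 2) then start the mirror job in the QEMU monitor
        f"drive_mirror -f {drive} {target}",
    ]
```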
On Wed, 09 Jan 2013 12:05:05 +0100 (CET)
Alexandre DERUMIER wrote:
>
> #drive_mirror -n -f drive-virtio0 sheepdog:127.0.0.1:7000:vm-144-disk-1
>
# info block
drive-virtio2: removable=0 io-status=ok
file=/dev/pve-storage1_vg/vm-102-disk-1 ro=0 drv=raw encrypted=0 bps=0
bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
On Wed, 9 Jan 2013 18:21:54 +0000
Dietmar Maurer wrote:
>
> Hopefully you connect to the storage using a trusted network.
>
Exactly. Also, according to the Proxmox documentation, as with all
other hypervisors, host and storage should be kept on a dedicated,
secured network.
--
Hilsen/Regards
> But then again why should we use encryption? I see no difference between
> using a remote block device today and the way drive-mirror does its job. And
> connections to remote block devices today are not encrypted either.
Hopefully you connect to the storage using a trusted network.
On Wed, 9 Jan 2013 18:47:43 +0100
Michael Rasmussen wrote:
>
> Only outstanding issue is about encryption. My proposal will be to have
> an option in the GUI for choosing to tunnel the migration through a ssh
> tunnel since this is already implemented in the current Proxmox code
> base? But I do
On Wed, 09 Jan 2013 18:35:24 +0100 (CET)
Alexandre DERUMIER wrote:
> >>I cannot see why it is required to migrate the VM also? Maybe libvirt
> >>is having another agenda than my proposal?
>
> Maybe I don't understand what you want to do ???
>
I get your point. The storage model in Proxmox gua
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 17:55:16
Subject: Re: [pve-devel] Storage migration: online design solution
On Wed, 09 Jan 2013 09:24:15 +0100 (CET)
Alexandre DERUMIER wrote:
>
>
> Maybe I'm wrong, but I don't think we need nbd for storage migrate on the
> same host. (just use new device/file as target option of qmp mirror)
>
That was also what I meant. NBD is only required if you need to migrate
to
should use "block-job-complete" qmp command
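As a sketch, the completion step amounts to two QMP payloads: poll query-block-jobs until the mirror is ready, then issue block-job-complete for the drive. Plain dicts here; a real client would send them over the QMP socket, and the helper name is hypothetical.

```python
import json

def mirror_completion_cmds(drive):
    """QMP commands to finish a drive-mirror job: query the job list
    until it is ready, then switch over with block-job-complete.
    Sketch of the sequence discussed above; not taken from QemuServer.pm."""
    query = {"execute": "query-block-jobs"}
    complete = {
        "execute": "block-job-complete",
        # device naming follows the thread's drive-$drive convention
        "arguments": {"device": f"drive-{drive}"},
    }
    return json.dumps(query), json.dumps(complete)
```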
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 02:23:11
Subject: [pve-devel] Storage migration: online design solution
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 02:23:11
Subject: [pve-devel] Storage migration: online design solution
chive.com/libvir-list@redhat.com/msg68464.html
----- Original message -----
From: "Dietmar Maurer"
To: "Alexandre DERUMIER", "Michael Rasmussen"
Cc: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 10:15:17
Subject: RE: [pve-devel] Storage migration: online design solution
You can have a look at the QMP resize command in the Proxmox code (QemuServer.pm); it's
using the drive device as argument.
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 07:39:06
Subject: Re: [pve-devel] Storage migration: online design solution
> 3) libvirt starts the destination QEMU and sets up the NBD server using the
> nbd-server-start and nbd-server-add commands.
They transfer data without encryption?
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 07:39:06
Subject: Re: [pve-devel] Storage migration: online design solution
vm (drive-reopen (?))
stop nbd server
This is the way it is implemented in libvirt.
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 07:44:46
Subject: Re: [pve-devel] Storage migration: online design solution
On Wed, 09 Jan 2013 03:55:22 +0100 (CET)
Alexandre DERUMIER wrote:
>
> so vm migrate must occur after drive-mirror,
> maybe after drive-reopen ?
> But before nbd server stop
>
See my other reply.
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
http://pgp
On Wed, 09 Jan 2013 03:49:21 +0100 (CET)
Alexandre DERUMIER wrote:
>
> target can be file or device (so I think it's for migrating between 2 storages
> available on the same host)
>
Yes, and this is where NBD comes into play.
>
> So, if you use nbd server as target, is it for migrating storage betwe
On Wed, 09 Jan 2013 03:34:01 +0100 (CET)
Alexandre DERUMIER wrote:
>
> drive-virtioX
> drive-ideX
> drive-scsiX
> drive-sataX
>
I am most certain I have tried that and was rewarded with a 'Device not
found' error.
On Wed, 09 Jan 2013 03:34:01 +0100 (CET)
Alexandre DERUMIER wrote:
>
> Does it work also with rbd, sheepdog, lvm? (maybe these are the same formats
> as qemu-img?)
>
Yes, it works for any block or file device supported by QEMU.
----- Original message -----
From: "Alexandre DERUMIER"
To: "Michael Rasmussen"
Cc: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 03:49:21
Subject: Re: [pve-devel] Storage migration: online design solution
one other question:
I'm reading the qmp doc
# @drive-mirror
#
# Start mirroring a block device's
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 02:23:11
Subject: [pve-devel] Storage migration: online design solution
drive names under Proxmox are:
drive-virtioX
drive-ideX
drive-scsiX
drive-sataX
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Wednesday 9 January 2013 02:23:11
Subject: [pve-devel] Storage migration: online design solution
Hi all,
Doing online storage migration will involve the following phases:
Phase 1) Create remote block device
Phase 2) Connect this block device to NBD (nbd_server_add [-w] device)
Phase 3) Start nbd_server (nbd_server_start [-a] [-w] host:port)
Phase 4) "drive-mirror", "arguments": {"device": "id
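The phases above can be sketched as QMP payloads. Host, port, and device names are placeholders, and note that in QMP the server is started with nbd-server-start before exports are added with nbd-server-add; the helper name is hypothetical.

```python
def nbd_migration_cmds(device, host, port):
    """QMP command sequence for the phases above: start an NBD server on
    the destination, export the pre-created block device, then mirror
    onto it from the source. A sketch; argument shapes follow the QMP
    documentation, host/port/device are placeholders."""
    return [
        # destination: start the NBD server
        {"execute": "nbd-server-start",
         "arguments": {"addr": {"type": "inet",
                                "data": {"host": host, "port": str(port)}}}},
        # destination: export the target device, writable
        {"execute": "nbd-server-add",
         "arguments": {"device": device, "writable": True}},
        # source: mirror the drive onto the NBD export;
        # mode=existing because the target device was created beforehand
        {"execute": "drive-mirror",
         "arguments": {"device": device,
                       "target": f"nbd:{host}:{port}:exportname={device}",
                       "sync": "full", "mode": "existing", "format": "raw"}},
    ]
```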
On Mon, 07 Jan 2013 18:41:28 +0100 (CET)
Alexandre DERUMIER wrote:
> If I remember,
>
> they are 2 qmp commands:
>
I was reading some of the discussions on the qemu mailing list and if I
remember correctly you are right. But they were referring to this as two
steps or phases. Phase 1 is the migr
rate.pm
you can find qmp command doc here:
http://git.qemu.org/?p=qemu.git;a=blob_plain;f=qmp-commands.hx;hb=HEAD
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Monday 7 January 2013 17:58:56
Subject: Re: [pve-devel] Storage migration: 1.0rc1
On Mon, 7 Jan 2013 07:02:34 +0000
Dietmar Maurer wrote:
>
> I can't see that in your patch (Such a thing needs to use the qmp block migrate
> commands).
>
>
I was not aware that this was available before 1.4.
How do you initiate QMP from Perl? (It uses a REST API, does it not?)
Can you poin
> > Many thanks for the patch. I am not sure how that fits into my
> > picture, but I will take a closer look when I integrate the copy/clone
> > patches
> > from Alexandre.
> >
> Consider it a first step towards full vMotion in Proxmox
I can't see that in your patch (Such a thing needs to use the qmp
To: "Michael Rasmussen", pve-devel@pve.proxmox.com
Sent: Monday 7 January 2013 06:36:11
Subject: Re: [pve-devel] Storage migration: 1.0rc1
> I have finally completed this module:-)
Many thanks for the patch. I am not sure how that fits into my picture, but I
will
take a closer look when
On Mon, 7 Jan 2013 05:36:11 +0000
Dietmar Maurer wrote:
>
> Many thanks for the patch. I am not sure how that fits into my picture, but I
> will
> take a closer look when I integrate the copy/clone patches from Alexandre.
>
Consider it a first step towards full vMotion in Proxmox. In VMware
> I have finally completed this module:-)
Many thanks for the patch. I am not sure how that fits into my picture, but I
will
take a closer look when I integrate the copy/clone patches from Alexandre.
- Dietmar
Hi all,
I have finally completed this module:-)
I have done extensive testing migrating 40-50 different storages and
have not found a single bug yet - this of course does not guarantee
that the module does not have bugs, just that I have not been able to
find them yet.
Attached you will find:
the module -
On Thu, 20 Dec 2012 08:27:12 +0100
Michael Rasmussen wrote:
> Didn't know that. I will try later today.
>
I have done a number of tests using various bs with dd. My tests show
that using qemu-img convert -t writeback makes qemu-img convert
outperform dd, doing the job roughly 15% faster:-)
On Thu, 20 Dec 2012 04:47:45 +0000
Dietmar Maurer wrote:
>
> Do you pass correct cache setting?
>
> # qemu-img convert -t writeback ...
>
Didn't know that. I will try later today.
> > maybe you can try with "qemu-img convert" (this is what I use in my
> > test code, you can also use qcow2 or any storage as input)
> >
> dd is 2-3 times faster than "qemu-img convert" given the right options.
Do you pass correct cache setting?
# qemu-img convert -t writeback ...
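For reference, the full invocation shape discussed above can be assembled like this; a sketch where the paths and formats are placeholders, and the helper name is hypothetical.

```python
def qemu_img_convert_cmd(src, dst, out_fmt="raw", cache="writeback"):
    """Assemble a 'qemu-img convert' invocation with an explicit target
    cache mode (-t) and output format (-O), the tuning discussed above.
    Paths and formats are illustrative placeholders."""
    return ["qemu-img", "convert", "-t", cache, "-O", out_fmt, src, dst]
```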
On Tue, 18 Dec 2012 09:15:54 +0100 (CET)
Alexandre DERUMIER wrote:
>
> So I think we need to use "qemu-img convert" as default,
> then if needed (faster method?), implemented specific copy sub for each
> pve-storage plugin,depend of the source/destination
>
>
qemu-img convert is painfully slow
> So I think we need to use "qemu-img convert" as default, then if needed
> (a faster method?), implement a specific copy sub for each pve-storage
> plugin, depending on the source/destination
No - we need to optimize "qemu-img convert" instead - can't be that hard.
>>Hi all,
Hi Michael,
I'll submit today (in some hours) my work on VM template-cloning.
The new version includes linked clones and also clone copy (aka disk copy).
Disk copy is already working with all storages/formats, using qemu-img
convert for now.
So maybe you can extend it, or use it as a sample.
On Mon, 17 Dec 2012 18:31:05 +0100 (CET)
Alexandre DERUMIER wrote:
> maybe you can try with "qemu-img convert" (this is what I use in my test
> code, you can also use qcow2 or any storage as input)
>
dd is 2-3 times faster than "qemu-img convert" given the right options.
Hi all,
I have designed the conceptual solution for off-line storage migration
as can be read below (also attached for better readability). Every
use case has been tested from the command line and found working. Do
you have any comments, or have I left something out?
Storage migration
Two distinc
> 1) Copies the entire block device bit by bit even if bits are zero
> 2) Painfully slow due to 1)
But 1 is needed, because LVM does not initialize new volumes with zero.
To: pve-devel@pve.proxmox.com
Sent: Monday 17 December 2012 18:08:26
Subject: [pve-devel] Storage migration: LVM copy
Hi all,
Storage migration to an LVM volume means, for all I know, using dd
if=current_image of=new_image bs=1M. For two reasons I wish there were
some other way of doing it:
1) Copies the entire block device bit by bit even if bits are zero
2) Painfully slow due to 1)
Any other, faster, way of doing it?
----- Original message -----
From: "Michael Rasmussen"
To: pve-devel@pve.proxmox.com
Sent: Monday 17 December 2012 01:01:45
Subject: [pve-devel] Storage migration
Hi Alexandre,
You mentioned some time ago that you have provided some patches for qm
copy which could be used for storage migration. However, I have not
been able to locate the code anywhere in pve-storage. I wonder if you
in fact mean the volume_clone patch, and if so, these patches seem not
to be ha
> Is anybody at the moment working on implementing storage migration?
No, because there are too many other pending things we need to solve first.
Also, you need to define the term 'storage migration'.
Maybe you simply want:
# vzdump |qmrestore - --storage
Hi all,
Just a question before I will throw in some hard work:-)
Is anybody at the moment working on implementing storage migration?
If the answer is no I will start developing this feature. In my opinion
most of the basic stuff is already present since I see storage
migration as a special kind