Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-14 Thread Jasmin J.

Hi!

> I'll try to work on this next week.
May I ask you to consider a new API between Storage and Plugin while merging
this. It might be very useful for a plugin to know when a migration starts and
when it has finished, so that the plugin can issue additional commands to the
particular storage driver. And a plugin needs to know this on both sides of the
migration.

BR,
   Jasmin
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-13 Thread Alexandre DERUMIER
>>New cases can be added later. The code I'm working on is an independent 
>>tool which could then be integrated into the storage import/export API 
>>after that API has been introduced. 

Ah ok.

I'll try to work on this next week.


- Mail original -
De: "Wolfgang Bumiller" 
À: "aderumier" 
Cc: "dietmar" , "pve-devel" 
Envoyé: Vendredi 13 Janvier 2017 11:56:41
Objet: Re: [pve-devel] [PATCH] add with-local-disks option for live storage 
migration

On Tue, Jan 10, 2017 at 08:19:29AM +0100, Alexandre DERUMIER wrote: 
> Thanks Wolfgang, 
> 
> 
> >>Basically storage plugins should have to define 
> >>- a set of formats they can export as and import from 
> >>- whether these formats can include snapshots 
> >>- a priority 
> 
> that's exactly what I have in mind. 

In that case... (see below) 

> 
> >>- possibly other things? (sparse/zero-detection/local special cases to 
> >>use 'cp'...?) 
> 
> I think we could also add special cases, like ceph rbd copy for example. 
> 
> 
> >>(I'll start polishing up the documentation and push some code into a 
> >>git repo soon-ish...) 
> 
> OK, great. I'll wait a little bit for your patches. 

There's no need to wait if you're interested and have the time to work 
on this part. The pve-storage part itself can actually be worked on 
already, and the cases we have now could be converted. 

New cases can be added later. The code I'm working on is an independent 
tool which could then be integrated into the storage import/export API 
after that API has been introduced. 



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-13 Thread Wolfgang Bumiller
On Tue, Jan 10, 2017 at 08:19:29AM +0100, Alexandre DERUMIER wrote:
> Thanks Wolfgang,
> 
> 
> >>Basically storage plugins should have to define 
> >>- a set of formats they can export as and import from 
> >>- whether these formats can include snapshots 
> >>- a priority 
> 
> that's exactly what I have in mind.

In that case... (see below)

> 
> >>- possibly other things? (sparse/zero-detection/local special cases to 
> >>use 'cp'...?) 
> 
> I think we could also add special cases, like ceph rbd copy for example.
> 
> 
> >>(I'll start polishing up the documentation and push some code into a 
> >>git repo soon-ish...) 
> 
> OK, great. I'll wait a little bit for your patches.

There's no need to wait if you're interested and have the time to work
on this part. The pve-storage part itself can actually be worked on
already, and the cases we have now could be converted.

New cases can be added later. The code I'm working on is an independent
tool which could then be integrated into the storage import/export API
after that API has been introduced.



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-09 Thread Alexandre DERUMIER
Thanks Wolfgang,


>>Basically storage plugins should have to define 
>>- a set of formats they can export as and import from 
>>- whether these formats can include snapshots 
>>- a priority 

that's exactly what I have in mind.

>>- possibly other things? (sparse/zero-detection/local special cases to 
>>use 'cp'...?) 

I think we could also add special cases, like ceph rbd copy for example.


>>(I'll start polishing up the documentation and push some code into a 
>>git repo soon-ish...) 

OK, great. I'll wait a little bit for your patches.

Thanks for the hard work on this!


- Mail original -
De: "Wolfgang Bumiller" 
À: "aderumier" 
Cc: "dietmar" , "pve-devel" 
Envoyé: Lundi 9 Janvier 2017 12:36:12
Objet: Re: [pve-devel] [PATCH] add with-local-disks option for live storage 
migration

On Sat, Jan 07, 2017 at 03:16:22PM +0100, Alexandre DERUMIER wrote: 
> >>I think wolfgang has some experimental code to implement 
> >>kind of send/receive for most storage types .. I guess this 
> >>could be useful here. 
> 
> maybe it could be great to have something in pve-storage plugins. 
> 
> for example, qcow2 -> qcow2 uses rsync to keep snapshots, zfs -> zfs uses zfs 
> send|receive to keep snapshots, qcow2 -> zfs uses qemu-img + nbd (and loses 
> snapshots), 
> 
> Currently we have a big PVE::Storage::storage_migrate, with a lot of 
> conditions for different plugins. 
> I think it would be better to move the code into each plugin. 

Yes, this function should die ;-) 

If you're interested in working on this: 
The idea of a generic 'import/export' (or send/receive) interface has 
been floating around and I think we should start working on this as it 
will not only untangle that huge spaghetti if/else function but also 
allow easier maintenance and improvements: 

Basically storage plugins should have to define 
- a set of formats they can export as and import from 
- whether these formats can include snapshots 
- a priority 
- possibly other things? (sparse/zero-detection/local special cases to 
use 'cp'...?) 

When a disk is to be migrated, the source storage's 'export' formats are 
matched against the destination storage's 'import' formats, the "best" 
one they both have in common will be used, taking into account whether 
snapshots should be included or not. 

Every storage would have to at least support the 'raw' type - a simple 
'dd' stream without snapshots, where the receiving end would probably 
use a 4k block size with sparse detection. 

Naturally zfs would define the 'zfs' type which would be the best choice 
if both storages are zfs - and would use zfs-send/receive. (Likewise 
btrfs would have the 'btrfs' type...) 

As for the experimental code Dietmar mentioned: 
I'm currently working on an experimental tool implementing a sort of 
send/receive - or more accurately a copy-on-write/clone/dedup and 
sparse aware streaming archive. 
In theory it can already restore *to* every type of storage we have, and 
I can currently read *from* qcow2 files, lvm thin volumes and raw files 
from btrfs/xfs into such a stream *with* snapshots. (For qcow2 I have a 
qemu-img patch, for lvm-thin I read the thin-pool's metadata to get the 
allocated blocks.) 
I've done some successful tests lately moving VMs + snapshots between 
qcow2 and lvm-thin, and I've also moved them to ZFS zvols. 
The big question is how many storage types I'll be able to cover, we'll 
just have to wait and see ;-). 
(I'll start polishing up the documentation and push some code into a 
git repo soon-ish...) 



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-09 Thread Wolfgang Bumiller
On Sat, Jan 07, 2017 at 03:16:22PM +0100, Alexandre DERUMIER wrote:
> >>I think wolfgang has some experimental code to implement 
> >>kind of send/receive for most storage types .. I guess this 
> >>could be useful here. 
> 
> maybe it could be great to have something in pve-storage plugins.
> 
> for example, qcow2 -> qcow2 uses rsync to keep snapshots, zfs -> zfs uses zfs 
> send|receive to keep snapshots, qcow2 -> zfs uses qemu-img + nbd (and loses 
> snapshots), 
> 
> Currently we have a big PVE::Storage::storage_migrate, with a lot of 
> conditions for different plugins.
> I think it would be better to move the code into each plugin.

Yes, this function should die ;-)

If you're interested in working on this:
The idea of a generic 'import/export' (or send/receive) interface has
been floating around and I think we should start working on this as it
will not only untangle that huge spaghetti if/else function but also
allow easier maintenance and improvements:

Basically storage plugins should have to define
 - a set of formats they can export as and import from
 - whether these formats can include snapshots
 - a priority
 - possibly other things? (sparse/zero-detection/local special cases to
   use 'cp'...?)

When a disk is to be migrated, the source storage's 'export' formats are
matched against the destination storage's 'import' formats, the "best"
one they both have in common will be used, taking into account whether
snapshots should be included or not.
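
The matching described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual pve-storage Perl API; the `Format` record, the plugin format names, and the priorities are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Format:
    name: str
    with_snapshots: bool
    priority: int  # higher is preferred when both sides support it

def negotiate(export_formats, import_formats, need_snapshots):
    """Pick the best format the source can export and the target can import."""
    importable = {f.name for f in import_formats}
    common = [f for f in export_formats
              if f.name in importable
              and (f.with_snapshots or not need_snapshots)]
    return max(common, key=lambda f: f.priority, default=None)

# Every storage supports at least 'raw': a plain dd-style stream, no snapshots.
RAW = Format("raw", with_snapshots=False, priority=0)
ZFS = Format("zfs", with_snapshots=True, priority=10)  # zfs send/receive

best = negotiate([RAW, ZFS], [RAW, ZFS], need_snapshots=True)
# best is the 'zfs' format here; against a raw-only target (or without
# the snapshot requirement) the negotiation would fall back to 'raw'.
```

With a scheme like this, each plugin only declares what it supports, and the generic migration code picks the transport, instead of one big if/else in storage_migrate.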

Every storage would have to at least support the 'raw' type - a simple
'dd' stream without snapshots, where the receiving end would probably
use a 4k block size with sparse detection.
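
A minimal sketch of such a receiving end with sparse detection (illustrative Python, not the real implementation; it assumes full 4k reads for brevity):

```python
import os

BLOCK_SIZE = 4096
ZERO_BLOCK = bytes(BLOCK_SIZE)

def write_sparse(stream_fd, out_fd):
    """Copy a raw stream to out_fd, seeking over all-zero 4k blocks so
    the destination file stays sparse (assumes full-block reads)."""
    while True:
        block = os.read(stream_fd, BLOCK_SIZE)
        if not block:
            break
        if block == ZERO_BLOCK:
            os.lseek(out_fd, len(block), os.SEEK_CUR)  # leave a hole
        else:
            os.write(out_fd, block)
    # Make sure a trailing hole still counts towards the file size.
    os.ftruncate(out_fd, os.lseek(out_fd, 0, os.SEEK_CUR))
```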

Naturally zfs would define the 'zfs' type which would be the best choice
if both storages are zfs - and would use zfs-send/receive. (Likewise
btrfs would have the 'btrfs' type...)

As for the experimental code Dietmar mentioned:
I'm currently working on an experimental tool implementing a sort of
send/receive - or more accurately a copy-on-write/clone/dedup and
sparse aware streaming archive.
In theory it can already restore *to* every type of storage we have, and
I can currently read *from* qcow2 files, lvm thin volumes and raw files
from btrfs/xfs into such a stream *with* snapshots. (For qcow2 I have a
qemu-img patch, for lvm-thin I read the thin-pool's metadata to get the
allocated blocks.)
I've done some successful tests lately moving VMs + snapshots between
qcow2 and lvm-thin, and I've also moved them to ZFS zvols.
The big question is how many storage types I'll be able to cover, we'll
just have to wait and see ;-).
(I'll start polishing up the documentation and push some code into a
git repo soon-ish...)



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-07 Thread Alexandre DERUMIER
>>I think wolfgang has some experimental code to implement 
>>kind of send/receive for most storage types .. I guess this 
>>could be useful here. 

maybe it could be great to have something in pve-storage plugins.

for example, qcow2 -> qcow2 uses rsync to keep snapshots, zfs -> zfs uses zfs 
send|receive to keep snapshots, qcow2 -> zfs uses qemu-img + nbd (and loses 
snapshots), 

Currently we have a big PVE::Storage::storage_migrate, with a lot of conditions 
for different plugins.
I think it would be better to move the code into each plugin.



- Mail original -
De: "dietmar" 
À: "aderumier" 
Cc: "pve-devel" 
Envoyé: Vendredi 6 Janvier 2017 16:49:33
Objet: Re: [pve-devel] [PATCH] add with-local-disks option for live storage 
migration

> >>No. But my question was more like: why not? 
> >>I guess this is not really an important feature, because users 
> >>can migrate a single disk to another storage anyway... 
> 
> I'll try to look at this. 
> 
> it could be useful too for fully cloning a template with local disks to a 
> remote node. 

I think wolfgang has some experimental code to implement 
kind of send/receive for most storage types .. I guess this 
could be useful here. 



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-06 Thread Dietmar Maurer
> >>No. But my question was more like: why not? 
> >>I guess this is not really an important feature, because users 
> >>can migrate a single disk to another storage anyway... 
> 
> I'll try to look at this.
> 
> it could be useful too for fully cloning a template with local disks to a
> remote node.

I think wolfgang has some experimental code to implement
kind of send/receive for most storage types ..  I guess this
could be useful here.



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-06 Thread Alexandre DERUMIER
>>No. But my question was more like: why not? 
>>I guess this is not really an important feature, because users 
>>can migrate a single disk to another storage anyway... 

I'll try to look at this.

it could be useful too for fully cloning a template with local disks to a 
remote node.

- Mail original -
De: "dietmar" 
À: "aderumier" 
Cc: "pve-devel" 
Envoyé: Vendredi 6 Janvier 2017 12:51:58
Objet: Re: [pve-devel] [PATCH] add with-local-disks option for live storage 
migration

> >>But I wonder why targetstorage only works for online migration?? 
> 
> do we currently allow offline migration between different types of storage? 
> (zfs -> lvm, lvm -> qcow2?). 

No. But my question was more like: why not? 
I guess this is not really an important feature, because users 
can migrate a single disk to another storage anyway... 



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-06 Thread Dietmar Maurer
> >>But I wonder why targetstorage only works for online migration??
> 
> do we currently allow offline migration between different types of storage?
> (zfs -> lvm, lvm -> qcow2?).

No. But my question was more like: why not?
I guess this is not really an important feature, because users
can migrate a single disk to another storage anyway...



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-06 Thread Alexandre DERUMIER
>>But I wonder why targetstorage only works for online migration??

do we currently allow offline migration between different types of storage? 
(zfs -> lvm, lvm -> qcow2?).

It could be implemented with an NBD server on the remote node + qemu-img convert 
to NBD.
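
Roughly, that idea could look like the following sketch, which only builds the two command lines. The volume paths are invented for the example, the destination volume would have to be allocated first, and this is not actual PVE code:

```python
def offline_migrate_cmds(src_path, remote_host, dst_path, port=10809):
    """Build the two command lines for an offline cross-storage copy."""
    # On the target node: export the pre-allocated destination volume over NBD.
    serve = ["qemu-nbd", "--port", str(port), "--format", "raw", dst_path]
    # On the source node: stream the local image into the export.
    # -n: don't try to create the target; -O raw: an NBD export is a raw device.
    convert = ["qemu-img", "convert", "-n", "-O", "raw",
               src_path, f"nbd://{remote_host}:{port}"]
    return serve, convert

# Hypothetical example: moving a local LVM disk to a zvol on node2.
serve, convert = offline_migrate_cmds("/dev/pve/vm-100-disk-1",
                                      "node2", "/dev/zvol/rpool/vm-100-disk-1")
```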



- Mail original -
De: "dietmar" 
À: "aderumier" , "pve-devel" 
Envoyé: Vendredi 6 Janvier 2017 12:25:40
Objet: Re: [pve-devel] [PATCH] add with-local-disks option for live storage 
migration

Applied, thanks! 

But I wonder why targetstorage only works for online migration?? 


> On January 6, 2017 at 10:15 AM Alexandre Derumier  
> wrote: 
> 
> 
> As Fabian has requested, 
> add an extra flag "with-local-disks" to enable live storage migration with 
> local disks. 
> 
> The default target storage is the same sid as the source; this can be 
> overridden with the "targetstorage" option. 
> 
> I will try to improve this later, with optional disk-by-disk mapping. 
> 
> Signed-off-by: Alexandre Derumier  
> --- 
> PVE/API2/Qemu.pm | 12 +++++++++--- 
> PVE/QemuMigrate.pm | 1 + 
> 2 files changed, 10 insertions(+), 3 deletions(-) 
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm 
> index 288a9cd..33b8f5a 100644 
> --- a/PVE/API2/Qemu.pm 
> +++ b/PVE/API2/Qemu.pm 
> @@ -2723,10 +2723,16 @@ __PACKAGE__->register_method({ 
> description => "CIDR of the (sub) network that is used for migration.", 
> optional => 1, 
> }, 
> - targetstorage => get_standard_option('pve-storage-id', { 
> - description => "Target storage.", 
> + "with-local-disks" => { 
> + type => 'boolean', 
> + description => "Enable live storage migration for local disk", 
> optional => 1, 
> - }), 
> + }, 
> + targetstorage => get_standard_option('pve-storage-id', { 
> + description => "Default target storage.", 
> + optional => 1, 
> + completion => \&PVE::QemuServer::complete_storage, 
> + }), 



Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

2017-01-06 Thread Dietmar Maurer
Applied, thanks!

But I wonder why targetstorage only works for online migration??


> On January 6, 2017 at 10:15 AM Alexandre Derumier  wrote:
> 
> 
> As Fabian has requested,
> add an extra flag "with-local-disks" to enable live storage migration with
> local disks.
> 
> The default target storage is the same sid as the source; this can be
> overridden with the "targetstorage" option.
> 
> I will try to improve this later, with optional disk-by-disk mapping.
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Qemu.pm   | 12 +++++++++---
>  PVE/QemuMigrate.pm |  1 +
>  2 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 288a9cd..33b8f5a 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -2723,10 +2723,16 @@ __PACKAGE__->register_method({
>   description => "CIDR of the (sub) network that is used for 
> migration.",
>   optional => 1,
>   },
> - targetstorage => get_standard_option('pve-storage-id', {
> - description => "Target storage.",
> + "with-local-disks" => {
> + type => 'boolean',
> + description => "Enable live storage migration for local disk",
>   optional => 1,
> - }),
> + },
> +targetstorage => get_standard_option('pve-storage-id', {
> + description => "Default target storage.",
> + optional => 1,
> + completion => \&PVE::QemuServer::complete_storage,
> +}),
