Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi Dietmar,

sorry for the late reply.

On 17.12.2012 07:49, Dietmar Maurer wrote:

But shared disks are really useful; I regularly use them for web clusters or
database clusters.


Can you provide some details? What database or what web server is able to
use such a shared disk?


The web server or app doesn't matter. It's the filesystem which handles this.
There are a lot of cluster filesystems out there.


Two examples:
- Oracle's OCFS2
- Red Hat's GFS2


Maybe Stefan has a clear idea how to implement shared disks?

I guess we need to have special URIs for shared volumes, for example

store1:/0/vm-0-disk-1.raw

(owner is VM 0). But I am not sure if that is a good idea.


The idea is to have entries like this one:
shared_scsi1:vm-117-disk-5
shared_virtio2:vm-117-disk-9

We don't need the path, as the PVE code always relies on the vm-(\d+)
number. So my idea was to do this here too.
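
A minimal sketch of how the owning VMID could be derived from a volume name
(assuming the usual vm-<VMID>-disk-<N> naming; the helper name is hypothetical):

sub owner_vmid {
    my ($volname) = @_;
    # 'vm-117-disk-5' or 'store1:vm-117-disk-5' => 117
    return $volname =~ m/(?:^|[:\/])vm-(\d+)-disk-\d+/ ? $1 : undef;
}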


We need to change the code to get the next free controller id (scsiX / 
virtioX / ...).
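
Something like this could do it (a sketch; $conf is assumed to be the parsed
VM config hash, $max the slot limit for the controller type, and it also skips
the proposed shared_ entries):

sub next_free_index {
    my ($conf, $controller, $max) = @_;
    for (my $i = 0; $i < $max; $i++) {
        # a slot is free if neither a normal nor a shared_ entry uses it
        return $i if !defined($conf->{"$controller$i"})
                  && !defined($conf->{"shared_$controller$i"});
    }
    return undef; # no free slot left
}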


The advantage of the shared_ entries is that the snapshot code, backup code,
... do not know about these disks, so they won't try to do backups /
snapshots of this disk.


A problem arises if I do a snapshot with memory/RAM: it would also need the
old state of this shared disk.


So we need to block backups and snapshots with memory included for these 
machines.
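
The guard could be as simple as this sketch (names hypothetical; it just
scans the config keys for the proposed shared_ entries):

sub assert_no_shared_disks {
    my ($conf) = @_;
    foreach my $key (keys %$conf) {
        die "operation not allowed: config contains shared disk '$key'\n"
            if $key =~ m/^shared_(?:ide|sata|scsi|virtio)\d+$/;
    }
}

The snapshot-with-RAM and backup paths would then call this before starting.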


Other / better ideas?

Greets,
Stefan


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
 Can you provide some details? What database or what web server is able
 to use such a shared disk?
 
 webserver: all web servers (apache, nginx, ...) under Linux with a shared
 filesystem (ocfs2, gfs, ...).

Seems we are talking about different things!

Stefan wants to pass a file on GFS into several VMs.
IMHO, this is a safe way to destroy all data?


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi Dietmar,

On 17.12.2012 10:48, Dietmar Maurer wrote:

Can you provide some details? What database or what web server is able
to use such a shared disk?


webserver: all web servers (apache, nginx, ...) under Linux with a shared
filesystem (ocfs2, gfs, ...).


Seems we are talking about different things!

Stefan wants to pass a file on GFS into several VMs.
IMHO, this is a safe way to destroy all data?


What do I want? I want to share a disk between VMs using a cluster
filesystem on top of these disks. No data gets destroyed that way.
I'm totally confused right now.


Stefan




Re: [pve-devel] introduce linked disks

2012-12-17 Thread Alexandre DERUMIER
Seems we are talking about different things!

Stefan wants to pass a file on GFS into several VMs.
IMHO, this is a safe way to destroy all data?

No, I think we talk about the same thing.

Sharing a disk between VMs,
but of course the VMs need a cluster filesystem or an app (like SQL Server, for
example) which can manage the disk sharing.






Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
  webserver: all web servers (apache, nginx, ...) under Linux with a
  shared filesystem (ocfs2, gfs, ...).
 
  Seems we are talking about different things!
 
  Stefan wants to pass a file on GFS into several VMs.
  IMHO, this is a safe way to destroy all data?
 
 What do i want? I want to share a disk between VMs using a cluster
 filesystem on top of these disks. No data gets destroyed in that way.
 Totally confused right now.

Please can you describe exactly what you want to do? From what I see you
want to run GFS on the host and pass a file on GFS into the VM?

Or do you run GFS inside the guest?



Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi Dietmar,

On 17.12.2012 11:01, Dietmar Maurer wrote:
Please can you describe exactly what you want to do? From what I see you

want to run GFS on the host and pass a file on GFS into the VM?

No.


Or do you run GFS inside the guest?
Yes! I'm using ocfs2 but that doesn't matter. The host isn't touched by 
this. I'm using a cluster fs INSIDE guests.


So X guests can share the same disk and data.
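
To illustrate with the notation proposed earlier in this thread (VMIDs are
made up):

VM 117 (owner): virtio2: store1:vm-117-disk-9
VM 118:         shared_virtio2: vm-117-disk-9
VM 119:         shared_virtio3: vm-117-disk-9

All three guests then put a cluster filesystem on that one volume.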

Stefan



Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
  Or do you run GFS inside the guest?
 Yes! I'm using ocfs2 but that doesn't matter. The host isn't touched by this.
 I'm using a cluster fs INSIDE guests.
 
 So X guests can share the same disk and data.

That makes more sense now ;-)



Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi,

On 17.12.2012 11:06, Dietmar Maurer wrote:

Or do you run GFS inside the guest?

Yes! I'm using ocfs2 but that doesn't matter. The host isn't touched by this.
I'm using a cluster fs INSIDE guests.

So X guests can share the same disk and data.


That makes more sense now ;-)


Sorry for the confusion - Alexandre is doing the same. Do you have ideas 
about implementation? What do you think about my last suggestion?


Greets,
Stefan



Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
  store1:/0/vm-0-disk-1.raw
 
  (owner is VM 0). But I am not sure if that is a good idea.
 
 The idea is to have entries like this one:
 shared_scsi1:vm-117-disk-5
 shared_virtio2:vm-117-disk-9
 
 We don't need the path, as the PVE code always relies on the vm-(\d+)
 number. So my idea was to do this here too.

So how do we detect that a volume is shared? When the storage name has the
prefix 'shared_'?


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi,
On 17.12.2012 11:08, Dietmar Maurer wrote:

store1:/0/vm-0-disk-1.raw

(owner is VM 0). But I am not sure if that is a good idea.


The idea is to have entries like this one:
shared_scsi1:vm-117-disk-5
shared_virtio2:vm-117-disk-9

We don't need the path, as the PVE code always relies on the vm-(\d+)
number. So my idea was to do this here too.


So how do we detect that a volume is shared? When the storage name has the
prefix 'shared_'?


We have two possibilities:
1.) Volume is shared FROM another guest:
shared_scsi1
So this is pretty easy to detect by the shared_ prefix.

2.) Volume is shared TO another guest:
I can imagine two possibilities here:
- add an option to the disk when the disk gets shared by another VM:
...,cache=writeback,shared=1
or
- loop through all VMs when we need to know this (see the sketch below).
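
The "loop through all VMs" variant could look like this sketch (assuming
VM configs live under /etc/pve/qemu-server; a real implementation would use
the existing config parser instead of a plain text match):

sub find_sharing_vmids {
    my ($volname) = @_;
    my @vmids;
    foreach my $file (glob('/etc/pve/qemu-server/*.conf')) {
        next unless $file =~ m/(\d+)\.conf$/;
        my $vmid = $1;
        open(my $fh, '<', $file) or next;
        my $content = do { local $/; <$fh> }; # slurp whole config
        close($fh);
        push @vmids, $vmid if $content =~ m/\Q$volname\E/;
    }
    return @vmids;
}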

Stefan


Re: [pve-devel] [PATCH] auto balloning with mom algorithm implementation

2012-12-17 Thread Dietmar Maurer
 Just committed the ballooning stats patches.
 Ok, thanks.
 Also added a fix so that we can set the polling interval at VM startup.
 Great!
 
 Any news on getting all stats values in one qom-get?

Just uploaded a patch for that.



Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
 On 17.12.2012 11:08, Dietmar Maurer wrote:
  store1:/0/vm-0-disk-1.raw
 
  (owner is VM 0). But I am not sure if that is a good idea.
 
  The idea is to have entries like this one:
  shared_scsi1:vm-117-disk-5
  shared_virtio2:vm-117-disk-9
 
  We don't need the path, as the PVE code always relies on the vm-(\d+)
  number. So my idea was to do this here too.
 
  So how do we detect that a volume is shared? When the storage name has
 the prefix 'shared_'?
 
 We have two possibilities:
 1.) Volume is shared FROM another guest:
 shared_scsi1
 So this is pretty easy to detect by the shared_ prefix.
 
 2.) Volume is shared TO another guest:
 I can imagine two possibilities here:
 - add an option to the disk when the disk gets shared by another VM:
 ...,cache=writeback,shared=1
 or
 - loop through all VMs when we need to know this.

That sounds clumsy. What is wrong with my proposal?


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi Dietmar,
On 17.12.2012 11:20, Dietmar Maurer wrote:

On 17.12.2012 11:08, Dietmar Maurer wrote:

store1:/0/vm-0-disk-1.raw

(owner is VM 0). But I am not sure if that is a good idea.


The idea is to have entries like this one:
shared_scsi1:vm-117-disk-5
shared_virtio2:vm-117-disk-9

We don't need the path, as the PVE code always relies on the vm-(\d+)
number. So my idea was to do this here too.


So how do we detect that a volume is shared? When the storage name has

the prefix 'shared_'?

We have two possibilities:
1.) Volume is shared FROM another guest:
shared_scsi1
So this is pretty easy to detect by the shared_ prefix.

2.) Volume is shared TO another guest:
I can imagine two possibilities here:
- add an option to the disk when the disk gets shared by another VM:
...,cache=writeback,shared=1
or
- loop through all VMs when we need to know this.


That sounds clumsy. What is wrong with my proposal?


So your idea is to prefix the controller (scsi, ide, virtio) on ALL 
guests. And the owner is just detected by the ID? (vm-$ID-disk-$I)


Stefan


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
 (owner is VM 0). But I am not sure if that is a good idea.
 I hadn't thought about it.
 
 I think that the master also needs to know where the disk is shared.
 Because if we do a snapshot rollback, for example, on the master, we need
 to stop all VMs where the disk is shared...
 So do we need to parse all VM configs for this?

I think we need to disable snapshot/rollback if there is a shared disk? 

Or simply make sure that all VMs using that disk are stopped - but that is not
easy to implement.



Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
 So your idea is to prefix the controller (scsi, ide, virtio) on ALL guests. 

No.

And the owner is just detected by the ID? (vm-$ID-disk-$I)

We already have an 'owner' for each volume (that is already implemented).

If (owner == 0) => shared disk
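
Read that way, shared-ness falls out of a comparison like this sketch (my
reading of the proposal; names hypothetical):

sub is_shared_volume {
    my ($vmid, $volname) = @_;
    # the owner is encoded in the volume name (vm-<VMID>-disk-<N>)
    my ($owner) = $volname =~ m/vm-(\d+)-disk-\d+/;
    # shared if the referencing VM is not the owner
    return defined($owner) && $owner != $vmid;
}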


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
 I like examples to be sure we're talking about the same thing. So you mean
 like this:
 
 VM 123
 
 scsi1: ...,vm-123-disk5,...
 owner = 1

no, owner = 123

 
 VM 124
 
 shared_scsi6: ...,vm-123-disk5,...
 owner = 0

owner = 123

 
 VM 125
 
 shared_scsi7: ...,vm-123-disk5,...
 owner = 0

owner = 123



Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
  We already have an 'owner' for each volume (that is already implemented).
 
 Ah OK, sorry, I didn't know that. How is that detected?
 
 
 I like examples to be sure we're talking about the same thing. So you mean
 like this:
 
 VM 123
 
 scsi1: ...,vm-123-disk5,...
 owner = 1

The 'owner' is not a boolean flag - it is a VMID instead.


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi,
On 17.12.2012 11:45, Dietmar Maurer wrote:

I like examples to be sure we're talking about the same thing. So you mean
like this:

VM 123
scsi1: ...,vm-123-disk5,...
owner = 1

no, owner = 123


OK, another question. Do we pass all params like cache, I/O limits... to
shared guests? Or should this be configurable in shared guests too? I
would like to keep it as simple as possible and would pass these
settings from the master guest to the shared guests.


Greets,
Stefan


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Dietmar Maurer
  OK, another question. Do we pass all params like cache, I/O limits... to
  shared guests? Or should this be configurable in shared guests too? I
  would like to keep it as simple as possible and would pass these settings
  from the master guest to the shared guests.
 
 
 With my suggestion, there is no master.

But I guess we should force cache=none for shared disks anyway?




Re: [pve-devel] introduce linked disks

2012-12-17 Thread Alexandre DERUMIER
 But I guess we should force cache=none for shared disks anyway?

Not sure about it, but I use directsync. (I'll retest it, but I think that
cache=none (writeback in the guest) doesn't allow ocfs2 to start.)
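
For illustration, the cache mode would then simply be pinned in the config
entry, e.g. (hypothetical, using the notation from earlier in the thread):

shared_virtio2: vm-117-disk-9,cache=directsync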





Re: [pve-devel] pve-manager : add hd resize feature

2012-12-17 Thread Dietmar Maurer
applied, thanks!

- Dietmar

 -Original Message-
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
 boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier
 Sent: Thursday, 13 December 2012 15:41
 To: pve-devel@pve.proxmox.com
 Subject: [pve-devel] pve-manager : add hd resize feature
 
 Please review, but I think it's clean.
 


Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

On 17.12.2012 11:58, Dietmar Maurer wrote:

OK, another question. Do we pass all params like cache, I/O limits... to shared
guests? Or should this be configurable in shared guests too? I would like to
keep it as simple as possible and would pass these settings from the master
guest to the shared guests.



With my suggestion, there is no master.


Sorry, I meant owner.

Stefan




Re: [pve-devel] introduce linked disks

2012-12-17 Thread Stefan Priebe - Profihost AG

Hi,

On 17.12.2012 12:04, Alexandre DERUMIER wrote:

But I guess we should force cache=none for shared disks anyway?


Not sure about it, but I use directsync. (I'll retest it, but I think that
cache=none (writeback in the guest) doesn't allow ocfs2 to start.)

Mhm, I would say the cache mode doesn't matter. The FS itself should always
use sync writes. But I'll check this too.


Stefan


[pve-devel] Storage migration: LVM copy

2012-12-17 Thread Michael Rasmussen
Hi all,

Migrating storage to an LVM volume means, as far as I know, using dd
if=current_image of=new_image bs=1M. For two reasons I wish there were
some other way of doing it:

1) It copies the entire block device bit by bit, even if the bits are zero.
2) It is painfully slow due to 1).

Any other, faster, way of doing this?


-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
<dark> Turns out that grep returns error code 1 when there are no
matches. I KNEW that. Why did it take me half an hour?
-- Seen on #Debian




Re: [pve-devel] Storage migration: LVM copy

2012-12-17 Thread Alexandre DERUMIER
Maybe you can try qemu-img convert (this is what I use in my test code;
you can also use qcow2 or any storage as input):

qemu-img convert -f raw -O host_device myfile.raw /dev/mylvmdevice




Re: [pve-devel] Storage migration: LVM copy

2012-12-17 Thread Dietmar Maurer
 1) It copies the entire block device bit by bit, even if the bits are zero.
 2) It is painfully slow due to 1).

But 1) is needed, because LVM does not initialize new volumes with zeros: if we
skipped the zero blocks, the new LV would expose stale data from previously
allocated extents.





[pve-devel] Storage migration: RFC

2012-12-17 Thread Michael Rasmussen
Hi all,

I have designed the conceptual solution for off-line storage migration,
as can be read below (also attached for better readability). Every
use case has been tested from the command line and found working. Do
you have any comments, or have I left something out?

Storage migration

Two distinct use cases - offline and online.
Three storage formats - raw, qcow2 and vmdk.
Two distinct environments - NFS and iSCSI (LVM).
The implementation preserves the machine on its current node.

Requirements: Since Proxmox does not provide thin provisioning, space
requirements are absolute; provided the storage is available when the initial
actions take place, we can initiate storage migration.

iSCSI: raw
NFS: raw, qcow2 and vmdk

iSCSI -> NFS
0) lvchange -ay $class->path($scfg, $volnamesrc)
1) lvchange -p r $class->path($scfg, $volnamesrc)
2) mkdir -p $class->path($scfg, $volnamedest)
 exception dir_exists: error if not empty
 exception could_not_create: error
3) qemu-img convert -p -O $class->format($scfg) $class->path($scfg, $volnamesrc) $class->path($scfg, $volnamedest).$class->format($scfg)
 exception could_not_create: error
 exception no_more_space: - rm -f $class->path($scfg, $volnamedest).$class->format($scfg)
  - error
4) $class->device($scfg, $volnamesrc): $class->path($scfg, $volnamedest).$class->format($scfg), cache=$class->cache($scfg, $volnamesrc), $class->size($scfg, $volnamesrc)
 exception write_error: - rm -f $class->path($scfg, $volnamedest).$class->format($scfg)
  - error
5) lvremove -f $class->path($scfg, $volnamesrc)

   error:
  - lvchange -p rw $class->path($scfg, $volnamesrc)
  - lvchange -an $class->path($scfg, $volnamesrc)

iSCSI -> iSCSI
0) lvchange -ay $class->path($scfg, $volnamesrc)
1) lvchange -p r $class->path($scfg, $volnamesrc)
2) lvcreate -L $class->size($scfg, $volnamesrc) -n $class->name($scfg) $class->storage($scfg, $volnamedest)
 exception could_not_create: error
3) dd if=$class->path($scfg, $volnamesrc) of=$class->path($scfg, $volnamedest) bs=1M
 exception could_not_create: - lvremove -f $class->path($scfg, $volnamedest)
  - error
4) $class->device($scfg, $volnamesrc): $class->path($scfg, $volnamedest), cache=$class->cache($scfg, $volnamesrc), $class->size($scfg, $volnamesrc)
 exception write_error: - lvremove -f $class->path($scfg, $volnamedest)
  - error
5) lvremove -f $class->path($scfg, $volnamesrc)

   error:
  - lvchange -p rw $class->path($scfg, $volnamesrc)
  - lvchange -an $class->path($scfg, $volnamesrc)

NFS -> iSCSI
0) chattr +i $class->path($scfg, $volnamesrc)
1) lvcreate -L $class->size($scfg, $volnamesrc) -n $class->name($scfg) $class->storage($scfg, $volnamedest)
 exception could_not_create: error
2) dd if=$class->path($scfg, $volnamesrc) of=$class->path($scfg, $volnamedest) bs=1M
 exception could_not_create: - lvremove -f $class->path($scfg, $volnamedest)
  - error
3) $class->device($scfg, $volnamesrc): $class->path($scfg, $volnamedest).$class->format($scfg), cache=$class->cache($scfg, $volnamesrc), $class->size($scfg, $volnamesrc)
 exception write_error: - lvremove -f $class->path($scfg, $volnamedest)
  - error
4) chattr -i $class->path($scfg, $volnamesrc)
5) rm -f $class->path($scfg, $volnamesrc)

   error:
  - chattr -i $class->path($scfg, $volnamesrc)

NFS -> NFS
0) chattr +i $class->path($scfg, $volnamesrc)
1) mkdir -p $class->path($scfg, $volnamedest)
 exception dir_exists: error if not empty
 exception could_not_create: error
2) dd if=$class->path($scfg, $volnamesrc) of=$class->path($scfg, $volnamedest) bs=1M
 exception could_not_create: error
3) $class->device($scfg, $volnamesrc): $class->path($scfg, $volnamedest).$class->format($scfg), cache=$class->cache($scfg, $volnamesrc), $class->size($scfg, $volnamesrc)
 exception write_error: - rm -f $class->path($scfg, $volnamedest).$class->format($scfg)
  - error
4) chattr -i $class->path($scfg, $volnamesrc)
5) rm -f $class->path($scfg, $volnamesrc)

   error:
  - chattr -i $class->path($scfg, $volnamesrc)
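
For example, the conversion step 3) of iSCSI -> NFS could look roughly like
this inside a storage plugin (a sketch under assumptions: external commands go
through PVE::Tools::run_command, and the format()/path() methods follow the
pseudo-code above):

use PVE::Tools;

sub convert_to_nfs {
    my ($class, $scfg, $volnamesrc, $volnamedest) = @_;
    my $src = $class->path($scfg, $volnamesrc);
    my $fmt = $class->format($scfg);
    my $dst = $class->path($scfg, $volnamedest) . ".$fmt";
    eval {
        PVE::Tools::run_command(['qemu-img', 'convert', '-p', '-O', $fmt, $src, $dst]);
    };
    if (my $err = $@) {
        unlink $dst; # remove the partial destination image on failure
        die $err;
    }
    return $dst;
}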


PS. Does the current API contain any utility for changing a vm.conf?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
--
Work Hard.
Rock Hard.
Eat Hard.
Sleep Hard.
Grow Big.
Wear Glasses If You Need 'Em.
-- The Webb Wilder Credo