Re: [pve-devel] Storage migration: online design solution

2013-01-10 Thread Michael Rasmussen
I more or less have a complete solution; it needs some more tests, though. I have
discovered a potential problem: when a disk is migrated, a new disk is created,
so the UUID changes.

Alexandre DERUMIER aderum...@odiso.com wrote:

for the uri, you can do

#info block

in the monitor; this is the file= part.

you can use the proxmox sub path to generate it:

my $path = PVE::Storage::path($storecfg, $dst_volid);




the migration part should be something like:

my $drive = 'virtio0';
my $targetpath = PVE::Storage::path($storecfg, $dst_volid);
PVE::QemuServer::vm_mon_cmd($vmid, 'drive-mirror', device => "drive-$drive",
    target => $targetpath);

# poll the job status until the mirror has converged
while (1) {
    PVE::QemuServer::vm_mon_cmd($vmid, 'query-block-jobs');
    # ... last when the job reports ready, otherwise sleep and retry ...
}

PVE::QemuServer::vm_mon_cmd($vmid, 'block-job-complete', device =>
    "drive-$drive");
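For reference, the QMP payloads behind these vm_mon_cmd calls would look
roughly like this (a sketch only; the device name and target path are
illustrative, and drive-mirror's optional "sync"/"format" arguments are
spelled out explicitly):

```json
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio0",
                 "target": "/dev/pve-storage2_vg/vm-102-disk-2",
                 "sync": "full",
                 "format": "raw" } }

{ "execute": "query-block-jobs" }

{ "execute": "block-job-complete",
  "arguments": { "device": "drive-virtio0" } }
```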

- Mail original - 

De: Michael Rasmussen m...@datanom.net 
À: pve-devel@pve.proxmox.com 
Envoyé: Mercredi 9 Janvier 2013 22:12:47 
Objet: Re: [pve-devel] Storage migration: online design solution 

On Wed, 9 Jan 2013 22:01:42 +0100 
Michael Rasmussen m...@datanom.net wrote: 

 On Wed, 09 Jan 2013 12:05:05 +0100 (CET) 
 Alexandre DERUMIER aderum...@odiso.com wrote: 
 
  
  #drive_mirror -n -f drive-virtio0 sheepdog:127.0.0.1:7000:vm-144-disk-1 
  
 # info block 
 drive-virtio2: removable=0 io-status=ok 
 file=/dev/pve-storage1_vg/vm-102-disk-1 ro=0 drv=raw encrypted=0
bps=0 
 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0 
 
 # drive_mirror -f drive-virtio2 pve-storage2_lvm:vm-102-disk-2 
 Invalid block format 'raw' 
 
 Is raw only supported for destinations other than LVM? 
 
Found out that it has to be a URI: /dev/pve-storage2_vg/vm-102-disk-1 
works:-) 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael at rasmussen dot cc 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E 
mir at datanom dot net 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C 
mir at miras dot org 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 
-- 
Mieux vaut tard que jamais! 

[ Better late than never ] 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: [pve-devel] Storage migration: online design solution

2013-01-10 Thread Alexandre DERUMIER
I more or less have a complete solution; it needs some more tests, though. I have
discovered a potential problem: when a disk is migrated, a new disk is created,
so the UUID changes.

Mmm, yes, this seems normal because it is a new disk.
I don't know if we can force the UUID of the disk; I'll do some research about this.


Re: [pve-devel] Storage migration: online design solution

2013-01-10 Thread datanom.net

On 01-10-2013 11:11, Alexandre DERUMIER wrote:


mmm, yes, this seem normal because this is a new disk.
I don't know if we can force the uuid of disk,I'll do some research 
about this.



tune2fs -U random /dev/sdXY


Re: [pve-devel] Storage migration: online design solution

2013-01-10 Thread datanom.net

On 01-10-2013 11:19, datanom.net wrote:

tune2fs -U random /dev/sdXY
___

And to use an existing one:
tune2fs -U UUID /dev/sdXY
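A small sketch of how the tune2fs/blkid steps could be scripted (it only
builds the argv lists; the device paths are hypothetical, and actually
running the commands requires root and an ext2/3/4 filesystem):

```python
def blkid_uuid_cmd(device):
    """Build the blkid call that prints only the filesystem UUID of device."""
    return ["blkid", "-s", "UUID", "-o", "value", device]

def tune2fs_uuid_cmd(device, uuid="random"):
    """Build the tune2fs call that sets the filesystem UUID on device.

    uuid may be an explicit UUID string or the keyword "random"/"time".
    """
    return ["tune2fs", "-U", uuid, device]

# e.g. copy the UUID from the old disk to the freshly mirrored one
# (hypothetical device paths, run as root):
#   uuid = subprocess.check_output(
#       blkid_uuid_cmd("/dev/pve-storage1_vg/vm-102-disk-1"), text=True).strip()
#   subprocess.check_call(tune2fs_uuid_cmd("/dev/pve-storage2_vg/vm-102-disk-2", uuid))
print(tune2fs_uuid_cmd("/dev/sdb1"))
```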


Re: [pve-devel] Storage migration: online design solution

2013-01-10 Thread Alexandre DERUMIER
Is the UUID stored on the drive?

Because if we mirror the full drive, it should be OK.





Re: [pve-devel] Storage migration: online design solution

2013-01-10 Thread Alexandre DERUMIER
I have sent a new version of my VM copy/clone feature;
I have added drive-mirror for live VM copy.

So you can look at it to get inspiration for your needs (check patch 14/14).

I checked the UUIDs of the partitions; they have not changed after the copy.




Re: [pve-devel] Storage migration: online design solution

2013-01-10 Thread Alexandre DERUMIER
> I had solved it with the human-monitor command
how? syntax?

> but I discovered this
> was very unstable. Your usage of vm_mon_cmd seems a lot more stable so
> I have shifted to this too.

There are two ways to talk to QEMU: the human monitor protocol (HMP), aka
the old way, and, since last year, the QMP protocol, which uses JSON and
makes responses easier to parse.

But both protocols do the same thing, so it's strange that you resolved it
with the human monitor command.

> Have you tried copying a boot device and tested that grub was able to
> see the new root?
Yes, that was the boot device.

- Mail original - 

De: Michael Rasmussen m...@datanom.net 
À: Alexandre DERUMIER aderum...@odiso.com 
Envoyé: Jeudi 10 Janvier 2013 19:35:30 
Objet: Re: [pve-devel] Storage migration: online design solution 

On Thu, 10 Jan 2013 14:22:02 +0100 (CET) 
Alexandre DERUMIER aderum...@odiso.com wrote: 

 I have send a new version of my vm copy/clone feature, 
 I have added drive-mirror for live vm copy. 
 
 So you can look a it to get inspiration for your needs. (check patch 14/14) 

I can see that our solutions are remarkably identical :-) 
I had solved it with the human-monitor command but I discovered this 
was very unstable. Your usage of vm_mon_cmd seems a lot more stable so 
I have shifted to this too. 
 
 I check the uuid of the partitions, they have not changed after the copy. 
 
Have you tried copying a boot device and tested that grub was able to 
see the new root? 



Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Alexandre DERUMIER
> I am most certain I have tried that and was rewarded with a 'Device not
> found' error
Mmm, this is the right name.
You can use the info block monitor command to see the names:

# info block
drive-virtio1: removable=0 io-status=ok 
file=/mnt/pve/netapp-Vserver-pnfs/images/vm200/vm-200-disk-1.raw ro=0 drv=raw 
encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
drive-virtio0: removable=0 io-status=ok 
file=/mnt/pve/netapp-Vserver-pnfs/images/vm200/vm-200-disk-2.raw ro=0 drv=raw 
encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0

You can have a look at the qmp resize command in the Proxmox code
(QemuServer.pm); it uses the drive device name as its argument.
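Since the naming is mechanical, mapping a Proxmox drive config key to the
monitor's block device name can be sketched as a small helper (illustrative
only, not actual Proxmox code):

```python
import re

def drive_name(conf_key):
    """Map a drive config key (e.g. "virtio2", "ide0", "scsi1", "sata3")
    to the QEMU block device name the monitor expects ("drive-virtio2")."""
    if not re.fullmatch(r"(virtio|ide|scsi|sata)\d+", conf_key):
        raise ValueError("not a drive key: %s" % conf_key)
    return "drive-" + conf_key

print(drive_name("virtio2"))  # drive-virtio2
```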


Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Dietmar Maurer
 3) libvirt starts the destination QEMU and sets up the NBD server using the
 nbd-server-start and nbd-server-add commands.


They transfer data without encryption?


Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Alexandre DERUMIER
Phase 5) drive-reopen, data: {device: ide-hd0, 
new-image-file: new block device, 

drive-reopen has been removed in QEMU 1.3;

we should use the block-job-complete QMP command instead


- Mail original - 

De: Michael Rasmussen m...@datanom.net 
À: pve-devel@pve.proxmox.com 
Envoyé: Mercredi 9 Janvier 2013 02:23:11 
Objet: [pve-devel] Storage migration: online design solution 

Hi all, 

Doing online storage migration will involve the following phases: 
Phase 1) Create remote block device 
Phase 2) Connect this block device to NBD (nbd_server_add [-w] device) 
Phase 3) Start nbd_server (nbd_server_start [-a] [-w] host:port) 
Phase 4) drive-mirror, arguments: { "device": "ide-hd0", 
         "target": "nbd:host:port", 
         "sync": "full", 
         "format": "(qcow2|raw)" } 
Phase 5) drive-reopen, data: { "device": "ide-hd0", 
         "new-image-file": "new block device", 
         "format": "(qcow2|raw)" } 
Phase 6) Stop nbd_server (nbd_server_stop) 
Phase 7) Remove old block device 
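The phases above would map onto QMP roughly like the sequence below (a
sketch only: host, port, and device names are placeholders, and phase 5 is
shown with block-job-complete, since drive-reopen was removed in QEMU 1.3):

```
# phase 3, on the destination: start the NBD server
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                           "data": { "host": "192.168.0.2", "port": "10809" } } } }

# phase 2, on the destination: export the pre-created target device, writable
{ "execute": "nbd-server-add",
  "arguments": { "device": "drive-virtio2", "writable": true } }

# phase 4, on the source: mirror to the NBD export
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio2",
                 "target": "nbd:192.168.0.2:10809:exportname=drive-virtio2",
                 "sync": "full", "format": "raw" } }

# phase 5, on the source: once the job is ready, switch over
{ "execute": "block-job-complete",
  "arguments": { "device": "drive-virtio2" } }

# phase 6, on the destination: stop the NBD server
{ "execute": "nbd-server-stop" }
```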

Have I missed something? 

PS. I am struggling with this 'device: ide-hd0'. What is the 
correct way of specifying a block device in proxmox given the following 
configuration: virtio2: pve-storage1_lvm:vm-102-disk-1,size=2G 



Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Alexandre DERUMIER
I cannot see why it is required to migrate the VM also? Maybe libvirt 
is having another agenda than my proposal?

Maybe I don't understand what you want to do ??? 

I see 2 cases:

case 1: migrate a drive from one storage to another, both storages attached
to the same host
---------------------------------------------------------------------------
we have a vm on host1, with a drive on local storage 1

-------
|host1|  vm
-------   |
         disk
          |
-----------------    -----------------
|local storage 1|    |local storage 2|
-----------------    -----------------

we mirror the disk to local storage 2 (qmp drive-mirror; the target is a
file or a local block device, no need for nbd)

-------
|host1|  vm
-------   |
         disk --mirror--> disk2
          |                 |
-----------------    -----------------
|local storage 1|    |local storage 2|
-----------------    -----------------

then, when the mirror is finished:

block-job-complete: the vm on host1 switches from disk1 to disk2.

-------
|host1|  vm
-------   |
                          disk2
                            |
-----------------    -----------------
|local storage 1|    |local storage 2|
-----------------    -----------------

migration done


case 2: migrate a drive to a storage attached to host2, not attached to
host1
---------------------------------------------------------------------------
we have a vm on host1, with a drive on local storage 1

-------              -------
|host1|  vm          |host2|
-------   |          -------
         disk
          |
-----------------    -----------------
|local storage 1|    |local storage 2|
-----------------    -----------------

so you want to mirror the vm disk to local storage 2 on host2

-------              -------
|host1|  vm          |host2|
-------   |          -------
         disk --mirror--> nbd:disk2
          |                  |
-----------------    -----------------
|local storage 1|    |local storage 2|
-----------------    -----------------

then, when the mirror is finished:

block-job-complete: the vm (still on host1) switches from disk1 to disk2,
attached through nbd.

-------              -------
|host1|  vm          |host2|
-------   |          -------
          |------nbd:disk2
                             |
-----------------    -----------------
|local storage 1|    |local storage 2|
-----------------    -----------------

so, if we don't migrate the vm to host2, we'll have the vm running on
host1 with disk2 attached through nbd...

so we need to migrate the vm to host2:

-------              -------
|host1|              |host2|  vm
-------              -------   |
                             disk2
                               |
-----------------    -----------------
|local storage 1|    |local storage 2|
-----------------    -----------------

(note that I don't know how to switch away from nbd after the vm
migration; I need to read the libvirt code)





Note that this is the same with VMware vSphere 5.1:
http://www.vmware.com/products/datacenter-virtualization/vsphere/vmotion.html

you can do vMotion + Storage vMotion in one process.

- Mail original - 

De: Michael Rasmussen m...@datanom.net 
À: pve-devel@pve.proxmox.com 
Envoyé: Mercredi 9 Janvier 2013 17:55:16 
Objet: Re: [pve-devel] Storage migration: online design solution 

On Wed, 09 Jan 2013 09:24:15 +0100 (CET) 
Alexandre DERUMIER aderum...@odiso.com wrote: 

 
 
 Maybe I'm wrong, but I don't think we need nbd for storage migrate on the 
 same host. (just use new device/file as target option of qmp mirror) 
 
That was also what I meant. NBD is only required if you need to migrate 
to another host. 

 start ndb server on remote server 
 Launch drive-mirror on remote nbd, 
 at the end of block mirror,the vm run on source server with remote storage on 
 target server through nbd socket 
 the migrate vm to the target server 
 then rattach the target volume to the vm (drive-reopen (?)) 
 stop nbdserver 
 
 This is the way is implemented in libvirt 
 
I cannot see why it is required to migrate the VM also? Maybe libvirt 
is having another agenda than my proposal? 


Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Michael Rasmussen
On Wed, 9 Jan 2013 18:47:43 +0100
Michael Rasmussen m...@datanom.net wrote:

 
 Only outstanding issue is about encryption. My proposal will be to have
 an option in the GUI for choosing to tunnel the migration through a ssh
 tunnel since this is already implemented in the current Proxmox code
 base? But I do think the default behavior should be the libvirt way
 which is without encryption. This is also, as I understand it, the way
 VmWare does it in vMotion.
 
Thinking harder about it makes me realize that this is not an easy task,
since we have no hook into the process where the bits are transferred, so
my first assumption of using a tunnel seems not that obvious unless we
use my solution with NBD. NBD can be tunnelled but will require some
more work. I need to investigate, as you have also pointed out, how we
can switch from NBD to the real image. Maybe it will require a new
drive-mirror iteration.

But then again, why should we use encryption? I see no difference
between using a remote block device today and the way drive-mirror does
its job. And connections to remote block devices today are not encrypted
either.



Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Dietmar Maurer
 But then again why should we use encryption? I see no difference between
 using a remote block device today and the way drive-mirror does its job. And
 connections to remote block devices today is not encrypted either.

Hopefully you connect to the storage using a trusted network.



Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Michael Rasmussen
On Wed, 9 Jan 2013 18:21:54 +
Dietmar Maurer diet...@proxmox.com wrote:

 
 Hopefully you connect to the storage using a trusted network.
 
Exactly. This is also the way according to the Proxmox documentation, as it
is with all other hypervisors: host and storage should be kept on a
dedicated, secured network.



Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Michael Rasmussen
On Wed, 09 Jan 2013 12:05:05 +0100 (CET)
Alexandre DERUMIER aderum...@odiso.com wrote:

 
 #drive_mirror -n -f drive-virtio0 sheepdog:127.0.0.1:7000:vm-144-disk-1

# info block
drive-virtio2: removable=0 io-status=ok
file=/dev/pve-storage1_vg/vm-102-disk-1 ro=0 drv=raw encrypted=0 bps=0
bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0

# drive_mirror -f drive-virtio2 pve-storage2_lvm:vm-102-disk-2
Invalid block format 'raw'

Is raw only supported for destinations other than LVM?



Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Michael Rasmussen
On Wed, 9 Jan 2013 22:12:47 +0100
Michael Rasmussen m...@datanom.net wrote:

 Found out that it has to be a URI: /dev/pve-storage2_vg/vm-102-disk-1
 works:-)
 
And another observation: For LVM, or maybe any block device, the device
needs to be created before executing drive-mirror

# drive_mirror -f drive-virtio2 /dev/pve-storage2_vg/vm-102-disk-2
Could not open '/dev/pve-storage2_vg/vm-102-disk-2'

lvcreate -L 2G --name vm-102-disk-2 pve-storage2_vg
  Logical volume vm-102-disk-2 created

# drive_mirror -n -f drive-virtio2 /dev/pve-storage2_vg/vm-102-disk-2

# info block-jobs
Type mirror, device drive-virtio2: Completed 165675008 of 2147483648
bytes, speed limit 0 bytes/s
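The pre-create step could be scripted along these lines (argv construction
only; the VG/LV names and size come from the transcript above, and actually
running lvcreate requires root):

```python
def lvcreate_cmd(vg, lv, size):
    """Build the lvcreate call that pre-creates the mirror target.

    drive-mirror does not create LVM volumes itself, so the logical
    volume must exist (and be at least source-sized) before mirroring.
    """
    return ["lvcreate", "-L", size, "--name", lv, vg]

print(lvcreate_cmd("pve-storage2_vg", "vm-102-disk-2", "2G"))
```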



Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Alexandre DERUMIER
> And another observation: For LVM, or maybe any block device, the device
> needs to be created before executing drive-mirror
Yes, it's better to create the volume ourselves (same for qemu-img convert),
because QEMU can't create volumes for all storage types (rbd, sheepdog, and
the other storage APIs).




Re: [pve-devel] Storage migration: online design solution

2013-01-09 Thread Alexandre DERUMIER
for the uri, you can do

#info block

in the monitor; this is the file= part.

you can use the proxmox sub path to generate it:

my $path = PVE::Storage::path($storecfg, $dst_volid);




the migration part should be something like:

my $drive = 'virtio0';
my $targetpath = PVE::Storage::path($storecfg, $dst_volid);
PVE::QemuServer::vm_mon_cmd($vmid, 'drive-mirror', device => "drive-$drive",
    target => $targetpath);

# poll the job status until the mirror has converged
while (1) {
    PVE::QemuServer::vm_mon_cmd($vmid, 'query-block-jobs');
    # ... last when the job reports ready, otherwise sleep and retry ...
}

PVE::QemuServer::vm_mon_cmd($vmid, 'block-job-complete', device =>
    "drive-$drive");



Re: [pve-devel] Storage migration: online design solution

2013-01-08 Thread Alexandre DERUMIER
one other question:

I'm reading the qmp doc

# @drive-mirror
#
# Start mirroring a block device's writes to a new destination.
#
# @device:  the name of the device whose writes should be mirrored.
#
# @target: the target of the new image. If the file exists, or if it
#  is a device, the existing file/device will be used as the new
#  destination.  If it does not exist, a new file will be created.
#
# @format: #optional the format of the new destination, default is the
#  format of the source

The target can be a file or a device (so I think this is for migrating
between two storages available on the same host).

So, if you use an NBD server as the target, is it for migrating storage
between two different hosts? (From a local storage on one host to another
local storage on a different host, for example.)
If yes, I think we also need to migrate the VM to the new host?






Re: [pve-devel] Storage migration: online design solution

2013-01-08 Thread Michael Rasmussen
On Wed, 09 Jan 2013 03:34:01 +0100 (CET)
Alexandre DERUMIER aderum...@odiso.com wrote:

 
 drive-virtioX
 drive-ideX
 drive-scsiX
 drive-sataX
 
I am most certain I have tried that and was rewarded with a 'Device not
found' error
