[pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuMigrate.pm |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 282cbc5..38f1d05 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -402,7 +402,8 @@ sub phase2 {
 	my $delay = time() - $start;
 	if ($delay > 0) {
 	    my $mbps = sprintf "%.2f", $conf->{memory}/$delay;
-	    $self->log('info', "migration speed: $mbps MB/s");
+	    my $downtime = $stat->{downtime} || 0;
+	    $self->log('info', "migration speed: $mbps MB/s - downtime $downtime ms");
 	}
     }
 
@@ -424,11 +425,12 @@ sub phase2 {
 	    my $xbzrlepages = $stat->{"xbzrle-cache"}->{pages} || 0;
 	    my $xbzrlecachemiss = $stat->{"xbzrle-cache"}->{"cache-miss"} || 0;
 	    my $xbzrleoverflow = $stat->{"xbzrle-cache"}->{overflow} || 0;
+	    my $expected_downtime = $stat->{"expected-downtime"} || 0;
 	    #reduce sleep if remaining memory is lower than the average transfer
 	    $usleep = 300000 if $avglstat && $rem < $avglstat;

 	    $self->log('info', "migration status: $stat->{status} (transferred ${trans}, " .
-		       "remaining ${rem}), total ${total})");
+		       "remaining ${rem}), total ${total}) , expected downtime ${expected_downtime}");

 	    #$self->log('info', "migration xbzrle cachesize: ${xbzrlecachesize} transferred ${xbzrlebytes} pages ${xbzrlepages} cachemiss ${xbzrlecachemiss} overflow ${xbzrleoverflow}");
 	}
-- 
1.7.10.4
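
For context, the shape of the query-migrate reply this code reads looks roughly like the following Perl structure. This is an illustrative sketch only; the field names match those used in the patch, and the numbers are copied from the migration logs later in this thread.

    # Rough shape of the QMP query-migrate reply consumed by phase2().
    my $stat = {
        status              => 'active',
        'expected-downtime' => 44,          # ms, reported while migration runs
        downtime            => 648,         # ms, filled in once completed
        ram => {
            transferred => 632519102,       # bytes sent so far
            remaining   => 28622848,        # dirty bytes still to send
            total       => 8397455360,      # total guest RAM
        },
    };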



[pve-devel] [PATCH 1/3] move qmp migrate_set_downtime & migrate_set_speed to qemumigrate

2012-12-27 Thread Alexandre Derumier
so we can set the values while the VM is running;
also use int() to get JSON working
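
A minimal sketch of the int() point, assuming Perl's JSON module and a value that arrives from config parsing as a string:

    use JSON;

    # Values read from a VM config arrive as strings; QMP expects a JSON
    # number, so the scalar has to be numified with int() before encoding.
    my $migrate_speed = "8589934592";                           # string from config
    print encode_json({ value => $migrate_speed }), "\n";       # {"value":"8589934592"}
    print encode_json({ value => int($migrate_speed) }), "\n";  # {"value":8589934592}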

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuMigrate.pm |   24 ++++++++++++++++++++++++
 PVE/QemuServer.pm  |   15 ---------------
 2 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0711681..9ca8f87 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -327,6 +327,30 @@ sub phase2 {
 
 my $start = time();
 
+    # load_defaults
+    my $defaults = PVE::QemuServer::load_defaults();
+
+    # always set migrate speed (overwrite kvm default of 32m)
+    # we set a very high default of 8192m which is basically unlimited
+    my $migrate_speed = $defaults->{migrate_speed} || 8192;
+    $migrate_speed = $conf->{migrate_speed} || $migrate_speed;
+    $migrate_speed = $migrate_speed * 1048576;
+    $self->log('info', "migrate_set_speed: $migrate_speed");
+    eval {
+        PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_speed", value => int($migrate_speed));
+    };
+    $self->log('info', "migrate_set_speed error: $@") if $@;
+
+    my $migrate_downtime = $defaults->{migrate_downtime};
+    $migrate_downtime = $conf->{migrate_downtime} if defined($conf->{migrate_downtime});
+    if (defined($migrate_downtime)) {
+	$self->log('info', "migrate_set_downtime: $migrate_downtime");
+	eval {
+	    PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => int($migrate_downtime));
+	};
+	$self->log('info', "migrate_set_downtime error: $@") if $@;
+    }
+
     my $capabilities = {};
     $capabilities->{capability} = "xbzrle";
     $capabilities->{state} = JSON::false;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 165eaf6..92c7db7 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2976,21 +2976,6 @@ sub vm_start {
warn $@ if $@;
}
 
-	# always set migrate speed (overwrite kvm default of 32m)
-	# we set a very hight default of 8192m which is basically unlimited
-	my $migrate_speed = $defaults->{migrate_speed} || 8192;
-	$migrate_speed = $conf->{migrate_speed} || $migrate_speed;
-	$migrate_speed = $migrate_speed * 1048576;
-	eval {
-	    vm_mon_cmd_nocheck($vmid, "migrate_set_speed", value => $migrate_speed);
-	};
-
-	my $migrate_downtime = $defaults->{migrate_downtime};
-	$migrate_downtime = $conf->{migrate_downtime} if defined($conf->{migrate_downtime});
-	if (defined($migrate_downtime)) {
-	    eval { vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => $migrate_downtime); };
-	}
-
 	if($migratedfrom) {
 	    my $capabilities = {};
 	    $capabilities->{capability} = "xbzrle";
-- 
1.7.10.4
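
For reference, the multiplication above is where the figure in the later migration logs comes from: 8192 MB/s * 1048576 bytes per MB = 8589934592, exactly the value printed in the "migrate_set_speed: 8589934592" lines.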



Re: [pve-devel] fix setting migration parameters V3

2012-12-27 Thread Alexandre DERUMIER
I have resent the patches, split up.


note that with migrate_downtime = 1s, I got around 1500ms of real downtime

with the default value of 30ms, I got around 500ms of real downtime

(the 500ms of overhead seems to be fixed in the latest qemu git)


- Original message - 

From: Stefan Priebe s.pri...@profihost.ag 
To: Alexandre Derumier aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 27 December 2012 08:52:42 
Subject: Re: [pve-devel] fix setting migration parameters V3 

Hi, 

could you please resend your patch? I can't find it. I wanted to look at 
what the extended new values show during migration. 

Thanks! 

Stefan Priebe 


On 27.12.2012 06:45, Alexandre Derumier wrote: 
 this is a V3 rework of Stefan's patch. 
 
 main change: remove the default value of migrate_downtime, so the qemu default 
 of 30ms will be used. 
 
 tested with a YouTube HD video; downtime is around 500 ms, even though the 
 default target is 30ms. 


[pve-devel] test: increase migrate_set_downtime only if expected_downtime stays > 0 for more than 30 iterations

2012-12-27 Thread Alexandre Derumier
This is a test attempt (applies on top of the 3 other patches).

The idea is to use qemu's default 30ms downtime value for the migration.

If expected_downtime stays > 0 for more than 30 iterations (this could be polished, 
maybe some average stats would be better), then it looks like a never-ending 
migration, so we set migrate_set_downtime to the 1s default value. (Maybe we 
could use the expected_downtime average as the target value?)

So we get the lowest downtime if the memory workload can handle it, 
and for VMs with big memory transfers, we raise the value.

This could also help to avoid the monitor hang (until we set 
migrate_set_downtime?).
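
As a rough illustration only (the real patch follows in the next message), the average-based variant mentioned above could look like the sketch below. It reuses the helpers from the other patches ($self->log, PVE::QemuServer::vm_mon_cmd_nocheck); the adapt_downtime name and the sample window are hypothetical.

    # Hypothetical: raise the downtime target from the average of the observed
    # expected_downtime samples instead of jumping straight to a fixed value.
    my @samples;
    sub adapt_downtime {
        my ($self, $vmid, $expected_downtime) = @_;
        push @samples, $expected_downtime if $expected_downtime > 0;
        return if @samples < 30;                 # same 30-iteration threshold

        my $sum = 0;
        $sum += $_ for @samples;
        my $target = ($sum / @samples) / 1000;   # average ms -> seconds

        $self->log('info', "migrate_set_downtime: $target");
        eval {
            PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_downtime",
                                                value => $target);
        };
        $self->log('info', "migrate_set_downtime error: $@") if $@;
        @samples = ();                           # fresh window after adjusting
    }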



[pve-devel] [PATCH] increase migrate_set_downtime only if expected downtime stays > 0 for more than 30 iterations

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuMigrate.pm |   24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index dbbeb69..aeb6deb 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -341,16 +341,6 @@ sub phase2 {
     };
     $self->log('info', "migrate_set_speed error: $@") if $@;

-    my $migrate_downtime = $defaults->{migrate_downtime};
-    $migrate_downtime = $conf->{migrate_downtime} if defined($conf->{migrate_downtime});
-    if (defined($migrate_downtime)) {
-	$self->log('info', "migrate_set_downtime: $migrate_downtime");
-	eval {
-	    PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => int($migrate_downtime));
-	};
-	$self->log('info', "migrate_set_downtime error: $@") if $@;
-    }
-
     my $capabilities = {};
     $capabilities->{capability} = "xbzrle";
     $capabilities->{state} = JSON::false;
@@ -375,6 +365,8 @@ sub phase2 {
     my $usleep = 2000000;
     my $i = 0;
     my $err_count = 0;
+    my $expecteddowntimecounter = 0;
+
     while (1) {
 	$i++;
 	my $avglstat = $lstat/$i if $lstat;
@@ -423,6 +415,7 @@ sub phase2 {
 	    my $xbzrlecachemiss = $stat->{"xbzrle-cache"}->{"cache-miss"} || 0;
 	    my $xbzrleoverflow = $stat->{"xbzrle-cache"}->{overflow} || 0;
 	    my $expected_downtime = $stat->{"expected-downtime"} || 0;
+	    $expecteddowntimecounter++ if $expected_downtime > 0;

 	    #reduce sleep if remaining memory is lower than the average transfer
 	    $usleep = 300000 if $avglstat && $rem < $avglstat;

@@ -431,6 +424,17 @@ sub phase2 {
 		       "remaining ${rem}), total ${total}, expected downtime ${expected_downtime})");

 	    #$self->log('info', "migration xbzrle cachesize: ${xbzrlecachesize} transferred ${xbzrlebytes} pages ${xbzrlepages} cachemiss ${xbzrlecachemiss} overflow ${xbzrleoverflow}");
+
+	    my $migrate_downtime = $defaults->{migrate_downtime};
+	    $migrate_downtime = $conf->{migrate_downtime} if defined($conf->{migrate_downtime});
+	    if (defined($migrate_downtime) && $expecteddowntimecounter == 30) {
+		$self->log('info', "migrate_set_downtime: $migrate_downtime");
+		eval {
+		    PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => int($migrate_downtime));
+		};
+		$self->log('info', "migrate_set_downtime error: $@") if $@;
+	    }
+
 	}

 	$lstat = $stat->{ram}->{transferred};
-- 
1.7.10.4



Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER

Sure you mean just the output in the web GUI? 
yes

- Original message - 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 27 December 2012 11:11:56 
Subject: Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime 
query-migrate info 

Sure you mean just the output in the web GUI? 

Stefan 

On 27.12.2012 at 10:29, Alexandre DERUMIER aderum...@odiso.com wrote: 

 Can you send a migration log ? 
 
 - Original message - 
 
 From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 To: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Sent: Thursday, 27 December 2012 10:26:45 
 Subject: Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime 
 query-migrate info 
 
 Ok, was just an idea. For me 0.3 also does not work, but 0.03 works ;-( Also 
 limiting bandwidth to 500mb does not help. 
 
 Stefan 
 
 On 27.12.2012 at 10:09, Alexandre DERUMIER aderum...@odiso.com wrote: 
 
 for me expected downtime is 0 until the end of the migration. 
 
 here is a sample log, using the default 30ms downtime. 
 
 Dec 27 07:24:06 starting migration of VM 9 to node 'kvmtest2' 
 (10.3.94.47) 
 Dec 27 07:24:06 copying disk images 
 Dec 27 07:24:06 starting VM 9 on remote node 'kvmtest2' 
 Dec 27 07:24:08 starting migration tunnel 
 Dec 27 07:24:09 starting online/live migration on port 60000 
 Dec 27 07:24:09 migrate_set_speed: 8589934592 
 Dec 27 07:24:11 migration status: active (transferred 66518837, remaining 
 8314994688), total 8397455360, expected downtime 0) 
 Dec 27 07:24:13 migration status: active (transferred 121753397, remaining 
 8259760128), total 8397455360, expected downtime 0) 
 Dec 27 07:24:15 migration status: active (transferred 171867087, remaining 
 7475191808), total 8397455360, expected downtime 0) 
 Dec 27 07:24:17 migration status: active (transferred 178976948, remaining 
 4921823232), total 8397455360, expected downtime 0) 
 Dec 27 07:24:19 migration status: active (transferred 227210472, remaining 
 4726611968), total 8397455360, expected downtime 0) 
 Dec 27 07:24:21 migration status: active (transferred 282889143, remaining 
 4361879552), total 8397455360, expected downtime 0) 
 Dec 27 07:24:23 migration status: active (transferred 345327372, remaining 
 4270788608), total 8397455360, expected downtime 0) 
 Dec 27 07:24:25 migration status: active (transferred 407383430, remaining 
 4185169920), total 8397455360, expected downtime 0) 
 Dec 27 07:24:27 migration status: active (transferred 469084514, remaining 
 3742027776), total 8397455360, expected downtime 0) 
 Dec 27 07:24:29 migration status: active (transferred 469687094, remaining 
 1273860096), total 8397455360, expected downtime 0) 
 Dec 27 07:24:31 migration status: active (transferred 501247097, remaining 
 79024128), total 8397455360, expected downtime 3893) 
 Dec 27 07:24:33 migration status: active (transferred 532052759, remaining 
 103800832), total 8397455360, expected downtime 139) 
 Dec 27 07:24:35 migration status: active (transferred 593541297, remaining 
 34357248), total 8397455360, expected downtime 85) 
 Dec 27 07:24:35 migration status: active (transferred 603842750, remaining 
 37982208), total 8397455360, expected downtime 44) 
 Dec 27 07:24:36 migration status: active (transferred 612899069, remaining 
 28667904), total 8397455360, expected downtime 44) 
 Dec 27 07:24:36 migration status: active (transferred 623036734, remaining 
 30404608), total 8397455360, expected downtime 43) 
 Dec 27 07:24:36 migration status: active (transferred 632519102, remaining 
 28622848), total 8397455360, expected downtime 38) 
 Dec 27 07:24:36 migration status: active (transferred 638048739, remaining 
 26222592), total 8397455360, expected downtime 33) 
 Dec 27 07:24:37 migration speed: 285.71 MB/s - downtime 648 ms 
 Dec 27 07:24:37 migration status: completed 
 Dec 27 07:24:42 migration finished successfuly (duration 00:00:36) 
 TASK OK 
 - Original message - 
 
 From: Stefan Priebe s.pri...@profihost.ag 
 To: Alexandre Derumier aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Sent: Thursday, 27 December 2012 10:03:39 
 Subject: Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime 
 query-migrate info 
 
 Hi, 
 
 to me the whole VM stalls when the new expected downtime is 0 (64bit VM, 
 4GB mem, 1GB in use, VM totally IDLE). 
 
 That's why a low migration_downtime value helps for me, as qemu no 
 longer believes that the expected downtime is 0. 
 
 Greets, 
 Stefan 
 
 On 27.12.2012 09:18, Alexandre Derumier wrote: 
 
 Signed-off-by: Alexandre Derumier aderum...@odiso.com 
 --- 
 PVE/QemuMigrate.pm | 6 ++++-- 
 1 file changed, 4 insertions(+), 2 deletions(-) 
 
 diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm 
 index 282cbc5..38f1d05 100644 
 --- a/PVE/QemuMigrate.pm 
 +++ b/PVE/QemuMigrate.pm 
 @@ -402,7 +402,8 @@ sub phase2 { 
 my $delay = time() - $start; 
 if ($delay > 0) { 
 my $mbps = 

Re: [pve-devel] fix setting migration parameters V3

2012-12-27 Thread Dietmar Maurer
applied, thanks!

 -Original Message-
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
 boun...@pve.proxmox.com] On Behalf Of Alexandre DERUMIER
 Sent: Thursday, 27 December 2012 09:20
 To: Stefan Priebe
 Cc: pve-devel@pve.proxmox.com
 Subject: Re: [pve-devel] fix setting migration parameters V3
 
 I have resent patches splitted.



Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe

Hi,
On 27.12.2012 11:21, Alexandre DERUMIER wrote:



Sure you mean just the output in the web GUI?

yes


Output with latest qemu-server and latest pve-qemu-kvm.

VM with 4GB Mem and 100MB used (totally IDLE):
Dec 27 12:55:46 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)

Dec 27 12:55:46 copying disk images
Dec 27 12:55:46 starting VM 105 on remote node 'cloud1-1203'
Dec 27 12:55:48 starting migration tunnel
Dec 27 12:55:49 starting online/live migration on port 60000
Dec 27 12:55:49 migrate_set_speed: 8589934592
Dec 27 12:55:49 migrate_set_downtime: 1
Dec 27 12:55:51 migration speed: 1024.00 MB/s - downtime 1534 ms
Dec 27 12:55:51 migration status: completed
Dec 27 12:55:54 migration finished successfuly (duration 00:00:09)
TASK OK

The same again with 1GB Memory used (cached mem):
Dec 27 12:57:11 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)

Dec 27 12:57:11 copying disk images
Dec 27 12:57:11 starting VM 105 on remote node 'cloud1-1202'
Dec 27 12:57:15 starting migration tunnel
Dec 27 12:57:15 starting online/live migration on port 60000
Dec 27 12:57:15 migrate_set_speed: 8589934592
Dec 27 12:57:15 migrate_set_downtime: 1
Dec 27 12:58:45 migration speed: 22.76 MB/s - downtime 90004 ms
Dec 27 12:58:45 migration status: completed
Dec 27 12:58:49 migration finished successfuly (duration 00:01:38)
TASK OK

The VM was halted in between, and no stats output was shown as the monitor 
was blocked.


The same again with 1GB memory and migrate_downtime set to 0.03 (cached mem):
Dec 27 13:00:19 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)

Dec 27 13:00:19 copying disk images
Dec 27 13:00:19 starting VM 105 on remote node 'cloud1-1203'
Dec 27 13:00:22 starting migration tunnel
Dec 27 13:00:23 starting online/live migration on port 60000
Dec 27 13:00:23 migrate_set_speed: 8589934592
Dec 27 13:00:23 migrate_set_downtime: 0.03
Dec 27 13:00:25 migration status: active (transferred 404647386, remaining 680390656), total 2156265472) , expected downtime 190
Dec 27 13:00:27 migration status: active (transferred 880582320, remaining 203579392), total 2156265472) , expected downtime 53

Dec 27 13:00:29 migration speed: 341.33 MB/s - downtime 490 ms
Dec 27 13:00:29 migration status: completed
Dec 27 13:00:32 migration finished successfuly (duration 00:00:13)
TASK OK

Stefan


Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe

Hi,
On 27.12.2012 12:36, Dietmar Maurer wrote:

for me expected downtime is 0 until the end of the migration.


Just uploaded a fix for that - please can you test:
https://git.proxmox.com/?p=pve-qemu-kvm.git;a=commit;h=ca5316794924e7af304c5af762d68e0f0e5cdc5d


Thanks, sadly it doesn't fix the problem. I sent you some outputs in the 
answer to Alexandre's question a minute ago.


Greets,
Stefan


Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER
The same again with 1GB Memory used (cached mem): 
Dec 27 12:57:11 starting migration of VM 105 to node 'cloud1-1202' 
(10.255.0.20) 
Dec 27 12:57:11 copying disk images 
Dec 27 12:57:11 starting VM 105 on remote node 'cloud1-1202' 
Dec 27 12:57:15 starting migration tunnel 
Dec 27 12:57:15 starting online/live migration on port 60000 
Dec 27 12:57:15 migrate_set_speed: 8589934592 
Dec 27 12:57:15 migrate_set_downtime: 1 
Dec 27 12:58:45 migration speed: 22.76 MB/s - downtime 90004 ms 
Dec 27 12:58:45 migration status: completed 
Dec 27 12:58:49 migration finished successfuly (duration 00:01:38) 
TASK OK 

damn, 90000ms of downtime, that's crazy.

It's like it's trying to finish the migration directly, sending the whole 1GB 
of memory in 1 pass.

I think the monitor is blocked, because it's blocked also for me at the end of 
the migration (but for some ms, not 90000ms)

Sounds like a bug somewhere in qemu 






Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe

Hi,
On 27.12.2012 13:19, Dietmar Maurer wrote:

The same again with 1GB Memory used (cached mem):


 Does the cache contain duplicated pages? Or pages filled with zeroes?


Shouldn't be zeros. If there are duplicated pages, I've no idea.

I've done the following:
- boot the VM (mem usage is 100MB)
- started: find /usr -type f -print0 | xargs cat > /dev/null
- memory usage is 860MB cached and 100MB used

Greets,
Stefan


Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe

Hi,
On 27.12.2012 13:39, Alexandre DERUMIER wrote:

 not right now - but i tested this yesterday and didn't see a difference,
 so i moved back to 3.6.11.

 I'll do a test with a 3.6 kernel too, to see if I see a difference


Thanks! Will retest with pve kernel too.


 Have you tried with the latest qemu 1.4 git?
 Because I'm looking into the code, and the change in the migration code is really 
 huge.
 So we could know whether it's a qemu migration code problem or not...


The problem is the LATEST git qemu code: they've changed a LOT of include 
file locations, so nearly NO PVE patch applies...


Stefan


Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Dietmar Maurer
 not right now - but i tested this yesterday and didn't see a difference, so i
 moved back to 3.6.11.
 
 But i can retest?

Yes, please re-test.


Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER
The problem is the LATEST git qemu code: they've changed a LOT of include 
file locations, so nearly NO PVE patch applies... 

I'm currently building a pve-qemu-kvm on qemu 1.4, with the basic patches

fr-ca-keymap-corrections.diff
fairsched.diff
pve-auth.patch
vencrypt-auth-plain.patch
enable-kvm-by-default.patch

should be enough to connect with vnc and test migration

I'll keep in touch





Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe

Strangely the status of the VM is always paused after migration.

Stefan
On 27.12.2012 15:18, Stefan Priebe wrote:

Hi,

have now done the same.

With current git qemu, migration is really fast using 1.6GB memory:
Dec 27 15:17:45 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 15:17:45 copying disk images
Dec 27 15:17:45 starting VM 105 on remote node 'cloud1-1202'
Dec 27 15:17:48 starting online/live migration on tcp:10.255.0.20:60000
Dec 27 15:17:48 migrate_set_speed: 8589934592
Dec 27 15:17:48 migrate_set_downtime: 0.05
Dec 27 15:17:52 migration speed: 512.00 MB/s - downtime 174 ms
Dec 27 15:17:52 migration status: completed
Dec 27 15:17:53 migration finished successfuly (duration 00:00:09)
TASK OK

It's so fast that I can't check whether I see that while migrating.

Greets,
Stefan



[pve-devel] [PATCH 5/8] nexenta: has_feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/NexentaPlugin.pm |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/PVE/Storage/NexentaPlugin.pm b/PVE/Storage/NexentaPlugin.pm
index 386656f..9622548 100644
--- a/PVE/Storage/NexentaPlugin.pm
+++ b/PVE/Storage/NexentaPlugin.pm
@@ -380,4 +380,18 @@ sub volume_snapshot_delete {
     nexenta_request($scfg, 'destroy', 'snapshot', "$scfg->{pool}/$volname\@$snap", '');
 }

+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+    my $features = {
+	snapshot => { current => 1, snap => 1},
+	clone => { snap => 1},
+    };
+
+    my $snap = $snapname ? 'snap' : 'current';
+    return 1 if $features->{$feature}->{$snap};
+
+    return undef;
+}
+
 1;
-- 
1.7.10.4
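
For illustration, a hypothetical call site; with the feature map above, a volume without a snapshot name resolves to the 'current' entry:

    # Does this Nexenta volume support snapshots right now?
    my $can_snapshot = PVE::Storage::NexentaPlugin->volume_has_feature(
        $scfg, 'snapshot', $storeid, $volname, undef, $running);
    # true: snapshot => { current => 1 } covers volumes without a snapshot
    # name, while clone is only offered on existing snapshots (snap => 1).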



[pve-devel] [PATCH 8/8] iscsidirect : has_feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/ISCSIDirectPlugin.pm |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index e2490e8..b648fd5 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -208,4 +208,10 @@ sub volume_snapshot_delete {
     die "volume snapshot delete is not possible on iscsi device";
 }

+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+    return undef;
+}
+
 1;
-- 
1.7.10.4



[pve-devel] [PATCH 7/8] iscsi : has_feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/ISCSIPlugin.pm |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index 173ca1d..ac8384b 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/ISCSIPlugin.pm
@@ -383,5 +383,11 @@ sub volume_resize {
     die "volume resize is not possible on iscsi device";
 }

+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+    return undef;
+}
+
 
 1;
-- 
1.7.10.4



[pve-devel] [PATCH] enable snapshot button only if vm has snapshot feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 www/manager/qemu/SnapshotTree.js |   17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/www/manager/qemu/SnapshotTree.js b/www/manager/qemu/SnapshotTree.js
index 0fd1b82..f98c849 100644
--- a/www/manager/qemu/SnapshotTree.js
+++ b/www/manager/qemu/SnapshotTree.js
@@ -71,6 +71,20 @@ Ext.define('PVE.qemu.SnapshotTree', {
 		me.load_task.delay(me.load_delay);
 	    }
 	});
+
+	PVE.Utils.API2Request({
+	    url: '/nodes/' + me.nodename + '/qemu/' + me.vmid + '/feature',
+	    params: { feature: 'snapshot' },
+	    method: 'GET',
+	    success: function(response, options) {
+		var res = response.result.data;
+		if (res === 1) {
+		    Ext.getCmp('snapshotBtn').enable();
+		}
+	    }
+	});
+
+
     },
 
 initComponent: function() {
@@ -94,6 +108,7 @@ Ext.define('PVE.qemu.SnapshotTree', {
 	    return record && record.data && record.data.name &&
 		record.data.name !== 'current';
 	};
+
 	var valid_snapshot_rollback = function(record) {
 	    return record && record.data && record.data.name &&
 		record.data.name !== 'current' && !record.data.snapstate;
@@ -193,7 +208,9 @@ Ext.define('PVE.qemu.SnapshotTree', {
 	});

 	var snapshotBtn = Ext.create('Ext.Button', {
+	    id: 'snapshotBtn',
 	    text: gettext('Take Snapshot'),
+	    disabled: true,
 	    handler: function() {
 		var win = Ext.create('PVE.window.Snapshot', {
 		    nodename: me.nodename,
-- 
1.7.10.4



Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe

Hi,

yes it works really fine with qemu 1.4:

Same VM 2GB in use of 4GB:

Dec 27 19:33:15 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)

Dec 27 19:33:15 copying disk images
Dec 27 19:33:15 starting VM 105 on remote node 'cloud1-1203'
Dec 27 19:33:19 starting migration tunnel
Dec 27 19:33:20 starting online/live migration on port 60000
Dec 27 19:33:20 migrate_set_speed: 8589934592
Dec 27 19:33:20 migrate_set_downtime: 1
Dec 27 19:33:22 migration status: active (transferred 545515913, remaining 1537454080), total 2156396544) , expected downtime 0
Dec 27 19:33:24 migration status: active (transferred 993620520, remaining 1089703936), total 2156396544) , expected downtime 0
Dec 27 19:33:26 migration status: active (transferred 1460008672, remaining 623603712), total 2156396544) , expected downtime 0
Dec 27 19:33:28 migration status: active (transferred 1927652198, remaining 156454912), total 2156396544) , expected downtime 0

Dec 27 19:33:29 migration speed: 227.56 MB/s - downtime 652 ms
Dec 27 19:33:29 migration status: completed
Dec 27 19:33:33 migration finished successfuly (duration 00:00:18)
TASK OK

Stefan
On 27.12.2012 15:40, Alexandre DERUMIER wrote:

you are too fast ;) here is my package:
http://odisoweb1.odiso.net/pve-qemu-kvm_1.3-10_amd64.deb

series (patches attached in the mail)
--
fr-ca-keymap-corrections.diff
fairsched.diff
pve-auth.patch
vencrypt-auth-plain.patch
enable-kvm-by-default.patch
virtio-balloon-drop-old-stats-code.patch
virtio-balloon-re-enable-balloon-stats.patch
virtio-balloon-document-stats.patch
virtio-balloon-fix-query.patch


Does it work for you with migrate_downtime 1 ?


here my logs:

migration logs: small memory workload (default 0.030s migrate_downtime): downtime 29ms !

Dec 27 15:20:29 starting migration of VM 9 to node 'kvmtest2' (10.3.94.47)
Dec 27 15:20:29 copying disk images
Dec 27 15:20:29 starting VM 9 on remote node 'kvmtest2'
Dec 27 15:20:31 starting migration tunnel
Dec 27 15:20:32 starting online/live migration on port 60000
Dec 27 15:20:32 migrate_set_speed: 8589934592
Dec 27 15:20:34 migration status: active (transferred 76262747, remaining 8305565696), total 8397586432, expected downtime 0)
Dec 27 15:20:36 migration status: active (transferred 170298066, remaining 8170553344), total 8397586432, expected downtime 0)
Dec 27 15:20:38 migration status: active (transferred 178034093, remaining 4819476480), total 8397586432, expected downtime 0)
Dec 27 15:20:40 migration status: active (transferred 205755801, remaining 4589043712), total 8397586432, expected downtime 0)
Dec 27 15:20:42 migration status: active (transferred 231493719, remaining 4186116096), total 8397586432, expected downtime 0)
Dec 27 15:20:44 migration status: active (transferred 294439405, remaining 3542519808), total 8397586432, expected downtime 0)
Dec 27 15:20:46 migration status: active (transferred 301252962, remaining 441729024), total 8397586432, expected downtime 0)
Dec 27 15:20:48 migration speed: 500.00 MB/s - downtime 29 ms
Dec 27 15:20:48 migration status: completed
Dec 27 15:20:52 migration finished successfuly (duration 00:00:24)
TASK OK



migration logs: playing HD video in vlc: downtime 600ms (migrate_downtime set to 1sec because of never-ending migration)


Dec 27 15:34:37 starting migration of VM 9 to node 'kvmtest2' (10.3.94.47)
Dec 27 15:34:37 copying disk images
Dec 27 15:34:37 starting VM 9 on remote node 'kvmtest2'
Dec 27 15:34:39 starting migration tunnel
Dec 27 15:34:40 starting online/live migration on port 60000
Dec 27 15:34:40 migrate_set_speed: 8589934592
Dec 27 15:34:42 migration status: active (transferred 96367979, remaining 8285630464), total 8397586432, expected downtime 0)
Dec 27 15:34:44 migration status: active (transferred 170482023, remaining 8142753792), total 8397586432, expected downtime 0)
Dec 27 15:34:46 migration status: active (transferred 198946937, remaining 6157733888), total 8397586432, expected downtime 0)
Dec 27 15:34:48 migration status: active (transferred 239722016, remaining 5490028544), total 8397586432, expected downtime 0)
Dec 27 15:34:50 migration status: active (transferred 298664987, remaining 4960985088), total 8397586432, expected downtime 0)
Dec 27 15:34:52 migration status: active (transferred 374755031, remaining 4403380224), total 8397586432, expected downtime 0)
Dec 27 15:34:54 migration status: active (transferred 438843200, remaining 4119465984), total 8397586432, expected downtime 0)
Dec 27 15:34:57 migration status: active (transferred 462321818, remaining 0), total 8397586432, expected downtime 0)
Dec 27 15:34:57 migration status: active (transferred 500708093, remaining 187273216), total 8397586432, expected downtime 0)
Dec 27 15:35:01 migration status: active (transferred 525596322, remaining 66678784), total 8397586432, expected downtime 0)
Dec 27 15:35:01 migration status: active (transferred 547180175, remaining 0), total 8397586432, expected 

Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe

Hi,

On 27.12.2012 16:21, Alexandre DERUMIER wrote:

But if it works fine for you with 1s migrate_downtime, we need to find where 
the problem is in the current qemu 1.3 code ... (maybe the qemu mailing list can help)

To my last mails nobody answered...

Stefan





[pve-devel] manually edit datacenter.cfg?

2012-12-27 Thread Stefan Priebe

Hello list,

to test a new patch I want to edit datacenter.cfg manually. But there is 
a .version file inside /etc/pve. Also it seems I still get a cached version.


Is there a way to manually edit this file and update the cache / version?

Greets,
Stefan


[pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe
This patch adds support for unsecure migration using a direct tcp connection
KVM => KVM instead of an extra SSH tunnel. Without ssh, the limit is just the
bandwidth and no longer the CPU / one single core.

You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg

Examples use qemu 1.4, as migration with qemu 1.3 still does not work for me:

current default with SSH Tunnel VM uses 2GB mem:
Dec 27 21:10:32 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 21:10:32 copying disk images
Dec 27 21:10:32 starting VM 105 on remote node 'cloud1-1202'
Dec 27 21:10:35 starting ssh migration tunnel
Dec 27 21:10:36 starting online/live migration on localhost:60000
Dec 27 21:10:36 migrate_set_speed: 8589934592
Dec 27 21:10:36 migrate_set_downtime: 1
Dec 27 21:10:38 migration status: active (transferred 152481002, remaining 1938546688), total 2156396544) , expected downtime 0
Dec 27 21:10:40 migration status: active (transferred 279836995, remaining 1811140608), total 2156396544) , expected downtime 0
Dec 27 21:10:42 migration status: active (transferred 421265271, remaining 1669840896), total 2156396544) , expected downtime 0
Dec 27 21:10:44 migration status: active (transferred 570987974, remaining 1520152576), total 2156396544) , expected downtime 0
Dec 27 21:10:46 migration status: active (transferred 721469404, remaining 1369939968), total 2156396544) , expected downtime 0
Dec 27 21:10:48 migration status: active (transferred 875595258, remaining 1216057344), total 2156396544) , expected downtime 0
Dec 27 21:10:50 migration status: active (transferred 1034654822, remaining 1056931840), total 2156396544) , expected downtime 0
Dec 27 21:10:54 migration status: active (transferred 1176288424, remaining 915369984), total 2156396544) , expected downtime 0
Dec 27 21:10:56 migration status: active (transferred 1339734759, remaining 752050176), total 2156396544) , expected downtime 0
Dec 27 21:10:58 migration status: active (transferred 1503743261, remaining 588206080), total 2156396544) , expected downtime 0
Dec 27 21:11:02 migration status: active (transferred 1645097827, remaining 446906368), total 2156396544) , expected downtime 0
Dec 27 21:11:04 migration status: active (transferred 1810562934, remaining 281751552), total 2156396544) , expected downtime 0
Dec 27 21:11:06 migration status: active (transferred 1964377505, remaining 126033920), total 2156396544) , expected downtime 0
Dec 27 21:11:08 migration status: active (transferred 2077930417, remaining 0), total 2156396544) , expected downtime 0
Dec 27 21:11:09 migration speed: 62.06 MB/s - downtime 37 ms
Dec 27 21:11:09 migration status: completed
Dec 27 21:11:13 migration finished successfuly (duration 00:00:41)
TASK OK

with unsecure migration without SSH Tunnel:
Dec 27 22:43:14 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 22:43:14 copying disk images
Dec 27 22:43:14 starting VM 105 on remote node 'cloud1-1203'
Dec 27 22:43:17 starting online/live migration on 10.255.0.22:60000
Dec 27 22:43:17 migrate_set_speed: 8589934592
Dec 27 22:43:17 migrate_set_downtime: 1
Dec 27 22:43:19 migration speed: 1024.00 MB/s - downtime 1100 ms
Dec 27 22:43:19 migration status: completed
Dec 27 22:43:22 migration finished successfuly (duration 00:00:09)
TASK OK
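
The two logs make the bottleneck visible: through the SSH tunnel the transfer is CPU-bound at 62.06 MB/s and takes 41 seconds, while the direct tcp connection runs at 1024.00 MB/s and finishes in 9 seconds.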

---
 PVE/QemuMigrate.pm |   41 +++++++++++++++++++++++------------------
 PVE/QemuServer.pm  |   12 +++++-------
 2 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 38f1d05..41b9446 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -298,8 +298,8 @@ sub phase2 {
 
     $self->log('info', "starting VM $vmid on remote node '$self->{node}'");

+    my $raddr;
     my $rport;
-
     my $nodename = PVE::INotify::nodename();

     ## start on remote node
@@ -308,27 +308,27 @@ sub phase2 {

     PVE::Tools::run_command($cmd, outfunc => sub {
 	my $line = shift;
-
-	if ($line =~ m/^migration listens on port (\d+)$/) {
-	    $rport = $1;
+	if ($line =~ m/^migration listens on tcp:([\d\.]+|localhost):(\d+)$/) {
+	    $raddr = $1;
+	    $rport = $2;
 	}
     }, errfunc => sub {
 	my $line = shift;
 	$self->log('info', $line);
     });

-    die "unable to detect remote migration port\n" if !$rport;
+    die "unable to detect remote migration address\n" if !$raddr;

-    $self->log('info', "starting migration tunnel");
+    if ($raddr eq "localhost") {
+	$self->log('info', "starting ssh migration tunnel");

-    ## create tunnel to remote port
-    my $lport = PVE::QemuServer::next_migrate_port();
-    $self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
-
-    $self->log('info', "starting online/live migration on port $lport");
-    # start migration
+	## create tunnel to remote port
+	my $lport = PVE::QemuServer::next_migrate_port();
+	$self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
+    }
 

Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Dietmar Maurer
 On 27.12.2012 16:21, Alexandre DERUMIER wrote:
  But if it works fine for you with 1s migrate_downtime, we need to
  find where the problem is in the current qemu 1.3 code ... (maybe the qemu
  mailing list can help)
 To my last mails nobody answered...

What information do you miss (what last mails?)?


Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Stefan Priebe - Profihost AG

On 28.12.2012 at 07:15, Dietmar Maurer diet...@proxmox.com wrote:

 On 27.12.2012 16:21, Alexandre DERUMIER wrote:
 But if it works fine for you with 1s migrate_downtime, we need to
 find where the problem is in the current qemu 1.3 code ... (maybe the qemu
 mailing list can help)
 To my last mails nobody answered...
 
 What information do you miss (what last mails?)?
The last mails to the qemu mailing list. They were regarding my migration problems.


Re: [pve-devel] manually edit datacenter.cfg?

2012-12-27 Thread Stefan Priebe - Profihost AG

On 28.12.2012 at 07:24, Dietmar Maurer diet...@proxmox.com wrote:

 to test a new patch I want to edit datacenter.cfg manually. But there is a
 .version file inside /etc/pve. Also it seems I still get a cached version.
 
 What is the problem exactly?
 
Got that fixed ;-) I didn't know that I had to add the parameter to Cluster.pm. Sorry.

Stefan



Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Dietmar Maurer
So the downtime with your patch is 30 times larger? 



Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Dietmar Maurer
  Am 27.12.2012 16:21, schrieb Alexandre DERUMIER:
  But if it's work fine for you with 1s migrate_downtime, we need to
  find where the problem is in the current qemu 1.3 code ... (maybe
  qemu mailing can help)
  To my last mails nobody answered...
 
  What information do you miss (what last mails?)?
 Last mails to qemu mailing list. It was regarding my migration problems.

Ah, yes. I will do further tests today to reproduce the bug here.



Re: [pve-devel] qemu-server : add has_feature sub to detect storage features

2012-12-27 Thread Dietmar Maurer
applied.

 -Original Message-
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
 boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier
 Sent: Donnerstag, 27. Dezember 2012 16:07
 To: pve-devel@pve.proxmox.com
 Subject: [pve-devel] qemu-server : add has_feature sub to detect storage
 features



Re: [pve-devel] qemu-server : add has_feature sub to detect storage features

2012-12-27 Thread Alexandre DERUMIER
Thanks ! 


- Original message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Friday, 28 December 2012 07:56:22 
Subject: RE: [pve-devel] qemu-server : add has_feature sub to detect storage 
features 

applied. 

 -Original Message- 
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel- 
 boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier 
 Sent: Donnerstag, 27. Dezember 2012 16:07 
 To: pve-devel@pve.proxmox.com 
 Subject: [pve-devel] qemu-server : add has_feature sub to detect storage 
 features 


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe - Profihost AG

On 28.12.2012 at 07:32, Dietmar Maurer diet...@proxmox.com wrote:

 So downtime with your patch is 30 times larger? 

That's just caused by migrate_downtime = 1s; if you set a lower value you'll get 
lower downtimes. Qemu guesses that it can transfer the whole VM in around 1s, and 
that's allowed.
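
For intuition: qemu only enters the final stop-and-copy phase once the remaining dirty memory divided by its bandwidth estimate drops below the downtime target, so a 1s target lets it pause the guest with up to a second's worth of pages still to send, and much longer when the bandwidth estimate is wrong, as in the 90000 ms log earlier in this thread.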

I can send you outputs with migrate_downtime 0.3 as well.

Stefan


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe - Profihost AG

On 28.12.2012 at 07:29, Dietmar Maurer diet...@proxmox.com wrote:

 This patch adds support for unsecure migration using a direct tcp connection
 KVM => KVM instead of an extra SSH tunnel. Without ssh, the limit is just the
 bandwidth and no longer the CPU / one single core.
 
 I think this should be done in ssh (cipher=none), so that we still make sure 
 that we connect to
 the correct nodes. But yes, it is a considerable amount of work to patch ssh - 
 not sure about that.
It doesn't seem that cipher 'none' will get implemented. So who wants to take care of a 
custom OpenSSH?

At which point do you see the risk of connecting to wrong nodes?

Stefan


Re: [pve-devel] [PATCH 3/3] add downtime & expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER
To my last mails nobody answered... 

I have replied to the mail, adding more info, because I think it was not 
detailed enough.
I also put Paolo Bonzini and Juan Quintela (the migration code author) in copy.




Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Dietmar Maurer
 That's just caused by migrate_downtime = 1s; if you set a lower value you'll
 get lower downtimes. Qemu guesses that it can transfer the whole vm in around
 1s and that's allowed.
 
 I can send you outputs with migrate_downtime 0.3 as well.

not necessary, thanks.



Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Dietmar Maurer
 At which point do you see the risk of connecting to wrong nodes?

IP or MAC spoofing attacks?



Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe - Profihost AG

On 28.12.2012 at 08:15, Dietmar Maurer diet...@proxmox.com wrote:

 At which point do you see the risk of connecting to wrong nodes?
 
 IP or MAC spoofing attacks?

But how? I mean, this is a feature for TRUSTED networks! So a private, separated 
LAN for the cluster interconnect. If you have that, how should this happen? If 
someone already has access to your systems, he can do easier things as well.

Stefan
