[pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-07-26 Thread Stefan Priebe
This patch adds support for unsecure migration using a direct TCP connection
KVM <=> KVM instead of an extra SSH tunnel. Without SSH, the limit is just the
bandwidth and no longer the CPU / one single core.

You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg
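
The QemuServer.pm half of the diff is truncated below; conceptually it only
switches the address QEMU listens on for '-incoming'. A minimal sketch of that
decision - cfs_read_file and remote_node_ip are real PVE::Cluster functions,
but reading 'migration_unsecure' as a plain hash key is an assumption here,
not the actual patch:

    # Sketch: pick the migration listen address based on the datacenter option.
    # ($nodename and $port come from the surrounding VM start code.)
    my $datacenterconf = PVE::Cluster::cfs_read_file('datacenter.cfg');

    my $migrate_addr = $datacenterconf->{migration_unsecure}
        ? PVE::Cluster::remote_node_ip($nodename)   # listen on the LAN address
        : 'localhost';                              # keep the SSH-tunnel default

    # QEMU is then started with: -incoming tcp:$migrate_addr:$port
    push @$cmd, '-incoming', "tcp:${migrate_addr}:${port}";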

Examples use qemu 1.4, as migration with qemu 1.3 still does not work for me:

current default with SSH tunnel (VM uses 2GB mem):
Dec 27 21:10:32 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 21:10:32 copying disk images
Dec 27 21:10:32 starting VM 105 on remote node 'cloud1-1202'
Dec 27 21:10:35 starting ssh migration tunnel
Dec 27 21:10:36 starting online/live migration on localhost:6
Dec 27 21:10:36 migrate_set_speed: 8589934592
Dec 27 21:10:36 migrate_set_downtime: 1
Dec 27 21:10:38 migration status: active (transferred 152481002, remaining 1938546688), total 2156396544) , expected downtime 0
Dec 27 21:10:40 migration status: active (transferred 279836995, remaining 1811140608), total 2156396544) , expected downtime 0
Dec 27 21:10:42 migration status: active (transferred 421265271, remaining 1669840896), total 2156396544) , expected downtime 0
Dec 27 21:10:44 migration status: active (transferred 570987974, remaining 1520152576), total 2156396544) , expected downtime 0
Dec 27 21:10:46 migration status: active (transferred 721469404, remaining 1369939968), total 2156396544) , expected downtime 0
Dec 27 21:10:48 migration status: active (transferred 875595258, remaining 1216057344), total 2156396544) , expected downtime 0
Dec 27 21:10:50 migration status: active (transferred 1034654822, remaining 1056931840), total 2156396544) , expected downtime 0
Dec 27 21:10:54 migration status: active (transferred 1176288424, remaining 915369984), total 2156396544) , expected downtime 0
Dec 27 21:10:56 migration status: active (transferred 1339734759, remaining 752050176), total 2156396544) , expected downtime 0
Dec 27 21:10:58 migration status: active (transferred 1503743261, remaining 588206080), total 2156396544) , expected downtime 0
Dec 27 21:11:02 migration status: active (transferred 1645097827, remaining 446906368), total 2156396544) , expected downtime 0
Dec 27 21:11:04 migration status: active (transferred 1810562934, remaining 281751552), total 2156396544) , expected downtime 0
Dec 27 21:11:06 migration status: active (transferred 1964377505, remaining 126033920), total 2156396544) , expected downtime 0
Dec 27 21:11:08 migration status: active (transferred 2077930417, remaining 0), total 2156396544) , expected downtime 0
Dec 27 21:11:09 migration speed: 62.06 MB/s - downtime 37 ms
Dec 27 21:11:09 migration status: completed
Dec 27 21:11:13 migration finished successfuly (duration 00:00:41)
TASK OK

with unsecure migration without SSH Tunnel:
Dec 27 22:43:14 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 22:43:14 copying disk images
Dec 27 22:43:14 starting VM 105 on remote node 'cloud1-1203'
Dec 27 22:43:17 starting online/live migration on 10.255.0.22:6
Dec 27 22:43:17 migrate_set_speed: 8589934592
Dec 27 22:43:17 migrate_set_downtime: 1
Dec 27 22:43:19 migration speed: 1024.00 MB/s - downtime 1100 ms
Dec 27 22:43:19 migration status: completed
Dec 27 22:43:22 migration finished successfuly (duration 00:00:09)
TASK OK
---
 PVE/QemuMigrate.pm |   32 +---
 PVE/QemuServer.pm  |   12 +---
 2 files changed, 26 insertions(+), 18 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index dd48f78..be7df23 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -306,8 +306,8 @@ sub phase2 {
 
     $self->log('info', "starting VM $vmid on remote node '$self->{node}'");
 
+    my $raddr;
     my $rport;
-
     my $nodename = PVE::INotify::nodename();
 
     ## start on remote node
@@ -320,27 +320,28 @@ sub phase2 {
 
     PVE::Tools::run_command($cmd, outfunc => sub {
        my $line = shift;
-
-       if ($line =~ m/^migration listens on port (\d+)$/) {
-           $rport = $1;
+       if ($line =~ m/^migration listens on tcp:([\d\.]+|localhost):(\d+)$/) {
+           $raddr = $1;
+           $rport = $2;
        }
     }, errfunc => sub {
        my $line = shift;
        $self->log('info', $line);
     });
 
-    die "unable to detect remote migration port\n" if !$rport;
-
-    $self->log('info', "starting migration tunnel");
+    die "unable to detect remote migration address\n" if !$raddr;
 
-    ## create tunnel to remote port
-    my $lport = PVE::Tools::next_migrate_port();
-    $self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
+    if ($raddr eq "localhost") {
+       $self->log('info', "starting ssh migration tunnel");
 
-    $self->log('info', "starting online/live migration on port $lport");
-    # start migration
+       ## create tunnel to remote port
+       my $lport = PVE::Tools::next_migrate_port();
+       $self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
+    }
 
     my $start = 
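
The heart of the change is the new regex on the remote 'qm start' output. A
self-contained sketch of the two line formats it must accept (sample lines
made up for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # the new tcp form (migration_unsecure set) and the old port-only form
    my @samples = (
        "migration listens on tcp:10.255.0.22:60000",   # hypothetical addr/port
        "migration listens on port 60000",
    );

    for my $line (@samples) {
        my ($raddr, $rport);
        if ($line =~ m/^migration listens on tcp:([\d\.]+|localhost):(\d+)$/) {
            ($raddr, $rport) = ($1, int($2));
        } elsif ($line =~ m/^migration listens on port (\d+)$/) {
            ($raddr, $rport) = ('localhost', int($1));
        }
        die "unable to detect remote migration address\n" if !$raddr;
        print "would migrate to $raddr:$rport\n";
    }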

Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-07-26 Thread Dietmar Maurer
> OK
> 
> > (just remove ssh parameters '-L', "$lport:localhost:$rport" if !$rport
> > in fork_tunnel)

yes (if we do not need the ssh tunnel)

> That makes no sense to me as $rport is always set. Or do you mean if $raddr
> ne localhost?
> 
> > +    if ($raddr eq "localhost") {
> > +        $self->log('info', "starting ssh migration tunnel");
> > 
> > -    $self->log('info', "starting online/live migration on port $lport");
> > -    # start migration
> > +        ## create tunnel to remote port
> > +        my $lport = PVE::Tools::next_migrate_port();
> > +        $self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
> > +    }
> > 
> >      my $start = time();
> > +    $self->log('info', "starting online/live migration on $raddr:$rport");
> > +    $self->{livemigration} = 1;
> > 
> > no need to change if we start the tunnel anyways?
> Most probably but maybe still nicer than relying on the tunnel variable?

OK
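
The fork_tunnel change being discussed ends up as below. One pitfall worth
noting: Perl's qw() does not interpolate variables, so the forwarding argument
has to be built as a quoted string. A sketch, assuming rem_ssh and
fork_command_pipe as used elsewhere in QemuMigrate.pm:

    sub fork_tunnel {
        my ($self, $nodeip, $lport, $rport) = @_;

        # request SSH port forwarding only when a local tunnel port was allocated
        my @localtunnelinfo = defined($lport) ? ('-L', "$lport:localhost:$rport") : ();

        my $cmd = [@{$self->{rem_ssh}}, @localtunnelinfo, 'qm', 'mtunnel'];

        return $self->fork_command_pipe($cmd);
    }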


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-07-26 Thread Stefan Priebe
This patch adds support for unsecure migration using a direct TCP connection
KVM <=> KVM instead of an extra SSH tunnel. Without SSH, the limit is just the
bandwidth and no longer the CPU / one single core.

You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg

Examples use qemu 1.4, as migration with qemu 1.3 still does not work for me:

current default with SSH tunnel (VM uses 2GB mem):
Dec 27 21:10:32 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 21:10:32 copying disk images
Dec 27 21:10:32 starting VM 105 on remote node 'cloud1-1202'
Dec 27 21:10:35 starting ssh migration tunnel
Dec 27 21:10:36 starting online/live migration on localhost:6
Dec 27 21:10:36 migrate_set_speed: 8589934592
Dec 27 21:10:36 migrate_set_downtime: 1
Dec 27 21:10:38 migration status: active (transferred 152481002, remaining 1938546688), total 2156396544) , expected downtime 0
Dec 27 21:10:40 migration status: active (transferred 279836995, remaining 1811140608), total 2156396544) , expected downtime 0
Dec 27 21:10:42 migration status: active (transferred 421265271, remaining 1669840896), total 2156396544) , expected downtime 0
Dec 27 21:10:44 migration status: active (transferred 570987974, remaining 1520152576), total 2156396544) , expected downtime 0
Dec 27 21:10:46 migration status: active (transferred 721469404, remaining 1369939968), total 2156396544) , expected downtime 0
Dec 27 21:10:48 migration status: active (transferred 875595258, remaining 1216057344), total 2156396544) , expected downtime 0
Dec 27 21:10:50 migration status: active (transferred 1034654822, remaining 1056931840), total 2156396544) , expected downtime 0
Dec 27 21:10:54 migration status: active (transferred 1176288424, remaining 915369984), total 2156396544) , expected downtime 0
Dec 27 21:10:56 migration status: active (transferred 1339734759, remaining 752050176), total 2156396544) , expected downtime 0
Dec 27 21:10:58 migration status: active (transferred 1503743261, remaining 588206080), total 2156396544) , expected downtime 0
Dec 27 21:11:02 migration status: active (transferred 1645097827, remaining 446906368), total 2156396544) , expected downtime 0
Dec 27 21:11:04 migration status: active (transferred 1810562934, remaining 281751552), total 2156396544) , expected downtime 0
Dec 27 21:11:06 migration status: active (transferred 1964377505, remaining 126033920), total 2156396544) , expected downtime 0
Dec 27 21:11:08 migration status: active (transferred 2077930417, remaining 0), total 2156396544) , expected downtime 0
Dec 27 21:11:09 migration speed: 62.06 MB/s - downtime 37 ms
Dec 27 21:11:09 migration status: completed
Dec 27 21:11:13 migration finished successfuly (duration 00:00:41)
TASK OK

with unsecure migration without SSH Tunnel:
Dec 27 22:43:14 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 22:43:14 copying disk images
Dec 27 22:43:14 starting VM 105 on remote node 'cloud1-1203'
Dec 27 22:43:17 starting online/live migration on 10.255.0.22:6
Dec 27 22:43:17 migrate_set_speed: 8589934592
Dec 27 22:43:17 migrate_set_downtime: 1
Dec 27 22:43:19 migration speed: 1024.00 MB/s - downtime 1100 ms
Dec 27 22:43:19 migration status: completed
Dec 27 22:43:22 migration finished successfuly (duration 00:00:09)
TASK OK
---
 PVE/QemuMigrate.pm |   33 +++--
 PVE/QemuServer.pm  |   12 +---
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index dd48f78..d7af9b2 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -75,7 +75,9 @@ sub finish_command_pipe {
 sub fork_tunnel {
     my ($self, $nodeip, $lport, $rport) = @_;
 
-    my $cmd = [@{$self->{rem_ssh}}, '-L', "$lport:localhost:$rport",
+    my @localtunnelinfo = (defined $lport) ? qw(-L $lport:localhost:$rport) : ();
+
+    my $cmd = [@{$self->{rem_ssh}}, @localtunnelinfo,
               'qm', 'mtunnel' ];
 
     my $tunnel = $self->fork_command_pipe($cmd);
@@ -306,8 +308,8 @@ sub phase2 {
 
     $self->log('info', "starting VM $vmid on remote node '$self->{node}'");
 
+    my $raddr;
     my $rport;
-
     my $nodename = PVE::INotify::nodename();
 
     ## start on remote node
@@ -320,8 +322,12 @@ sub phase2 {
 
     PVE::Tools::run_command($cmd, outfunc => sub {
        my $line = shift;
-
-       if ($line =~ m/^migration listens on port (\d+)$/) {
+       if ($line =~ m/^migration listens on tcp:([\d\.]+|localhost):(\d+)$/) {
+           $raddr = $1;
+           $rport = $2;
+       }
+       elsif ($line =~ m/^migration listens on port (\d+)$/) {
+           $raddr = "localhost";
            $rport = $1;
        }
     }, errfunc => sub {
@@ -329,18 +335,16 @@ sub phase2 {
        $self->log('info', $line);
     });
 
-    die "unable to detect remote migration port\n" if !$rport;
-
-    $self->log('info', "starting migration tunnel");
+    die "unable to detect remote migration address\n" if !$raddr;
 
     ## create tunnel to 

Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-07-26 Thread Stefan Priebe - Profihost AG
new patch sent

Am 26.07.2013 10:14, schrieb Dietmar Maurer:
> > OK
> > 
> > > (just remove ssh parameters '-L', "$lport:localhost:$rport" if !$rport
> > > in fork_tunnel)
> 
> yes (if we do not need the ssh tunnel)
> 
> > That makes no sense to me as $rport is always set. Or do you mean if $raddr
> > ne localhost?
> > 
> > > +    if ($raddr eq "localhost") {
> > > +        $self->log('info', "starting ssh migration tunnel");
> > > 
> > > -    $self->log('info', "starting online/live migration on port $lport");
> > > -    # start migration
> > > +        ## create tunnel to remote port
> > > +        my $lport = PVE::Tools::next_migrate_port();
> > > +        $self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
> > > +    }
> > > 
> > >      my $start = time();
> > > +    $self->log('info', "starting online/live migration on $raddr:$rport");
> > > +    $self->{livemigration} = 1;
> > > 
> > > no need to change if we start the tunnel anyways?
> > Most probably but maybe still nicer than relying on the tunnel variable?
> 
> OK
 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-07-26 Thread Dietmar Maurer
Applying: qemu-server: add support for unsecure migration (setting in datacenter.cfg)
error: patch failed: PVE/QemuMigrate.pm:320
error: PVE/QemuMigrate.pm: patch does not apply
error: patch failed: PVE/QemuServer.pm:3059
error: PVE/QemuServer.pm: patch does not apply


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-07-26 Thread Stefan Priebe - Profihost AG
Sorry, wasn't up to date. The spice changes modified the file. Will send a
new one.

Am 26.07.2013 11:16, schrieb Dietmar Maurer:
> Applying: qemu-server: add support for unsecure migration (setting in
> datacenter.cfg)
> error: patch failed: PVE/QemuMigrate.pm:320
> error: PVE/QemuMigrate.pm: patch does not apply
> error: patch failed: PVE/QemuServer.pm:3059
> error: PVE/QemuServer.pm: patch does not apply
 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-07-26 Thread Stefan Priebe
This patch adds support for unsecure migration using a direct TCP connection
KVM <=> KVM instead of an extra SSH tunnel. Without SSH, the limit is just the
bandwidth and no longer the CPU / one single core.

You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg

Examples use qemu 1.4, as migration with qemu 1.3 still does not work for me:

current default with SSH tunnel (VM uses 2GB mem):
Dec 27 21:10:32 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 21:10:32 copying disk images
Dec 27 21:10:32 starting VM 105 on remote node 'cloud1-1202'
Dec 27 21:10:35 starting ssh migration tunnel
Dec 27 21:10:36 starting online/live migration on localhost:6
Dec 27 21:10:36 migrate_set_speed: 8589934592
Dec 27 21:10:36 migrate_set_downtime: 1
Dec 27 21:10:38 migration status: active (transferred 152481002, remaining 1938546688), total 2156396544) , expected downtime 0
Dec 27 21:10:40 migration status: active (transferred 279836995, remaining 1811140608), total 2156396544) , expected downtime 0
Dec 27 21:10:42 migration status: active (transferred 421265271, remaining 1669840896), total 2156396544) , expected downtime 0
Dec 27 21:10:44 migration status: active (transferred 570987974, remaining 1520152576), total 2156396544) , expected downtime 0
Dec 27 21:10:46 migration status: active (transferred 721469404, remaining 1369939968), total 2156396544) , expected downtime 0
Dec 27 21:10:48 migration status: active (transferred 875595258, remaining 1216057344), total 2156396544) , expected downtime 0
Dec 27 21:10:50 migration status: active (transferred 1034654822, remaining 1056931840), total 2156396544) , expected downtime 0
Dec 27 21:10:54 migration status: active (transferred 1176288424, remaining 915369984), total 2156396544) , expected downtime 0
Dec 27 21:10:56 migration status: active (transferred 1339734759, remaining 752050176), total 2156396544) , expected downtime 0
Dec 27 21:10:58 migration status: active (transferred 1503743261, remaining 588206080), total 2156396544) , expected downtime 0
Dec 27 21:11:02 migration status: active (transferred 1645097827, remaining 446906368), total 2156396544) , expected downtime 0
Dec 27 21:11:04 migration status: active (transferred 1810562934, remaining 281751552), total 2156396544) , expected downtime 0
Dec 27 21:11:06 migration status: active (transferred 1964377505, remaining 126033920), total 2156396544) , expected downtime 0
Dec 27 21:11:08 migration status: active (transferred 2077930417, remaining 0), total 2156396544) , expected downtime 0
Dec 27 21:11:09 migration speed: 62.06 MB/s - downtime 37 ms
Dec 27 21:11:09 migration status: completed
Dec 27 21:11:13 migration finished successfuly (duration 00:00:41)
TASK OK

with unsecure migration without SSH Tunnel:
Dec 27 22:43:14 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 22:43:14 copying disk images
Dec 27 22:43:14 starting VM 105 on remote node 'cloud1-1203'
Dec 27 22:43:17 starting online/live migration on 10.255.0.22:6
Dec 27 22:43:17 migrate_set_speed: 8589934592
Dec 27 22:43:17 migrate_set_downtime: 1
Dec 27 22:43:19 migration speed: 1024.00 MB/s - downtime 1100 ms
Dec 27 22:43:19 migration status: completed
Dec 27 22:43:22 migration finished successfuly (duration 00:00:09)
TASK OK
---
 PVE/QemuMigrate.pm |   35 +--
 PVE/QemuServer.pm  |   12 +---
 2 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 01d4185..0c0f3c8 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -76,7 +76,9 @@ sub finish_command_pipe {
 sub fork_tunnel {
     my ($self, $nodeip, $lport, $rport) = @_;
 
-    my $cmd = [@{$self->{rem_ssh}}, '-L', "$lport:localhost:$rport",
+    my @localtunnelinfo = (defined $lport) ? qw(-L $lport:localhost:$rport) : ();
+
+    my $cmd = [@{$self->{rem_ssh}}, @localtunnelinfo,
               'qm', 'mtunnel' ];
 
     my $tunnel = $self->fork_command_pipe($cmd);
@@ -307,8 +309,8 @@ sub phase2 {
 
     $self->log('info', "starting VM $vmid on remote node '$self->{node}'");
 
+    my $raddr;
     my $rport;
-
     my $nodename = PVE::INotify::nodename();
 
     ## start on remote node
@@ -333,9 +335,15 @@ sub phase2 {
     PVE::Tools::run_command($cmd, input => $spice_ticket, outfunc => sub {
        my $line = shift;
 
-       if ($line =~ m/^migration listens on port (\d+)$/) {
+       if ($line =~ m/^migration listens on tcp:([\d\.]+|localhost):(\d+)$/) {
+           $raddr = $1;
+           $rport = int($2);
+       }
+       elsif ($line =~ m/^migration listens on port (\d+)$/) {
+           $raddr = "localhost";
            $rport = int($1);
-       } elsif ($line =~ m/^spice listens on port (\d+)$/) {
+       }
+       elsif ($line =~ m/^spice listens on port (\d+)$/) {
            $spice_port = int($1);
        }
     }, errfunc => sub {
@@ -343,18 +351,16 @@ sub phase2 {
        $self->log('info', $line);
     });
 
-    die 

Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Stefan Priebe - Profihost AG

Hi Dietmar,

you may have followed the discussion on the openSSH dev list. The
current status is: there won't be a solution. 400MB/s is the current
maximum speed you can get, with a specific cipher and umac...@openssh.com.

The limiting factor is the CPU, but not the encryption; it's the
checksumming and MAC generation.

So I would still vote to integrate my patch.

Stefan

Am 03.01.2013 05:46, schrieb Dietmar Maurer:
> > > But that should be fixable?
> > > 
> > > Sure, here are the patches: http://www.psc.edu/index.php/hpn-ssh
> > > 
> > > but nobody at openssh wants to implement them.
> > > 
> > > I just searched the openssh-unix-dev archives for this year.
> > > There is not a single complaint about speed!
> > 
> > Asked the list!
> 
> Thanks.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Dietmar Maurer
> > you may have followed the discussion on the openSSH dev list. The current
> > status is: there won't be a solution. 400MB/s is the current maximum speed
> > you can get, with a specific cipher and umac...@openssh.com.

But 400MB/s would be good enough by far (much better than the current 70MB/s).

There is no real advantage transferring RAM at GB/s speed, because VM downtime
remains the same?

Normally, there is other traffic on the net too, so you never get the
theoretical limit anyways.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Stefan Priebe - Profihost AG

Hi Dietmar,

Am 07.01.2013 16:23, schrieb Dietmar Maurer:
> > you may have followed the discussion on the openSSH dev list. The current
> > status is: there won't be a solution. 400MB/s is the current maximum speed
> > you can get, with a specific cipher and umac...@openssh.com.
> 
> But 400MB/s would be good enough by far (much better than the current 70MB/s).
> 
> There is no real advantage transferring RAM at GB/s speed

But total migration time reduces a lot.

> because VM downtime remains the same?

VM downtime is the same. But if you need to move, for example, 30 VMs away
using 150GB of memory because you want to replace HW, it is a LOT faster
(150GB at ~400MB/s takes over six minutes; at ~1GB/s, roughly two and a half).

> Normally, there is other traffic on the net too, so you never get the
> theoretical limit anyways.

Yes, but at least in all my tests I was able to transfer at around
1000MB/s. Storage and other things were using around 200MB/s.

Stefan

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Dietmar Maurer
> > because VM downtime remains the same?
> 
> VM downtime is the same. But if you need to move, for example, 30 VMs
> away using 150GB of memory because you want to replace HW, it is a LOT faster.

Why? If you run 30 ssh connections in parallel you will get full speed (only a
single connection is limited)?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Dietmar Maurer
But AFAIK you currently get 75MB/s, not 400MB/s - so we still do not know why
it is that slow?

> > Why? If you run 30 ssh connections in parallel you will get full speed
> > (only a single connection is limited)?
> 
> Oh, I like to migrate one by one. But OK, I'll keep the patch in my local
> branch.
> 
> Stefan

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Stefan Priebe - Profihost AG

Hi,

Am 07.01.2013 16:33, schrieb Dietmar Maurer:
> But AFAIK you currently get 75MB/s, not 400MB/s - so we still do not know
> why it is that slow?

Yes, I got 200MB/s - the 75MB/s was from a wrong scp test. With
umac...@openssh.com as the MAC generator for ssh I get 400MB/s at 100% CPU
load per SSH session.

With a direct TCP connection I have nearly 10% CPU load.

Stefan



> > Why? If you run 30 ssh connections in parallel you will get full speed
> > (only a single connection is limited)?
> 
> Oh, I like to migrate one by one. But OK, I'll keep the patch in my local
> branch.
> 
> Stefan



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Eric Blevins
On 01/07/2013 10:42 AM, Dietmar Maurer wrote:
> So you can transfer a VM with 4GB within 10 seconds? IMHO not that bad.

But 4 seconds is even better; when you have dozens of machines to
migrate, seconds add up to minutes.
Besides, networks will get faster over time, but SSH will still have
limitations due to its design.

> Anyways, would it be faster if we open a tls connection to the remote host
> (instead of the ssh tunnel)?

Maybe this would be a good compromise, allowing faster-than-SSH migration
while still maintaining encryption and the security that comes with it.

I would like to saturate my 10G cluster network when live migrating, to
get the most speed possible; with SSH that is impossible.

Eric
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Alexandre DERUMIER
> Again, you can run more than one ssh connection if you want to migrate
> dozens of machines.

Hi, maybe it'll be CPU limited with a lot of parallel migrations?



- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Eric Blevins e...@netwalk.com, pve-devel@pve.proxmox.com
Sent: Monday, 7 January 2013 17:49:38
Subject: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

> > So you can transfer a VM with 4GB within 10 seconds? IMHO not that bad.
> 
> But 4 seconds is even better, when you have dozens of machines to migrate
> seconds add up to minutes.
> Besides, networks will get faster over time but SSH will still have
> limitations due to its design.

Again, you can run more than one ssh connection if you want to migrate dozens
of machines.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Dietmar Maurer
> > Again, you can run more than one ssh connection if you want to migrate
> > dozens of machines.
> 
> But remember that each migration needs 100% CPU, so a WHOLE core. So you
> cannot run dozens of migrations in parallel.

You just need 2 to saturate the network. You already tried aes128-ctr?
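
For anyone benchmarking that suggestion, the cipher can be forced on the
tunnel command line. A hypothetical standalone test; note that OpenSSH's '-c'
option must precede the destination, so it cannot simply be appended to an
already-assembled rem_ssh list:

    # Sketch: run the migration tunnel with a cheaper cipher; $nodeip is the
    # target node's address, 'qm mtunnel' the usual remote command.
    my $cmd = ['/usr/bin/ssh', '-c', 'aes128-ctr', "root\@$nodeip",
               'qm', 'mtunnel'];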

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-02 Thread Dietmar Maurer
> > But that should be fixable?
> > 
> > Sure, here are the patches: http://www.psc.edu/index.php/hpn-ssh
> > 
> > but nobody at openssh wants to implement them.
> > 
> > I just searched the openssh-unix-dev archives for this year.
> > There is not a single complaint about speed!
> 
> Asked the list!

Thanks.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-30 Thread Stefan Priebe - Profihost AG
Am 30.12.2012 um 06:51 schrieb Dietmar Maurer diet...@proxmox.com:

> > i wasn't able to find more information about the speed of ssh influenced by
> > aes-ni. I'm not sure that it will result in so much speed. The problem isn't
> > the encryption; it is the buffer handling in ssh, which uses HARDCODED,
> > pretty small buffers.
> 
> But that should be fixable?

Sure, here are the patches: http://www.psc.edu/index.php/hpn-ssh

but nobody at openssh wants to implement them.

> > So the only solution will be a direct tcp connection. No chance to get the
> > patches included?
> 
> I would really like to get ssh fixed instead - that would help a lot more
> people.

Would be great, but then Proxmox needs to maintain a custom ssh version.

Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-29 Thread Stefan Priebe

Hi,
Am 28.12.2012 13:09, schrieb Dietmar Maurer:
> > This is just memory openssl speed. There I'm getting 600-700MB/s:
> > The 'numbers' are in 1000s of bytes per second processed.
> > type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
> > aes-128-cbc   648664.33k   688924.90k   695855.45k   700784.64k   704027.06k
> > 
> > But scp / ssh just boosts from 42MB/s to 76MB/s.
> 
> if we can get 700MB/s there must be a bug somewhere? Maybe you
> can ask on the ssh mailing lists?

I wasn't able to find more information about the speed of ssh influenced
by aes-ni. I'm not sure that it will result in so much speed. The
problem isn't the encryption; it is the buffer handling in ssh, which
uses HARDCODED, pretty small buffers.

So the only solution will be a direct tcp connection. No chance to get
the patches included?
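
For reference, figures like the 600-700MB/s above come from OpenSSL's built-in
benchmark. A minimal sketch for reproducing them (standard 'openssl speed'
usage; the parsing assumes its default output layout):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # With the aesni-intel module loaded and a recent libssl, the -evp variant
    # picks up AES-NI acceleration automatically.
    my @out = qx(openssl speed -evp aes-128-cbc 2>/dev/null);

    for my $line (@out) {
        # summary line looks like: "aes-128-cbc  648664.33k  688924.90k ..."
        next if $line !~ /^aes-128-cbc\s+(.*)/;
        my @cols = split /\s+/, $1;
        (my $kbs = $cols[-1]) =~ s/k$//;   # last column: 8192-byte blocks
        printf "aes-128-cbc: %.1f MB/s on 8192-byte blocks\n", $kbs / 1000;
    }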


Greets,
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-29 Thread Dietmar Maurer
 
> i wasn't able to find more information about the speed of ssh influenced by
> aes-ni. I'm not sure that it will result in so much speed. The problem isn't
> the encryption; it is the buffer handling in ssh, which uses HARDCODED,
> pretty small buffers.

But that should be fixable?

> So the only solution will be a direct tcp connection. No chance to get the
> patches included?

I would really like to get ssh fixed instead - that would help a lot more
people.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Stefan Priebe - Profihost AG
Am 28.12.2012 um 08:38 schrieb Dietmar Maurer diet...@proxmox.com:

> > But how? I mean, this is a feature for TRUSTED networks! So a private,
> > separated LAN for cluster interconnect. If you have that, how should this
> > happen?
> 
> Security matters if something unusual happens, some kind of attack. So your
> TRUSTED network turns into a not fully TRUSTED network.

But then your trusted host servers might be affected as well.

I don't see a special risk here. Network security can't be handled by ssh. A
much lower level is needed: ip/mac filtering per port on the switch.

Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Dietmar Maurer
> I don't see a special risk here. Network security can't be handled by ssh. A
> much lower level is needed: ip/mac filtering per port on the switch.

Any attacker inside the same network can read that traffic because it is not
encrypted.

Anyways, how much faster is it compared to an AES hardware cipher?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Stefan Priebe - Profihost AG
Am 28.12.2012 um 09:33 schrieb Dietmar Maurer diet...@proxmox.com:

> > I don't see a special risk here. Network security can't be handled by ssh. A
> > much lower level is needed: ip/mac filtering per port on the switch.
> 
> Any attacker inside the same network can read that traffic because it is not
> encrypted.

But if I have a physically separated network, how could an attacker attach
to it?

> Anyways, how much faster is it compared to an AES hardware cipher?

42 MB/s ssh
76 MB/s ssh aes-ni
1050 MB/s direct tcp
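
Numbers like these can be reproduced with a trivial sender/receiver pair; a
sketch of such an ad-hoc tool (hypothetical, not part of PVE; start one side
with --listen, then run the other against its address):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;
    use Time::HiRes qw(time);

    my $port = 60000;                  # arbitrary test port
    my $buf  = 'x' x (1024 * 1024);    # 1 MiB payload chunk

    if (@ARGV && $ARGV[0] eq '--listen') {
        my $srv = IO::Socket::INET->new(LocalPort => $port, Listen => 1,
                                        ReuseAddr => 1) or die "listen: $!\n";
        my $conn = $srv->accept;
        1 while $conn->read(my $chunk, 1024 * 1024);   # drain until sender closes
    } else {
        my $host = shift @ARGV or die "usage: $0 --listen | <host>\n";
        my $sock = IO::Socket::INET->new(PeerAddr => $host, PeerPort => $port)
            or die "connect: $!\n";
        my $start = time;
        $sock->print($buf) for 1 .. 1024;   # push 1 GiB
        $sock->close;                       # flush before taking the end time
        printf "%.0f MB/s\n", 1024 / (time - $start);
    }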

Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Alexandre DERUMIER
> That looks strange - AES-NI should be much faster?

Are you sure that squeeze supports aes-ni with ssh? Because libssl is quite old.




- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 28 December 2012 11:29:23
Subject: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

> > Anyways, how much faster is it compared to an AES hardware cipher?
> 
> 42 MB/s ssh
> 76 MB/s ssh aes-ni

That looks strange - AES-NI should be much faster?

Just did a quick search, and people claim to get > 500MB/s,

for example:

http://datacenteroverlords.com/2011/09/07/aes-ni-pimp-your-aes/

So what's wrong?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Alexandre DERUMIER
> Are you sure that squeeze supports aes-ni with ssh? Because libssl is quite
> old.

See here:
Add support for AES-NI (Package: libssl1.0.0)
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=644743

- Original Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 28 December 2012 11:33:35
Subject: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

> That looks strange - AES-NI should be much faster?

Are you sure that squeeze supports aes-ni with ssh? Because libssl is quite old.

- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 28 December 2012 11:29:23
Subject: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

> > Anyways, how much faster is it compared to an AES hardware cipher?
> 
> 42 MB/s ssh
> 76 MB/s ssh aes-ni

That looks strange - AES-NI should be much faster?

Just did a quick search, and people claim to get > 500MB/s,

for example:

http://datacenteroverlords.com/2011/09/07/aes-ni-pimp-your-aes/

So what's wrong?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Dietmar Maurer
> This is just memory openssl speed. There I'm getting 600-700MB/s:
> The 'numbers' are in 1000s of bytes per second processed.
> type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
> aes-128-cbc   648664.33k   688924.90k   695855.45k   700784.64k   704027.06k
> 
> But scp / ssh just boosts from 42MB/s to 76MB/s.

if we can get 700MB/s there must be a bug somewhere? Maybe you
can ask on the ssh mailing lists?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Alexandre DERUMIER
> if we can get 700MB/s there must be a bug somewhere? Maybe you
> can ask on the ssh mailing lists?

Do you have the aes-ni intel module loaded? (#modprobe aesni-intel)


- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Stefan Priebe s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 28 December 2012 13:09:16
Subject: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

> This is just memory openssl speed. There I'm getting 600-700MB/s:
> The 'numbers' are in 1000s of bytes per second processed.
> type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
> aes-128-cbc   648664.33k   688924.90k   695855.45k   700784.64k   704027.06k
> 
> But scp / ssh just boosts from 42MB/s to 76MB/s.

if we can get 700MB/s there must be a bug somewhere? Maybe you
can ask on the ssh mailing lists?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Stefan Priebe - Profihost AG
Sure

Am 28.12.2012 um 14:14 schrieb Alexandre DERUMIER aderum...@odiso.com:

> > if we can get 700MB/s there must be a bug somewhere? Maybe you
> > can ask on the ssh mailing lists?
> 
> Do you have the aes-ni intel module loaded? (#modprobe aesni-intel)
> 
> 
> - Original Message -
> 
> From: Dietmar Maurer diet...@proxmox.com
> To: Stefan Priebe s.pri...@profihost.ag
> Cc: pve-devel@pve.proxmox.com
> Sent: Friday, 28 December 2012 13:09:16
> Subject: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)
> 
> > > This is just memory openssl speed. There I'm getting 600-700MB/s:
> > > The 'numbers' are in 1000s of bytes per second processed.
> > > type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
> > > aes-128-cbc   648664.33k   688924.90k   695855.45k   700784.64k   704027.06k
> > > 
> > > But scp / ssh just boosts from 42MB/s to 76MB/s.
> > 
> > if we can get 700MB/s there must be a bug somewhere? Maybe you
> > can ask on the ssh mailing lists?
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe
This patch adds support for unsecure migration using a direct TCP connection
KVM <=> KVM instead of an extra SSH tunnel. Without SSH, the limit is just the
bandwidth and no longer the CPU / one single core.

You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg

Examples use qemu 1.4, as migration with qemu 1.3 still does not work for me:

current default with SSH tunnel (VM uses 2GB mem):
Dec 27 21:10:32 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
Dec 27 21:10:32 copying disk images
Dec 27 21:10:32 starting VM 105 on remote node 'cloud1-1202'
Dec 27 21:10:35 starting ssh migration tunnel
Dec 27 21:10:36 starting online/live migration on localhost:6
Dec 27 21:10:36 migrate_set_speed: 8589934592
Dec 27 21:10:36 migrate_set_downtime: 1
Dec 27 21:10:38 migration status: active (transferred 152481002, remaining 1938546688), total 2156396544) , expected downtime 0
Dec 27 21:10:40 migration status: active (transferred 279836995, remaining 1811140608), total 2156396544) , expected downtime 0
Dec 27 21:10:42 migration status: active (transferred 421265271, remaining 1669840896), total 2156396544) , expected downtime 0
Dec 27 21:10:44 migration status: active (transferred 570987974, remaining 1520152576), total 2156396544) , expected downtime 0
Dec 27 21:10:46 migration status: active (transferred 721469404, remaining 1369939968), total 2156396544) , expected downtime 0
Dec 27 21:10:48 migration status: active (transferred 875595258, remaining 1216057344), total 2156396544) , expected downtime 0
Dec 27 21:10:50 migration status: active (transferred 1034654822, remaining 1056931840), total 2156396544) , expected downtime 0
Dec 27 21:10:54 migration status: active (transferred 1176288424, remaining 915369984), total 2156396544) , expected downtime 0
Dec 27 21:10:56 migration status: active (transferred 1339734759, remaining 752050176), total 2156396544) , expected downtime 0
Dec 27 21:10:58 migration status: active (transferred 1503743261, remaining 588206080), total 2156396544) , expected downtime 0
Dec 27 21:11:02 migration status: active (transferred 1645097827, remaining 446906368), total 2156396544) , expected downtime 0
Dec 27 21:11:04 migration status: active (transferred 1810562934, remaining 281751552), total 2156396544) , expected downtime 0
Dec 27 21:11:06 migration status: active (transferred 1964377505, remaining 126033920), total 2156396544) , expected downtime 0
Dec 27 21:11:08 migration status: active (transferred 2077930417, remaining 0), total 2156396544) , expected downtime 0
Dec 27 21:11:09 migration speed: 62.06 MB/s - downtime 37 ms
Dec 27 21:11:09 migration status: completed
Dec 27 21:11:13 migration finished successfuly (duration 00:00:41)
TASK OK

with unsecure migration without SSH Tunnel:
Dec 27 22:43:14 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 22:43:14 copying disk images
Dec 27 22:43:14 starting VM 105 on remote node 'cloud1-1203'
Dec 27 22:43:17 starting online/live migration on 10.255.0.22:6
Dec 27 22:43:17 migrate_set_speed: 8589934592
Dec 27 22:43:17 migrate_set_downtime: 1
Dec 27 22:43:19 migration speed: 1024.00 MB/s - downtime 1100 ms
Dec 27 22:43:19 migration status: completed
Dec 27 22:43:22 migration finished successfuly (duration 00:00:09)
TASK OK

---
 PVE/QemuMigrate.pm |   41 -
 PVE/QemuServer.pm  |   12 +---
 2 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 38f1d05..41b9446 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -298,8 +298,8 @@ sub phase2 {
 
     $self->log('info', "starting VM $vmid on remote node '$self->{node}'");
 
+    my $raddr;
     my $rport;
-
     my $nodename = PVE::INotify::nodename();
 
     ## start on remote node
@@ -308,27 +308,27 @@ sub phase2 {
 
     PVE::Tools::run_command($cmd, outfunc => sub {
        my $line = shift;
-
-       if ($line =~ m/^migration listens on port (\d+)$/) {
-           $rport = $1;
+       if ($line =~ m/^migration listens on tcp:([\d\.]+|localhost):(\d+)$/) {
+           $raddr = $1;
+           $rport = $2;
        }
     }, errfunc => sub {
        my $line = shift;
        $self->log('info', $line);
     });
 
-    die "unable to detect remote migration port\n" if !$rport;
+    die "unable to detect remote migration address\n" if !$raddr;
 
-    $self->log('info', "starting migration tunnel");
+    if ($raddr eq "localhost") {
+       $self->log('info', "starting ssh migration tunnel");
 
-    ## create tunnel to remote port
-    my $lport = PVE::QemuServer::next_migrate_port();
-    $self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
-
-    $self->log('info', "starting online/live migration on port $lport");
-    # start migration
+       ## create tunnel to remote port
+       my $lport = PVE::QemuServer::next_migrate_port();
+       $self->{tunnel} = $self->fork_tunnel($self->{nodeip}, $lport, $rport);
+    }
 

Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Dietmar Maurer
So downtime with your patch is 30 times larger? 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe - Profihost AG

Am 28.12.2012 um 07:32 schrieb Dietmar Maurer diet...@proxmox.com:

> So downtime with your patch is 30 times larger?

That's just caused by migrate_downtime = 1s; if you set a lower value you'll
get lower downtimes. QEMU guesses that it can transfer the whole VM in around
1s, and that's allowed.

I can send you outputs with migrate_downtime 0.3 as well.
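
For illustration, that knob maps onto QEMU's migrate_set_downtime monitor
command; a sketch of issuing it by hand, assuming
PVE::QemuServer::vm_monitor_command as present in the codebase of that time:

    use PVE::QemuServer;

    my $vmid = 105;   # VM id from the logs above

    # allow at most 0.3s of blackout when migration converges
    PVE::QemuServer::vm_monitor_command($vmid, "migrate_set_downtime 0.3");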

Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe - Profihost AG

Am 28.12.2012 um 07:29 schrieb Dietmar Maurer diet...@proxmox.com:

> > This patch adds support for unsecure migration using a direct TCP connection
> > KVM <=> KVM instead of an extra SSH tunnel. Without SSH the limit is just the
> > bandwidth and no longer the CPU / one single core.
> 
> I think this should be done in ssh (cipher=none), so that we still make sure
> that we connect to the correct nodes. But yes, it is a considerable amount of
> work to patch ssh - not sure about that.

It doesn't seem that cipher "none" will get implemented. So who wants to take
care of a custom OpenSSH?

At which point do you see the risk of connecting to wrong nodes?

Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Dietmar Maurer
> That's just caused by migrate_downtime = 1s; if you set a lower value you'll
> get lower downtimes. QEMU guesses that it can transfer the whole VM in around
> 1s, and that's allowed.
> 
> I can send you outputs with migrate_downtime 0.3 as well.

not necessary, thanks.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Dietmar Maurer
> At which point do you see the risk of connecting to wrong nodes?

IP or MAC spoofing attacks?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-27 Thread Stefan Priebe - Profihost AG

Am 28.12.2012 um 08:15 schrieb Dietmar Maurer diet...@proxmox.com:

> > At which point do you see the risk of connecting to wrong nodes?
> 
> IP or MAC spoofing attacks?

But how? I mean, this is a feature for TRUSTED networks! So a private,
separated LAN for cluster interconnect. If you have that, how should this
happen? If someone already has access to your systems, he can do easier
things as well.

Stefan

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel