Re: [pve-devel] [PATCH docs] add migration settings documentation

2016-11-30 Thread Dietmar Maurer
> > Both works:
> > 10.1.2.1/24 == 10.1.2.0/24 == 10.1.2.128/24
> > 
> > /24 tells us that the last 8 bits are irrelevant and masked away; at least
> > in this case, ip-tools can handle it just fine :)
> 
> After thinking about it a bit I do think using .0 is a more consistent
> and sane choice for the reference documentation, mostly because in many
> other parts we throw errors when the user enters a network with a
> non-zero host part.

I already changed that.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs] add migration settings documentation

2016-11-30 Thread Wolfgang Bumiller
On Tue, Nov 29, 2016 at 04:44:30PM +0100, Thomas Lamprecht wrote:
> Hi,
> 
> 
> On 11/29/2016 04:34 PM, Alexandre DERUMIER wrote:
> >Hi,
> >
> >+
> >>>+Here we want to use the 10.1.2.1/24 network as migration network.
> >>>+migration: secure,network=10.1.2.1/24
> >I think the network is:
> >
> >10.1.2.0/24
> 
> Both works:
> 10.1.2.1/24 == 10.1.2.0/24 == 10.1.2.128/24
> 
> /24 tells us that the last 8 bits are irrelevant and masked away; at least
> in this case, ip-tools can handle it just fine :)

After thinking about it a bit I do think using .0 is a more consistent
and sane choice for the reference documentation, mostly because in many
other parts we throw errors when the user enters a network with a
non-zero host part.
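The stricter validation mentioned here (rejecting a network whose host part is non-zero) can be illustrated with Python's `ipaddress` module; this is only an illustration of the behavior, not the actual code used by Proxmox VE:

```python
import ipaddress

# strict=True (the default) rejects a network address with host bits set,
# mirroring the validation described above
try:
    ipaddress.ip_network("10.1.2.1/24")
    print("accepted")
except ValueError:
    print("rejected: 10.1.2.1/24 has host bits set")

# the canonical form with a zero host part is accepted
print(ipaddress.ip_network("10.1.2.0/24"))
```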



Re: [pve-devel] [PATCH docs] add migration settings documentation

2016-11-29 Thread Thomas Lamprecht

Hi,


On 11/29/2016 04:34 PM, Alexandre DERUMIER wrote:

Hi,

+

+Here we want to use the 10.1.2.1/24 network as migration network.
+migration: secure,network=10.1.2.1/24

I think the network is:

10.1.2.0/24


Both works:
10.1.2.1/24 == 10.1.2.0/24 == 10.1.2.128/24

/24 tells us that the last 8 bits are irrelevant and masked away; at least
in this case, ip-tools can handle it just fine :)
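The equivalence claimed above can be checked quickly, e.g. with Python's `ipaddress` module: with strict parsing disabled, the host bits are masked away and all three spellings normalize to the same network.

```python
import ipaddress

spellings = ("10.1.2.1/24", "10.1.2.0/24", "10.1.2.128/24")
# strict=False masks away the last 8 host bits instead of rejecting them
nets = {ipaddress.ip_network(s, strict=False) for s in spellings}
print(nets)  # a single network: 10.1.2.0/24
```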



?

  
- Mail original -

De: "Thomas Lamprecht" 
À: "pve-devel" 
Envoyé: Mardi 29 Novembre 2016 10:56:05
Objet: [pve-devel] [PATCH docs] add migration settings documentation

Signed-off-by: Thomas Lamprecht 
---

pvecm seemed like a reasonable place for this. Migrations only make sense in
clustered setups, and the settings are in the pve-cluster package
(datacenter.cfg), but I'm naturally open to suggestions about better places.

pvecm.adoc | 98 ++
1 file changed, 98 insertions(+)

diff --git a/pvecm.adoc b/pvecm.adoc
index 8db8e47..c3acc84 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -880,6 +880,104 @@ When you turn on nodes, or when power comes back after 
power failure,
it is likely that some nodes boots faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.

+Guest Migration
+---
+
+Migrating virtual guests (live) to other nodes is a useful feature in a
+cluster. There are settings to control the behavior of such migrations.
+They can be set cluster-wide via the 'datacenter.cfg' configuration file,
+or for a single migration through API or command-line tool parameters.
+
+Migration Type
+~~
+
+The migration type defines whether the migration data is sent over an
+encrypted ('secure') channel or an unencrypted ('insecure') one.
+Setting the migration type to insecure means that the RAM content of a
+virtual guest is also transferred unencrypted, which can lead to
+information disclosure of critical data from inside the guest, for
+example passwords or encryption keys.
+Thus we strongly recommend using the secure channel if you do not have
+full control over the network and cannot guarantee that no one is
+eavesdropping on it.
+
+Note that storage migrations do not obey this setting; currently they
+always send the content over a secure channel.
+
+While this setting is often changed to 'insecure' to gain better
+migration performance, the overhead of secure migration may actually be
+small on systems with AES encryption hardware support in the CPU. The
+impact can grow if the network link can transmit 10 Gbps or more.
+
+Migration Network
+~
+
+By default, {pve} uses the network on which cluster communication happens
+to send the migration traffic. This may be suboptimal: for one, the
+sensitive cluster traffic can be disturbed, and this network may not have
+the best bandwidth available of all network interfaces on the node.
+Setting the migration network parameter allows using a dedicated network
+for all migration traffic when migrating a guest system. This includes
+the traffic for offline storage migrations.
+
+The migration network is specified as a network in 'CIDR' notation. This
+has the advantage that you do not need to set an IP for each node; {pve}
+is able to figure out the real address from the given CIDR-denoted
+network and the networks configured on the target node.
+For this to work, the network must be specific enough, i.e. each node
+must have exactly one IP configured in the given network.
+
+Example
+^^^
+
+Let's assume that we have a three-node setup with three networks: one for
+public communication with the Internet, one for cluster communication,
+and a very fast one, which we want to use as a dedicated migration
+network. A network configuration for such a setup could look like:
+
+
+iface eth0 inet manual
+
+# public network
+auto vmbr0
+iface vmbr0 inet static
+ address 192.X.Y.57
+ netmask 255.255.240.0
+ gateway 192.X.Y.1
+ bridge_ports eth0
+ bridge_stp off
+ bridge_fd 0
+
+# cluster network
+auto eth1
+iface eth1 inet static
+ address 10.1.1.1
+ netmask 255.255.255.0
+
+# fast network
+auto eth2
+iface eth2 inet static
+ address 10.1.2.1
+ netmask 255.255.255.0
+
+# [...]
+
+
+Here we want to use the 10.1.2.1/24 network as migration network.
+For a single migration you can achieve this by using the 'migration_network'
+parameter:
+
+# qm migrate 106 tre --online --migration_network 10.1.2.1/24
+
+
+To set this up as the default network for all migrations cluster-wide, you
+can use the 'migration' property in '/etc/pve/datacenter.cfg':
+
+# [...]
+migration: secure,network=10.1.2.1/24
+
+
+Note that the migration type must always be set if the migration network is set.

ifdef::manvolnum[]
include::pve-copyright.adoc[]





Re: [pve-devel] [PATCH docs] add migration settings documentation

2016-11-29 Thread Alexandre DERUMIER
Hi,

+ 
>>+Here we want to use the 10.1.2.1/24 network as migration network. 
>>+migration: secure,network=10.1.2.1/24 

I think the network is:

10.1.2.0/24

?

 
- Mail original -
De: "Thomas Lamprecht" 
À: "pve-devel" 
Envoyé: Mardi 29 Novembre 2016 10:56:05
Objet: [pve-devel] [PATCH docs] add migration settings documentation
