Re: [pve-devel] [PATCH ha-manager 2/3] Status: improve feedback for 'ignored' state transitions

2016-11-29 Thread Dietmar Maurer
Why do you call that state "setting ignored"? I would prefer a simple "ignored".
Otherwise we have to document two different states with the same semantics.

> diff --git a/src/PVE/API2/HA/Status.pm b/src/PVE/API2/HA/Status.pm
> index dbf23d5..0e081bc 100644
> --- a/src/PVE/API2/HA/Status.pm
> +++ b/src/PVE/API2/HA/Status.pm
> @@ -162,10 +162,14 @@ __PACKAGE__->register_method ({
>   $state = 'starting';
>   } elsif ($req eq 'disabled') {
>   $state = 'disabled';
> + } elsif ($req eq 'ignored') {
> + $state = 'setting ignored';
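A minimal sketch of the suggested simplification (the same dispatch code as quoted above, just without the extra state name):

} elsif ($req eq 'ignored') {
    $state = 'ignored';
}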

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH ha-manager 1/3] add ignore state for resources

2016-11-29 Thread Dietmar Maurer
> In this state the resource will not get touched by us; all commands
> (like start/stop/migrate) go directly to the VM/CT itself and not
> through the HA stack.
> The resource will not get recovered if its node fails.
> 
> Achieve that by simply removing the respective service from the
> manager_status service status hash if it is in ignored state.

IMHO you do this in the wrong place. We must handle 'ignored' resources
every time we call $haenv->read_service_config(). 
You can find those locations easily with:

# grep -r read_service_config
...
src/PVE/HA/Tools.pm:my $conf = $haenv->read_service_config();
...

I just included one line to show you that this is relevant.

So maybe it is easier to simply return a list without ignored resources
in $haenv->read_service_config()?
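A minimal sketch of that variant (assuming the usual {ids} hash layout of the resources config; the helper name is illustrative, not the actual implementation):

sub read_service_config {
    my ($self) = @_;
    my $conf = $self->read_resources_config();   # hypothetical helper returning the raw config
    my $res = {};
    foreach my $sid (keys %{$conf->{ids}}) {
        my $scfg = $conf->{ids}->{$sid};
        # filter out resources the HA stack should not touch
        next if defined($scfg->{state}) && $scfg->{state} eq 'ignored';
        $res->{$sid} = $scfg;
    }
    return $res;
}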

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH ha-manager] Resource/API: abort early if resource in error state

2016-11-29 Thread Dietmar Maurer
comments inline:

> diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm
> index 621c9e6..f1fac54 100644
> --- a/src/PVE/API2/HA/Resources.pm
> +++ b/src/PVE/API2/HA/Resources.pm
> @@ -187,6 +187,14 @@ __PACKAGE__->register_method ({
>   if !$group_cfg->{ids}->{$group};
>   }
>  
> + my $service_status = PVE::HA::Config::get_service_status($sid);
> + if ($service_status->{state} eq 'error' &&
> + !(defined($param->{state}) && $param->{state} eq 'disabled')) {
> + # service in error state, must be disabled first before new state
> + # request can be executed
> + die "service '$sid' in error state, must be disabled and fixed 
> first\n";
> + }

IMHO it is perfectly valid to edit a resource while it is in error state, so 
I do not really think this is helpful.


>   PVE::HA::Config::lock_ha_domain(
>   sub {
>  
> @@ -288,6 +296,11 @@ __PACKAGE__->register_method ({
>  
>   PVE::HA::Config::service_is_ha_managed($sid);
>  
> + my $service_status = PVE::HA::Config::get_service_status($sid);
> + if ($service_status->{state} eq 'error') {
> + die "service '$sid' in error state, must be disabled and fixed 
> first\n";
> + }
> +
>   PVE::HA::Config::queue_crm_commands("migrate $sid $param->{node}");


Would it make sense to move that check into queue_crm_commands()?
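A sketch of how that check could look inside queue_crm_commands(), assuming the command string format 'migrate $sid $node' used above:

sub queue_crm_commands {
    my ($cmd) = @_;
    my (undef, $sid) = split(/\s+/, $cmd);   # command name, then the service id
    my $service_status = PVE::HA::Config::get_service_status($sid);
    die "service '$sid' in error state, must be disabled and fixed first\n"
        if defined($service_status->{state}) && $service_status->{state} eq 'error';
    # ... continue with the existing command queueing logic
}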

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH manager] pvereport: tell lsblk to use ascii

2016-11-29 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [RFC qemu-server] increase timeout from guest-fsfreeze-freeze

2016-11-29 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs] add migration settings documentation

2016-11-29 Thread Thomas Lamprecht

Hi,


On 11/29/2016 04:34 PM, Alexandre DERUMIER wrote:
> Hi,
>
> >> +Here we want to use the 10.1.2.1/24 network as migration network.
> >> +migration: secure,network=10.1.2.1/24
>
> I think the network is:
>
> 10.1.2.0/24
>
> ?

Both work:
10.1.2.1/24 == 10.1.2.0/24 == 10.1.2.128/24

The /24 tells us that the last 8 bits are irrelevant and masked away; at least
in this case, the ip-tools can handle it just fine :)
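For illustration, here is that masking done by hand in plain integer arithmetic (not how {pve} implements it):

# mask away the host bits of 10.1.2.1/24 -> network 10.1.2.0
my $addr = (10 << 24) | (1 << 16) | (2 << 8) | 1;
my $mask = (~0 << (32 - 24)) & 0xffffffff;
my $net  = $addr & $mask;
printf "%d.%d.%d.%d\n", ($net >> 24) & 255, ($net >> 16) & 255,
    ($net >> 8) & 255, $net & 255;    # prints 10.1.2.0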

  
- Original Message -

From: "Thomas Lamprecht"
To: "pve-devel"
Sent: Tuesday, November 29, 2016 10:56:05
Subject: [pve-devel] [PATCH docs] add migration settings documentation

[... full patch quote trimmed; the complete patch appears in the original posting below ...]

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs] add migration settings documentation

2016-11-29 Thread Alexandre DERUMIER
Hi,

>> +Here we want to use the 10.1.2.1/24 network as migration network.
>> +migration: secure,network=10.1.2.1/24

I think the network is:

10.1.2.0/24

?

 
- Original Message -
From: "Thomas Lamprecht"
To: "pve-devel"
Sent: Tuesday, November 29, 2016 10:56:05
Subject: [pve-devel] [PATCH docs] add migration settings documentation

[... full patch quote trimmed; the complete patch appears in the original posting below ...]

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

[pve-devel] [PATCH storage 3/3] add crucial smart attribute for wear leveling

2016-11-29 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index abd9ca9..48e6e0a 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -301,6 +301,7 @@ sub get_wear_leveling_info {
'samsung' => 177,
'intel' => 233,
'sandisk' => 233,
+   'crucial' => 202,
'default' => 233,
 };
 
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 1/3] catch '---' in threshold output of sandisk ssds

2016-11-29 Thread Dominik Csapak
SanDisk SSDs have a default threshold of '---' on nearly all fields,
which prevented our parsing.

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 938d6a4..abd9ca9 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -97,14 +97,15 @@ sub get_smart_data {
 # Data Units Written: 5,584,952 [2.85 TB]
 # Accumulated start-stop cycles:  34
 
-   if (defined($type) && $type eq 'ata' && $line =~ m/^([ \d]{2}\d)\s+(\S+)\s+(\S{6})\s+(\d+)\s+(\d+)\s+(\d+)\s+(\S+)\s+(.*)$/) {
+   if (defined($type) && $type eq 'ata' && $line =~ m/^([ \d]{2}\d)\s+(\S+)\s+(\S{6})\s+(\d+)\s+(\d+)\s+(\S+)\s+(\S+)\s+(.*)$/) {
my $entry = {};
$entry->{name} = $2 if defined $2;
$entry->{flags} = $3 if defined $3;
# the +0 makes a number out of the strings
$entry->{value} = $4+0 if defined $4;
$entry->{worst} = $5+0 if defined $5;
-   $entry->{threshold} = $6+0 if defined $6;
+   $entry->{threshold} = $6 if defined $6 && $6 eq '---';
+   $entry->{threshold} = $6+0 if defined $6 && !defined($entry->{threshold});
$entry->{fail} = $7 if defined $7;
$entry->{raw} = $8 if defined $8;
$entry->{id} = $1 if defined $1;
-- 
2.1.4
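For reference, this is the kind of line the relaxed pattern now matches; the sample attribute line below is made up, but mirrors smartctl's brief output format (the old (\d+) group for $6 would fail on the '---' threshold):

my $line = '  5 Reallocated_Sector_Ct   -O--CK   100   100   ---    -    0';
if ($line =~ m/^([ \d]{2}\d)\s+(\S+)\s+(\S{6})\s+(\d+)\s+(\d+)\s+(\S+)\s+(\S+)\s+(.*)$/) {
    print "id=$1 name=$2 threshold=$6\n";   # threshold is now the string '---'
}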


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 2/3] correct regression test for sandisk ssds

2016-11-29 Thread Dominik Csapak
In my initial patch series for the regression tests, I failed to notice
the missing attributes of the SanDisk SSDs (which had not been parsed).

Signed-off-by: Dominik Csapak 
---
 test/disk_tests/ssd_smart/disklist_expected.json  |   2 +-
 test/disk_tests/ssd_smart/sdd_smart_expected.json | 248 +-
 2 files changed, 245 insertions(+), 5 deletions(-)

diff --git a/test/disk_tests/ssd_smart/disklist_expected.json b/test/disk_tests/ssd_smart/disklist_expected.json
index f00717f..2eab675 100644
--- a/test/disk_tests/ssd_smart/disklist_expected.json
+++ b/test/disk_tests/ssd_smart/disklist_expected.json
@@ -50,7 +50,7 @@
"serial" : "",
"vendor" : "ATA",
"journals" : 0,
-   "wearout" : "N/A",
+   "wearout" : "100",
"health" : "PASSED",
"devpath" : "/dev/sdd",
"model" : "SanDisk SD8SB8U1T001122",
diff --git a/test/disk_tests/ssd_smart/sdd_smart_expected.json b/test/disk_tests/ssd_smart/sdd_smart_expected.json
index 99175d8..0818960 100644
--- a/test/disk_tests/ssd_smart/sdd_smart_expected.json
+++ b/test/disk_tests/ssd_smart/sdd_smart_expected.json
@@ -1,14 +1,254 @@
 {
 "attributes" : [
{
-   "id" : "232",
+   "id" : "  5",
+   "flags" : "-O--CK",
"fail" : "-",
+   "worst" : 100,
+   "raw" : "0",
+   "threshold" : "---",
+   "value" : 100,
+   "name" : "Reallocated_Sector_Ct"
+   },
+   {
+   "worst" : 100,
+   "fail" : "-",
+   "flags" : "-O--CK",
+   "id" : "  9",
+   "name" : "Power_On_Hours",
+   "threshold" : "---",
+   "value" : 100,
+   "raw" : "799"
+   },
+   {
+   "name" : "Power_Cycle_Count",
+   "raw" : "92",
+   "threshold" : "---",
+   "value" : 100,
+   "fail" : "-",
+   "worst" : 100,
+   "id" : " 12",
+   "flags" : "-O--CK"
+   },
+   {
+   "worst" : 100,
+   "fail" : "-",
+   "flags" : "-O--CK",
+   "id" : "165",
+   "name" : "Unknown_Attribute",
+   "threshold" : "---",
+   "value" : 100,
+   "raw" : "9699447"
+   },
+   {
+   "value" : 100,
+   "threshold" : "---",
+   "raw" : "1",
+   "name" : "Unknown_Attribute",
+   "flags" : "-O--CK",
+   "id" : "166",
+   "worst" : 100,
+   "fail" : "-"
+   },
+   {
+   "id" : "167",
+   "flags" : "-O--CK",
+   "fail" : "-",
+   "worst" : 100,
+   "raw" : "46",
+   "value" : 100,
+   "threshold" : "---",
+   "name" : "Unknown_Attribute"
+   },
+   {
+   "name" : "Unknown_Attribute",
+   "raw" : "5",
+   "value" : 100,
+   "threshold" : "---",
+   "fail" : "-",
+   "worst" : 100,
+   "id" : "168",
+   "flags" : "-O--CK"
+   },
+   {
+   "flags" : "-O--CK",
+   "id" : "169",
+   "worst" : 100,
+   "fail" : "-",
+   "value" : 100,
+   "threshold" : "---",
+   "raw" : "1079",
+   "name" : "Unknown_Attribute"
+   },
+   {
+   "raw" : "0",
+   "threshold" : "---",
+   "value" : 100,
+   "name" : "Unknown_Attribute",
+   "id" : "170",
+   "flags" : "-O--CK",
+   "fail" : "-",
+   "worst" : 100
+   },
+   {
+   "fail" : "-",
+   "worst" : 100,
+   "id" : "171",
+   "flags" : "-O--CK",
+   "name" : "Unknown_Attribute",
+   "raw" : "0",
+   "value" : 100,
+   "threshold" : "---"
+   },
+   {
+   "name" : "Unknown_Attribute",
+   "raw" : "0",
+   "value" : 100,
+   "threshold" : "---",
+   "fail" : "-",
+   "worst" : 100,
+   "id" : "172",
+   "flags" : "-O--CK"
+   },
+   {
+   "name" : "Unknown_Attribute",
+   "threshold" : "---",
+   "value" : 100,
+   "raw" : "1",
+   "worst" : 100,
+   "fail" : "-",
+   "flags" : "-O--CK",
+   "id" : "173"
+   },
+   {
+   "name" : "Unknown_Attribute",
+   "value" : 100,
+   "threshold" : "---",
+   "raw" : "22",
+   "worst" : 100,
+   "fail" : "-",
+   "flags" : "-O--CK",
+   "id" : "174"
+   },
+   {
+   "worst" : 100,
+   "fail" : "-",
+   "flags" : "-O--CK",
+   "id" : "184",
+   "name" : "End-to-End_Error",
+   "value" : 100,
+   "threshold" : "---",
+   "raw" : "0"
+   },
+   {
+   "name" : "Reported_Uncorrect",
+   "value" : 100,
+   "threshold" : "---",
+   "raw" : "0",
+   "worst

[pve-devel] [PATCH manager 2/5] add ceph flags api call

2016-11-29 Thread Dominik Csapak
here we can set/unset a single ceph flag, like noout

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph.pm | 63 
 1 file changed, 63 insertions(+)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 3dd1439..eaca08d 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -535,6 +535,7 @@ __PACKAGE__->register_method ({
{ name => 'config' },
{ name => 'log' },
{ name => 'disks' },
+   { name => 'flags' },
];
 
return $result;
@@ -1312,6 +1313,68 @@ __PACKAGE__->register_method ({
 }});
 
 __PACKAGE__->register_method ({
+name => 'flags',
+path => 'flags',
+method => 'PUT',
+description => "Set/unset a ceph flag",
+proxyto => 'node',
+protected => 1,
+permissions => {
+   check => ['perm', '/', [ 'Sys.Modify' ]],
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   flag => {
+   description => 'The ceph flag to set/unset',
+   type => 'string',
+   enum => [ 'full', 'pause', 'noup', 'nodown', 'noout', 'noin', 'nobackfill', 'norebalance', 'norecover', 'noscrub', 'nodeep-scrub', 'notieragent'],
+   },
+   set => {
+   description => 'true if you want to set the flag, false to unset it',
+   type => 'boolean',
+   optional => 1,
+   default => 1,
+   },
+   unset => {
+   description => 'true if you want to unset the flag, false to set it',
+   type => 'boolean',
+   optional => 1,
+   default => 0,
+   }
+   },
+},
+returns => { type => 'null' },
+code => sub {
+   my ($param) = @_;
+
+   PVE::CephTools::check_ceph_inited();
+
+   my $pve_ckeyring_path = PVE::CephTools::get_config('pve_ckeyring_path');
+
+   die "not fully configured - missing '$pve_ckeyring_path'\n"
+   if ! -f $pve_ckeyring_path;
+
+   my $set = $param->{set} // !$param->{unset};
+   my $rados = PVE::RADOS->new();
+
+   if ($set) {
+   $rados->mon_command({
+   prefix => "osd set",
+   key => $param->{flag},
+   });
+   } else {
+   $rados->mon_command({
+   prefix => "osd unset",
+   key => $param->{flag},
+   });
+   }
+
+   return undef;
+}});
+
+__PACKAGE__->register_method ({
 name => 'destroypool',
 path => 'pools/{name}',
 method => 'DELETE',
-- 
2.1.4
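Assuming the standard pvesh mapping of HTTP PUT to 'set' (node name and option syntax are examples, and may differ by pvesh version), the new call could be exercised like this:

# pvesh set /nodes/mynode/ceph/flags -flag noout -set 1
# pvesh set /nodes/mynode/ceph/flags -flag noout -unset 1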


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 3/5] add noout button and reorder actions

2016-11-29 Thread Dominik Csapak
this patch adds a set/unset noout button (for easy maintenance of your
ceph cluster) and reorders the buttons so that global actions (reload,
add osd, set noout) are on the left and osd-specific actions are on the right.

to reduce confusion, there is now a label left of the osd actions which
displays the selected osd

Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/OSD.js | 47 ++-
 1 file changed, 46 insertions(+), 1 deletion(-)

diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 588e3d3..25c4f12 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -280,6 +280,9 @@ Ext.define('PVE.node.CephOsdTree', {
 /*jslint confusion: true */
 var me = this;
 
+   // we expect noout to be not set by default
+   var noout = false;
+
var nodename = me.pveSelNode.data.node;
if (!nodename) {
throw "no node name specified";
@@ -301,6 +304,13 @@ Ext.define('PVE.node.CephOsdTree', {
sm.deselectAll();
me.setRootNode(response.result.data.root);
me.expandAll();
+   // extract noout flag
+   if (response.result.data.flags &&
+   response.result.data.flags.search(/noout/) !== -1) {
+   noout = true;
+   } else {
+   noout = false;
+   }
set_button_status();
}
});
@@ -399,8 +409,41 @@ Ext.define('PVE.node.CephOsdTree', {
}
});
 
+   var noout_btn = new Ext.Button({
+   text: gettext('Set noout'),
+   handler: function() {
+   PVE.Utils.API2Request({
+   url: "/nodes/" + nodename + "/ceph/flags",
+   params: {
+   flag: 'noout',
+   set: noout ? 0 : 1
+   },
+   waitMsgTarget: me,
+   method: 'PUT',
+   failure: function(response, opts) {
+   Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+   },
+   success: reload
+   });
+   }
+   });
+
+   var osd_label = new Ext.toolbar.TextItem({
+   data: {
+   osd: undefined
+   },
+   tpl: [
+   '<tpl if="osd">',
+   '{osd}:',
+   '<tpl else>',
+   gettext('No OSD selected'),
+   '</tpl>'
+   ]
+   });
+
set_button_status = function() {
var rec = sm.getSelection()[0];
+   noout_btn.setText(noout ? gettext('Unset noout') : gettext('Set noout'));
 
if (!rec) {
start_btn.setDisabled(true);
@@ -419,6 +462,8 @@ Ext.define('PVE.node.CephOsdTree', {
 
osd_out_btn.setDisabled(!(isOsd && rec.data['in']));
osd_in_btn.setDisabled(!(isOsd && !rec.data['in']));
+
+   osd_label.update(isOsd?{osd:rec.data.name}:undefined);
};
 
sm.on('selectionchange', set_button_status);
@@ -429,7 +474,7 @@ Ext.define('PVE.node.CephOsdTree', {
});
 
Ext.apply(me, {
-   tbar: [ create_btn, reload_btn, start_btn, stop_btn, osd_out_btn, osd_in_btn, remove_btn ],
+   tbar: [ create_btn, reload_btn, noout_btn, '->', osd_label, start_btn, stop_btn, osd_out_btn, osd_in_btn, remove_btn ],
rootVisible: false,
fields: ['name', 'type', 'status', 'host', 'in', 'id' ,
 { type: 'number', name: 'reweight' },
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 4/5] make ceph status in cluster dashboard clickable

2016-11-29 Thread Dominik Csapak
to get from the datacenter dashboard to the ceph dashboard faster

also refactor the cursor style in the css

Signed-off-by: Dominik Csapak 
---
 www/css/ext6-pve.css  |  3 +++
 www/manager6/Workspace.js |  4 +---
 www/manager6/dc/Health.js | 26 +-
 3 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/www/css/ext6-pve.css b/www/css/ext6-pve.css
index 5cd09ca..1efd93a 100644
--- a/www/css/ext6-pve.css
+++ b/www/css/ext6-pve.css
@@ -500,3 +500,6 @@ table.osds td:first-of-type {
 text-align: left;
 }
 
+.pointer {
+cursor: pointer;
+}
diff --git a/www/manager6/Workspace.js b/www/manager6/Workspace.js
index dad11ea..48eb05d 100644
--- a/www/manager6/Workspace.js
+++ b/www/manager6/Workspace.js
@@ -403,9 +403,7 @@ Ext.define('PVE.StdWorkspace', {
xtype: 'button',
margin: '0 10 0 3',
iconCls: 'fa black fa-gear',
-   style: {
-   cursor: 'pointer'
-   },
+   userCls: 'pointer',
handler: function() {
var win = Ext.create('PVE.window.Settings');
win.show();
diff --git a/www/manager6/dc/Health.js b/www/manager6/dc/Health.js
index 428f95c..fbb74a6 100644
--- a/www/manager6/dc/Health.js
+++ b/www/manager6/dc/Health.js
@@ -126,9 +126,33 @@ Ext.define('PVE.dc.Health', {
itemId: 'ceph',
width: 250,
columnWidth: undefined,
+   userCls: 'pointer',
title: gettext('Ceph'),
xtype: 'pveHealthWidget',
-   hidden: true
+   hidden: true,
+   listeners: {
+   element: 'el',
+   click: function() {
+   var me = this;
+   var sp = Ext.state.Manager.getProvider();
+
+   // preselect the ceph tab
+   sp.set('nodetab', {value:'ceph'});
+
+   // select the first node which is online
+   var nodeid = '';
+   var nodes = PVE.data.ResourceStore.getNodes();
+   Ext.Array.some(nodes, function(node) {
+   if (node.running) {
+   nodeid = node.id;
+   return true;
+   }
+
+   return false;
+   });
+   Ext.ComponentQuery.query('pveResourceTree')[0].selectById(nodeid);
+   }
+   }
}
 ],
 
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 1/5] also return the ceph flags in osd api call

2016-11-29 Thread Dominik Csapak
we want to set/get the flags in the ceph/osd tab, so we have to
return them there.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph.pm | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 7b1bbd0..3dd1439 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -30,6 +30,8 @@ my $get_osd_status = sub {
 
 my $osdlist = $stat->{osds} || [];
 
+my $flags = $stat->{flags} || undef;
+
 my $osdstat;
 foreach my $d (@$osdlist) {
$osdstat->{$d->{osd}} = $d if defined($d->{osd});
@@ -39,7 +41,7 @@ my $get_osd_status = sub {
return $osdstat->{$osdid};
 }
 
-return $osdstat;
+return wantarray ? ($osdstat, $flags) : $osdstat;
 };
 
 my $get_osd_usage = sub {
@@ -86,7 +88,7 @@ __PACKAGE__->register_method ({
 
 die "no tree nodes found\n" if !($res && $res->{nodes});
 
-   my $osdhash = &$get_osd_status($rados);
+   my ($osdhash, $flags) = &$get_osd_status($rados);
 
my $usagehash = &$get_osd_usage($rados);
 
@@ -151,6 +153,9 @@ __PACKAGE__->register_method ({
 
my $data = { root => { leaf =>  0, children => $roots } };
 
+   # we want this for the noout flag
+   $data->{flags} = $flags if $flags;
+
return $data;
 }});
 
-- 
2.1.4
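For readers unfamiliar with the idiom: wantarray makes the return shape depend on the caller's context, so existing scalar-context callers keep working unchanged (illustration only, mirroring the call style in the diff):

my ($osdhash, $flags) = &$get_osd_status($rados);   # list context: both values
my $osdhash_only = &$get_osd_status($rados);        # scalar context: status hash only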


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 5/5] show in/out/up/down icons in osd overview

2016-11-29 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/OSD.js | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 25c4f12..d53d46d 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -188,8 +188,17 @@ Ext.define('PVE.node.CephOsdTree', {
if (!value) {
return value;
}
-   var data = rec.data;
-   return value + '/' + (data['in'] ? 'in' : 'out');
+   var inout = rec.data['in'] ? 'in' : 'out';
+   var updownicon = value === 'up' ? 'good fa-arrow-circle-up' : 'critical fa-arrow-circle-down';
+
+   var inouticon = rec.data['in'] ? 'good fa-circle' : 'warning fa-circle-o';
+
+   var text = value + ' <i class="fa ' + updownicon + '"></i> / ' + inout + ' <i class="fa ' + inouticon + '"></i>';
+
+   return text;
},
width: 80
},
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 0/5] improve ceph gui

2016-11-29 Thread Dominik Csapak
this patch series improves the ceph gui in several ways:

 * makes the ceph status in the cluster status clickable
   (it leads you to the ceph dashboard)
 * adds a noout button
 * groups the osd action buttons logically
 * shows icons for the status in the osd view


Dominik Csapak (5):
  also return the ceph flags in osd api call
  add ceph flags api call
  add noout button and reorder actions
  make ceph status in cluster dashboard clickable
  show in/out/up/down icons in osd overview

 PVE/API2/Ceph.pm  | 72 +--
 www/css/ext6-pve.css  |  3 ++
 www/manager6/Workspace.js |  4 +--
 www/manager6/ceph/OSD.js  | 60 +--
 www/manager6/dc/Health.js | 26 -
 5 files changed, 156 insertions(+), 9 deletions(-)

-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager] check if dir /var/log/pveproxy exists.

2016-11-29 Thread Wolfgang Bumiller
On Tue, Nov 29, 2016 at 11:46:26AM +0100, Fabian Grünbichler wrote:
> On Thu, Sep 15, 2016 at 01:09:38PM +0200, Dietmar Maurer wrote:
> > 
> > 
> > > On September 15, 2016 at 12:25 PM Wolfgang Link wrote:
> > > 
> > > 
> > > We will check on every start of pveproxy if the logdir is available.
> > > If not, we make a new one and give www-data permission to this dir.
> > 
> > And you want to do that with every directory?
> 
> this came up in a bug report with /var/log on tmpfs:
> https://bugzilla.proxmox.com/show_bug.cgi?id=1216
> 
> are there still objections to adding this? I think recreating a needed
> log directory is good behaviour for a service daemon anyway..

Ack.

(But can we just do `if (mkdir(...))` instead of checking with -d?
Checking dirs before creating them is generally bad style, since mkdir
can fail with EEXIST anyway.)
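A minimal sketch of that pattern (error handling kept simple; path and ownership as discussed above):

use Errno qw(EEXIST);

my $dir = '/var/log/pveproxy';
if (mkdir($dir, 0755)) {
    # freshly created - hand it over to the web user
    my ($uid, $gid) = (getpwnam('www-data'))[2, 3];
    chown($uid, $gid, $dir) or die "chown $dir failed: $!\n";
} elsif ($! != EEXIST) {
    die "unable to create $dir: $!\n";
}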

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH firewall 1/2] ipsets: catch zero-prefix entries

2016-11-29 Thread Wolfgang Bumiller
This way the error is visible with 'pve-firewall compile',
without breaking the rest.
---
 src/PVE/Firewall.pm | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index ef74ca2..c7d90f8 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -2773,6 +2773,10 @@ sub generic_fw_config_parser {
$errors->{cidr} = $err;
}
 
+   if ($cidr =~ m!/0+$!) {
+   $errors->{cidr} = "a zero prefix is not allowed in ipset entries\n";
+   }
+
my $entry = { cidr => $cidr };
$entry->{nomatch} = 1 if $nomatch;
$entry->{comment} = $comment if $comment;
-- 
2.1.4
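A quick standalone illustration of what the pattern accepts and rejects (a test snippet, not part of the patch):

for my $cidr (qw(0.0.0.0/0 ::/0 10.0.0.0/8 192.168.0.0/16)) {
    printf "%-16s => %s\n", $cidr,
        ($cidr =~ m!/0+$!) ? 'rejected (zero prefix)' : 'ok';
}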


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH firewall 2/2] ipset: don't allow the creation of zero-prefix entries

2016-11-29 Thread Wolfgang Bumiller
---
 src/PVE/API2/Firewall/IPSet.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/PVE/API2/Firewall/IPSet.pm b/src/PVE/API2/Firewall/IPSet.pm
index 6129c9d..ea6d1a2 100644
--- a/src/PVE/API2/Firewall/IPSet.pm
+++ b/src/PVE/API2/Firewall/IPSet.pm
@@ -187,6 +187,9 @@ sub register_create_ip {
if $entry->{cidr} eq $cidr;
}
 
+   raise_param_exc({ cidr => "a zero prefix is not allowed in ipset entries" })
+   if $cidr =~ m!/0+$!;
+
# make sure alias exists (if $cidr is an alias)
PVE::Firewall::resolve_alias($cluster_conf, $fw_conf, $cidr)
if $cidr =~ m/^${PVE::Firewall::ip_alias_pattern}$/;
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager] check if dir /var/log/pveproxy exists.

2016-11-29 Thread Fabian Grünbichler
On Thu, Sep 15, 2016 at 01:09:38PM +0200, Dietmar Maurer wrote:
> 
> 
> > On September 15, 2016 at 12:25 PM Wolfgang Link  wrote:
> > 
> > 
> > We will check on every start of pveproxy if the logdir is available.
> > If not, we make a new one and give www-data permission to this dir.
> 
> And you want to do that with every directory?

this came up in a bug report with /var/log on tmpfs:
https://bugzilla.proxmox.com/show_bug.cgi?id=1216

are there still objections to adding this? I think recreating a needed
log directory is good behaviour for a service daemon anyway..

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] add migration settings documentation

2016-11-29 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht 
---

pvecm seemed like a reasonable place for this. Migration only makes sense in
clustered setups, and the settings live in the pve-cluster package
(datacenter.cfg), but I'm naturally open to suggestions about better places.

 pvecm.adoc | 98 ++
 1 file changed, 98 insertions(+)

diff --git a/pvecm.adoc b/pvecm.adoc
index 8db8e47..c3acc84 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -880,6 +880,104 @@ When you turn on nodes, or when power comes back after 
power failure,
 it is likely that some nodes boots faster than others. Please keep in
 mind that guest startup is delayed until you reach quorum.
 
+Guest Migration
+---
+
+Migrating virtual guests (live) to other nodes is a useful feature in a
+cluster. There are settings to control the behavior of such migrations,
+either cluster wide via the 'datacenter.cfg' configuration file, or for a
+single migration through API or command line tool parameters.
+
+Migration Type
+~~~~~~~~~~~~~~
+
+The migration type defines if the migration data should be sent over an
+encrypted ('secure') channel or an unencrypted ('insecure') one.
+Setting the migration type to insecure means that the RAM content of a
+virtual guest is also transferred unencrypted, which can lead to
+information disclosure of critical data from inside the guest, for
+example passwords or encryption keys.
+Thus we strongly recommend using the secure channel if you do not have
+full control over the network and cannot guarantee that no one is
+eavesdropping on it.
+
+Note that storage migrations do not obey this setting; currently they
+always send the content over a secure channel.
+
+While this setting is often changed to 'insecure' in order to gain better
+migration performance, it may actually have only a small impact on systems
+with AES encryption hardware support in the CPU. The impact can become
+larger if the network link can transmit 10 Gbps or more.
+
+Migration Network
+~~~~~~~~~~~~~~~~~
+
+By default, {pve} uses the network where the cluster communication happens
+for sending the migration traffic. This may be suboptimal: for one thing,
+the sensitive cluster traffic can be disturbed, and for another, the
+network may not have the best bandwidth of all the network interfaces
+available on the node.
+Setting the migration network parameter allows using a dedicated network
+for sending all the migration traffic when migrating a guest system. This
+includes the traffic for offline storage migrations.
+
+The migration network is given as a network in 'CIDR' notation. This has
+the advantage that you do not need to set an IP for each node; {pve} is
+able to figure out the real address from the given CIDR-denoted network
+and the networks configured on the target node.
+For this to work, the network must be specific enough, i.e. each node must
+have one and only one IP configured in the given network.
+
+Example
+^^^
+
+Let's assume that we have a three node setup with three networks: one for
+public communication with the Internet, one for the cluster communication,
+and a very fast one, which we want to use as a dedicated migration
+network. A network configuration for such a setup could look like:
+
+
+iface eth0 inet manual
+
+# public network
+auto vmbr0
+iface vmbr0 inet static
+address 192.X.Y.57
+netmask 255.255.255.0
+gateway 192.X.Y.1
+bridge_ports eth0
+bridge_stp off
+bridge_fd 0
+
+# cluster network
+auto eth1
+iface eth1 inet static
+address  10.1.1.1
+netmask  255.255.255.0
+
+# fast network
+auto eth2
+iface eth2 inet static
+address  10.1.2.1
+netmask  255.255.255.0
+
+# [...]
+
+
+Here we want to use the 10.1.2.1/24 network as migration network.
+For a single migration you can achieve this by using the 'migration_network'
+parameter:
+
+# qm migrate 106 tre --online --migration_network 10.1.2.1/24
+
+
+To set this up as default network for all migrations cluster wide you can use
+the migration property in '/etc/pve/datacenter.cfg':
+
+# [...]
+migration: secure,network=10.1.2.1/24
+
+
+Note that the migration type must always be set if the network is set.
 
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG

On 29.11.2016 at 10:29, Dietmar Maurer wrote:
>> So it seems that the whole firewall breaks if something is wrong
>> somewhere.
>>
>> I think especially for the firewall it's important to just skip that
>> line but process all other values.
> 
> That is how it should work. If there is a bug, we need to fix it. So
> the first question is how to trigger that bug?

# cat 120.fw
[OPTIONS]

policy_in: DROP
log_level_in: nolog
enable: 1

[IPSET letsencrypt]

0.0.0.0/0 # All IP
all_ips

[RULES]

|IN ACCEPT -i net1 -source 0.0.0.0/0 -p tcp -dport  # netcat test
IN ACCEPT -i net1 -source 0.0.0.0/0 -p tcp -dport 80,443 # From all IPs to ports 80 and 443
GROUP ph_default_group -i net1

Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG
On 29.11.2016 at 10:24, Fabian Grünbichler wrote:
> On Tue, Nov 29, 2016 at 10:10:53AM +0100, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> today i've noticed that the firewall is nearly inactive on a node.
>>
>> systemctl status says:
>> Nov 29 10:07:05 node2 pve-firewall[2534]: status update error:
>> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
>> CIDR parameter of the IP address is invalid
>> Nov 29 10:07:14 node2 pve-firewall[2534]: status update error:
>> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
>> CIDR parameter of the IP address is invalid
>> Nov 29 10:07:24 node2 pve-firewall[2534]: status update error:
>> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
>> CIDR parameter of the IP address is invalid
>>
>> So it seems that the whole firewall breaks if something is wrong
>> somewhere.
>>
>> I think especially for the firewall it's important to just skip that
>> line but process all other values.
>>
>> What is your opinion? Any idea how to "fix" that?
> 
> that bug should already be fixed in git AFAIK.

Which one? I cannot find the commit. I'm running pve-firewall 2.0-31.

> there are two problems with partially applying firewall rules:
> - we don't know which rules are invalid (because of course we try to
>   generate valid rules, errors like the above are clearly bugs ;)) - we
>   could guess based on some error message by the underlying tools, but
>   that is error prone
> - applying some rules but not all can have as catastrophic consequences
>   as not applying any (e.g., if you miss a single ACCEPT rule because of
>   a bug, you might not be able to access your cluster at all!)

OK, sure. But then maybe we should send an email to root in case of a
failure? Currently nobody knows that such a failure happened. Also, the
pve-firewall daemon does not fail itself, so systemd still says
pve-firewall is up and running.

Greets,
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Michael Rasmussen
On Tue, 29 Nov 2016 10:29:34 +0100 (CET)
Dietmar Maurer  wrote:

> 
> That is how it should work. If there is a bug, we need to fix it. So
> the first question is how to trigger that bug?
> 
iptables does not like a catch-all like 0.0.0.0/0. It has to be 0/0.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Work Hard.
Rock Hard.
Eat Hard.
Sleep Hard.
Grow Big.
Wear Glasses If You Need 'Em.
-- The Webb Wilder Credo


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Dietmar Maurer
> So it seems that the whole firewall breaks if something is wrong
> somewhere.
> 
> I think especially for the firewall it's important to just skip that
> line but process all other values.

That is how it should work. If there is a bug, we need to fix it. So
the first question is how to trigger that bug?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Fabian Grünbichler
On Tue, Nov 29, 2016 at 10:10:53AM +0100, Stefan Priebe - Profihost AG wrote:
> Hello,
> 
> Today I've noticed that the firewall is nearly inactive on a node.
> 
> systemctl status says:
> Nov 29 10:07:05 node2 pve-firewall[2534]: status update error:
> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
> CIDR parameter of the IP address is invalid
> Nov 29 10:07:14 node2 pve-firewall[2534]: status update error:
> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
> CIDR parameter of the IP address is invalid
> Nov 29 10:07:24 node2 pve-firewall[2534]: status update error:
> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
> CIDR parameter of the IP address is invalid
> 
> So it seems that the whole firewall breaks if something is wrong
> somewhere.
> 
> I think especially for the firewall it's important to just skip that
> line but process all other values.
> 
> What is your opinion? Any idea how to "fix" that?

that bug should already be fixed in git AFAIK.

there are two problems with partially applying firewall rules:
- we don't know which rules are invalid (because of course we try to
  generate valid rules, errors like the above are clearly bugs ;)) - we
  could guess based on some error message by the underlying tools, but
  that is error prone
- applying some rules but not all can have as catastrophic consequences
  as not applying any (e.g., if you miss a single ACCEPT rule because of
  a bug, you might not be able to access your cluster at all!)

bugs such as the above do not occur very often (a quick scan of the log
shows the last bug fixes before the current one were in June) and the
firewall is in general a very stable package with a conservative update
policy.

we could of course implement some kind of error detection and skipping
with an opt-in configuration option - but I am not sure whether this
will not make things more confusing and complicated?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG
In this case an employee managed to create the following ipset:

# cat /var/lib/pve-firewall/ipsetcmdlist1
destroy PVEFW-120-letsencrypt-v4_swap
create PVEFW-120-letsencrypt-v4_swap hash:net family inet hashsize 64
maxelem 64
add PVEFW-120-letsencrypt-v4_swap 0.0.0.0/0
swap PVEFW-120-letsencrypt-v4_swap PVEFW-120-letsencrypt-v4
flush PVEFW-120-letsencrypt-v4_swap
destroy PVEFW-120-letsencrypt-v4_swap

which fails:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid
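For reference, ipset's hash:net type cannot store a zero-prefix entry at all. If the intent really is to match everything, one common workaround (an ipset technique, not a PVE feature) is to split the range into two /1 networks:

create PVEFW-120-letsencrypt-v4_swap hash:net family inet hashsize 64 maxelem 64
add PVEFW-120-letsencrypt-v4_swap 0.0.0.0/1
add PVEFW-120-letsencrypt-v4_swap 128.0.0.0/1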

Stefan

On 29.11.2016 at 10:10, Stefan Priebe - Profihost AG wrote:
> Hello,
> 
> Today I've noticed that the firewall is nearly inactive on a node.
> 
> systemctl status says:
> Nov 29 10:07:05 node2 pve-firewall[2534]: status update error:
> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
> CIDR parameter of the IP address is invalid
> Nov 29 10:07:14 node2 pve-firewall[2534]: status update error:
> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
> CIDR parameter of the IP address is invalid
> Nov 29 10:07:24 node2 pve-firewall[2534]: status update error:
> ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
> CIDR parameter of the IP address is invalid
> 
> So it seems that the whole firewall breaks if something is wrong
> somewhere.
> 
> I think especially for the firewall it's important to just skip that
> line but process all other values.
> 
> What is your opinion? Any idea how to "fix" that?
> 
> Greets,
> Stefan
> 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH storage] use qemu gluster blockdriver for linked clone creation

2016-11-29 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] making the firewall more robust?

2016-11-29 Thread Stefan Priebe - Profihost AG
Hello,

Today I've noticed that the firewall is nearly inactive on a node.

systemctl status says:
Nov 29 10:07:05 node2 pve-firewall[2534]: status update error:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid
Nov 29 10:07:14 node2 pve-firewall[2534]: status update error:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid
Nov 29 10:07:24 node2 pve-firewall[2534]: status update error:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid

So it seems that the whole firewall breaks if something is wrong
somewhere.

I think especially for the firewall it's important to just skip that
line but process all other values.

What is your opinion? Any idea how to "fix" that?

Greets,
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH storage 1/2] improve zpool activate_storage

2016-11-29 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH storage 2/2] increase default timeout for zpool import

2016-11-29 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add new storagecopy feature && add rbdplugin

2016-11-29 Thread Fabian Grünbichler
On Tue, Nov 29, 2016 at 09:26:50AM +0100, Dietmar Maurer wrote:
> > I think this would also be the first change to the Storage plugin API
> > that would warrant bumping its version (in PVE/Storage.pm:37)..
> 
> Why exactly? I think this change is fully backward compatible?

my initial reaction was probably overboard here - as long as Plugin.pm
does not implement it, external plugins can just fall back to the default,
and it should be caught by has_feature checks anyway.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add new storagecopy feature && add rbdplugin

2016-11-29 Thread Dietmar Maurer
> I think this would also be the first change to the Storage plugin API
> that would warrant bumping its version (in PVE/Storage.pm:37)..

Why exactly? I think this change is fully backward compatible?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH manager] Add Windows 2016 as available ostype to select

2016-11-29 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH qemu-server] Add entry for windows 10 and 2016 support

2016-11-29 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel