Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status

2019-06-25 Thread Dietmar Maurer
I am not sure if JSON is a good idea here. We use colon-separated lists for
everything else, so I would prefer that. It is easier to parse inside C, which
is important when you want to generate RRD databases from inside pmxcfs ...

Also, consider that it is quite hard to change that format later, because all
cluster nodes read/write that data.

So do we want to generate some RRD databases with that data?
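
Just to illustrate the idea, a minimal sketch of a colon-separated encoding in
Perl; the per-zone 'status' field and the "name=status" entry layout are
assumptions for the example, not a proposed format:

    # encode: { zone1 => { status => 'ok' }, zone2 => { status => 'error' } }
    # becomes "zone1=ok:zone2=error" (assumes names contain no ':' or '=')
    my $encode_status = sub {
        my ($transport_status) = @_;
        return join(':',
            map { "$_=" . $transport_status->{$_}->{status} }
            sort keys %$transport_status);
    };

    # decode side, e.g. in an API endpoint or other consumer of the kv store
    my $decode_status = sub {
        my ($value) = @_;
        my $res = {};
        for my $entry (split(/:/, $value // '')) {
            my ($zone, $status) = split(/=/, $entry, 2);
            $res->{$zone} = $status;
        }
        return $res;
    };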

> On 25 June 2019 00:04 Alexandre Derumier  wrote:
> 
>  
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/Service/pvestatd.pm | 22 ++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
> index e138b2e8..bad1b73d 100755
> --- a/PVE/Service/pvestatd.pm
> +++ b/PVE/Service/pvestatd.pm
> @@ -37,6 +37,12 @@ PVE::Status::Plugin->init();
>  
>  use base qw(PVE::Daemon);
>  
> +my $have_sdn;
> +eval {
> +require PVE::API2::Network::SDN;
> +$have_sdn = 1;
> +};
> +
>  my $opt_debug;
>  my $restart_request;
>  
> @@ -457,6 +463,16 @@ sub update_ceph_version {
>  }
>  }
>  
> +sub update_sdn_status {
> +
> +if($have_sdn) {
> + my ($transport_status, $vnet_status) = PVE::Network::SDN::status();
> +
> + my $status = $transport_status ? encode_json($transport_status) : undef;
> + PVE::Cluster::broadcast_node_kv("sdn", $status);
> +}
> +}
> +
>  sub update_status {
>  
>  # update worker list. This is not really required and
> @@ -524,6 +540,12 @@ sub update_status {
>  $err = $@;
>  syslog('err', "error getting ceph services: $err") if $err;
>  
> +eval {
> + update_sdn_status();
> +};
> +$err = $@;
> +syslog('err', "sdn status update error: $err") if $err;
> +
>  }
>  
>  my $next_update = 0;
> -- 
> 2.20.1
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC manager 2/2] upgrade checklist

2019-06-25 Thread Thomas Lamprecht
applied, with follow ups..

On 6/24/19 1:56 PM, Fabian Grünbichler wrote:
> diff --git a/bin/Makefile b/bin/Makefile
> index 52044ca9..31229477 100644
> --- a/bin/Makefile
> +++ b/bin/Makefile
> @@ -7,7 +7,7 @@ PERL_DOC_INC_DIRS=..
>  include /usr/share/pve-doc-generator/pve-doc-generator.mk
>  
>  SERVICES = pvestatd pveproxy pvedaemon spiceproxy
> -CLITOOLS = vzdump pvesubscription pveceph pveam pvesr pvenode pvesh
> +CLITOOLS = vzdump pvesubscription pveceph pveam pvesr pvenode pvesh pve5to6
>  
>  SCRIPTS =\
>   ${SERVICES} \
> @@ -48,6 +48,7 @@ all: ${SERVICE_MANS} ${CLI_MANS} pvemailforward
>   podselect $* > $@.tmp
>   mv $@.tmp $@
>  
> +pve5to6.1.pod: pve5to6

as it's not POD and we do not immediately care about a man page here, I just
create a dummy one for now..

>  pveversion.1.pod: pveversion
>  pveupgrade.1.pod: pveupgrade
>  pvereport.1.pod: pvereport


> diff --git a/bin/pve5to6 b/bin/pve5to6
> new file mode 100755
> index ..4802e185
> --- /dev/null
> +++ b/bin/pve5to6
> @@ -0,0 +1,10 @@
> +#!/usr/bin/perl -T

removed taint switch.

> +
> +use strict;
> +use warnings;
> +
> +use lib qw(.);

removed the above 'use lib'

> +
> +use PVE::CLI::pve5to6;
> +
> +PVE::CLI::pve5to6->run_cli_handler();
> 



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC manager 2/2] upgrade checklist

2019-06-25 Thread Thomas Lamprecht
On 6/24/19 1:56 PM, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler 
> ---
> Notes:
> left out on purpose:
> - checking of sources.list (no parser, lots of false negatives, needs to 
> happen after upgrade to corosync 3)
> 
> still missing for PVE 6.x / post-upgrade version:
> - modification of checked versions
> - ceph-volume scan on managed nodes with OSDs
> 
> still missing for PVE 6.x / post-reboot version:
> - check for running kernel
> 
> suggestions for additional checks/adaptations very welcome!
> 
> to actually install and test, the usual build cycle with pve-docs needs to
> be manually broken. alternatively, manual copying/execution works fine as
> well ;)
> 
>  PVE/CLI/Makefile   |   2 +-
>  bin/Makefile   |   3 +-
>  PVE/CLI/pve5to6.pm | 370 +
>  bin/pve5to6|  10 ++
>  4 files changed, 383 insertions(+), 2 deletions(-)
>  create mode 100644 PVE/CLI/pve5to6.pm
>  create mode 100755 bin/pve5to6
> 
> diff --git a/PVE/CLI/Makefile b/PVE/CLI/Makefile
> index 93b3f3c6..7e9ae0df 100644
> --- a/PVE/CLI/Makefile
> +++ b/PVE/CLI/Makefile
> @@ -1,6 +1,6 @@
>  include ../../defines.mk
>  
> -SOURCES=vzdump.pm pvesubscription.pm pveceph.pm pveam.pm pvesr.pm pvenode.pm 
> pvesh.pm
> +SOURCES=vzdump.pm pvesubscription.pm pveceph.pm pveam.pm pvesr.pm pvenode.pm 
> pvesh.pm pve5to6.pm
>  
>  all:
>  
> diff --git a/bin/Makefile b/bin/Makefile
> index 52044ca9..31229477 100644
> --- a/bin/Makefile
> +++ b/bin/Makefile
> @@ -7,7 +7,7 @@ PERL_DOC_INC_DIRS=..
>  include /usr/share/pve-doc-generator/pve-doc-generator.mk
>  
>  SERVICES = pvestatd pveproxy pvedaemon spiceproxy
> -CLITOOLS = vzdump pvesubscription pveceph pveam pvesr pvenode pvesh
> +CLITOOLS = vzdump pvesubscription pveceph pveam pvesr pvenode pvesh pve5to6
>  
>  SCRIPTS =\
>   ${SERVICES} \
> @@ -48,6 +48,7 @@ all: ${SERVICE_MANS} ${CLI_MANS} pvemailforward
>   podselect $* > $@.tmp
>   mv $@.tmp $@
>  
> +pve5to6.1.pod: pve5to6

the above makes no sense?? This file has no POD included, so podselect won't
find anything; that's not a pve-docs bootstrapping issue, that's just wrong..

>  pveversion.1.pod: pveversion
>  pveupgrade.1.pod: pveupgrade
>  pvereport.1.pod: pvereport


> diff --git a/bin/pve5to6 b/bin/pve5to6
> new file mode 100755
> index ..4802e185
> --- /dev/null
> +++ b/bin/pve5to6
> @@ -0,0 +1,10 @@
> +#!/usr/bin/perl -T

why the taint switch?

> +
> +use strict;
> +use warnings;
> +
> +use lib qw(.);

why this as a lib? One then needs to call it as bin/pve5to6, at least locally;
for such stuff just use "perl -I. bin/..", so the source doesn't need to be
touched (and then potentially get distributed this way, I mean the lib path is
now always the CWD from which this gets called!)

> +
> +use PVE::CLI::pve5to6;
> +
> +PVE::CLI::pve5to6->run_cli_handler();
> 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC manager] ui: workspace: cope better with upgrade related false positive 401 HTTP codes

2019-06-25 Thread Dominik Csapak

LGTM, do we want to apply this to master only or also to stable-5 ?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] add cloudinit dump and snippets documentation

2019-06-25 Thread Mira Limbeck
Adds documentation for the cloudinit snippets support and how 'qm
cloudinit dump' can be used to get a base config file.

Signed-off-by: Mira Limbeck 
---
 qm-cloud-init.adoc | 36 
 1 file changed, 36 insertions(+)

diff --git a/qm-cloud-init.adoc b/qm-cloud-init.adoc
index 23750da..9795479 100644
--- a/qm-cloud-init.adoc
+++ b/qm-cloud-init.adoc
@@ -131,6 +131,42 @@ commands for reducing the line length. Also make sure to 
adopt the IP
 setup for your specific environment.
 
 
+Custom Cloud-Init Configuration
+~~~
+
+The Cloud-Init integration also allows custom config files to be used instead
+of the automatically generated configs. This is done via the `cicustom`
+option on the command line:
+
+
+qm set 9000 --cicustom "user=,network=,meta="
+
+
+The custom config files have to be on a storage that supports snippets and have
+to be available on all nodes the VM is going to be migrated to. Otherwise the
+VM won't be able to start.
+For example:
+
+
+qm set 9000 --cicustom "user=local:snippets/userconfig.yaml"
+
+
+There are three kinds of configs for Cloud-Init. The first one is the `user`
+config as seen in the example above. The second is the `network` config and
+the third the `meta` config. They can all be specified together or mixed
+and matched however needed.
+The automatically generated config will be used for any that don't have a
+custom config file specified.
+
+The generated config can be dumped to serve as a base for custom configs:
+
+
+qm cloudinit dump 9000 user
+
+
+The same command exists for `network` and `meta`.
+
+
 Cloud-Init specific Options
 ~~~
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH manager 1/2] Ceph: add get_cluster_versions helper

2019-06-25 Thread Thomas Lamprecht
On 6/24/19 1:56 PM, Fabian Grünbichler wrote:
> to make 'ceph versions' and 'ceph XX versions' accessible.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
>  PVE/Ceph/Tools.pm | 8 
>  1 file changed, 8 insertions(+)
> 
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 617aba66..319e2ddd 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -61,6 +61,14 @@ sub get_local_version {
>  return undef;
>  }
>  
> +sub get_cluster_versions {
> +my ($service, $noerr) = @_;
> +
> +my $rados = PVE::RADOS->new();
> +my $cmd = $service ? "$service versions" : 'versions';
> +return $rados->mon_command({ prefix => $cmd });
> +}
> +
>  sub get_config {
>  my $key = shift;
>  
> 

applied
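
For reference, a minimal usage sketch of the new helper (illustrative only; the
return value is the parsed output of 'ceph versions' / 'ceph osd versions'):

    my $all_versions = PVE::Ceph::Tools::get_cluster_versions();      # 'ceph versions'
    my $osd_versions = PVE::Ceph::Tools::get_cluster_versions('osd'); # 'ceph osd versions'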


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC manager] ui: workspace: cope better with upgrade related false positive 401 HTTP codes

2019-06-25 Thread Thomas Lamprecht
On 6/25/19 11:22 AM, Dominik Csapak wrote:
> LGTM, do we want to apply this to master only or also to stable-5 ?
> 

thanks for looking at this, applied (to master and stable-5)!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH manager] backup jobs: Acquire lock before modifying vzdump.cron

2019-06-25 Thread Thomas Lamprecht
On 6/24/19 5:26 PM, Christian Ebner wrote:
> Signed-off-by: Christian Ebner 
> ---
>  PVE/API2/Backup.pm | 108 
> +
>  1 file changed, 59 insertions(+), 49 deletions(-)
> 

applied, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH v2 docs] Update pvecm doc regarding IP vs hostname as corosync address

2019-06-25 Thread Thomas Lamprecht
On 6/18/19 3:24 PM, Stefan Reiter wrote:
> Signed-off-by: Stefan Reiter 
> ---
> 
> v2:
>  * Removed mentions of ring addresses to avoid confusion
>(removed in Corosync 3 - though the rest of the file should be
>updated for coro 3 too)
>  * Changed section title
>  * Added CAUTION: about hostnames, removed NOTE: about PVE tooling
>  * Reworded according to feedback
> 
> Left the <<>> references as is, as Thomas mentioned about Aaron's rework.
> 

applied, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager 3/3] fix #1278 gui: backup: add backup mode pool

2019-06-25 Thread Dominik Csapak

I could not apply 2/3, so I looked only at the GUI for now.

Looks mostly good; some comments inline.

On 6/19/19 12:08 PM, Tim Marx wrote:

Signed-off-by: Tim Marx 
---
  www/manager6/dc/Backup.js | 77 +++
  1 file changed, 71 insertions(+), 6 deletions(-)

diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup.js
index c056a647..34052746 100644
--- a/www/manager6/dc/Backup.js
+++ b/www/manager6/dc/Backup.js
@@ -31,7 +31,8 @@ Ext.define('PVE.dc.BackupEdit', {
comboItems: [
['include', gettext('Include selected VMs')],
['all', gettext('All')],
-   ['exclude', gettext('Exclude selected VMs')]
+   ['exclude', gettext('Exclude selected VMs')],
+   ['pool', gettext('Pool')]
],
fieldLabel: gettext('Selection mode'),
name: 'selMode',
@@ -111,6 +112,33 @@ Ext.define('PVE.dc.BackupEdit', {
]
});
  
+	var selectPoolMembers = function(poolid) {

+   if (!poolid) {
+   return;
+   }
+   sm.deselectAll(true);
+   store.filter([
+   {
+   id: 'poolFilter',
+   property: 'pool',
+   value: poolid
+   }
+   ]);
+   sm.selectAll(true);
+   };
+
+   var selPool = Ext.create('PVE.form.PoolSelector', {
+   fieldLabel: gettext('Pool'),
+   hidden: true,
+   allowBlank: true,
+   name: 'pool',
+   listeners: {
+   change: function( selpool, newValue, oldValue) {
+   selectPoolMembers(newValue);
+   }
+   }
+   });
+
var nodesel = Ext.create('PVE.form.NodeSelector', {
name: 'node',
fieldLabel: gettext('Node'),
@@ -129,6 +157,10 @@ Ext.define('PVE.dc.BackupEdit', {
if (mode === 'all') {
sm.selectAll(true);
}
+
+   if (mode === 'pool') {
+   selectPoolMembers(selPool.value);
+   }
}
}
});
@@ -153,7 +185,8 @@ Ext.define('PVE.dc.BackupEdit', {
value: '00:00',
allowBlank: false
},
-   selModeField
+   selModeField,
+   selPool
];
  
  	var column2 = [

@@ -217,13 +250,19 @@ Ext.define('PVE.dc.BackupEdit', {
values.all = 1;
values.exclude = values.vmid;
delete values.vmid;
+   } else if (selMode === 'pool') {
+   delete values.vmid;
+   }
+
+   if (selMode !== 'pool') {
+   delete values.pool;
}
return values;
}
});
  
  	var update_vmid_selection = function(list, mode) {

-   if (mode !== 'all') {
+   if (mode !== 'all' && mode !== 'pool') {
sm.deselectAll(true);
if (list) {
Ext.Array.each(list.split(','), function(vmid) {
@@ -242,15 +281,32 @@ Ext.define('PVE.dc.BackupEdit', {
});
  
  	selModeField.on('change', function(f, value, oldValue) {

+   if (oldValue === 'pool') {
+   store.removeFilter('poolFilter');
+   }
+
+   if (oldValue === 'all') {
+   sm.deselectAll(true);
+   vmidField.setValue('');
+   }
+
if (value === 'all') {
sm.selectAll(true);
vmgrid.setDisabled(true);
} else {
vmgrid.setDisabled(false);
}
-   if (oldValue === 'all') {
-   sm.deselectAll(true);
+
+   if (value === 'pool') {
+   vmgrid.setDisabled(true);
vmidField.setValue('');
+   selPool.setVisible(true);
+   selPool.allowBlank = false;
+   selectPoolMembers(selPool.value);
+
+   } else {
+   selPool.setVisible(false);
+   selPool.allowBlank = true;
}
var list = vmidField.getValue();
update_vmid_selection(list, value);


While this hunk is not wrong, it looks a little complicated,
because you set the same things multiple times (e.g. vmgrid).

Since we have a limited set of modes here
(AFAICS 'all', 'pool', 'include', 'exclude'),

why not do a big

switch(oldValue):
    case 'foo'
    case 'bar'

block, and one for the new value?

I think this would greatly improve the readability,
since one can see instantly what happens in each case.


@@ -269,6 +325,8 @@ Ext.define('PVE.dc.BackupEdit', {
var mode = selModeField.getValue();
if (mode === 'all') {
sm.selectAll(true);
+   } else if (mode === 'pool'){
+   

[pve-devel] [RFC guest-common 1/3] fix #1291: implement remove_vmid_from_cronjobs

2019-06-25 Thread Christian Ebner
remove_vmid_from_cronjobs updates the vzdump.cron backup jobs,
excluding the given vmid.

Signed-off-by: Christian Ebner 
---
 PVE/VZDump/Plugin.pm | 51 +++
 1 file changed, 51 insertions(+)

diff --git a/PVE/VZDump/Plugin.pm b/PVE/VZDump/Plugin.pm
index 9933ef6..28f018b 100644
--- a/PVE/VZDump/Plugin.pm
+++ b/PVE/VZDump/Plugin.pm
@@ -7,6 +7,8 @@ use POSIX qw(strftime);
 
 use PVE::Tools;
 use PVE::SafeSyslog;
+use PVE::Cluster qw(cfs_read_file cfs_write_file cfs_lock_file);
+use PVE::API2::Backup;
 
 my $log_level = {
 err =>  'ERROR:',
@@ -168,4 +170,53 @@ sub cleanup {
 die "internal error"; # implement in subclass
 }
 
+sub exclude_vmid_from_list {
+my ($list, $exclude_vmid) = @_;
+
+my $updated_list = [];
+foreach my $vmid (PVE::Tools::split_list($list)) {
+   push @$updated_list, $vmid if $vmid ne $exclude_vmid;
+}
+return join ",", @$updated_list;
+}
+
+sub exclude_vmid_from_jobs {
+my ($jobs, $exclude_vmid) = @_;
+
+my $updated_jobs = [];
+foreach my $job (@$jobs) {
+   if (defined $job->{vmid}) {
+   my $list = exclude_vmid_from_list($job->{vmid}, $exclude_vmid);
+   if ($list) {
+   $job->{vmid} = $list;
+   push @$updated_jobs, $job;
+   }
+   } elsif (defined $job->{exclude}) {
+   my $list = exclude_vmid_from_list($job->{exclude}, $exclude_vmid);
+   if ($list) {
+   $job->{exclude} = $list;
+   } else {
+   delete $job->{exclude};
+   }
+   push @$updated_jobs, $job;
+   } else {
+   push @$updated_jobs, $job;
+   }
+}
+return $updated_jobs;
+}
+
+sub remove_vmid_from_cronjobs {
+my ($vmid) = @_;
+
+my $update_cron = sub {
+   my $cron_cfg = cfs_read_file('vzdump.cron');
+   my $jobs = $cron_cfg->{jobs} || [];
+   $cron_cfg->{jobs} = exclude_vmid_from_jobs($jobs, $vmid);
+   cfs_write_file('vzdump.cron', $cron_cfg);
+};
+cfs_lock_file('vzdump.cron', undef, $update_cron);
+die "$@" if ($@);
+}
+
 1;
-- 
2.11.0

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [RFC container 3/3] fix #1291: add option purge for destroy_vm api call

2019-06-25 Thread Christian Ebner
The purge option allows removing the vmid from the vzdump.cron jobs.

Signed-off-by: Christian Ebner 
---
 src/PVE/API2/LXC.pm | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index cf14d75..563cfb9 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -18,6 +18,7 @@ use PVE::LXC;
 use PVE::LXC::Create;
 use PVE::LXC::Migrate;
 use PVE::GuestHelpers;
+use PVE::VZDump::Plugin;
 use PVE::API2::LXC::Config;
 use PVE::API2::LXC::Status;
 use PVE::API2::LXC::Snapshot;
@@ -627,6 +628,11 @@ __PACKAGE__->register_method({
properties => {
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => 
\::LXC::complete_ctid_stopped }),
+   purge => {
+   type => 'boolean',
+   description => "Remove vmid from backup cron jobs.",
+   optional => 1,
+   },
},
 },
 returns => {
@@ -636,16 +642,12 @@ __PACKAGE__->register_method({
my ($param) = @_;
 
my $rpcenv = PVE::RPCEnvironment::get();
-
my $authuser = $rpcenv->get_user();
-
my $vmid = $param->{vmid};
 
# test if container exists
my $conf = PVE::LXC::Config->load_config($vmid);
-
my $storage_cfg = cfs_read_file("storage.cfg");
-
PVE::LXC::Config->check_protection($conf, "can't remove CT $vmid");
 
die "unable to remove CT $vmid - used in HA resources\n"
@@ -669,6 +671,7 @@ __PACKAGE__->register_method({
PVE::LXC::destroy_lxc_container($storage_cfg, $vmid, $conf);
PVE::AccessControl::remove_vm_access($vmid);
PVE::Firewall::remove_vmfw_conf($vmid);
+   PVE::VZDump::Plugin::remove_vmid_from_cronjobs($vmid) if 
($param->{purge});
};
 
my $realcmd = sub { PVE::LXC::Config->lock_config($vmid, $code); };
-- 
2.11.0

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [RFC 0/3] fix #1291: add purge option for VM/CT destroy

2019-06-25 Thread Christian Ebner
The purge flag allows removing the vmid from the vzdump.cron backup jobs on
VM/CT destruction.
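
As a usage sketch, assuming the CLI wrappers expose the new API parameter as
usual (not part of this series):

    qm destroy 100 --purge
    pct destroy 100 --purge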

Christian Ebner (1):
  fix #1291: implement remove_vmid_from_cronjobs

 PVE/VZDump/Plugin.pm | 51 +++
 1 file changed, 51 insertions(+)

Christian Ebner (1):
  fix #1291: add purge option to vm_destroy api call

 PVE/API2/Qemu.pm | 18 --
 1 file changed, 8 insertions(+), 10 deletions(-)

Christian Ebner (1):
  fix #1291: add option purge for destroy_vm api call

 src/PVE/API2/LXC.pm | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

-- 
2.11.0

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [RFC qemu 2/3] fix #1291: add purge option to vm_destroy api call

2019-06-25 Thread Christian Ebner
The purge flag allows removing the vmid from the vzdump.cron backup jobs.

Signed-off-by: Christian Ebner 
---
 PVE/API2/Qemu.pm | 18 --
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a628a20..60b0f11 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -28,6 +28,7 @@ use PVE::Network;
 use PVE::Firewall;
 use PVE::API2::Firewall::VM;
 use PVE::API2::Qemu::Agent;
+use PVE::VZDump::Plugin;
 
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -1436,7 +1437,6 @@ __PACKAGE__->register_method({
 }
 });
 
-
 __PACKAGE__->register_method({
 name => 'destroy_vm',
 path => '{vmid}',
@@ -1453,6 +1453,11 @@ __PACKAGE__->register_method({
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => 
\::QemuServer::complete_vmid_stopped }),
skiplock => get_standard_option('skiplock'),
+   purge => {
+   type => 'boolean',
+   description => "Remove vmid from backup cron jobs.",
+   optional => 1,
+   },
},
 },
 returns => {
@@ -1462,9 +1467,7 @@ __PACKAGE__->register_method({
my ($param) = @_;
 
my $rpcenv = PVE::RPCEnvironment::get();
-
my $authuser = $rpcenv->get_user();
-
my $vmid = $param->{vmid};
 
my $skiplock = $param->{skiplock};
@@ -1473,11 +1476,8 @@ __PACKAGE__->register_method({
 
# test if VM exists
my $conf = PVE::QemuConfig->load_config($vmid);
-
my $storecfg = PVE::Storage::config();
-
PVE::QemuConfig->check_protection($conf, "can't remove VM $vmid");
-
die "unable to remove VM $vmid - used in HA resources\n"
if PVE::HA::Config::vm_is_ha_managed($vmid);
 
@@ -1493,12 +1493,10 @@ __PACKAGE__->register_method({
my $upid = shift;
 
syslog('info', "destroy VM $vmid: $upid\n");
-
PVE::QemuServer::vm_destroy($storecfg, $vmid, $skiplock);
-
PVE::AccessControl::remove_vm_access($vmid);
-
-PVE::Firewall::remove_vmfw_conf($vmid);
+   PVE::Firewall::remove_vmfw_conf($vmid);
+   PVE::VZDump::Plugin::remove_vmid_from_cronjobs($vmid) if 
($param->{purge});
};
 
return $rpcenv->fork_worker('qmdestroy', $vmid, $authuser, $realcmd);
-- 
2.11.0

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server] Fix guest agent shutdown without timeout

2019-06-25 Thread Stefan Reiter
Regression from the change allowing timeouts to be set; now shutting down
also works without a timeout again (previously QMP failed because of
the unknown "timeout" parameter passed to it).

We always delete the timeout value from the arguments, regardless of
truthiness. "delete" returns the deleted element, and deleting a
non-existent hash entry returns undef, which is fine after this point:

"deleting non-existent elements returns the undefined value in their
corresponding positions."
- https://perldoc.perl.org/functions/delete.html
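
A tiny illustration of that behaviour (not part of the patch):

    my $args = { some_arg => 1 };            # no 'timeout' key present
    my $timeout = delete $args->{timeout};   # $timeout is undef
    # $args is otherwise unchanged, so no unknown 'timeout' parameter
    # is passed on to the QMP command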

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bf71210..fbfc3fb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5485,9 +5485,8 @@ sub vm_qmp_command {
 my $res;
 
 my $timeout;
-if ($cmd->{arguments} && $cmd->{arguments}->{timeout}) {
-   $timeout = $cmd->{arguments}->{timeout};
-   delete $cmd->{arguments}->{timeout};
+if ($cmd->{arguments}) {
+   $timeout = delete $cmd->{arguments}->{timeout};
 }
 
 eval {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH qemu-server] Fix guest agent shutdown without timeout

2019-06-25 Thread Thomas Lamprecht
On 6/25/19 4:44 PM, Stefan Reiter wrote:
> Regression from change allowing timeouts to be set, now shutting down
> also works without timeouts again (previously qmp failed because of
> the unknown "timeout" parameter passed to it).
> 
> We always delete the timeout value from the arguments, regardless of
> truthiness. "delete" returns the deleted element, deleting a
> non-existent hash entry returns undef, which is fine after this point:
> 
> "deleting non-existent elements returns the undefined value in their
> corresponding positions."
> - https://perldoc.perl.org/functions/delete.html
> 
> Signed-off-by: Stefan Reiter 
> ---
>  PVE/QemuServer.pm | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index bf71210..fbfc3fb 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -5485,9 +5485,8 @@ sub vm_qmp_command {
>  my $res;
>  
>  my $timeout;
> -if ($cmd->{arguments} && $cmd->{arguments}->{timeout}) {
> - $timeout = $cmd->{arguments}->{timeout};
> - delete $cmd->{arguments}->{timeout};
> +if ($cmd->{arguments}) {
> + $timeout = delete $cmd->{arguments}->{timeout};
>  }
>  
>  eval {
> 

applied, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH manager] switch over default console viewer to xterm.js

2019-06-25 Thread Thomas Lamprecht
at least where possible; this mostly affects the node shell button.

Signed-off-by: Thomas Lamprecht 
---
 www/manager6/Utils.js | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 06d7b5fa..9387c582 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -404,7 +404,7 @@ Ext.define('PVE.Utils', { utilities: {
 },
 
 console_map: {
-   '__default__': Proxmox.Utils.defaultText + ' (HTML5)',
+   '__default__': Proxmox.Utils.defaultText + ' (xterm.js)',
'vv': 'SPICE (remote-viewer)',
'html5': 'HTML5 (noVNC)',
'xtermjs': 'xterm.js'
@@ -945,10 +945,9 @@ Ext.define('PVE.Utils', { utilities: {
allowSpice = consoles.spice;
allowXtermjs = !!consoles.xtermjs;
}
-   var vncdefault = 'html5';
-   var dv = PVE.VersionInfo.console || vncdefault;
+   var dv = PVE.VersionInfo.console || 'xtermjs';
if ((dv === 'vv' && !allowSpice) || (dv === 'xtermjs' && 
!allowXtermjs)) {
-   dv = vncdefault;
+   dv = 'html5';
}
 
return dv;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH ceph] add ceph-volume zap fix

2019-06-25 Thread Dominik Csapak
this is pending review upstream

Signed-off-by: Dominik Csapak 
---
 ...vm.zap-fix-cleanup-for-db-partitions.patch | 29 +++
 patches/series|  1 +
 2 files changed, 30 insertions(+)
 create mode 100644 
patches/0008-ceph-volume-lvm.zap-fix-cleanup-for-db-partitions.patch

diff --git 
a/patches/0008-ceph-volume-lvm.zap-fix-cleanup-for-db-partitions.patch 
b/patches/0008-ceph-volume-lvm.zap-fix-cleanup-for-db-partitions.patch
new file mode 100644
index 0..df26ecc86
--- /dev/null
+++ b/patches/0008-ceph-volume-lvm.zap-fix-cleanup-for-db-partitions.patch
@@ -0,0 +1,29 @@
+From 2db844652e8df36adba7ba3b1a334ee583d1a1e1 Mon Sep 17 00:00:00 2001
+From: Dominik Csapak 
+Date: Tue, 28 May 2019 16:29:21 +0200
+Subject: [PATCH] ceph-volume lvm.zap fix cleanup for db partitions
+
+this uses the correct type 'db' for db type partitions, else
+a block.db partition does not get cleaned up by ceph-volume zap
+
+Signed-off-by: Dominik Csapak 
+---
+ src/ceph-volume/ceph_volume/devices/lvm/zap.py | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/ceph-volume/ceph_volume/devices/lvm/zap.py 
b/src/ceph-volume/ceph_volume/devices/lvm/zap.py
+index 328a036152..9a7a103ada 100644
+--- a/src/ceph-volume/ceph_volume/devices/lvm/zap.py
++++ b/src/ceph-volume/ceph_volume/devices/lvm/zap.py
+@@ -77,7 +77,7 @@ def ensure_associated_lvs(lvs):
+ wal_lvs = lvs._filter(lv_tags={'ceph.type': 'wal'})
+ backing_devices = [
+ (journal_lvs, 'journal'),
+-(db_lvs, 'block'),
++(db_lvs, 'db'),
+ (wal_lvs, 'wal')
+ ]
+ 
+-- 
+2.20.1
+
diff --git a/patches/series b/patches/series
index e7e33..de9726160 100644
--- a/patches/series
+++ b/patches/series
@@ -2,3 +2,4 @@
 0002-enable-systemd-targets-by-default.patch
 0006-debian-control-add-break-libpvestorage-perl.patch
 0007-debian-rules-ship-Ceph-changelog-as-upstream-changel.patch
+0008-ceph-volume-lvm.zap-fix-cleanup-for-db-partitions.patch
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status

2019-06-25 Thread Alexandre DERUMIER
>>I am not sure if json is a good idea here. We use colon separated lists for 
>>everything else, so I would prefer that. It is easier to parse inside C, 
>>which 
>>is important when you want to generate RRD databases from inside pmxcfs ... 


>>Also, consider that it is quite hard to change that format later, because all 
>>cluster nodes 
>>reads/write that data. 

>>So do we want to generate some RRD databases with that data? 

I don't think we need an RRD here, it's a simple status (ok/error/pending/...)
on the transportzone.

I don't want to stream vnet status, because it could be really huge
(like 20 servers broadcasting 300 vnets, for example).

In the GUI, I would like to display a transportzone like a storage in the left
tree. Then, for details, click on the transportzone (like the volumes displayed
for a storage in the right pane), and only query the vnet status on that
specific node at that time.

But I can implement colon lists, no problem.


- Original Message -
From: "dietmar"
To: "pve-devel" , "aderumier"
Sent: Tuesday, 25 June 2019 08:37:18
Subject: Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status

I am not sure if json is a good idea here. We use colon separated lists for 
everything else, so I would prefer that. It is easier to parse inside C, which 
is important when you want to generate RRD databases from inside pmxcfs ... 

Also, consider that it is quite hard to change that format later, because all 
cluster nodes 
reads/write that data. 

So do we want to generate some RRD databases with that data? 

> On 25 June 2019 00:04 Alexandre Derumier  wrote: 
> 
> 
> Signed-off-by: Alexandre Derumier  
> --- 
> PVE/Service/pvestatd.pm | 22 ++ 
> 1 file changed, 22 insertions(+) 
> 
> diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm 
> index e138b2e8..bad1b73d 100755 
> --- a/PVE/Service/pvestatd.pm 
> +++ b/PVE/Service/pvestatd.pm 
> @@ -37,6 +37,12 @@ PVE::Status::Plugin->init(); 
> 
> use base qw(PVE::Daemon); 
> 
> +my $have_sdn; 
> +eval { 
> + require PVE::API2::Network::SDN; 
> + $have_sdn = 1; 
> +}; 
> + 
> my $opt_debug; 
> my $restart_request; 
> 
> @@ -457,6 +463,16 @@ sub update_ceph_version { 
> } 
> } 
> 
> +sub update_sdn_status { 
> + 
> + if($have_sdn) { 
> + my ($transport_status, $vnet_status) = PVE::Network::SDN::status(); 
> + 
> + my $status = $transport_status ? encode_json($transport_status) : undef; 
> + PVE::Cluster::broadcast_node_kv("sdn", $status); 
> + } 
> +} 
> + 
> sub update_status { 
> 
> # update worker list. This is not really required and 
> @@ -524,6 +540,12 @@ sub update_status { 
> $err = $@; 
> syslog('err', "error getting ceph services: $err") if $err; 
> 
> + eval { 
> + update_sdn_status(); 
> + }; 
> + $err = $@; 
> + syslog('err', "sdn status update error: $err") if $err; 
> + 
> } 
> 
> my $next_update = 0; 
> -- 
> 2.20.1 
> 
> ___ 
> pve-devel mailing list 
> pve-devel@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status

2019-06-25 Thread Dietmar Maurer
> >>So do we want to generate some RRD databases with that data? 
> 
> I don't think we need a rrd here, it's a simple status (ok/error/pending/...) 
> on the transportzone.

Ok

> I don't want to stream vnet status, because it could be really huge.
> (like 20 servers broadcasting 300vnets for example).

Yes, I would also want to avoid sending too much data over this interface.

> I the gui, I would like to display transportzone like a storage in the left 
> tree.
> Then for detail, click on the transportzone (like the volumes display on the 
> storage on right pane),
> then query vnets status on the specific node at this time only.
> 
> 
> But I can use implement colon lists, no problem.

Thanks.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status

2019-06-25 Thread Alexandre DERUMIER
>>But I can implement colon lists, no problem.

After thinking about it:

actually, colon lists are mainly used for RRD, but there we have one RRD per
object (and the filename is the key).

If I'm using the kv store for broadcast, with colon lists, I think I'll have
one key per transportzone; I'm not sure that's really clean. (I also don't know
how to cleanly remove a key if a transportzone is removed from the config.)

I don't know what the better/fastest way is to store this kind of status (kv?
rrd?).

Also, I don't know if users would like to have network stats? Maybe for vnets?

In that case, for vnets, RRD makes more sense. But I don't know how much data
would be broadcast (20 nodes x 300 vnets, for example).
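
As a rough, illustrative estimate (the per-entry size is an assumption): with
"vnetname=status" entries of ~30 bytes, 300 vnets are about 9 KB per node and
update, so 20 nodes would keep roughly 20 x 9 KB = 180 KB in the cluster kv
store, rebroadcast on every pvestatd cycle. Per-vnet RRDs would instead mean on
the order of 20 x 300 = 6000 RRD files.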




- Original Message -
From: "aderumier"
To: "dietmar"
Cc: "pve-devel"
Sent: Tuesday, 25 June 2019 19:02:42
Subject: Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status

>>I am not sure if json is a good idea here. We use colon separated lists for 
>>everything else, so I would prefer that. It is easier to parse inside C, 
>>which 
>>is important when you want to generate RRD databases from inside pmxcfs ... 


>>Also, consider that it is quite hard to change that format later, because all 
>>cluster nodes 
>>reads/write that data. 

>>So do we want to generate some RRD databases with that data? 

I don't think we need a rrd here, it's a simple status (ok/error/pending/...) 
on the transportzone. 

I don't want to stream vnet status, because it could be really huge. 
(like 20 servers broadcasting 300vnets for example). 

I the gui, I would like to display transportzone like a storage in the left 
tree. 
Then for detail, click on the transportzone (like the volumes display on the 
storage on right pane), 
then query vnets status on the specific node at this time only. 


But I can use implement colon lists, no problem. 


- Original Message -
From: "dietmar"
To: "pve-devel" , "aderumier"
Sent: Tuesday, 25 June 2019 08:37:18
Subject: Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status

I am not sure if json is a good idea here. We use colon separated lists for 
everything else, so I would prefer that. It is easier to parse inside C, which 
is important when you want to generate RRD databases from inside pmxcfs ... 

Also, consider that it is quite hard to change that format later, because all 
cluster nodes 
reads/write that data. 

So do we want to generate some RRD databases with that data? 

> On 25 June 2019 00:04 Alexandre Derumier  wrote: 
> 
> 
> Signed-off-by: Alexandre Derumier  
> --- 
> PVE/Service/pvestatd.pm | 22 ++ 
> 1 file changed, 22 insertions(+) 
> 
> diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm 
> index e138b2e8..bad1b73d 100755 
> --- a/PVE/Service/pvestatd.pm 
> +++ b/PVE/Service/pvestatd.pm 
> @@ -37,6 +37,12 @@ PVE::Status::Plugin->init(); 
> 
> use base qw(PVE::Daemon); 
> 
> +my $have_sdn; 
> +eval { 
> + require PVE::API2::Network::SDN; 
> + $have_sdn = 1; 
> +}; 
> + 
> my $opt_debug; 
> my $restart_request; 
> 
> @@ -457,6 +463,16 @@ sub update_ceph_version { 
> } 
> } 
> 
> +sub update_sdn_status { 
> + 
> + if($have_sdn) { 
> + my ($transport_status, $vnet_status) = PVE::Network::SDN::status(); 
> + 
> + my $status = $transport_status ? encode_json($transport_status) : undef; 
> + PVE::Cluster::broadcast_node_kv("sdn", $status); 
> + } 
> +} 
> + 
> sub update_status { 
> 
> # update worker list. This is not really required and 
> @@ -524,6 +540,12 @@ sub update_status { 
> $err = $@; 
> syslog('err', "error getting ceph services: $err") if $err; 
> 
> + eval { 
> + update_sdn_status(); 
> + }; 
> + $err = $@; 
> + syslog('err', "sdn status update error: $err") if $err; 
> + 
> } 
> 
> my $next_update = 0; 
> -- 
> 2.20.1 
> 
> ___ 
> pve-devel mailing list 
> pve-devel@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel