Re: [pve-devel] [PATCH pve-docs] Close #1623: replace apt-get with apt

2020-07-07 Thread Fabian Grünbichler
On July 6, 2020 1:38 pm, Moayad Almalat wrote:
> Signed-off-by: Moayad Almalat 
> ---
>  api-viewer/apidata.js| 8 
>  local-zfs.adoc   | 4 ++--
>  pve-firewall.adoc| 2 +-
>  pve-installation.adoc| 4 ++--
>  pve-package-repos.adoc   | 2 +-
>  pve-storage-iscsi.adoc   | 2 +-
>  qm-cloud-init.adoc   | 2 +-
>  system-software-updates.adoc | 6 +++---
>  8 files changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/api-viewer/apidata.js b/api-viewer/apidata.js
> index a44e50e..818740c 100644
> --- a/api-viewer/apidata.js
> +++ b/api-viewer/apidata.js

this file is generated, so you need to find the API definitions in other 
repos to fix them (after checking whether they actually use apt or 
apt-get, and fixing that as well ;))

> @@ -34214,7 +34214,7 @@ var pveapi = [
> },
> "POST" : {
>"allowtoken" : 1,
> -  "description" : "This is used to resynchronize 
> the package index files from their sources (apt-get update).",
> +  "description" : "This is used to resynchronize 
> the package index files from their sources (apt update).",
>"method" : "POST",
>"name" : "update_database",
>"parameters" : {
> @@ -37010,7 +37010,7 @@ var pveapi = [
>},
>"upgrade" : {
>   "default" : 0,
> - "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt-get dist-upgrade' instead of normal shell.",
> + "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt dist-upgrade' instead of normal shell.",
>   "optional" : 1,
>   "type" : "boolean",
>   "typetext" : ""
> @@ -37097,7 +37097,7 @@ var pveapi = [
>},
>"upgrade" : {
>   "default" : 0,
> - "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt-get dist-upgrade' instead of normal shell.",
> + "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt dist-upgrade' instead of normal shell.",
>   "optional" : 1,
>   "type" : "boolean",
>   "typetext" : ""
> @@ -37229,7 +37229,7 @@ var pveapi = [
>},
>"upgrade" : {
>   "default" : 0,
> - "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt-get dist-upgrade' instead of normal shell.",
> + "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt dist-upgrade' instead of normal shell.",
>   "optional" : 1,
>   "type" : "boolean",
>   "typetext" : ""
> diff --git a/local-zfs.adoc b/local-zfs.adoc
> index fd03e89..0271510 100644
> --- a/local-zfs.adoc
> +++ b/local-zfs.adoc
> @@ -333,10 +333,10 @@ Activate E-Mail Notification
>  ZFS comes with an event daemon, which monitors events generated by the
>  ZFS kernel module. The daemon can also send emails on ZFS events like
>  pool errors. Newer ZFS packages ship the daemon in a separate package,
> -and you can install it using `apt-get`:
> +and you can install it using `apt`:
>  
>  
> -# apt-get install zfs-zed
> +# apt install zfs-zed
>  
>  
>  To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with 
> your
> diff --git a/pve-firewall.adoc b/pve-firewall.adoc
> index 7089778..f286018 100644
> --- a/pve-firewall.adoc
> +++ b/pve-firewall.adoc
> @@ -573,7 +573,7 @@ Rejected/Dropped firewall packets don't go to the IPS.
>  Install suricata on proxmox host:
>  
>  
> -# apt-get install suricata
> +# apt install suricata
>  # modprobe nfnetlink_queue  
>  
>  
> diff --git a/pve-installation.adoc b/pve-installation.adoc
> index 0d416ac..5129c14 100644
> --- a/pve-installation.adoc
> +++ b/pve-installation.adoc
> @@ -291,8 +291,8 @@ xref:sysadmin_package_repositories[After configuring the 
> repositories] you need
>  to run the following commands:
>  
>  
> -# apt-get update
> -# apt-get install proxmox-ve
> +# apt update
> +# apt install proxmox-ve
>  
>  
>  Installing on top of an existing Debian installation looks easy, but it 
> presumes
> diff --git a/pve-package-repos.adoc b/pve-package-repos.adoc
> index 4fcf147..34d1700 100644
> --- a/pve-package-repos.adoc
> +++ 

Re: [pve-devel] [PATCH xtermjs] termproxy: rewrite in rust

2020-07-07 Thread Fabian Grünbichler
On July 7, 2020 8:36 am, Thomas Lamprecht wrote:
> On 07.07.20 08:24, Fabian Grünbichler wrote:
>> On July 7, 2020 6:42 am, Dietmar Maurer wrote:
>>>> so we have a 'termproxy' crate+binary and a binary package with name
>>>> 'pve-xtermjs'
>>>
>>> This is quite confusing ...
>> 
>> well, it replaces a 'pve-xtermjs' binary package that ships a 'termproxy'
>> binary (/CLIHandler). the alternative is to bump the old pve-xtermjs to 
>> become a transitional package depending on a new 'termproxy' 
>> (pve-termproxy? proxmox-termproxy?) package that ships the termproxy 
>> binary and breaks+replaces the old pve-xtermjs package..
> 
> the name is OK, IMO, it's just the same as before. The source package name
> is the only thing confusing to me, but I told Dominik already that we can
> do that like proxmox-backup, with our own control(.in) - but as we do not
> really use source package names yet, that confusion is now a bit secondary
> to me.

yes, but we only do control.in in proxmox-backup because debcargo does 
not support multiple binary packages (yet) ;) if we can avoid it, I'd 
avoid it.. if we absolutely have to, we can also just keep the changelog 
as it is and sed the 'Source' line in the generated d/control file..
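e.g. something like the following one-liner as a post-processing step
(sketch only, the target source name is assumed):

    perl -pi -e 's/^Source:.*/Source: pve-xtermjs/' debian/control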



Re: [pve-devel] [PATCH storage 3/6] Introduce prune-backups property for directory-based storages

2020-07-07 Thread Fabian Grünbichler
On July 7, 2020 8:43 am, Thomas Lamprecht wrote:
> On 07.07.20 08:36, Fabian Grünbichler wrote:
>> On June 4, 2020 11:08 am, Fabian Ebner wrote:
>>> @@ -43,6 +43,54 @@ cfs_register_file ('storage.cfg',
>>>sub { __PACKAGE__->parse_config(@_); },
>>>sub { __PACKAGE__->write_config(@_); });
>>>  
>>> +my %prune_option = (
>>> +optional => 1,
>>> +type => 'number', minimum => '1',
>> 
>> shouldn't this be type 'integer'?
>> 
>> I am not sure whether minimum '1' is a good choice - what if I want to 
>> explicitly disable a certain keep-* option instead of just keeping it 
>> blank? also see next patch..
> 
> mirrors the exact PBS behavior, you do not have to send it, but once you do
> it has to be >= 1 - makes sense for such an option, what would setting
> keep-weekly to 0 mean?

keep-last=5,keep-hourly=0,keep-daily=7,keep-weekly=4,keep-monthly=12,keep-yearly=10

just reads clearer IMHO than having to drop keep-hourly entirely, but 
semantics-wise it would be the same ;) I can live with either though.

> integer makes sense to me, though.



Re: [pve-devel] [PATCH storage 6/6] Add API and pvesm calls for prune_backups

2020-07-07 Thread Fabian Grünbichler
s/VM/guest in most descriptions - this is not in qemu-server ;)

On June 4, 2020 11:08 am, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner 
> ---
> 
> Not sure if this is the best place for the new API endpoints.
> 
> I decided to opt for two distinct calls rather than just using a
> --dry-run option and use a worker for actually pruning, because
> removing many backups over the network might take a while.

see comments below

> 
>  PVE/API2/Storage/Makefile|   2 +-
>  PVE/API2/Storage/PruneBackups.pm | 153 +++
>  PVE/API2/Storage/Status.pm   |   7 ++
>  PVE/CLI/pvesm.pm |  27 ++
>  4 files changed, 188 insertions(+), 1 deletion(-)
>  create mode 100644 PVE/API2/Storage/PruneBackups.pm
> 
> diff --git a/PVE/API2/Storage/Makefile b/PVE/API2/Storage/Makefile
> index a33525b..3f667e8 100644
> --- a/PVE/API2/Storage/Makefile
> +++ b/PVE/API2/Storage/Makefile
> @@ -1,5 +1,5 @@
>  
> -SOURCES= Content.pm Status.pm Config.pm
> +SOURCES= Content.pm Status.pm Config.pm PruneBackups.pm
>  
>  .PHONY: install
>  install:
> diff --git a/PVE/API2/Storage/PruneBackups.pm 
> b/PVE/API2/Storage/PruneBackups.pm
> new file mode 100644
> index 000..7968730
> --- /dev/null
> +++ b/PVE/API2/Storage/PruneBackups.pm
> @@ -0,0 +1,153 @@
> +package PVE::API2::Storage::PruneBackups;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Cluster;
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::RESTHandler;
> +use PVE::RPCEnvironment;
> +use PVE::Storage;
> +use PVE::Tools qw(extract_param);
> +
> +use base qw(PVE::RESTHandler);
> +
> +__PACKAGE__->register_method ({
> +name => 'index',

IMHO this is not an index, but just a regular 'GET' API path.

> +path => '',
> +method => 'GET',
> +description => "Get prune information for backups.",
> +permissions => {
> + check => ['perm', '/storage/{storage}', ['Datastore.Audit', 
> 'Datastore.AllocateSpace'], any => 1],
> +},
> +protected => 1,
> +proxyto => 'node',
> +parameters => {
> + additionalProperties => 0,
> + properties => {
> + node => get_standard_option('pve-node'),
> + storage => get_standard_option('pve-storage-id', {
> + completion => \&PVE::Storage::complete_storage_enabled,
> +}),
> + 'prune-backups' => get_standard_option('prune-backups', {
> + description => "Use these retention options instead of those 
> from the storage configuration.",
> + optional => 1,
> + }),
> + vmid => get_standard_option('pve-vmid', {
> + description => "Only consider backups for this VM.",

of this guest

would it make sense to allow specification of guest-type as well, so 
that it's possible to indirectly specify the 'backup-group' ?
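something like this, perhaps (sketch only - property name and enum values
assumed):

    type => {
	description => "Only consider backups of this guest type.",
	type => 'string',
	enum => ['qemu', 'lxc'],
	optional => 1,
    },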

> + optional => 1,
> + completion => \&PVE::Cluster::complete_vmid,
> + }),
> + },
> +},
> +returns => {
> + type => 'array',
> + items => {
> + type => 'object',
> + properties => {
> + volid => {
> + description => "Backup volume ID.",
> + type => 'string',
> + },
> + type => {
> + description => "One of 'qemu', 'lxc', 'openvz' or 
> 'unknown'.",
> + type => 'string',
> + },
> + 'ctime' => {
> + description => "Creation time of the backup (seconds since 
> the UNIX epoch).",
> + type => 'integer',
> + },
> + 'mark' => {
> + description => "Whether the backup would be kept or 
> removed. For backups that don't " .
> +"use the standard naming scheme, it's 
> 'protected'.",
> + type => 'string',
> + },
> + 'vmid' => {
> + description => "The VM the backup belongs to.",
> + type => 'integer',
> + optional => 1,
> + },
> + },
> + },
> +},
> +code => sub {
> + my ($param) = @_;
> +
> + my $cfg = PVE::Storage::config();
> +
> + my $vmid = extract_param($param, 'vmid');
> + my $storeid = extract_param($param, 'storage');
> + my $prune_options = extract_param($param, 'prune-backups');
> +
> + my $opts_override;
> + if (defined($prune_options)) {
> + $opts_override = 
> PVE::JSONSchema::parse_property_string('prune-backups', $prune_options);
> + }
> +
> + my @res = PVE::Storage::prune_backups($cfg, $storeid, $opts_override, 
> $vmid, 1);
> + return \@res;

one more reason to make the return value a reference ;)

> +}});
> +
> +__PACKAGE__->register_method ({
> +name => 'delete',
> +path => '',
> +method => 'DELETE',
> +description => "Prune backups. Only those using the standard naming 
> scheme are considered.",
> +permissions => {
> + description => "You need the 

Re: [pve-devel] [PATCH storage 4/6] Add prune_backups to storage API

2020-07-07 Thread Fabian Grünbichler
On June 4, 2020 11:08 am, Fabian Ebner wrote:
> Implement it for generic storages supporting backups (i.e.
> directory-based storages) and add a wrapper for PBS.
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/Storage.pm |  27 -
>  PVE/Storage/PBSPlugin.pm   |  50 
>  PVE/Storage/Plugin.pm  | 128 
>  test/prune_backups_test.pm | 237 +
>  test/run_plugin_tests.pl   |   1 +
>  5 files changed, 441 insertions(+), 2 deletions(-)
>  create mode 100644 test/prune_backups_test.pm
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index 07a4f53..e1d0d02 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -40,11 +40,11 @@ use PVE::Storage::DRBDPlugin;
>  use PVE::Storage::PBSPlugin;
>  
>  # Storage API version. Icrement it on changes in storage API interface.
> -use constant APIVER => 5;
> +use constant APIVER => 6;
>  # Age is the number of versions we're backward compatible with.
>  # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>  # see 
> https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 4;
> +use constant APIAGE => 5;
>  
>  # load standard plugins
>  PVE::Storage::DirPlugin->register();
> @@ -1522,6 +1522,29 @@ sub extract_vzdump_config {
>  }
>  }
>  
> +sub prune_backups {
> +my ($cfg, $storeid, $opts_override, $vmid, $dryrun) = @_;

s/opts_override/opts/
s/prune_opts/opts/

or maybe $keep? since we only have either, and they both represent the 
same options..

> +
> +my $scfg = storage_config($cfg, $storeid);
> +die "storage '$storeid' does not support backups\n" if 
> !$scfg->{content}->{backup};
> +
> +my $prune_opts = $opts_override;

not needed then

> +if (!defined($prune_opts)) {
> + die "no prune-backups options configured for storage '$storeid'\n"
> + if !defined($scfg->{'prune-backups'});
> + $prune_opts = PVE::JSONSchema::parse_property_string('prune-backups', 
> $scfg->{'prune-backups'});
> +}
> +
> +my $all_zero = 1;
> +foreach my $opt (keys %{$prune_opts}) {
> + $all_zero = 0 if defined($prune_opts->{$opt}) && $prune_opts->{$opt} > 
> 0;
> +}
> +die "at least one keep-option must be set and positive\n" if $all_zero;

if we switch to integer:

die "at least one keep-option must be set and positive\n"
if !grep { defined($opts->{$_}) && $opts->{$_} } keys %$opts;

or we could use the new format validator functionality to ensure this 
directly in parse_property_string ;)
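e.g. roughly like this (sketch, assuming the validator support from the
pve-common register_format series is applied):

    PVE::JSONSchema::register_format('prune-backups', $prune_backups_format, sub {
	my ($opts) = @_;
	die "at least one keep-option must be set and positive\n"
	    if !grep { defined($opts->{$_}) && $opts->{$_} > 0 } keys %$opts;
	return $opts;
    });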

> +
> +my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +return $plugin->prune_backups($scfg, $storeid, $prune_opts, $vmid, 
> $dryrun);
> +}
> +
>  sub volume_export {
>  my ($cfg, $fh, $volid, $format, $snapshot, $base_snapshot, 
> $with_snapshots) = @_;
>  
> diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
> index fba4b2b..f37bd66 100644
> --- a/PVE/Storage/PBSPlugin.pm
> +++ b/PVE/Storage/PBSPlugin.pm
> @@ -173,6 +173,56 @@ sub extract_vzdump_config {
>  return $config;
>  }
>  
> +sub prune_backups {
> +my ($class, $scfg, $storeid, $prune_opts, $vmid, $dryrun) = @_;
> +
> +my $backups = $class->list_volumes($storeid, $scfg, $vmid, ['backup']);
> +
> +my $backup_groups = {};
> +foreach my $backup (@{$backups}) {
> + (my $guest_type = $backup->{format}) =~ s/^pbs-//;
> + my $backup_group = "$guest_type/$backup->{vmid}";
> + $backup_groups->{$backup_group} = 1;
> +}
> +
> +my @param;
> +foreach my $prune_option (keys %{$prune_opts}) {
> + push @param, "--$prune_option";
> + push @param, "$prune_opts->{$prune_option}";
> +}
> +
> +push @param, '--dry-run' if $dryrun;
> +push @param, '--output-format=json';
> +
> +my @prune_list;

why not a reference? would make extending this easier in the future wrt 
return values
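i.e. (sketch):

    my $prune_list = [];
    # ... push @$prune_list, $prune_entry; inside the loop ...
    return $prune_list;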

> +
> +foreach my $backup_group (keys %{$backup_groups}) {
> + my $json_str;
> + my $outfunc = sub { $json_str .= "$_[0]\n" };

a simple

sub { $json_str .= shift }

should be enough, since we don't actually care about the new lines?

> +
> + run_raw_client_cmd($scfg, $storeid, 'prune', [ $backup_group, @param ],
> +outfunc => $outfunc, errmsg => 
> 'proxmox-backup-client failed');

error handling here

> +
> + my $res = decode_json($json_str);

and here would be nice (at least to get better error messages, possibly 
to warn and continue with pruning other backup groups and die at the 
end?)
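e.g. along these lines (sketch only, variable and helper names taken from
the patch context):

    my $failed;
    foreach my $backup_group (keys %{$backup_groups}) {
	my $json_str = '';
	eval {
	    run_raw_client_cmd($scfg, $storeid, 'prune', [ $backup_group, @param ],
		outfunc => sub { $json_str .= shift },
		errmsg => 'proxmox-backup-client failed');
	    my $res = decode_json($json_str);
	    # ... convert $res into prune list entries as below ...
	};
	if (my $err = $@) {
	    warn "error pruning backup group '$backup_group' - $err";
	    $failed = 1;
	}
    }
    die "pruning failed for at least one backup group\n" if $failed;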

> + foreach my $backup (@{$res}) {
> + my $type = $backup->{'backup-type'};
> + my $vmid = $backup->{'backup-id'};
> + my $ctime = $backup->{'backup-time'};
> + my $time_str = localtime($ctime);
> + my $prune_entry = {
> + volid => "$type/$vmid-$time_str",
> + type => $type eq 'vm' ? 'qemu' : 'lxc',
> + ctime => $ctime,
> + vmid => $vmid,
> + mark => $backup->{keep} ? 'keep' : 'remove',


Re: [pve-devel] [PATCH storage 3/6] Introduce prune-backups property for directory-based storages

2020-07-07 Thread Fabian Grünbichler
small nit in-line

On June 4, 2020 11:08 am, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/Storage/CIFSPlugin.pm  |  1 +
>  PVE/Storage/CephFSPlugin.pm|  1 +
>  PVE/Storage/DirPlugin.pm   |  5 ++--
>  PVE/Storage/GlusterfsPlugin.pm |  5 ++--
>  PVE/Storage/NFSPlugin.pm   |  5 ++--
>  PVE/Storage/PBSPlugin.pm   |  1 +
>  PVE/Storage/Plugin.pm  | 51 +-
>  7 files changed, 62 insertions(+), 7 deletions(-)
> 
> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
> index 0eb1722..7347ba6 100644
> --- a/PVE/Storage/CIFSPlugin.pm
> +++ b/PVE/Storage/CIFSPlugin.pm
> @@ -134,6 +134,7 @@ sub options {
>   nodes => { optional => 1 },
>   disable => { optional => 1 },
>   maxfiles => { optional => 1 },
> + 'prune-backups' => { optional => 1 },
>   content => { optional => 1 },
>   format => { optional => 1 },
>   username => { optional => 1 },
> diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
> index 6575f4f..880ec05 100644
> --- a/PVE/Storage/CephFSPlugin.pm
> +++ b/PVE/Storage/CephFSPlugin.pm
> @@ -150,6 +150,7 @@ sub options {
>   fuse => { optional => 1 },
>   bwlimit => { optional => 1 },
>   maxfiles => { optional => 1 },
> + 'prune-backups' => { optional => 1 },
>  };
>  }
>  
> diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
> index 39760a8..3c81d24 100644
> --- a/PVE/Storage/DirPlugin.pm
> +++ b/PVE/Storage/DirPlugin.pm
> @@ -49,10 +49,11 @@ sub properties {
>  sub options {
>  return {
>   path => { fixed => 1 },
> -nodes => { optional => 1 },
> + nodes => { optional => 1 },
>   shared => { optional => 1 },
>   disable => { optional => 1 },
> -maxfiles => { optional => 1 },
> + maxfiles => { optional => 1 },
> + 'prune-backups' => { optional => 1 },
>   content => { optional => 1 },
>   format => { optional => 1 },
>   mkdir => { optional => 1 },
> diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
> index 70ea4fc..2dd414d 100644
> --- a/PVE/Storage/GlusterfsPlugin.pm
> +++ b/PVE/Storage/GlusterfsPlugin.pm
> @@ -129,9 +129,10 @@ sub options {
>   server2 => { optional => 1 },
>   volume => { fixed => 1 },
>   transport => { optional => 1 },
> -nodes => { optional => 1 },
> + nodes => { optional => 1 },
>   disable => { optional => 1 },
> -maxfiles => { optional => 1 },
> + maxfiles => { optional => 1 },
> + 'prune-backups' => { optional => 1 },
>   content => { optional => 1 },
>   format => { optional => 1 },
>   mkdir => { optional => 1 },
> diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
> index 82b0c5f..6abb24b 100644
> --- a/PVE/Storage/NFSPlugin.pm
> +++ b/PVE/Storage/NFSPlugin.pm
> @@ -79,9 +79,10 @@ sub options {
>   path => { fixed => 1 },
>   server => { fixed => 1 },
>   export => { fixed => 1 },
> -nodes => { optional => 1 },
> + nodes => { optional => 1 },
>   disable => { optional => 1 },
> -maxfiles => { optional => 1 },
> + maxfiles => { optional => 1 },
> + 'prune-backups' => { optional => 1 },
>   options => { optional => 1 },
>   content => { optional => 1 },
>   format => { optional => 1 },
> diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
> index 65696f4..fba4b2b 100644
> --- a/PVE/Storage/PBSPlugin.pm
> +++ b/PVE/Storage/PBSPlugin.pm
> @@ -50,6 +50,7 @@ sub options {
>   username => { optional => 1 },
>   password => { optional => 1},
>   maxfiles => { optional => 1 },
> + 'prune-backups' => { optional => 1 },
>   fingerprint => { optional => 1 },
>  };
>  }
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index 595af54..1ba1b99 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -10,7 +10,7 @@ use File::Basename;
>  use File::stat qw();
>  
>  use PVE::Tools qw(run_command);
> -use PVE::JSONSchema qw(get_standard_option);
> +use PVE::JSONSchema qw(get_standard_option register_standard_option);
>  use PVE::Cluster qw(cfs_register_file);
>  
>  use JSON;
> @@ -43,6 +43,54 @@ cfs_register_file ('storage.cfg',
>  sub { __PACKAGE__->parse_config(@_); },
>  sub { __PACKAGE__->write_config(@_); });
>  
> +my %prune_option = (
> +optional => 1,
> +type => 'number', minimum => '1',

shouldn't this be type 'integer'?

I am not sure whether minimum '1' is a good choice - what if I want to 
explicitly disable a certain keep-* option instead of just keeping it 
blank? also see next patch..

> +format_descritipn => 'N',

s/descritipn/description

> +);
> +
> +my $prune_backups_format = {
> + 'keep-last' => {
> + %prune_option,
> + description => 'Keep the last  backups.',
> + },
> + 'keep-hourly' => {
> + %prune_option,
> + description => 'Keep backups 

Re: [pve-devel] [PATCH xtermjs] termproxy: rewrite in rust

2020-07-07 Thread Fabian Grünbichler
On July 7, 2020 6:42 am, Dietmar Maurer wrote:
>> so we have a 'termproxy' crate+binary and a binary package with name
>> 'pve-xtermjs'
> 
> This is quite confusing ...

well, it replaces a 'pve-xtermjs' binary package that ships a 'termproxy'
binary (/CLIHandler). the alternative is to bump the old pve-xtermjs to 
become a transitional package depending on a new 'termproxy' 
(pve-termproxy? proxmox-termproxy?) package that ships the termproxy 
binary and breaks+replaces the old pve-xtermjs package..



Re: [pve-devel] [PATCH v3 qemu-server 3/3] fix #2671: include CPU format in man page again

2020-07-06 Thread Fabian Grünbichler
this looks good, waiting for pve-common bump before applying this with a 
versioned dependency.

On June 25, 2020 1:35 pm, Stefan Reiter wrote:
> Use the new register_format(3) call to use a validator (instead of a
> parser) for 'pve-(vm-)?cpu-conf'. This way the $cpu_fmt hash can be used for
> generating the documentation, while still applying the same verification
> rules as before.
> 
> Since the function no longer parses but only verifies, the parsing in
> print_cpu_device/get_cpu_options has to go via JSONSchema directly.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  PVE/QemuServer/CPUConfig.pm | 56 -
>  1 file changed, 18 insertions(+), 38 deletions(-)
> 
> diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
> index 6250591..8ed898c 100644
> --- a/PVE/QemuServer/CPUConfig.pm
> +++ b/PVE/QemuServer/CPUConfig.pm
> @@ -153,6 +153,7 @@ my $cpu_fmt = {
>  'phys-bits' => {
>   type => 'string',
>   format => 'pve-phys-bits',
> + format_description => '8-64|host',
>   description => "The physical memory address bits that are reported to"
>. " the guest OS. Should be smaller or equal to the 
> host's."
>. " Set to 'host' to use value from host CPU, but note 
> that"
> @@ -182,57 +183,36 @@ sub parse_phys_bits {
>  
>  # $cpu_fmt describes both the CPU config passed as part of a VM config, as 
> well
>  # as the definition of a custom CPU model. There are some slight differences
> -# though, which we catch in the custom verification function below.
> -PVE::JSONSchema::register_format('pve-cpu-conf', \&parse_cpu_conf_basic);
> -sub parse_cpu_conf_basic {
> -my ($cpu_str, $noerr) = @_;
> -
> -my $cpu = eval { PVE::JSONSchema::parse_property_string($cpu_fmt, 
> $cpu_str) };
> -if ($@) {
> -die $@ if !$noerr;
> -return undef;
> -}
> +# though, which we catch in the custom validation functions below.
> +PVE::JSONSchema::register_format('pve-cpu-conf', $cpu_fmt, 
> \&validate_cpu_conf);
> +sub validate_cpu_conf {
> +my ($cpu) = @_;
>  
>  # required, but can't be forced in schema since it's encoded in section
>  # header for custom models
> -if (!$cpu->{cputype}) {
> - die "CPU is missing cputype\n" if !$noerr;
> - return undef;
> -}
> -
> -return $cpu;
> +die "CPU is missing cputype\n" if !$cpu->{cputype};
>  }
> +PVE::JSONSchema::register_format('pve-vm-cpu-conf', $cpu_fmt, 
> \&validate_vm_cpu_conf);
> +sub validate_vm_cpu_conf {
> +my ($cpu) = @_;
>  
> -PVE::JSONSchema::register_format('pve-vm-cpu-conf', \&parse_vm_cpu_conf);
> -sub parse_vm_cpu_conf {
> -my ($cpu_str, $noerr) = @_;
> -
> -my $cpu = parse_cpu_conf_basic($cpu_str, $noerr);
> -return undef if !$cpu;
> +validate_cpu_conf($cpu);
>  
>  my $cputype = $cpu->{cputype};
>  
>  # a VM-specific config is only valid if the cputype exists
>  if (is_custom_model($cputype)) {
> - eval { get_custom_model($cputype); };
> - if ($@) {
> - die $@ if !$noerr;
> - return undef;
> - }
> + # dies on unknown model
> + get_custom_model($cputype);
>  } else {
> - if (!defined($cpu_vendor_list->{$cputype})) {
> - die "Built-in cputype '$cputype' is not defined (missing 'custom-' 
> prefix?)\n" if !$noerr;
> - return undef;
> - }
> + die "Built-in cputype '$cputype' is not defined (missing 'custom-' 
> prefix?)\n"
> + if !defined($cpu_vendor_list->{$cputype});
>  }
>  
>  # in a VM-specific config, certain properties are limited/forbidden
>  
> -if ($cpu->{flags} && $cpu->{flags} !~ 
> m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/) {
> - die "VM-specific CPU flags must be a subset of: @{[join(', ', 
> @supported_cpu_flags)]}\n"
> - if !$noerr;
> - return undef;
> -}
> +die "VM-specific CPU flags must be a subset of: @{[join(', ', 
> @supported_cpu_flags)]}\n"
> + if ($cpu->{flags} && $cpu->{flags} !~ 
> m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/);
>  
>  die "Property 'reported-model' not allowed in VM-specific CPU config.\n"
>   if defined($cpu->{'reported-model'});
> @@ -369,7 +349,7 @@ sub print_cpu_device {
>  my $kvm = $conf->{kvm} // 1;
>  my $cpu = $kvm ? "kvm64" : "qemu64";
>  if (my $cputype = $conf->{cpu}) {
> - my $cpuconf = parse_cpu_conf_basic($cputype)
> + my $cpuconf = PVE::JSONSchema::parse_property_string('pve-vm-cpu-conf', 
> $cputype)
>   or die "Cannot parse cpu description: $cputype\n";
>   $cpu = $cpuconf->{cputype};
>  
> @@ -481,7 +461,7 @@ sub get_cpu_options {
>  my $custom_cpu;
>  my $hv_vendor_id;
>  if (my $cpu_prop_str = $conf->{cpu}) {
> - $cpu = parse_vm_cpu_conf($cpu_prop_str)
> + $cpu = PVE::JSONSchema::parse_property_string('pve-vm-cpu-conf', 
> $cpu_prop_str)
>   or die "Cannot parse cpu description: $cpu_prop_str\n";
>  
>   $cputype = $cpu->{cputype};
> 

[pve-devel] applied: [PATCH v3 common 2/3] JSONSchema: use validator in print_property_string too

2020-07-06 Thread Fabian Grünbichler
On June 25, 2020 1:35 pm, Stefan Reiter wrote:
> Suggested-by: Fabian Grünbichler 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/JSONSchema.pm | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index f987006..59a2b5a 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PVE/JSONSchema.pm
> @@ -1878,9 +1878,12 @@ sub generate_typetext {
>  sub print_property_string {
>  my ($data, $format, $skip, $path) = @_;
>  
> +my $validator;
>  if (ref($format) ne 'HASH') {
>   my $schema = get_format($format);
>   die "not a valid format: $format\n" if !$schema;
> + # named formats can have validators attached
> + $validator = $format_validators->{$format};
>   $format = $schema;
>  }
>  
> @@ -1890,6 +1893,8 @@ sub print_property_string {
>   raise "format error", errors => $errors;
>  }
>  
> +$data = $validator->($data) if $validator;
> +
>  my ($default_key, $keyAliasProps) = &$find_schema_default_key($format);
>  
>  my $res = '';
> -- 
> 2.20.1
> 
> 



[pve-devel] applied: [PATCH v3 common 1/3] JSONSchema: add format validator support and cleanup check_format

2020-07-06 Thread Fabian Grünbichler
On June 25, 2020 1:35 pm, Stefan Reiter wrote:
> Adds a third, optional parameter to register_format that allows specifying
> a function that will be called after parsing and can validate the parsed
> data. A validator should die on failed validation, and can also change the
> parsed object by returning a modified version of it.
> 
> This is useful so one can register a format with its hash, thus allowing
> documentation to be generated automatically, while still enforcing certain
> validation rules.
> 
> The validator only needs to be called in parse_property_string, since
> check_format always calls parse_property_string if there is a
> possibility of a validator existing at all. parse_property_string should
> then be called with named formats for best effect, as only then can
> validators be used.
> 
> Clean up 'check_format' as well (which pretty much amounts to a rewrite).
> No existing functionality is intentionally changed.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/JSONSchema.pm | 87 +++
>  1 file changed, 55 insertions(+), 32 deletions(-)
> 
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index 84fb694..f987006 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PVE/JSONSchema.pm
> @@ -121,19 +121,26 @@ register_standard_option('pve-snapshot-name', {
>  });
>  
>  my $format_list = {};
> +my $format_validators = {};
>  
>  sub register_format {
> -my ($format, $code) = @_;
> +my ($name, $format, $validator) = @_;
>  
> -die "JSON schema format '$format' already registered\n"
> - if $format_list->{$format};
> +die "JSON schema format '$name' already registered\n"
> + if $format_list->{$name};
>  
> -$format_list->{$format} = $code;
> +if ($validator) {
> + die "A \$validator function can only be specified for hash-based 
> formats\n"
> + if ref($format) ne 'HASH';
> + $format_validators->{$name} = $validator;
> +}
> +
> +$format_list->{$name} = $format;
>  }
>  
>  sub get_format {
> -my ($format) = @_;
> -return $format_list->{$format};
> +my ($name) = @_;
> +return $format_list->{$name};
>  }
>  
>  my $renderer_hash = {};
> @@ -647,39 +654,47 @@ sub pve_verify_tfa_secret {
>  sub check_format {
>  my ($format, $value, $path) = @_;
>  
> -return parse_property_string($format, $value, $path) if ref($format) eq 
> 'HASH';
> +if (ref($format) eq 'HASH') {
> + # hash ref cannot have validator/list/opt handling attached
> + return parse_property_string($format, $value, $path);
> +}
> +
> +if (ref($format) eq 'CODE') {
> + # we are the (sole, old-style) validator
> + return $format->($value);
> +}
> +
>  return if $format eq 'regex';
>  
> -if ($format =~ m/^(.*)-a?list$/) {
> +my $parsed;
> +$format =~ m/^(.*?)(?:-a?(list|opt))?$/;
> +my ($format_name, $format_type) = ($1, $2 // 'none');
> +my $registered = get_format($format_name);
> +die "undefined format '$format'\n" if !$registered;
>  
> - my $code = $format_list->{$1};
> -
> - die "undefined format '$format'\n" if !$code;
> +die "'-$format_type' format must have code ref, not hash\n"
> + if $format_type ne 'none' && ref($registered) ne 'CODE';
>  
> +if ($format_type eq 'list') {
>   # Note: we allow empty lists
>   foreach my $v (split_list($value)) {
> - &$code($v);
> + $parsed = $registered->($v);
>   }
> -
> -} elsif ($format =~ m/^(.*)-opt$/) {
> -
> - my $code = $format_list->{$1};
> -
> - die "undefined format '$format'\n" if !$code;
> -
> - return if !$value; # allow empty string
> -
> - &$code($value);
> -
> +} elsif ($format_type eq 'opt') {
> + $parsed = $registered->($value) if $value;
> } else {
> -
> - my $code = $format_list->{$format};
> -
> - die "undefined format '$format'\n" if !$code;
> -
> - return parse_property_string($code, $value, $path) if ref($code) eq 
> 'HASH';
> - &$code($value);
> + if (ref($registered) eq 'HASH') {
> + # Note: this is the only case where a validator function could be
> + # attached, hence it's safe to handle that in parse_property_string.
> + # We do however have to call it with $format_name instead of
> + # $registered, so it knows about the name (and thus any validators).
> + $parsed = parse_property_string($format, $value, $path);
> + } else {
> + $parsed = $registered->($value);
> + }
>  }
> +
> +return $parsed;
>  }
>  
>  sub parse_size {
> @@ -735,9 +750,16 @@ sub parse_property_string {
>  $additional_properties = 0 if !defined($additional_properties);
>  
>  # Support named formats here, too:
> +my $validator;
>  if (!ref($format)) {
> - if (my $desc = $format_list->{$format}) {
> - $format = $desc;
> + if (my $reg = get_format($format)) {
> + die "parse_property_string only accepts hash 

Re: [pve-devel] [PATCH qemu] PVE-Backup: remove dirty-bitmap in pvebackup_complete_cb for failed jobs

2020-07-02 Thread Fabian Grünbichler
it should also be possible to keep the old bitmap (and associated backup 
checksum) in this case? this is what bitmap-mode on-success is supposed 
to do, but maybe errors are not triggering the right code paths?

On July 1, 2020 2:17 pm, Dietmar Maurer wrote:
> Note: We remove the device from di_list, so pvebackup_co_cleanup does
> not handle this case.
> ---
>  pve-backup.c | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/pve-backup.c b/pve-backup.c
> index 61a8b4d2a4..1c4f6cf9e0 100644
> --- a/pve-backup.c
> +++ b/pve-backup.c
> @@ -318,6 +318,12 @@ static void pvebackup_complete_cb(void *opaque, int ret)
>  // remove self from job queue
>  backup_state.di_list = g_list_remove(backup_state.di_list, di);
>  
> +if (di->bitmap && ret < 0) {
> +// on error or cancel we cannot ensure synchronization of dirty
> +// bitmaps with backup server, so remove all and do full backup next
> +bdrv_release_dirty_bitmap(di->bitmap);
> +}
> +
>  g_free(di);
>  
>  qemu_mutex_unlock(&backup_state.backup_mutex);
> -- 
> 2.20.1
> 



Re: [pve-devel] [PATCH container] fix #2820: block adding new volume with same id if it's pending delete

2020-07-01 Thread Fabian Grünbichler
On July 1, 2020 2:05 pm, Thomas Lamprecht wrote:
> On 01.07.20 09:11, Fabian Grünbichler wrote:
>> - we can actually just put the new mpX into the pending queue, and 
>>   remove the entry from the pending deletion queue? (it's hotplugging 
>>   that is the problem, not queuing the pending change)
> 
> Even if we could I'm not sure I want to be able to add a new mpX as pending
> if the old is still pending its deletion. But, tbh, I did not looked at 
> details
> so I may missing something..

well, the sequence is

- delete mp0 (queued)
- set a new mp0 (queued)

just like a general

- delete foo  (queued)
- set foo (queued)

where the set removes the queued deletion. in the case of mp, applying 
that pending change should then add the old volume ID as unused, but 
that IMHO does not change the semantics of '(queuing a) set overrides 
earlier queued delete'.

but this is broken for regular hotplug without deletion as well, setting 
mpX with a new volume ID if the slot is already used does not queue it 
as pending change, but
- mounts the new volume ID in addition to the old one
- adds the old volume ID as unused, even though it is still mounted in 
  the container

so this is broken in more ways than just what I initially found..



Re: [pve-devel] [PATCH v2 container] fix #2820: block adding new volume with same id if it's pending delete

2020-07-01 Thread Fabian Grünbichler
On July 1, 2020 11:56 am, Oguz Bektas wrote:
> if a user tries to add a mountpoint mpX which is waiting for a pending
> delete, hotplugging a new mountpoint with name mpX before the
> previous one is detached should not be allowed.
> 
> do a simple check to see if the given mpX is already in the pending delete 
> section.
> 
> Signed-off-by: Oguz Bektas 
> ---
> 
> v1->v2:
> * use exact matching

this is still not exact matching. split the list, look for an exact 
match
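e.g. (sketch - note that entries in the pending delete list can carry a
leading '!' force marker, which needs to be stripped for the comparison):

    my @pending_deletions = map { (my $o = $_) =~ s/^!//; $o }
	PVE::Tools::split_list($conf->{pending}->{delete} // '');
    die "$opt is in pending delete queue, please choose another mount point ID\n"
	if grep { $_ eq $opt } @pending_deletions;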

> * change full stop to comma
> * s/mountpoint/mount point/

you did not address my question on why you die, instead of just blocking 
hotplugging..

> 
> 
>  src/PVE/LXC/Config.pm | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> index 0a28380..f582eb8 100644
> --- a/src/PVE/LXC/Config.pm
> +++ b/src/PVE/LXC/Config.pm
> @@ -974,6 +974,9 @@ sub update_pct_config {
>   my $value = $param->{$opt};
>   if ($opt =~ m/^mp(\d+)$/ || $opt eq 'rootfs') {
>   $class->check_protection($conf, "can't update CT $vmid drive 
> '$opt'");
> + if ($conf->{pending}->{delete} =~ m/$opt\b/) {
> + die "${opt} is in pending delete queue, please choose another 
> mount point ID\n";
> + }
>   my $mp = $class->parse_volume($opt, $value);
>   $check_content_type->($mp) if ($mp->{type} eq 'volume');
>   } elsif ($opt eq 'hookscript') {
> -- 
> 2.20.1
> 



[pve-devel] [PATCH vncterm 1/2] libvncserver: update sources to 0.9.13

2020-07-01 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 LibVNCServer-0.9.11.tar.gz | Bin 1413739 -> 0 bytes
 LibVNCServer-0.9.13.tar.gz | Bin 0 -> 567491 bytes
 2 files changed, 0 insertions(+), 0 deletions(-)
 delete mode 100644 LibVNCServer-0.9.11.tar.gz
 create mode 100644 LibVNCServer-0.9.13.tar.gz

diff --git a/LibVNCServer-0.9.11.tar.gz b/LibVNCServer-0.9.11.tar.gz
deleted file mode 100644
index c33564a..000
Binary files a/LibVNCServer-0.9.11.tar.gz and /dev/null differ
diff --git a/LibVNCServer-0.9.13.tar.gz b/LibVNCServer-0.9.13.tar.gz
new file mode 100644
index 000..e0d6242
Binary files /dev/null and b/LibVNCServer-0.9.13.tar.gz differ
-- 
2.20.1




[pve-devel] [PATCH vncterm 2/2] build: rebase on libvncserver 0.9.13

2020-07-01 Thread Fabian Grünbichler
upstream switched to cmake from autotools.

it is possible to drop systemd via a build option now, so use that
instead of patching.

Signed-off-by: Fabian Grünbichler 
---
 Makefile  |  7 +++---
 vncpatches/tls-auth-pluging.patch | 41 +++
 debian/control|  3 ++-
 vncpatches/series |  1 -
 4 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/Makefile b/Makefile
index a84205e..da654d9 100644
--- a/Makefile
+++ b/Makefile
@@ -4,11 +4,11 @@ include /usr/share/dpkg/architecture.mk
 PACKAGE=vncterm
 GITVERSION:=$(shell cat .git/refs/heads/master)
 
-VNCVER=0.9.11
+VNCVER=0.9.13
 VNCREL=LibVNCServer-${VNCVER}
 VNCDIR=libvncserver-${VNCREL}
 VNCSRC=${VNCREL}.tar.gz
-VNCLIB=${VNCDIR}/libvncserver/.libs/libvncserver.a
+VNCLIB=${VNCDIR}/libvncserver.a
 
 DEB=${PACKAGE}_${DEB_VERSION_UPSTREAM_REVISION}_${DEB_BUILD_ARCH}.deb
 
@@ -32,8 +32,7 @@ ${VNCLIB}: ${VNCSRC}
tar xf ${VNCSRC}
ln -s ../vncpatches ${VNCDIR}/patches
cd ${VNCDIR}; quilt push -a
-   cd ${VNCDIR}; ./autogen.sh --without-ssl --without-websockets 
--without-tightvnc-filetransfer;
-   cd ${VNCDIR}; $(MAKE)
+   cd ${VNCDIR}; cmake -D WITH_GNUTLS=OFF -D WITH_OPENSSL=OFF -D 
WITH_WEBSOCKETS=OFF -D WITH_SYSTEMD=OFF -D WITH_TIGHTVNC_FILETRANSFER=OFF -D 
WITH_GCRYPT=OFF -D BUILD_SHARED_LIBS=OFF .; cmake --build .
 
 vncterm: vncterm.c wchardata.c $(VNCLIB)
$(CC) $(CPPFLAGS) $(CFLAGS) -o $@ $^ $(VNC_LIBS)
diff --git a/vncpatches/tls-auth-pluging.patch 
b/vncpatches/tls-auth-pluging.patch
index 17a8c47..837612f 100644
--- a/vncpatches/tls-auth-pluging.patch
+++ b/vncpatches/tls-auth-pluging.patch
@@ -1,23 +1,22 @@
-Index: vnc/libvncserver/auth.c
+Index: libvncserver-LibVNCServer-0.9.13/libvncserver/auth.c
 ===
-@@ -300,8 +300,9 @@
- int32_t securityType = rfbSecTypeInvalid;
+--- libvncserver-LibVNCServer-0.9.13.orig/libvncserver/auth.c
++++ libvncserver-LibVNCServer-0.9.13/libvncserver/auth.c
+@@ -301,7 +301,8 @@ rfbAuthNewClient(rfbClientPtr cl)
  
  if (!cl->screen->authPasswdData || cl->reverseConnection) {
--  /* chk if this condition is valid or not. */
+   /* chk if this condition is valid or not. */
 -  securityType = rfbSecTypeNone;
-+  /* chk if this condition is valid or not. */
 +  /* we disable anonymous auth */
 +  // securityType = rfbSecTypeNone;
  } else if (cl->screen->authPasswdData) {
securityType = rfbSecTypeVncAuth;
  }
-Index: vnc/newterm/Makefile.am
-Index: vnc/libvncserver/sockets.c
+Index: libvncserver-LibVNCServer-0.9.13/libvncserver/sockets.c
 ===
---- vnc.orig/libvncserver/sockets.c 2011-01-20 16:42:41.0 +0100
-+++ vnc/libvncserver/sockets.c 2011-01-21 10:20:03.0 +0100
-@@ -613,7 +613,11 @@ rfbReadExactTimeout(rfbClientPtr cl, char* buf, int len, 
int timeout)
+--- libvncserver-LibVNCServer-0.9.13.orig/libvncserver/sockets.c
++++ libvncserver-LibVNCServer-0.9.13/libvncserver/sockets.c
+@@ -638,7 +638,11 @@ rfbReadExactTimeout(rfbClientPtr cl, cha
  n = read(sock, buf, len);
  }
  #else
@@ -30,7 +29,7 @@ Index: vnc/libvncserver/sockets.c
  #endif
  
  if (n > 0) {
-@@ -801,7 +805,11 @@ rfbWriteExact(rfbClientPtr cl,
+@@ -826,7 +830,11 @@ rfbWriteExact(rfbClientPtr cl,
n = rfbssl_write(cl, buf, len);
else
  #endif
@@ -42,21 +41,21 @@ Index: vnc/libvncserver/sockets.c
  
  if (n > 0) {
  
-Index: vnc/rfb/rfb.h
+Index: libvncserver-LibVNCServer-0.9.13/rfb/rfb.h
 ===
---- vnc.orig/rfb/rfb.h 2011-01-20 16:36:06.0 +0100
-+++ vnc/rfb/rfb.h  2011-01-21 06:44:22.0 +0100
-@@ -397,6 +397,9 @@
- struct _rfbStatList *Next;
- } rfbStatList;
+--- libvncserver-LibVNCServer-0.9.13.orig/rfb/rfb.h
++++ libvncserver-LibVNCServer-0.9.13/rfb/rfb.h
+@@ -411,6 +411,9 @@ typedef struct _rfbStatList {
+ typedef struct _rfbSslCtx rfbSslCtx;
+ typedef struct _wsCtx wsCtx;
  
 +typedef ssize_t (*sock_read_fn_t)(struct _rfbClientRec *cl, void *buf, size_t 
count);
 +typedef ssize_t (*sock_write_fn_t)(struct _rfbClientRec *cl, const void *buf, 
size_t count);
 +
  typedef struct _rfbClientRec {
-   
- /* back pointer to the screen */
-@@ -417,6 +420,10 @@
+ 
+ /** back pointer to the screen */
+@@ -431,6 +434,10 @@ typedef struct _rfbClientRec {
  void* clientData;
  ClientGoneHookPtr clientGoneHook;
  
@@ -64,6 +63,6 @@ Index: vnc/rfb/rfb.h
 +sock_read_fn_t sock_read_fn;
 +sock_read_fn_t sock_write_fn;
 +
- SOCKET sock;
+ rfbSocket sock;
  char *host;
  
diff --git a/debian/control b/debian/control
index 89c0f5d..cb6c7b2 100644
--- a/debian/control
+++ b/debian/control
@@ -1,6 +1,7 @@
 Source: vncterm
 Maintainer: Proxmox Support

[pve-devel] applied: [PATCH firewall] ebtables: keep policy of custom chains

2020-07-01 Thread Fabian Grünbichler
with bug # added to commit subject. sorry for the delay, and thanks for 
the fix!

On June 2, 2020 10:06 am, Stoiko Ivanov wrote:
> currently all ebtables chains are created with a hardcoded policy of ACCEPT.
> This patch changes the functionality to store the configured policy of a
> chain while reading the 'ebtables-save' output and uses this policy when
> creating the command list.
> 
> This is only relevant for ebtables chains not generated by pve-firewall (the
> ones having an action of 'ignore' in the status-hash).
> 
> Reported on the pve-user list:
> https://pve.proxmox.com/pipermail/pve-user/2020-May/171731.html
> 
> Minimally tested with the example from the thread.
> 
> Signed-off-by: Stoiko Ivanov 
> ---
>  src/PVE/Firewall.pm | 7 +--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
> index a2105e5..97670fd 100644
> --- a/src/PVE/Firewall.pm
> +++ b/src/PVE/Firewall.pm
> @@ -1944,9 +1944,10 @@ sub ebtables_get_chains {
>   my $line = shift;
>   return if $line =~ m/^#/;
>   return if $line =~ m/^\s*$/;
> - if ($line =~ m/^:(\S+)\s\S+$/) {
> + if ($line =~ m/^:(\S+)\s(ACCEPT|DROP|RETURN)$/) {
>   # Make sure we know chains exist even if they're empty.
>   $chains->{$1} //= [];
> + $res->{$1}->{policy} = $2;
>   } elsif ($line =~ m/^(?:\S+)\s(\S+)\s(?:\S+).*/) {
>   my $chain = $1;
>   $line =~ s/\s+$//;
> @@ -4063,6 +4064,7 @@ sub get_ruleset_status {
>   if (defined($change_only_regex)) {
>   $action = 'ignore' if ($chain !~ m/$change_only_regex/);
>   $statushash->{$chain}->{rules} = $active_chains->{$chain}->{rules};
> + $statushash->{$chain}->{policy} = 
> $active_chains->{$chain}->{policy};
>   $sig = $sig->{sig};
>   }
>   $statushash->{$chain}->{action} = $action;
> @@ -4163,7 +4165,8 @@ sub get_ebtables_cmdlist {
>  my $pve_include = 0;
>  foreach my $chain (sort keys %$statushash) {
>   next if ($statushash->{$chain}->{action} eq 'delete');
> - $cmdlist .= ":$chain ACCEPT\n";
> + my $policy = $statushash->{$chain}->{policy} // 'ACCEPT';
> + $cmdlist .= ":$chain $policy\n";
>   $pve_include = 1 if ($chain eq 'PVEFW-FORWARD');
>  }
>  
> -- 
> 2.20.1
> 
> 



[pve-devel] applied: [PATCH pve-network 2/2] build: fix erroneous install with empty DESTDIR

2020-07-01 Thread Fabian Grünbichler
dh calls make in the dh_auto_build step without setting DESTDIR, so the
missing empty default target meant that we'd install all the files to
the build system during dh_auto_build, and then install them again to
the tmp build dir during dh_auto_install. obviously the former is not
something we want to do ;)

Signed-off-by: Fabian Grünbichler 
---
noticed this while building as non-root..

 PVE/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/Makefile b/PVE/Makefile
index 1fb961d..26e01a4 100644
--- a/PVE/Makefile
+++ b/PVE/Makefile
@@ -1,3 +1,5 @@
+all:
+
 .PHONY: install
 install:
make -C Network install
-- 
2.20.1




[pve-devel] applied: [PATCH pve-network 1/2] remove more variable declarations with post-ifs

2020-07-01 Thread Fabian Grünbichler
usage of $mtu is always guarded by an if itself anyway, so all of these
are redundant post-ifs and can be removed.

Signed-off-by: Fabian Grünbichler 
---
 PVE/Network/SDN/Zones/QinQPlugin.pm | 4 ++--
 PVE/Network/SDN/Zones/VlanPlugin.pm | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/PVE/Network/SDN/Zones/QinQPlugin.pm 
b/PVE/Network/SDN/Zones/QinQPlugin.pm
index c0b2402..c8dd0ab 100644
--- a/PVE/Network/SDN/Zones/QinQPlugin.pm
+++ b/PVE/Network/SDN/Zones/QinQPlugin.pm
@@ -55,7 +55,7 @@ sub generate_sdn_config {
 die "can't find bridge $bridge" if !-d "/sys/class/net/$bridge";
 
 my $vlan_aware = 
PVE::Tools::file_read_firstline("/sys/class/net/$bridge/bridge/vlan_filtering");
-my $is_ovs = 1 if !-d "/sys/class/net/$bridge/brif";
+my $is_ovs = !-d "/sys/class/net/$bridge/brif";
 
 my @iface_config = ();
 my $vnet_bridge_ports = "";
@@ -177,7 +177,7 @@ sub status {
 }
 
 my $vlan_aware = 
PVE::Tools::file_read_firstline("/sys/class/net/$bridge/bridge/vlan_filtering");
-my $is_ovs = 1 if !-d "/sys/class/net/$bridge/brif";
+my $is_ovs = !-d "/sys/class/net/$bridge/brif";
 
 my $tag = $vnet->{tag};
 my $vnet_uplink = "ln_".$vnetid;
diff --git a/PVE/Network/SDN/Zones/VlanPlugin.pm 
b/PVE/Network/SDN/Zones/VlanPlugin.pm
index 8e99fc4..dedb32c 100644
--- a/PVE/Network/SDN/Zones/VlanPlugin.pm
+++ b/PVE/Network/SDN/Zones/VlanPlugin.pm
@@ -44,11 +44,11 @@ sub generate_sdn_config {
 die "can't find bridge $bridge" if !-d "/sys/class/net/$bridge";
 
 my $vlan_aware = 
PVE::Tools::file_read_firstline("/sys/class/net/$bridge/bridge/vlan_filtering");
-my $is_ovs = 1 if !-d "/sys/class/net/$bridge/brif";
+my $is_ovs = !-d "/sys/class/net/$bridge/brif";
 
 my $tag = $vnet->{tag};
 my $alias = $vnet->{alias};
-my $mtu = $plugin_config->{mtu} if $plugin_config->{mtu};
+my $mtu = $plugin_config->{mtu};
 
 my $vnet_uplink = "ln_".$vnetid;
 my $vnet_uplinkpeer = "pr_".$vnetid;
@@ -142,7 +142,7 @@ sub status {
 }
 
 my $vlan_aware = 
PVE::Tools::file_read_firstline("/sys/class/net/$bridge/bridge/vlan_filtering");
-my $is_ovs = 1 if !-d "/sys/class/net/$bridge/brif";
+my $is_ovs = !-d "/sys/class/net/$bridge/brif";
 
 my $tag = $vnet->{tag};
 my $vnet_uplink = "ln_".$vnetid;
-- 
2.20.1




[pve-devel] applied: [PATCH pve-network] use PVE::Tools::split_list for ip lists

2020-07-01 Thread Fabian Grünbichler
with the following applied on-top, since we don't want declarations 
combined with post-if:

my $foo = 'bla' if $bar;

is undefined behaviour[1].

-8<-
diff --git a/PVE/Network/SDN/Controllers/EvpnPlugin.pm 
b/PVE/Network/SDN/Controllers/EvpnPlugin.pm
index 9321af1..d82de2a 100644
--- a/PVE/Network/SDN/Controllers/EvpnPlugin.pm
+++ b/PVE/Network/SDN/Controllers/EvpnPlugin.pm
@@ -47,11 +47,13 @@ sub options {
 sub generate_controller_config {
 my ($class, $plugin_config, $controller, $id, $uplinks, $config) = @_;
 
-my @peers = PVE::Tools::split_list($plugin_config->{'peers'}) if 
$plugin_config->{'peers'};
+my @peers;
+@peers = PVE::Tools::split_list($plugin_config->{'peers'}) if 
$plugin_config->{'peers'};
 
 my $asn = $plugin_config->{asn};
 my $gatewaynodes = $plugin_config->{'gateway-nodes'};
-my @gatewaypeers = 
PVE::Tools::split_list($plugin_config->{'gateway-external-peers'}) if 
$plugin_config->{'gateway-external-peers'};
+my @gatewaypeers;
+@gatewaypeers = 
PVE::Tools::split_list($plugin_config->{'gateway-external-peers'}) if 
$plugin_config->{'gateway-external-peers'};
 
 return if !$asn;
 
diff --git a/PVE/Network/SDN/Zones/VxlanPlugin.pm 
b/PVE/Network/SDN/Zones/VxlanPlugin.pm
index 5f17e15..e8cf1bd 100644
--- a/PVE/Network/SDN/Zones/VxlanPlugin.pm
+++ b/PVE/Network/SDN/Zones/VxlanPlugin.pm
@@ -50,7 +50,8 @@ sub generate_sdn_config {
 my $ipv6 = $vnet->{ipv6};
 my $mac = $vnet->{mac};
 my $multicastaddress = $plugin_config->{'multicast-address'};
-my @peers = PVE::Tools::split_list($plugin_config->{'peers'}) if 
$plugin_config->{'peers'};
+my @peers;
+@peers = PVE::Tools::split_list($plugin_config->{'peers'}) if 
$plugin_config->{'peers'};
 my $vxlan_iface = "vxlan_$vnetid";
 
 die "missing vxlan tag" if !$tag;
->8-

I don't know enough about the code paths here to quickly evaluate this 
myself, but I have a feeling that some checks for undef are missing in 
these files, e.g.:

PVE::Network::SDN::Controllers::EvpnPlugin::generate_controller_config()

@peers and @gatewaypeers might be empty (or if not, then the post-if is 
unnecessary)
@peers is passed to 
PVE::Network::SDN::Zones::Plugin::find_local_ip_interface_peers
if @peers is empty, this returns undef as well ($ifaceip, $interface)
$ifaceip is put into a string inside @controller_config without checks

something else that confused me at first glance in that method:

@controller_config is re-used multiple times. it would probably be a 
good idea to find a better name for the three instances, and make that 
three variables ;)

1: 
https://git.proxmox.com/?p=pve-container.git;a=commitdiff;h=9de0505c772f7c382c82d9bfb170b3e0664af9ed

On June 30, 2020 2:25 pm, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/Network/SDN/Controllers/EvpnPlugin.pm | 4 ++--
>  PVE/Network/SDN/Zones/EvpnPlugin.pm   | 2 +-
>  PVE/Network/SDN/Zones/VxlanPlugin.pm  | 2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/Network/SDN/Controllers/EvpnPlugin.pm 
> b/PVE/Network/SDN/Controllers/EvpnPlugin.pm
> index 79ecaeb..9321af1 100644
> --- a/PVE/Network/SDN/Controllers/EvpnPlugin.pm
> +++ b/PVE/Network/SDN/Controllers/EvpnPlugin.pm
> @@ -47,11 +47,11 @@ sub options {
>  sub generate_controller_config {
>  my ($class, $plugin_config, $controller, $id, $uplinks, $config) = @_;
>  
> -my @peers = split(',', $plugin_config->{'peers'}) if 
> $plugin_config->{'peers'};
> +my @peers = PVE::Tools::split_list($plugin_config->{'peers'}) if 
> $plugin_config->{'peers'};
>  
>  my $asn = $plugin_config->{asn};
>  my $gatewaynodes = $plugin_config->{'gateway-nodes'};
> -my @gatewaypeers = split(',', 
> $plugin_config->{'gateway-external-peers'}) if 
> $plugin_config->{'gateway-external-peers'};
> +my @gatewaypeers = 
> PVE::Tools::split_list($plugin_config->{'gateway-external-peers'}) if 
> $plugin_config->{'gateway-external-peers'};
>  
>  return if !$asn;
>  
> diff --git a/PVE/Network/SDN/Zones/EvpnPlugin.pm 
> b/PVE/Network/SDN/Zones/EvpnPlugin.pm
> index 95fbb64..b2f57ee 100644
> --- a/PVE/Network/SDN/Zones/EvpnPlugin.pm
> +++ b/PVE/Network/SDN/Zones/EvpnPlugin.pm
> @@ -52,7 +52,7 @@ sub generate_sdn_config {
>  die "missing vxlan tag" if !$tag;
>  warn "vlan-aware vnet can't be enabled with evpn plugin" if 
> $vnet->{vlanaware};
>  
> -my @peers = split(',', $controller->{'peers'});
> +my @peers = PVE::Tools::split_list($controller->{'peers'});
>  my ($ifaceip, $iface) = 
> PVE::Network::SDN::Zones::Plugin::find_local_ip_interface_peers(\@peers);
>  
>  my $mtu = 1450;
> diff --git a/PVE/Network/SDN/Zones/VxlanPlugin.pm 
> b/PVE/Network/SDN/Zones/VxlanPlugin.pm
> index bc585c6..5f17e15 100644
> --- a/PVE/Network/SDN/Zones/VxlanPlugin.pm
> +++ b/PVE/Network/SDN/Zones/VxlanPlugin.pm
> @@ -50,7 +50,7 @@ sub generate_sdn_config {
>  my $ipv6 = $vnet->{ipv6};
>  my $mac = 

Re: [pve-devel] [PATCH container] fix #2820: block adding new volume with same id if it's pending delete

2020-07-01 Thread Fabian Grünbichler
On June 30, 2020 3:56 pm, Oguz Bektas wrote:
> do a simple check to see if our $opt is already in the delete section.
> 
> Signed-off-by: Oguz Bektas 
> ---
>  src/PVE/LXC/Config.pm | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> index 0a28380..237e2e5 100644
> --- a/src/PVE/LXC/Config.pm
> +++ b/src/PVE/LXC/Config.pm
> @@ -974,6 +974,9 @@ sub update_pct_config {
>   my $value = $param->{$opt};
>   if ($opt =~ m/^mp(\d+)$/ || $opt eq 'rootfs') {
>   $class->check_protection($conf, "can't update CT $vmid drive 
> '$opt'");
> + if ($conf->{pending}->{delete} =~ m/$opt/) {

this is incomplete:
- $conf->{pending} or $conf->{pending}->{delete} might be undef
- the matching is fuzzy (e.g., there might be a pending deletion of 
  mp10, and we are currently hotplugging mp1)
- we can actually just put the new mpX into the pending queue, and 
  remove the entry from the pending deletion queue? (it's hotplugging 
  that is the problem, not queuing the pending change)

> + die "${opt} is in pending delete queue. please select another 
> mountpoint ID\n";
> + }
>   my $mp = $class->parse_volume($opt, $value);
>   $check_content_type->($mp) if ($mp->{type} eq 'volume');
>   } elsif ($opt eq 'hookscript') {
> -- 
> 2.20.1
> 



Re: [pve-devel] [PATCH container] Move volume activation to vm_start

2020-06-26 Thread Fabian Grünbichler
On June 17, 2020 2:18 pm, Stoiko Ivanov wrote:
> currently all volumes for a container are activated in the pre-start hook,
> which runs in a separate mount namespace (lxc.monitor.unshare is set to 1
> in our container config).
> This leads to problems with ZFS, if a pool is imported by this call the
> filesystems are mounted only inside the containers mount namespace.
> 
> by running the volume activation inside vm_start, right before starting the
> container via systemctl there should be no regression when using the API/our
> CLI tools.
> 
> Starting a container manually using lxc-start is usually done for obtaining
> debug-logs (after starting failed with our tooling) - so the potential for
> regression in that case should also be small.
> 
> The $loopdevlist variable is not used anywhere in our codebase since 2015
> (da6298481ea4dfe7d894f42fa105cda015ebe5ce).
> 
> Tested by creating a zpool, two containers with 'onboot: 1' stored on it,
> `echo -n '' > /etc/zfs/zpool.cache`, `update-initramfs -k all -u`, reboot
> 
> Signed-off-by: Stoiko Ivanov 
> ---
> This should address multiple reports in the forum (e.g. [0,1,2]), about
> containers on ZFS not starting after a reboot (mitigated by setting the 
> correct
> cache-file, thus making the pool get imported through zfs-import-cache.service
> + zfs-mount.service)
> 
> Huge Thanks to Fabian and Wolfgang for providing input on this!
> 
> [0] 
> https://forum.proxmox.com/threads/zfs-mount-on-start-problem-segfault-at-0-error-4-in-libc-2-28-so-subvolumes-not-mounted.68519/
> [1] https://forum.proxmox.com/threads/kann-keine-container-mehr-starten.69975/
> [2] 
> https://forum.proxmox.com/threads/why-i-need-to-mount-zfs-volume-manualy-for-container-after-update-initramfs-usage.71008/
> 
> 
>  src/PVE/LXC.pm| 5 +
>  src/lxc-pve-prestart-hook | 5 -
>  2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
> index f3aca7a..db5b8ca 100644
> --- a/src/PVE/LXC.pm
> +++ b/src/PVE/LXC.pm
> @@ -2186,6 +2186,11 @@ sub vm_start {
>   close($fh);
>  }
>  
> +my $storage_cfg = PVE::Storage::config();
> +my $vollist = PVE::LXC::Config->get_vm_volumes($conf);
> +
> +PVE::Storage::activate_volumes($storage_cfg, $vollist);
> +
>  my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];
>  
>  PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'pre-start', 1);
> diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
> index ed25aa4..8823cad 100755
> --- a/src/lxc-pve-prestart-hook
> +++ b/src/lxc-pve-prestart-hook
> @@ -38,11 +38,6 @@ PVE::LXC::Tools::lxc_hook('pre-start', 'lxc', sub {
>  
>  my $storage_cfg = PVE::Storage::config();
>  
> -my $vollist = PVE::LXC::Config->get_vm_volumes($conf);
> -my $loopdevlist = PVE::LXC::Config->get_vm_volumes($conf, 'rootfs');
> -
> -PVE::Storage::activate_volumes($storage_cfg, $vollist);
> -

we are now missing an activate_volumes call for the following (unlikely, 
but possible) scenario:

pre-requisite:
running CT with pending mpX addition
pending volume needs activation (e.g., LVM)

trigger:
reboot (via API or within container)

expected outcome:
reboot, pending changes successfully applied, mpX properly mounted

actual outcome:
pending changes applied, CT fails to reboot

one way to trigger this is to:
add volume A to container
remove volume A again (now unused)
add volume B to container
start container
detach volume B (pending)
re-attach volume A to slot where volume B was/is
reboot container

maybe pve-container-stop-wrapper would be a good place for that 
activation?
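
i.e., a minimal sketch re-using the calls this patch moves into
vm_start (where exactly it should live is up for discussion):

    my $storage_cfg = PVE::Storage::config();
    my $vollist = PVE::LXC::Config->get_vm_volumes($conf);
    PVE::Storage::activate_volumes($storage_cfg, $vollist);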

there is also some other, unrelated issue with hotplugging newly created 
volumes over pending-detached volumes which I've filed as #2820.

>  my $rootdir = $vars->{ROOTFS_PATH};
>  
>  # Delete any leftover reboot-trigger file
> -- 
> 2.20.1
> 
> 


[pve-devel] applied: [PATCH node_config 1/1] node_config: Allow leading underscore in ACME aliases

2020-06-25 Thread Fabian Grünbichler
applied with rename follow-up, thanks!

On June 22, 2020 12:10 pm, Fabian Möller wrote:
> ---
>  PVE/NodeConfig.pm | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/NodeConfig.pm b/PVE/NodeConfig.pm
> index ad49e288..017f6b30 100644
> --- a/PVE/NodeConfig.pm
> +++ b/PVE/NodeConfig.pm
> @@ -25,6 +25,16 @@ PVE::JSONSchema::register_format('pve-acme-domain', sub {
>  die "value '$domain' does not look like a valid domain name!\n";
>  });
>  
> +PVE::JSONSchema::register_format('pve-acme-alias', sub {
> +my ($domain, $noerr) = @_;
> +
> +my $label = qr/[a-z0-9_][a-z0-9_-]*/i;
> +
> +return $domain if $domain =~ /^$label(?:\.$label)+$/;
> +return undef if $noerr;
> +die "value '$domain' does not look like a valid domain name!\n";
> +});
> +
>  sub config_file {
>  my ($node) = @_;
>  
> @@ -107,7 +117,7 @@ my $acme_domain_desc = {
>  },
>  alias => {
>   type => 'string',
> - format => 'pve-acme-domain',
> + format => 'pve-acme-alias',
>   format_description => 'domain',
>   description => 'Alias for the Domain to verify ACME Challenge over DNS',
>   optional => 1,
> -- 
> 2.27.0
> 


Re: [pve-devel] [RFC qemu-server] close #2741: check for VM.Config.Cloudinit permission

2020-06-24 Thread Fabian Grünbichler
On June 3, 2020 3:58 pm, Mira Limbeck wrote:
> This allows setting ciuser, cipassword and all other cloudinit settings that
> are not part of the network without VM.Config.Network permissions.
> 
> Signed-off-by: Mira Limbeck 
> ---
>  PVE/API2/Qemu.pm | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 974ee3b..23a569e 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -357,8 +357,11 @@ my $check_vm_modify_config_perm = sub {
>   $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.PowerMgmt']);
>   } elsif ($diskoptions->{$opt}) {
>   $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
> - } elsif ($cloudinitoptions->{$opt} || ($opt =~ m/^(?:net|ipconfig)\d+$/)) {
> + } elsif ($opt =~ m/^(?:net|ipconfig)\d+$/) {
>   $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Network']);
> + } elsif ($cloudinitoptions->{$opt}) {
> + print "checking VM.Config.Cloudinit\n";

print probably left from debugging?

> + $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Cloudinit']);

shouldn't this also still work with VM.Config.Network to not break 
existing setups, at least for the duration of 6.x?
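
e.g., a sketch accepting either privilege for the transition period
(assuming check_vm_perm passes the 'any' flag through to check_full):

    } elsif ($cloudinitoptions->{$opt}) {
        # any of the two privileges is sufficient during 6.x
        $rpcenv->check_vm_perm($authuser, $vmid, $pool,
            ['VM.Config.Cloudinit', 'VM.Config.Network'], 1);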

>   } elsif ($opt eq 'vmstate') {
>   # the user needs Disk and PowerMgmt privileges to change the vmstate
>   # also needs privileges on the storage, that will be checked later
> -- 
> 2.20.1
> 
> 


Re: [pve-devel] [PATCH v2 common 1/2] JSONSchema: add format validator support and cleanup check_format

2020-06-24 Thread Fabian Grünbichler
On June 24, 2020 10:54 am, Stefan Reiter wrote:
> On 6/23/20 3:39 PM, Fabian Grünbichler wrote:
>> LGTM, what do you think about the following follow-up:
>> 
>> --8<-
>> 
>> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
>> index b2ba9f7..d28143d 100644
>> --- a/src/PVE/JSONSchema.pm
>> +++ b/src/PVE/JSONSchema.pm
>> @@ -1878,9 +1878,12 @@ sub generate_typetext {
>>   sub print_property_string {
>>   my ($data, $format, $skip, $path) = @_;
>>   
>> +my $validator;
>>   if (ref($format) ne 'HASH') {
>>  my $schema = get_format($format);
>>  die "not a valid format: $format\n" if !$schema;
>> +# named formats can have validators attached
>> +$validator = $format_validators->{$format};
>>  $format = $schema;
>>   }
>>   
>> @@ -1890,6 +1893,8 @@ sub print_property_string {
>>  raise "format error", errors => $errors;
>>   }
>>   
>> +$res = $validator->($res) if $validator;
> 
> This would have to be $data, I suppose?

yes, sorry!

> 
>> +
>>   my ($default_key, $keyAliasProps) = &$find_schema_default_key($format);
>>   
>>   my $res = '';
>> 
>> ->8--
>> 
>> to ensure our code calling 'print_property_string' also adheres to the
>> format the validator enforces?
>> 
> 
> Fine by me, though I'm not sure how necessary. It implies that 
> validators always validate positively on values they have already 
> accepted (and potentially modified) once before though. Also I'm not 
> sure I would use the validators result for print_property_string, since 
> that means that the value the user passes in might not be the one 
> printed (if a validator modifies it) - so maybe call the validator but 
> throw away its result?

well, it's not a given at all that $data is something that was 
previously/originally returned by parse_property_string. but yes, IMHO 
every object that we transform into a property string should pass the 
checks that that property string parsed back into an object passes (what 
a sentence -.-). or in other words, a validator should not transform a 
valid value into an invalid one ;)

I was not sure whether to include modifications done by the validator or 
not, but I'd tend to include them just to make the interface simpler. 
otherwise we'd have to dclone $data, because even if we throw away the 
return value $data might have been modified. also, the question is 
whether to run the validator before or after check_object. for almost 
all cases it probably does not make a difference, so we might also just 
re-visit that if we ever find the need to switch the order around.

the main use case for a validator that modifies is probably to enforce a 
single/newer variant where the format itself would accept multiple, and 
that is something that does not hurt in the opposite direction either.

in any case the whole mechanism should probably be documented next to 
register_format's declaration.
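
e.g., a sketch of such a comment (wording and placement are of course
open for bikeshedding):

    # register_format('pve-example', $hash_format, sub {
    #     my ($parsed) = @_;
    #     die "invalid combination\n" if ...; # reject invalid values
    #     return $parsed; # possibly modified, but must stay valid
    # });
    #
    # a validator is only invoked for named, hash-based formats, after
    # the schema checks succeeded. it must die on failure, and its
    # (possibly modified) return value replaces the parsed result.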

> 
>> it might also be nice to mark formats with a validator (at least for the
>> API dump) to make it obvious that the displayed format might be further
>> restricted.
>> 
> 
> Sounds good to me, I'll look into it.
> 
>> I went through the code paths (again) and still think it might be
>> worthwhile to extend named formats to check_object as well, instead of
>> just scalar property string values. but that is a bigger follow-up, for
>> now limiting the scope of these validators to just property strings
>> seems sensible.
>> 
> 



Re: [pve-devel] Package for vma seperately

2020-06-24 Thread Fabian Grünbichler
On June 5, 2020 1:33 pm, Mark Schouten wrote:
> 
> Hi!
> 
> I'm building Linux images in GitLab to distribute over customer clusters 
> easily. We have a (daDup.eu) s3 bucket where we place the vma files and 
> mount that bucket via s3fs (works pretty well!). But to create the VMA, I 
> need to install pve-qemu-kvm, just to get the vma binary, which in turn 
> depends on about 300 packages.
> 
> I would be grateful if there would be a separate package for VMA, if 
> possible. :)

I don't think you'd gain much, since that separate 'vma' package would 
need to depend on most of the dependencies of pve-qemu-kvm as well (look 
at the output of ldd /usr/bin/vma to get an idea ;)).

the vma binary is just a thin wrapper around qemu functions after all.

the format is easy enough[1], so I guess in theory you could write your own 
'dump .vma to raw img' binary if you really really want to..

1: https://git.proxmox.com/?p=pve-qemu.git;a=blob;f=vma_spec.txt



Re: [pve-devel] [PATCH pve-network] allow [ ,;] for ip lists

2020-06-24 Thread Fabian Grünbichler
why not use PVE::Tools::split_list? it's our standard helper for this 
kind of thing, and it also correctly trims whitespace and supports 
\0-separated lists ;)
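
i.e. (sketch):

    my @peers = PVE::Tools::split_list($plugin_config->{'peers'});

which should also make the trailing 'if' guards unnecessary, since
split_list treats undef as an empty list.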

On June 12, 2020 6:14 pm, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/Network/SDN/Controllers/EvpnPlugin.pm | 4 ++--
>  PVE/Network/SDN/Zones/EvpnPlugin.pm   | 2 +-
>  PVE/Network/SDN/Zones/VxlanPlugin.pm  | 2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/Network/SDN/Controllers/EvpnPlugin.pm b/PVE/Network/SDN/Controllers/EvpnPlugin.pm
> index 79ecaeb..8db2bed 100644
> --- a/PVE/Network/SDN/Controllers/EvpnPlugin.pm
> +++ b/PVE/Network/SDN/Controllers/EvpnPlugin.pm
> @@ -47,11 +47,11 @@ sub options {
>  sub generate_controller_config {
>  my ($class, $plugin_config, $controller, $id, $uplinks, $config) = @_;
>  
> -my @peers = split(',', $plugin_config->{'peers'}) if $plugin_config->{'peers'};
> +my @peers = split(/[ ,;]+/, $plugin_config->{'peers'}) if $plugin_config->{'peers'};
>  
>  my $asn = $plugin_config->{asn};
>  my $gatewaynodes = $plugin_config->{'gateway-nodes'};
> -my @gatewaypeers = split(',', $plugin_config->{'gateway-external-peers'}) if $plugin_config->{'gateway-external-peers'};
> +my @gatewaypeers = split(/[ ,;]+/, $plugin_config->{'gateway-external-peers'}) if $plugin_config->{'gateway-external-peers'};
>  
>  return if !$asn;
>  
> diff --git a/PVE/Network/SDN/Zones/EvpnPlugin.pm b/PVE/Network/SDN/Zones/EvpnPlugin.pm
> index 95fbb64..dba3ffc 100644
> --- a/PVE/Network/SDN/Zones/EvpnPlugin.pm
> +++ b/PVE/Network/SDN/Zones/EvpnPlugin.pm
> @@ -52,7 +52,7 @@ sub generate_sdn_config {
>  die "missing vxlan tag" if !$tag;
>  warn "vlan-aware vnet can't be enabled with evpn plugin" if $vnet->{vlanaware};
>  
> -my @peers = split(',', $controller->{'peers'});
> +my @peers = split(/[ ,;]+/, $controller->{'peers'});
>  my ($ifaceip, $iface) = PVE::Network::SDN::Zones::Plugin::find_local_ip_interface_peers(\@peers);
>  
>  my $mtu = 1450;
> diff --git a/PVE/Network/SDN/Zones/VxlanPlugin.pm b/PVE/Network/SDN/Zones/VxlanPlugin.pm
> index bc585c6..f2c2eec 100644
> --- a/PVE/Network/SDN/Zones/VxlanPlugin.pm
> +++ b/PVE/Network/SDN/Zones/VxlanPlugin.pm
> @@ -50,7 +50,7 @@ sub generate_sdn_config {
>  my $ipv6 = $vnet->{ipv6};
>  my $mac = $vnet->{mac};
>  my $multicastaddress = $plugin_config->{'multicast-address'};
> -my @peers = split(',', $plugin_config->{'peers'}) if $plugin_config->{'peers'};
> +my @peers = split(/[ ,;]+/, $plugin_config->{'peers'}) if $plugin_config->{'peers'};
>  my $vxlan_iface = "vxlan_$vnetid";
>  
>  die "missing vxlan tag" if !$tag;
> -- 
> 2.20.1
> 


Re: [pve-devel] [PATCH qemu-server] api: add option to get pending config returned as object instead of an array

2020-06-24 Thread Fabian Grünbichler
a bit of a rationale would be nice ;) isn't this just a simple map 
transformation that can be done client-side?

my $hash = { map { my $key = delete $_->{key}; ($key => $_) } @$array };

or in whatever language you need. filtering/sorting/limiting server-side 
makes sense for some calls that might return a lot of data otherwise, 
but I don't think adapting all API calls to represent data with 
different structures but identical content is a good idea.

(not that I am too happy that we have so many 'array of object with key 
property' return values to accommodate ExtJS, when our backend and API
usually carries the key as key and not part of the object)

On May 20, 2020 2:23 pm, Tim Marx wrote:
> Signed-off-by: Tim Marx 
> ---
>  PVE/API2/Qemu.pm | 13 +
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index fd51bf3..0b31f53 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -940,14 +940,20 @@ __PACKAGE__->register_method({
>   check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
>  },
>  parameters => {
> - additionalProperties => 0,
>   properties => {
>   node => get_standard_option('pve-node'),
>   vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> + object => {
> + description => "In this case the root element is an object 
> instead of an array.".
> +"The key property of the items will be extracted 
> and used as the root object keys.",
> + optional => 1,
> + default => 0,
> + type => 'boolean',
> + },
>   },
>  },
>  returns => {
> - type => "array",
> + type => [ "array", "object"],
>   items => {
>   type => "object",
>   properties => {
> @@ -985,8 +991,7 @@ __PACKAGE__->register_method({
>  
>   $conf->{cipassword} = '**' if defined($conf->{cipassword});
>   $conf->{pending}->{cipassword} = '** ' if defined($conf->{pending}->{cipassword});
> -
> - return PVE::GuestHelpers::config_with_pending_array($conf, $pending_delete_hash);
> + return PVE::GuestHelpers::config_with_pending($conf, $pending_delete_hash, $param->{object});
> }});
>  
>  # POST/PUT {vmid}/config implementation
> -- 
> 2.20.1
> 


[pve-devel] applied: [PATCH manager] fix #2810: reset state of mounts array in initComponent

2020-06-24 Thread Fabian Grünbichler
On June 24, 2020 9:32 am, Dominik Csapak wrote:
> so that each new instance has an empty mounts list
> 
> Signed-off-by: Dominik Csapak 
> ---
> @fabian @oguz, i remembered that i know this issue and had a fix already^^
>  www/manager6/lxc/FeaturesEdit.js | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/www/manager6/lxc/FeaturesEdit.js b/www/manager6/lxc/FeaturesEdit.js
> index 1275a2e0..e0b851de 100644
> --- a/www/manager6/lxc/FeaturesEdit.js
> +++ b/www/manager6/lxc/FeaturesEdit.js
> @@ -108,7 +108,13 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
>   }
>   this.callParent([res]);
>   }
> -}
> +},
> +
> +initComponent: function() {
> + let me = this;
> + me.mounts = []; // reset state
> + me.callParent();
> +},
>  });
>  
>  Ext.define('PVE.lxc.FeaturesEdit', {
> -- 
> 2.20.1
> 
> 
> 



Re: [pve-devel] [PATCH v2 common 1/2] JSONSchema: add format validator support and cleanup check_format

2020-06-23 Thread Fabian Grünbichler
LGTM, what do you think about the following follow-up:

--8<-

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index b2ba9f7..d28143d 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -1878,9 +1878,12 @@ sub generate_typetext {
 sub print_property_string {
 my ($data, $format, $skip, $path) = @_;
 
+my $validator;
 if (ref($format) ne 'HASH') {
my $schema = get_format($format);
die "not a valid format: $format\n" if !$schema;
+   # named formats can have validators attached
+   $validator = $format_validators->{$format};
$format = $schema;
 }
 
@@ -1890,6 +1893,8 @@ sub print_property_string {
raise "format error", errors => $errors;
 }
 
+$res = $validator->($res) if $validator;
+
 my ($default_key, $keyAliasProps) = &$find_schema_default_key($format);
 
 my $res = '';

->8--

to ensure our code calling 'print_property_string' also adheres to the 
format the validator enforces?

it might also be nice to mark formats with a validator (at least for the 
API dump) to make it obvious that the displayed format might be further 
restricted.

I went through the code paths (again) and still think it might be 
worthwhile to extend named formats to check_object as well, instead of 
just scalar property string values. but that is a bigger follow-up, for 
now limiting the scope of these validators to just property strings 
seems sensible.

On June 17, 2020 4:44 pm, Stefan Reiter wrote:
> Adds a third, optional parameter to register_format that allows specifying
> a function that will be called after parsing and can validate the parsed
> data. A validator should die on failed validation, and can also change the
> parsed object by returning a modified version of it.
> 
> This is useful so one can register a format with its hash, thus allowing
> documentation to be generated automatically, while still enforcing certain
> validation rules.
> 
> The validator only needs to be called in parse_property_string, since
> check_format always calls parse_property_string if there is a
> possibility of a validator existing at all. parse_property_string should
> then be called with named formats for best effect, as only then can
> validators be used.
> 
> Clean up 'check_format' as well (which pretty much amounts to a rewrite).
> No existing functionality is intentionally changed.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/JSONSchema.pm | 87 +++
>  1 file changed, 55 insertions(+), 32 deletions(-)
> 
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index 84fb694..f987006 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PVE/JSONSchema.pm
> @@ -121,19 +121,26 @@ register_standard_option('pve-snapshot-name', {
>  });
>  
>  my $format_list = {};
> +my $format_validators = {};
>  
>  sub register_format {
> -my ($format, $code) = @_;
> +my ($name, $format, $validator) = @_;
>  
> -die "JSON schema format '$format' already registered\n"
> - if $format_list->{$format};
> +die "JSON schema format '$name' already registered\n"
> + if $format_list->{$name};
>  
> -$format_list->{$format} = $code;
> +if ($validator) {
> + die "A \$validator function can only be specified for hash-based 
> formats\n"
> + if ref($format) ne 'HASH';
> + $format_validators->{$name} = $validator;
> +}
> +
> +$format_list->{$name} = $format;
>  }
>  
>  sub get_format {
> -my ($format) = @_;
> -return $format_list->{$format};
> +my ($name) = @_;
> +return $format_list->{$name};
>  }
>  
>  my $renderer_hash = {};
> @@ -647,39 +654,47 @@ sub pve_verify_tfa_secret {
>  sub check_format {
>  my ($format, $value, $path) = @_;
>  
> -return parse_property_string($format, $value, $path) if ref($format) eq 'HASH';
> -return if $format eq 'regex';
> +if (ref($format) eq 'HASH') {
> + # hash ref cannot have validator/list/opt handling attached
> + return parse_property_string($format, $value, $path);
> +}
>  
> -if ($format =~ m/^(.*)-a?list$/) {
> +if (ref($format) eq 'CODE') {
> + # we are the (sole, old-style) validator
> + return $format->($value);
> +}
> +
> +return if $format eq 'regex';
>  
> - my $code = $format_list->{$1};
> +my $parsed;
> +$format =~ m/^(.*?)(?:-a?(list|opt))?$/;
> +my ($format_name, $format_type) = ($1, $2 // 'none');
> +my $registered = get_format($format_name);
> +die "undefined format '$format'\n" if !$registered;
>  
> - die "undefined format '$format'\n" if !$code;
> +die "'-$format_type' format must have code ref, not hash\n"
> + if $format_type ne 'none' && ref($registered) ne 'CODE';
>  
> +if ($format_type eq 'list') {
>   # Note: we allow empty lists
>   foreach my $v (split_list($value)) {
> - &$code($v);
> + $parsed = $registered->($v);
>   }
> -
> -} elsif 

[pve-devel] applied: [PATCH storage] Fix 2763: Revert "storage_migrate: check if target storage supports content type"

2020-06-23 Thread Fabian Grünbichler
On May 25, 2020 9:41 am, Fabian Ebner wrote:
> This reverts commit 95015dbbf24b710011965805e689c03923fb830c.
> 
> parse_volname always gives 'images' and not 'rootdir'. In most
> cases the volume name alone does not contain the needed information,
> e.g. vm-123-disk-0 can be both a VM volume or a container volume.
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> For this reason, we need to have the callers of storage_migrate check
> if the correct content type is available. No further changes are
> needed, because replication and container migration do not
> change storages, and for VM migration, the check is already there.
> 
>  PVE/Storage.pm | 5 -
>  1 file changed, 5 deletions(-)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index f1e3b19..f523f20 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -619,11 +619,6 @@ sub storage_migrate {
>  
>  my $tcfg = storage_config($cfg, $target_storeid);
>  
> -my $vtype = (parse_volname($cfg, $volid))[0];
> -
> -die "content type '$vtype' is not available on storage 
> '$target_storeid'\n"
> - if !$tcfg->{content}->{$vtype};
> -
>  my $target_volname;
>  if ($opts->{target_volname}) {
>   $target_volname = $opts->{target_volname};
> -- 
> 2.20.1
> 
> 


Re: [pve-devel] [PATCH v2 manager] fix #2810: reset state properly when editing mount features of containers

2020-06-23 Thread Fabian Grünbichler
On June 22, 2020 3:55 pm, Oguz Bektas wrote:
> initializing 'mounts' array in the panel scope causes edits on subsequent
> containers to get the values (mount=nfs) from the previous container. fix 
> this by
> initializing the 'mounts' array in 'onGetValues' and 'setValues'
> separately.
> 
> Signed-off-by: Oguz Bektas 
> ---
>  www/manager6/lxc/FeaturesEdit.js | 10 --
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/www/manager6/lxc/FeaturesEdit.js b/www/manager6/lxc/FeaturesEdit.js
> index 1275a2e0..dffd77fd 100644
> --- a/www/manager6/lxc/FeaturesEdit.js
> +++ b/www/manager6/lxc/FeaturesEdit.js
> @@ -2,9 +2,6 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
>  extend: 'Proxmox.panel.InputPanel',
>  xtype: 'pveLxcFeaturesInputPanel',
>  
> -// used to save the mounts fstypes until sending
> -mounts: [],
> -
>  fstypes: ['nfs', 'cifs'],
>  
>  viewModel: {
> @@ -70,7 +67,7 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
>  
>  onGetValues: function(values) {
>   var me = this;
> - var mounts = me.mounts;
> + var mounts = [];
>   me.fstypes.forEach(function(fs) {
>   if (values[fs]) {
>   mounts.push(fs);
> @@ -83,6 +80,7 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
>   }
>  
>   var featuresstring = PVE.Parser.printPropertyString(values, undefined);
> +
>   if (featuresstring == '') {
>   return { 'delete': 'features' };
>   }
> @@ -94,13 +92,13 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
>  
>   me.viewModel.set('unprivileged', values.unprivileged);
>  
> + var mounts = [];
>   if (values.features) {
>   var res = PVE.Parser.parsePropertyString(values.features);
> - me.mounts = [];
>   if (res.mount) {
>   res.mount.split(/[; ]/).forEach(function(item) {
>   if (me.fstypes.indexOf(item) === -1) {
> - me.mounts.push(item);
> + mounts.push(item);

it works now with regards to the reported issue, but AFAICT the whole 
mounts variable is useless now (previously, it served to preserve any 
mount values not handled by the GUI, which might be a nice thing to 
have, but now those unknown values get stored in this variable but 
never used..).

we try to not touch things we don't understand in the GUI, so it would 
be nice to get this behaviour back without the bug of polluting state 
across containers.


>   } else {
>   res[item] = 1;
>   }
> -- 
> 2.20.1
> 


Re: [pve-devel] [PATCH node_config 1/1] node_config: Allow leading underscore in ACME aliases

2020-06-23 Thread Fabian Grünbichler
LGTM, except for one minor nit. could you please send a CLA as described 
in our Developer Documentation to off...@proxmox.com ?

https://pve.proxmox.com/wiki/Developer_Documentation#Software_License_and_Copyright

Thanks in advance!

On June 22, 2020 12:10 pm, Fabian Möller wrote:
> ---
>  PVE/NodeConfig.pm | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/NodeConfig.pm b/PVE/NodeConfig.pm
> index ad49e288..017f6b30 100644
> --- a/PVE/NodeConfig.pm
> +++ b/PVE/NodeConfig.pm
> @@ -25,6 +25,16 @@ PVE::JSONSchema::register_format('pve-acme-domain', sub {
>  die "value '$domain' does not look like a valid domain name!\n";
>  });
>  
> +PVE::JSONSchema::register_format('pve-acme-alias', sub {
> +my ($domain, $noerr) = @_;
> +
> +my $label = qr/[a-z0-9_][a-z0-9_-]*/i;
> +
> +return $domain if $domain =~ /^$label(?:\.$label)+$/;
> +return undef if $noerr;
> +die "value '$domain' does not look like a valid domain name!\n";

I'd rename $domain and fixup the message to make it clear that it's an 
alias value, not a regular domain. Can be done as follow-up, or if you 
want, you can also send a v2 ;)

> +});
> +
>  sub config_file {
>  my ($node) = @_;
>  
> @@ -107,7 +117,7 @@ my $acme_domain_desc = {
>  },
>  alias => {
>   type => 'string',
> - format => 'pve-acme-domain',
> + format => 'pve-acme-alias',
>   format_description => 'domain',
>   description => 'Alias for the Domain to verify ACME Challenge over DNS',
>   optional => 1,
> -- 
> 2.27.0
> 


Re: [pve-devel] [PATCH manager] fix #2810: don't add options multiple times to features property

2020-06-22 Thread Fabian Grünbichler
does not fix the issue (which is that the state is not properly reset 
between editing one container and then another one).

On June 22, 2020 1:54 pm, Oguz Bektas wrote:
> instead of unconditionally pushing to the 'mounts' array we need to check
> if we already have the option in there. without this, we get config
> options like:
> 
> features: nfs;nfs;nfs
> 
> Signed-off-by: Oguz Bektas 
> ---
>  www/manager6/lxc/FeaturesEdit.js | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/www/manager6/lxc/FeaturesEdit.js b/www/manager6/lxc/FeaturesEdit.js
> index 1275a2e0..63a9a2a9 100644
> --- a/www/manager6/lxc/FeaturesEdit.js
> +++ b/www/manager6/lxc/FeaturesEdit.js
> @@ -72,7 +72,7 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
>   var me = this;
>   var mounts = me.mounts;
>   me.fstypes.forEach(function(fs) {
> - if (values[fs]) {
> + if (values[fs] && !mounts.includes(fs)) {
>   mounts.push(fs);
>   }
>   delete values[fs];
> -- 
> 2.20.1
> 


[pve-devel] applied: [PATCH kernel] fix #2814: config: disable lockdown

2020-06-22 Thread Fabian Grünbichler
since it prevents boot with our current way of building ZFS modules in
case a system is booted with secureboot enabled.

Signed-off-by: Fabian Grünbichler 
---

Notes:
requires an ABI bump

 debian/rules | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/debian/rules b/debian/rules
index f531ac5..7c4f9f6 100755
--- a/debian/rules
+++ b/debian/rules
@@ -73,6 +73,9 @@ PVE_CONFIG_OPTS= \
 -d CONFIG_UNWINDER_ORC \
 -d CONFIG_UNWINDER_GUESS \
 -e CONFIG_UNWINDER_FRAME_POINTER \
+-d CONFIG_SECURITY_LOCKDOWN_LSM \
+-d CONFIG_SECURITY_LOCKDOWN_LSM_EARLY \
+--set-str CONFIG_LSM yama,integrity,apparmor \
 -e CONFIG_PAGE_TABLE_ISOLATION
 
 debian/control: $(wildcard debian/*.in)
-- 
2.20.1




[pve-devel] applied: [PATCH qemu-server] vncproxy: allow to request a generated VNC password

2020-06-22 Thread Fabian Grünbichler
with R-B/T-B and the following follow-up:

[PATCH qemu-server] gen_rand_chars: handle errors properly

should not really happen on modern systems, but random_bytes just
returns false if it fails to generate random bytes, in which case we
want to die instead of returning an empty 'random' string.

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/Qemu.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index dcb364d..3965c26 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1597,8 +1597,12 @@ my $gen_rand_chars = sub {
 die "invalid length $length" if $length < 1;
 
 my $min = ord('!'); # first printable ascii
-my @rand_bytes = split '', Crypt::OpenSSL::Random::random_bytes($length);
-my $str = join('', map { chr((ord($_) & 0x3F) + $min) } @rand_bytes);
+
+my $rand_bytes = Crypt::OpenSSL::Random::random_bytes($length);
+die "failed to generate random bytes!\n"
+  if !$rand_bytes;
+
my $str = join('', map { chr((ord($_) & 0x3F) + $min) } split('', $rand_bytes));
 
 return $str;
 };


On June 18, 2020 6:20 pm, Thomas Lamprecht wrote:
> We used the VNC API $ticket as password for VNC, but QEMU limits the
> password to the first 8 chars and ignores the rest[0].
> As our tickets start with a static string (e.g., "PVE") the entropy
> was a bit limited.
> 
> For Proxmox VE this does not matter much, as the provided noVNC viewer
> always has to go over the API call, and so a valid
> ticket and correct permissions for the requested VM are enforced
> anyway.
> 
> This patch helps external users, which often use NoVNC-Websockify,
> circumventing the API and relying solely on the VNC password to avoid
> snooping on VNC sessions.
> 
> A 'generate-password' parameter is added, if set a password from good
> entropy (using libopenssl) is generated.
> 
> For simplicity of mapping random bits to ranges we extract 6 bit of
> entropy per character and add the integer value of '!' (first
> printable ASCII char) to that. This way we get 64^8 possibilities,
> which even with millions of guesses per second one would need years
> of guessing and mostly just DDOS the server with websocket upgrade
> requests.
> 
> Signed-off-by: Thomas Lamprecht 
> ---
> 
> We could also extract the last 8 chars of the ticket, but as that ticket is
> only secure as a whole, as seen with this issue, I'd like to avoid that
> 
> Initially I had a variant using perl's core rand and all 95 printable
> ASCII chars, but that wasn't cryptographically secure
> 
>  PVE/API2/Qemu.pm | 39 ---
>  1 file changed, 36 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 3e0d59f..dcb364d 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -7,6 +7,7 @@ use Net::SSLeay;
>  use POSIX;
>  use IO::Socket::IP;
>  use URI::Escape;
> +use Crypt::OpenSSL::Random;
>  
>  use PVE::Cluster qw (cfs_read_file cfs_write_file);;
>  use PVE::RRD;
> @@ -1589,6 +1590,19 @@ __PACKAGE__->register_method({
>   return undef;
>  }});
>  
> +# uses good entropy, each char is limited to 6 bit to get printable chars 
> simply
> +my $gen_rand_chars = sub {
> +my ($length) = @_;
> +
> +die "invalid length $length" if $length < 1;
> +
> +my $min = ord('!'); # first printable ascii
> +my @rand_bytes = split '', Crypt::OpenSSL::Random::random_bytes($length);
> +my $str = join('', map { chr((ord($_) & 0x3F) + $min) } @rand_bytes);
> +
> +return $str;
> +};
> +
>  my $sslcert;
>  
>  __PACKAGE__->register_method({
> @@ -1610,6 +1624,12 @@ __PACKAGE__->register_method({
>   type => 'boolean',
>   description => "starts websockify instead of vncproxy",
>   },
> + 'generate-password' => {
> + optional => 1,
> + type => 'boolean',
> + default => 0,
> + description => "Generates a random password to be used as 
> ticket instead of the API ticket.",
> + },
>   },
>  },
>  returns => {
> @@ -1617,6 +1637,12 @@ __PACKAGE__->register_method({
>   properties => {
>   user => { type => 'string' },
>   ticket => { type => 'string' },
> + password => {
> + optional => 1,
> + description => "Returned if requested with 'generate-password' 
> param."
> + ." Consists of printable ASCII characters ('!' .. '~').",
> + type => 'string',
>

[pve-devel] [PATCH manager] fix #2771: relax cert API endpoints permissions

2020-06-17 Thread Fabian Grünbichler
allow users with Sys.Modify to modify custom or ACME certificates. those
users can already hose the system in plenty of ways, no reason to
restrict this in particular to being root@pam only.

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/ACME.pm | 9 +
 PVE/API2/Certificates.pm | 6 ++
 2 files changed, 15 insertions(+)

diff --git a/PVE/API2/ACME.pm b/PVE/API2/ACME.pm
index c7d6e7e9..0decfb4a 100644
--- a/PVE/API2/ACME.pm
+++ b/PVE/API2/ACME.pm
@@ -158,6 +158,9 @@ __PACKAGE__->register_method ({
 name => 'new_certificate',
 path => 'certificate',
 method => 'POST',
+permissions => {
+   check => ['perm', '/nodes/{node}', [ 'Sys.Modify' ]],
+},
 description => "Order a new certificate from ACME-compatible CA.",
 protected => 1,
 proxyto => 'node',
@@ -226,6 +229,9 @@ __PACKAGE__->register_method ({
 name => 'renew_certificate',
 path => 'certificate',
 method => 'PUT',
+permissions => {
+   check => ['perm', '/nodes/{node}', [ 'Sys.Modify' ]],
+},
 description => "Renew existing certificate from CA.",
 protected => 1,
 proxyto => 'node',
@@ -303,6 +309,9 @@ __PACKAGE__->register_method ({
 name => 'revoke_certificate',
 path => 'certificate',
 method => 'DELETE',
+permissions => {
+   check => ['perm', '/nodes/{node}', [ 'Sys.Modify' ]],
+},
 description => "Revoke existing certificate from CA.",
 protected => 1,
 proxyto => 'node',
diff --git a/PVE/API2/Certificates.pm b/PVE/API2/Certificates.pm
index fd75ba85..d22e203e 100644
--- a/PVE/API2/Certificates.pm
+++ b/PVE/API2/Certificates.pm
@@ -91,6 +91,9 @@ __PACKAGE__->register_method ({
 name => 'upload_custom_cert',
 path => 'custom',
 method => 'POST',
+permissions => {
+   check => ['perm', '/nodes/{node}', [ 'Sys.Modify' ]],
+},
 description => 'Upload or update custom certificate chain and key.',
 protected => 1,
 proxyto => 'node',
@@ -163,6 +166,9 @@ __PACKAGE__->register_method ({
 name => 'remove_custom_cert',
 path => 'custom',
 method => 'DELETE',
+permissions => {
+   check => ['perm', '/nodes/{node}', [ 'Sys.Modify' ]],
+},
 description => 'DELETE custom certificate chain and key.',
 protected => 1,
 proxyto => 'node',
-- 
2.20.1




[pve-devel] [PATCH manager] fix #2784: always compare ACME domains in lower case

2020-06-17 Thread Fabian Grünbichler
otherwise the ACME endpoint might return the ordered domain in lower
case and we fail to find our plugin config.

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/ACME.pm  | 4 +++-
 PVE/NodeConfig.pm | 9 -
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/ACME.pm b/PVE/API2/ACME.pm
index c7d6e7e9..f4049db0 100644
--- a/PVE/API2/ACME.pm
+++ b/PVE/API2/ACME.pm
@@ -56,7 +56,9 @@ my $order_certificate = sub {
 for my $auth_url (@{$order->{authorizations}}) {
print "\nGetting authorization details from '$auth_url'\n";
my $auth = $acme->get_authorization($auth_url);
-   my $domain = $auth->{identifier}->{value};
+
+   # force lower case, like get_acme_conf does
+   my $domain = lc($auth->{identifier}->{value});
if ($auth->{status} eq 'valid') {
print "$domain is already validated!\n";
} else {
diff --git a/PVE/NodeConfig.pm b/PVE/NodeConfig.pm
index af726b15..ad49e288 100644
--- a/PVE/NodeConfig.pm
+++ b/PVE/NodeConfig.pm
@@ -236,6 +236,9 @@ sub write_node_config {
 return $raw;
 }
 
+# we always convert domain values to lower case, since DNS entries are not case
+# sensitive and ACME implementations might convert the ordered identifiers
+# to lower case
 sub get_acme_conf {
 my ($node_conf, $noerr) = @_;
 
@@ -253,6 +256,10 @@ sub get_acme_conf {
my $standalone_domains = delete($res->{domains}) // '';
$res->{domains} = {};
for my $domain (split(";", $standalone_domains)) {
+   $domain = lc($domain);
+   die "duplicate domain '$domain' in ACME config properties\n"
+   if defined($res->{domains}->{$domain});
+
$res->{domains}->{$domain}->{plugin} = 'standalone';
$res->{domains}->{$domain}->{_configkey} = 'acme';
}
@@ -271,7 +278,7 @@ sub get_acme_conf {
return undef if $noerr;
die $err;
}
-   my $domain = delete $parsed->{domain};
+   my $domain = lc(delete $parsed->{domain});
if (my $exists = $res->{domains}->{$domain}) {
return undef if $noerr;
die "duplicate domain '$domain' in ACME config properties"
-- 
2.20.1




[pve-devel] applied: [PATCH qemu-server] cfg2cmd test: hardcode/mock bridge MTU

2020-06-17 Thread Fabian Grünbichler
otherwise the netdev test reads the MTU value from the test host's vmbr0
bridge, or fails if no such bridge exists.

Signed-off-by: Fabian Grünbichler 
---
might make sense to extend this to actually test the functionality/MTU
handling, but just unbreaking the test for now..

 test/run_config2command_tests.pl | 9 +
 1 file changed, 9 insertions(+)

diff --git a/test/run_config2command_tests.pl b/test/run_config2command_tests.pl
index 3168176..b5c0a27 100755
--- a/test/run_config2command_tests.pl
+++ b/test/run_config2command_tests.pl
@@ -207,6 +207,15 @@ EOF
 },
 );
 
+my $pve_common_network;
+$pve_common_network = Test::MockModule->new('PVE::Network');
+$pve_common_network->mock(
+read_bridge_mtu => sub {
+   return 1500;
+},
+);
+
+
 my $pve_common_inotify;
 $pve_common_inotify = Test::MockModule->new('PVE::INotify');
 $pve_common_inotify->mock(
-- 
2.20.1




[pve-devel] [PATCH qemu-server 2/2] properly parse 'vga' for termproxy

2020-06-17 Thread Fabian Grünbichler
'vga' is a property string, we can't just assume it starts with the default 
key's value here either.

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/Qemu.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 95b1922..53dc594 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1779,8 +1779,9 @@ __PACKAGE__->register_method({
	my $conf = PVE::QemuConfig->load_config($vmid, $node); # check if VM exists
 
if (!defined($serial)) {
-   if ($conf->{vga} && $conf->{vga} =~ m/^serial\d+$/) {
-   $serial = $conf->{vga};
+   if ($conf->{vga}) {
+   my $vga = PVE::QemuServer::parse_vga($conf->{vga});
+   $serial = $vga->{type} if $vga->{type} =~ m/^serial\d+$/;
}
}
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server 1/2] fix #2787: properly parse 'vga' for vncproxy

2020-06-17 Thread Fabian Grünbichler
'vga' is a property string, we can't just assume it starts with the
default key's value.

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/Qemu.pm | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 974ee3b..95b1922 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1634,7 +1634,12 @@ __PACKAGE__->register_method({
my $websocket = $param->{websocket};
 
	my $conf = PVE::QemuConfig->load_config($vmid, $node); # check if VM exists
-   my $use_serial = ($conf->{vga} && ($conf->{vga} =~ m/^serial\d+$/));
+   
+   my $serial;
+   if ($conf->{vga}) {
+   my $vga = PVE::QemuServer::parse_vga($conf->{vga});
+   $serial = $vga->{type} if $vga->{type} =~ m/^serial\d+$/;
+   }
 
my $authpath = "/vms/$vmid";
 
@@ -1650,7 +1655,7 @@ __PACKAGE__->register_method({
(undef, $family) = PVE::Cluster::remote_node_ip($node);
my $sshinfo = PVE::SSHInfo::get_ssh_info($node);
	    # NOTE: kvm VNC traffic is already TLS encrypted or is known unsecure
-	    $remcmd = PVE::SSHInfo::ssh_info_to_command($sshinfo, $use_serial ? '-t' : '-T');
+	    $remcmd = PVE::SSHInfo::ssh_info_to_command($sshinfo, defined($serial) ? '-t' : '-T');
} else {
$family = PVE::Tools::get_host_address_family($node);
}
@@ -1666,9 +1671,9 @@ __PACKAGE__->register_method({
 
my $cmd;
 
-   if ($use_serial) {
+   if (defined($serial)) {
 
-	    my $termcmd = [ '/usr/sbin/qm', 'terminal', $vmid, '-iface', $conf->{vga}, '-escape', '0' ];
+	    my $termcmd = [ '/usr/sbin/qm', 'terminal', $vmid, '-iface', $serial, '-escape', '0' ];
 
$cmd = ['/usr/bin/vncterm', '-rfbport', $port,
'-timeout', $timeout, '-authpath', $authpath,
-- 
2.20.1




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-09 Thread Fabian Grünbichler
On June 8, 2020 5:16 pm, Michael Rasmussen wrote:
> On Mon, 8 Jun 2020 15:00:11 +0200
> Thomas Lamprecht  wrote:
> 
>> 
>> You can effectively provide a full custom plugin, so it has not more
>> limitations than any existing one. What extra functionality regarding
>> interface ABI would the FreeNAS Plugin require?
>> 
> I seem to have read somewhere that a custom plugin could not get access
> to individual volumes.

Custom plugins get called via PVE::Storage just like any other plugin - 
the API 'contract' is a bit underspecified, but when in doubt just ask. 

There are a few places in the guest code where lists of storage 
types/plugins are hard-coded atm (e.g. 'list of storage types supporting 
vzdump backups'), but if one of those lists needs to be extended / 
switched to be generated via the API mechanism (e.g., by each plugin 
stating 'I can serve as a target for vzdump'), I don't see a reason why 
not to do it.
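
for reference, the rough shape of such a custom plugin (minimal sketch,
all names made up - see the shipped plugins for the full set of methods):

    package PVE::Storage::Custom::FreeNASPlugin;

    use strict;
    use warnings;

    use base qw(PVE::Storage::Plugin);

    # storage plugin API version this plugin was developed against
    sub api { return 3; }

    sub type { return 'freenas'; }

    sub plugindata {
        # first hash: allowed content types, second: defaults
        return { content => [ { images => 1 }, { images => 1 } ] };
    }

    # ... override path, alloc_image, free_image, list_images,
    # activate_volume, deactivate_volume, status, ... as needed

    1;

dropped into the PVE::Storage::Custom namespace it gets picked up
automatically like the built-in plugins.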



[pve-devel] [PATCH qemu-server 1/2] create_disks: fix uninitialized warning

2020-06-02 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/Qemu.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index fd51bf3..5e6fd42 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1095,7 +1095,7 @@ my $update_vm_api  = sub {
return if PVE::QemuServer::drive_is_cdrom($drive);
 
my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
-   return if $volname eq 'cloudinit';
+   return if defined($volname) && $volname eq 'cloudinit';
 
my $format;
if ($volid =~ $NEW_DISK_RE) {
-- 
2.20.1




[pve-devel] [PATCH qemu-server 2/2] fix #2774: add early check for non-managed volumes

2020-06-02 Thread Fabian Grünbichler
when checking whether a to-be-added drive's and the VM's replication
status are matching. otherwise, we end up in a failing generic
'parse_volume_id' with no mention of the actual reason.

adding 'replicate=0' to the new drive string fixes the underlying issue
with and without this patch, so this is just a cosmetic/usability
improvement.

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/Qemu.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 5e6fd42..974ee3b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1095,6 +1095,9 @@ my $update_vm_api  = sub {
return if PVE::QemuServer::drive_is_cdrom($drive);
 
my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
+   die "cannot add non-managed/pass-through volume to a replicated VM\n"
+   if !defined($storeid);
+
return if defined($volname) && $volname eq 'cloudinit';
 
my $format;
-- 
2.20.1




[pve-devel] [PATCH manager] api: improve node index with missing/broken cert

2020-06-02 Thread Fabian Grünbichler
since this API endpoint is used for the node selector in the GUI, which
causes quite widespread breakage.

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/Nodes.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 58497b2b..9008dcad 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -2261,7 +2261,10 @@ __PACKAGE__->register_method ({
foreach my $node (@$nodelist) {
	my $can_audit = $rpcenv->check($authuser, "/nodes/$node", [ 'Sys.Audit' ], 1);
	my $entry = PVE::API2Tools::extract_node_stats($node, $members, $rrd, !$can_audit);
-	$entry->{ssl_fingerprint} = PVE::Cluster::get_node_fingerprint($node);
+
+	$entry->{ssl_fingerprint} = eval { PVE::Cluster::get_node_fingerprint($node) };
+   warn "$@" if $@;
+
push @$res, $entry;
}
 
-- 
2.20.1




Re: [pve-devel] [PATCH kernel] add pve-kernel-X.Y-libc-dev package

2020-05-28 Thread Fabian Grünbichler
On May 27, 2020 7:03 pm, Thomas Lamprecht wrote:
> This was long overdue, allows to access the full feature set of our
> kernel for some tools using the Linux API directly.
> 
> Packaging mostly taken from Debian[0]
> 
> [0]: https://salsa.debian.org/kernel-team/linux/-/blob/debian/4.19.118-2/debian/rules.real#L367
> 
> Signed-off-by: Thomas Lamprecht 
> ---
> 
> Package name could probably be better, just took the first thing coming to my
> mind. Also, the approach of putting the Kernel MAJ.MIN version in there or using
> the kernel package version, or none at all, should be evaluated.
> 
> I'd guess none could be preferred as it should be backwards compatible anyway
> (never break userspace™) so the newest one is always wanted.

IMHO we should not encode the version in the package name, since we 
always only want one version of the package installed and different 
-libc packages for different kernel versions are not co-installable 
anyway. The only caveat is that we need to be careful when building new 
opt-in kernels and NOT upload the -libc package until we switch over the 
default kernel series, but even if we do, it should not break anything.

Besides that, but without a full rebuild-test:

Reviewed-By: Fabian Grünbichler 

> 
> note: This was working really quickly, almost suspiciously... Tested by building
> QEMU (which inspired my doing this now in the first place due to the sizeof bug
> we have with Debian's plain linux-libc-dev package on build)
> 
>  debian/control.in | 12 
>  debian/rules  | 22 --
>  2 files changed, 32 insertions(+), 2 deletions(-)
> 
> diff --git a/debian/control.in b/debian/control.in
> index 9b807c1d40c5..c457564eafe9 100644
> --- a/debian/control.in
> +++ b/debian/control.in
> @@ -69,3 +69,15 @@ Depends: busybox,
>  Recommends: grub-pc | grub-efi-amd64 | grub-efi-ia32 | grub-efi-arm64,
>  Description: The Proxmox PVE Kernel Image
>   This package contains the linux kernel and initial ramdisk used for booting
> +
> +Package: pve-kernel-@KVMAJMIN@-libc-dev
> +Section: devel
> +Priority: optional
> +Architecture: any
> +Provides: linux-libc-dev,
> +Conflicts: linux-libc-dev,
> +Replaces: linux-libc-dev,
> +Depends: ${misc:Depends}
> +Description: Linux support headers for userspace development
> + This package provides userspace headers from the Linux kernel.  These headers
> + are used by the installed headers for GNU libc and other system libraries.
> diff --git a/debian/rules b/debian/rules
> index e530eb548707..dc839b127507 100755
> --- a/debian/rules
> +++ b/debian/rules
> @@ -15,6 +15,7 @@ CHANGELOG_DATE:=$(shell dpkg-parsechangelog -SDate)
>  
>  PVE_KERNEL_PKG=pve-kernel-${KVNAME}
>  PVE_HEADER_PKG=pve-headers-${KVNAME}
> +PVE_USR_HEADER_PKG=pve-kernel-${KERNEL_MAJMIN}-libc-dev
>  LINUX_TOOLS_PKG=linux-tools-${KERNEL_MAJMIN}
>  KERNEL_SRC_COPY=${KERNEL_SRC}_tmp
>  
> @@ -87,7 +88,7 @@ debian/control: $(wildcard debian/*.in)
>  
>  build: .compile_mark .tools_compile_mark .modules_compile_mark
>  
> -install: .install_mark .tools_install_mark .headers_install_mark
> +install: .install_mark .tools_install_mark .headers_install_mark .usr_headers_install_mark
>   dh_installdocs -A debian/copyright debian/SOURCE
>   dh_installchangelogs
>   dh_installman
> @@ -97,7 +98,7 @@ install: .install_mark .tools_install_mark .headers_install_mark
>  
>  binary: install
>   debian/rules fwcheck abicheck
> - dh_strip -N${PVE_HEADER_PKG}
> + dh_strip -N${PVE_HEADER_PKG} -N${PVE_USR_HEADER_PKG}
>   dh_makeshlibs
>   dh_shlibdeps
>   dh_installdeb
> @@ -207,6 +208,23 @@ binary: install
> + ln -sf /usr/src/linux-headers-${KVNAME} debian/${PVE_HEADER_PKG}/lib/modules/${KVNAME}/build
>   touch $@
>  
> +.usr_headers_install_mark: PKG_DIR = debian/${PVE_USR_HEADER_PKG}
> +.usr_headers_install_mark: OUT_DIR = ${PKG_DIR}/usr
> +.usr_headers_install_mark: .config_mark
> + rm -rf '${PKG_DIR}'
> + mkdir -p  '${PKG_DIR}'
> + $(MAKE) -C ${KERNEL_SRC} headers_check ARCH=$(KERNEL_HEADER_ARCH)
> + $(MAKE) -C ${KERNEL_SRC} headers_install ARCH=$(KERNEL_HEADER_ARCH) INSTALL_HDR_PATH='$(CURDIR)'/$(OUT_DIR)
> + rm -rf $(OUT_DIR)/include/drm $(OUT_DIR)/include/scsi
> + find $(OUT_DIR)/include \( -name .install -o -name ..install.cmd \) -execdir rm {} +
> +
> +# Move include/asm to arch-specific directory
> + mkdir -p $(OUT_DIR)/include/$(DEB_HOST_MULTIARCH)
> + mv $(OUT_DIR)/include/asm $(OUT_DIR)/include/$(DEB_HOST_MULTIARCH)/
> + test ! -d $(OUT_DIR)/include/arch || \
> + mv $(OUT_DIR)/include/arch 
> $(O

[pve-devel] [PATCH cluster 1/2] fix #2727: pass correct format for linkX

2020-05-12 Thread Fabian Grünbichler
to unbreak joining via SSH with an explicit link address.

Signed-off-by: Fabian Grünbichler 
---
 data/PVE/CLI/pvecm.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/data/PVE/CLI/pvecm.pm b/data/PVE/CLI/pvecm.pm
index b381f4f..fe099d4 100755
--- a/data/PVE/CLI/pvecm.pm
+++ b/data/PVE/CLI/pvecm.pm
@@ -405,9 +405,11 @@ __PACKAGE__->register_method ({
push @$cmd, '--nodeid', $param->{nodeid} if $param->{nodeid};
push @$cmd, '--votes', $param->{votes} if defined($param->{votes});
 
+   my $link_desc = get_standard_option('corosync-link');
+
foreach my $link (keys %$links) {
push @$cmd, "--link$link", 
PVE::JSONSchema::print_property_string(
-   $links->{$link}, get_standard_option('corosync-link'));
+   $links->{$link}, $link_desc->{format});
}
 
# this will be used as fallback if no links are specified
-- 
2.20.1




[pve-devel] [PATCH cluster 2/2] pvecm: pass correct nodename to finish_join

2020-05-12 Thread Fabian Grünbichler
only cosmetic, but printing the wrong nodename might cause confusion.

Signed-off-by: Fabian Grünbichler 
---
 data/PVE/CLI/pvecm.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/data/PVE/CLI/pvecm.pm b/data/PVE/CLI/pvecm.pm
index fe099d4..107c4cb 100755
--- a/data/PVE/CLI/pvecm.pm
+++ b/data/PVE/CLI/pvecm.pm
@@ -439,7 +439,7 @@ __PACKAGE__->register_method ({
my $corosync_conf = 
PVE::Tools::file_get_contents("$tmpdir/corosync.conf");
my $corosync_authkey = 
PVE::Tools::file_get_contents("$tmpdir/authkey");
 
-   PVE::Cluster::Setup::finish_join($host, $corosync_conf, 
$corosync_authkey);
+   PVE::Cluster::Setup::finish_join($nodename, $corosync_conf, 
$corosync_authkey);
};
my $err = $@;
 
-- 
2.20.1




[pve-devel] [PATCH manager] ACME: fix fallback to implicit standalone plugin

2020-05-07 Thread Fabian Grünbichler
we need to parse the config even if it does not exist - it will return
the 'standalone' entry that's needed to be backwards compatible with
existing setups.

Signed-off-by: Fabian Grünbichler 
---
Note: there is an issue when attempting to WRITE the config if
/etc/pve/priv/acme does not exist, but that shouldn't be a problem for
existing setups ;)

 PVE/API2/ACMEPlugin.pm | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/PVE/API2/ACMEPlugin.pm b/PVE/API2/ACMEPlugin.pm
index 71f53a35..84553011 100644
--- a/PVE/API2/ACMEPlugin.pm
+++ b/PVE/API2/ACMEPlugin.pm
@@ -245,9 +245,7 @@ __PACKAGE__->register_method({
 });
 
 sub load_config {
-my $cfg = {};
-$cfg = cfs_read_file($plugin_config_file) if -e "/etc/pve/$plugin_config_file";
-return $cfg;
+return cfs_read_file($plugin_config_file);
 }
 
 1;
-- 
2.20.1




[pve-devel] [PATCH qemu-server] qmrestore: fix restore from STDIN

2020-05-07 Thread Fabian Grünbichler
the special case was dropped when moving this to pve-storage.

Signed-off-by: Fabian Grünbichler 
---
 PVE/QemuServer.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8e3fadf..6461da3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5655,6 +5655,9 @@ sub tar_restore_cleanup {
 sub restore_file_archive {
 my ($archive, $vmid, $user, $opts) = @_;
 
+return restore_vma_archive($archive, $vmid, $user, $opts)
+   if $archive eq '-';
+
 my $info = PVE::Storage::archive_info($archive);
 my $format = $opts->{format} // $info->{format};
 my $comp = $info->{compression};
-- 
2.20.1




[pve-devel] applied: [PATCH common] network: fix adding vlan tags to bridge

2020-05-06 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
makes starting VMs fail, see
https://forum.proxmox.com/threads/failed-to-start-vm-failed-to-remove-default-vlan-tags-of-tap104i0-command-sbin-bridge-bridge-vlan-del-dev-tap104i0-vid-1-4094-failed-exit-code.69375/

 src/PVE/Network.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/PVE/Network.pm b/src/PVE/Network.pm
index 21c592c..ac49536 100644
--- a/src/PVE/Network.pm
+++ b/src/PVE/Network.pm
@@ -217,10 +217,10 @@ my $bridge_add_interface = sub {
 
if ($vlan_aware) {
if ($tag) {
-   eval { run_command(['/sbin/bridge', 'bridge', 'vlan', 'del', 'dev', 
$iface, 'vid', '1-4094']) };
+   eval { run_command(['/sbin/bridge', 'vlan', 'del', 'dev', $iface, 
'vid', '1-4094']) };
die "failed to remove default vlan tags of $iface - $@\n" if $@;
 
-   eval { run_command(['/sbin/bridge', 'bridge', 'vlan', 'add', 'dev', $iface, 'vid', $tag, 'pvid', 'untagged']) };
+   eval { run_command(['/sbin/bridge', 'vlan', 'add', 'dev', $iface, 'vid', $tag, 'pvid', 'untagged']) };
die "unable to add vlan $tag to interface $iface - $@\n" if $@;
 
warn "Caution: Setting VLAN ID 1 on a VLAN aware bridge may be 
dangerous\n" if $tag == 1;
-- 
2.20.1




[pve-devel] [PATCH qemu-server] vzdump: fix template backup to stdout

2020-05-06 Thread Fabian Grünbichler
redirecting to the saved STDOUT in case of a template backup or a VM
without any disks failed because of the erroneous '=':

Backup of VM 123123 failed - command '/usr/bin/vma create -v -c [...]' failed:
Bad filehandle: =5 at /usr/share/perl/5.28/IPC/Open3.pm line 58.

https://forum.proxmox.com/threads/vzdump-to-stdout.69364
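
for reference (hedged - PVE::Tools::run_command builds on IPC::Open3): a
'>&' prefix tells open3 to dup the given descriptor for the child, so the
extra '=' made it look for a filehandle literally named '=5':

    use IPC::Open3;
    my $fd  = fileno($opts->{stdout});  # e.g. 5
    my $ok  = '>&' . $fd;   # open3 dups the child's output onto fd 5
    my $bad = '>&=' . $fd;  # parsed as handle name '=5' -> "Bad filehandle"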

Signed-off-by: Fabian Grünbichler 
---
 PVE/VZDump/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index f122539..3a990cf 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -518,7 +518,7 @@ sub archive_vma {
$self->loginfo(join(' ', @$cmd));
 
if ($opts->{stdout}) {
-   $self->cmd($cmd, output => ">&=" . fileno($opts->{stdout}));
+   $self->cmd($cmd, output => ">&" . fileno($opts->{stdout}));
} else {
$self->cmd($cmd);
}
-- 
2.20.1




[pve-devel] [PATCH container] vzdump: use new 'pbs' option

2020-05-06 Thread Fabian Grünbichler
instead of storage config to determine whether we are in 'PBS mode'

Signed-off-by: Fabian Grünbichler 
---

Notes:
requires a break on pve-manager << version setting this option,
since the dependency is the other way round

 src/PVE/VZDump/LXC.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 2d003d0..45a3d8f 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -303,7 +303,7 @@ sub assemble {
 my $firewall ="/etc/pve/firewall/$vmid.fw";
 my $fwconftmp = "$tmpdir/etc/vzdump/pct.fw";
 
-if ($opts->{scfg}->{type} eq 'pbs') {
+if ($self->{vzdump}->{opts}->{pbs}) {
# fixme: do not store pct.conf and fw.conf into $tmpdir
if (-e  $firewall) {
PVE::Tools::file_copy($firewall, $fwconftmp);
@@ -356,7 +356,7 @@ sub archive {
 
 my $userns_cmd = $task->{userns_cmd};
 
-if ($opts->{scfg}->{type} eq 'pbs') {
+if ($self->{vzdump}->{opts}->{pbs}) {
 
my $rootdir = $default_mount_point;
my $param = [];
-- 
2.20.1




[pve-devel] [PATCH qemu-server] vzdump: use new 'pbs' option

2020-05-06 Thread Fabian Grünbichler
instead of storage config to determine whether we are in 'PBS mode'

Signed-off-by: Fabian Grünbichler 
---

Notes:
requires a break on pve-manager << version setting this option,
since the dependency is the other way round.

 PVE/VZDump/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index f122539..1bdc490 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -248,7 +248,7 @@ sub archive {
 my $opts = $self->{vzdump}->{opts};
 my $scfg = $opts->{scfg};
 
-if ($scfg->{type} eq 'pbs') {
+if ($self->{vzdump}->{opts}->{pbs}) {
$self->archive_pbs($task, $vmid);
 } else {
$self->archive_vma($task, $vmid, $filename, $comp);
-- 
2.20.1




[pve-devel] [PATCH manager] vzdump: set 'pbs' option when backing up to PBS target

2020-05-06 Thread Fabian Grünbichler
this unifies the logic into a single place instead of all over this
module and the plugins.

it also fixes tons of 'uninitialized value' warnings when backing up
with --dumpdir but no --storage set, since the existing conditions for
PBS targets are missing a definedness check.
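
the warning source, in miniature (sketch):

    my $opts = { dumpdir => '/tmp/dump' };   # --dumpdir, no --storage -> no scfg
    if ($opts->{scfg}->{type} eq 'pbs') { }  # warns: uninitialized value in string eq
    if ($opts->{pbs}) { }                    # clean boolean check instead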

Signed-off-by: Fabian Grünbichler 
---

Notes:
this commit alone does not break anything, but since the plugins in
qemu-server and pve-container can't have a versioned depends on pve-manager,
we need to break the old versions of pve-manager in those two packages to
ensure we get a version setting the now required option.

 PVE/VZDump.pm | 20 +++-
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 6fd3aeed..623c6f8c 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -86,6 +86,7 @@ sub storage_info {
return {
scfg => $scfg,
maxfiles => $scfg->{maxfiles},
+   pbs => 1,
};
 } else {
return {
@@ -459,6 +460,7 @@ sub new {
if ($@);
$opts->{dumpdir} = $info->{dumpdir};
$opts->{scfg} = $info->{scfg};
+   $opts->{pbs} = $info->{pbs};
$maxfiles //= $info->{maxfiles};
 } elsif ($opts->{dumpdir}) {
$errors .= "dumpdir '$opts->{dumpdir}' does not exist"
@@ -651,7 +653,7 @@ sub exec_backup_task {
 my $pbs_group_name;
 my $pbs_snapshot_name;
 
-if ($opts->{scfg}->{type} eq 'pbs') {
+if ($self->{opts}->{pbs}) {
if ($vmtype eq 'lxc') {
$pbs_group_name = "ct/$vmid";
} elsif  ($vmtype eq 'qemu') {
@@ -697,7 +699,7 @@ sub exec_backup_task {
 
if ($maxfiles && !$opts->{remove}) {
my $count;
-   if ($opts->{scfg}->{type} eq 'pbs') {
+   if ($self->{opts}->{pbs}) {
my $res = PVE::Storage::PBSPlugin::run_client_cmd($opts->{scfg}, $opts->{storage}, 'snapshots', $pbs_group_name);
$count = scalar(@$res);
} else {
@@ -710,7 +712,7 @@ sub exec_backup_task {
if $count >= $maxfiles;
}
 
-   if ($opts->{scfg}->{type} ne 'pbs') {
+   if (!$self->{opts}->{pbs}) {
$task->{logfile} = "$opts->{dumpdir}/$basename.log";
}
 
@@ -720,7 +722,7 @@ sub exec_backup_task {
$ext .= ".${comp_ext}";
}
 
-   if ($opts->{scfg}->{type} eq 'pbs') {
+   if ($self->{opts}->{pbs}) {
die "unable to pipe backup to stdout\n" if $opts->{stdout};
} else {
if ($opts->{stdout}) {
@@ -735,7 +737,7 @@ sub exec_backup_task {
 
$task->{vmtype} = $vmtype;
 
-   if ($opts->{scfg}->{type} eq 'pbs') {
+   if ($self->{opts}->{pbs}) {
$task->{tmpdir} = "/var/tmp/vzdumptmp$$"; #fixme
} elsif ($opts->{tmpdir}) {
$task->{tmpdir} = "$opts->{tmpdir}/vzdumptmp$$";
@@ -898,14 +900,14 @@ sub exec_backup_task {
}
 
# fixme: ??
-   if ($opts->{scfg}->{type} eq 'pbs') {
+   if ($self->{opts}->{pbs}) {
debugmsg ('info', "creating pbs archive on storage 
'$opts->{storage}'", $logfd);
} else {
debugmsg ('info', "creating archive '$task->{tarfile}'", $logfd);
}
$plugin->archive($task, $vmid, $task->{tmptar}, $comp);
 
-   if ($opts->{scfg}->{type} eq 'pbs') {
+   if ($self->{opts}->{pbs}) {
# fixme: log size ?
debugmsg ('info', "pbs upload finished", $logfd);
} else {
@@ -921,7 +923,7 @@ sub exec_backup_task {
# purge older backup
if ($maxfiles && $opts->{remove}) {
 
-   if ($opts->{scfg}->{type} eq 'pbs') {
+   if ($self->{opts}->{pbs}) {
my $args = [$pbs_group_name, '--keep-last', $maxfiles];
my $logfunc = sub { my $line = shift; debugmsg ('info', $line, $logfd); };
PVE::Storage::PBSPlugin::run_raw_client_cmd(
@@ -1012,7 +1014,7 @@ sub exec_backup_task {
 close ($logfd) if $logfd;
 
 if ($task->{tmplog}) {
-   if ($opts->{scfg}->{type} eq 'pbs') {
+   if ($self->{opts}->{pbs}) {
if ($task->{state} eq 'ok') {
my $param = [$pbs_snapshot_name, $task->{tmplog}];
PVE::Storage::PBSPlugin::run_raw_client_cmd(
-- 
2.20.1




[pve-devel] [PATCH RESEND qemu-server] migrate: cleanup forwarding code

2020-05-05 Thread Fabian Grünbichler
fixing the following two issues:
- the legacy code path was never converted to the new fork_tunnel
signature (which probably means that nothing triggers it in practice
anymore?)
- the NBD Unix socket got forwarded multiple times if more than one disk
was migrated via NBD (this is harmless, but wrong)

for the second issue I opted to keep the code compatible with the
possibility that Qemu starts supporting multiple NBD servers in the
future (and the target node could thus return multiple UNIX socket
paths). currently we can only start one NBD server on one socket, and
each drive-mirror simply starts a new connection over that single
socket.

I took the liberty of renaming the variables/keys since I found
'tunnel_addr' and 'sock_addr' rather confusing.
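
the effect of the change, in miniature (sketch): collecting socket paths as
hash keys means each one is forwarded exactly once, no matter how many disks
share it:

    my $unix_socket_info = {};
    my @nbd_addrs = ('/run/qemu-server/123_nbd.migrate') x 2; # two disks, one socket
    $unix_socket_info->{$_} = 1 for @nbd_addrs;               # duplicates collapse
    my @forwards = map { "$_:$_" } sort keys %$unix_socket_info;
    # yields a single '-L <sock>:<sock>' for the ssh tunnel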

Reviewed-By: Mira Limbeck 
Tested-By: Mira Limbeck 

Signed-off-by: Fabian Grünbichler 
---
Resend for nbdstop context change

 PVE/QemuMigrate.pm | 41 ++---
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index fa5df9b..b729940 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -146,10 +146,10 @@ sub write_tunnel {
 }
 
 sub fork_tunnel {
-my ($self, $tunnel_addr) = @_;
+my ($self, $ssh_forward_info) = @_;
 
 my @localtunnelinfo = ();
-foreach my $addr (@$tunnel_addr) {
+foreach my $addr (@$ssh_forward_info) {
push @localtunnelinfo, '-L', $addr;
 }
 
@@ -191,9 +191,9 @@ sub finish_tunnel {
 
 $self->finish_command_pipe($tunnel, 30);
 
-if ($tunnel->{sock_addr}) {
+if (my $unix_sockets = $tunnel->{unix_sockets}) {
# ssh does not clean up on local host
-   my $cmd = ['rm', '-f', @{$tunnel->{sock_addr}}]; #
+   my $cmd = ['rm', '-f', @$unix_sockets];
PVE::Tools::run_command($cmd);
 
# .. and just to be sure check on remote side
@@ -707,8 +707,7 @@ sub phase2 {
 }
 
 my $spice_port;
-my $tunnel_addr = [];
-my $sock_addr = [];
+my $unix_socket_info = {};
 # version > 0 for unix socket support
 my $nbd_protocol_version = 1;
 # TODO change to 'spice_ticket: \n' in 7.0
@@ -771,8 +770,7 @@ sub phase2 {
$self->{stopnbd} = 1;
$self->{target_drive}->{$targetdrive}->{drivestr} = $drivestr;
$self->{target_drive}->{$targetdrive}->{nbd_uri} = $nbd_uri;
-   push @$tunnel_addr, "$nbd_unix_addr:$nbd_unix_addr";
-   push @$sock_addr, $nbd_unix_addr;
+   $unix_socket_info->{$nbd_unix_addr} = 1;
} elsif ($line =~ m/^re-using replicated volume: (\S+) - (.*)$/) {
my $drive = $1;
my $volid = $2;
@@ -798,22 +796,28 @@ sub phase2 {
 if ($migration_type eq 'secure') {
 
if ($ruri =~ /^unix:/) {
-   unlink $raddr;
-   push @$tunnel_addr, "$raddr:$raddr";
-   $self->{tunnel} = $self->fork_tunnel($tunnel_addr);
-   push @$sock_addr, $raddr;
+   my $ssh_forward_info = ["$raddr:$raddr"];
+   $unix_socket_info->{$raddr} = 1;
+
+   my $unix_sockets = [ keys %$unix_socket_info ];
+   for my $sock (@$unix_sockets) {
+   push @$ssh_forward_info, "$sock:$sock";
+   unlink $sock;
+   }
+
+   $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
 
my $unix_socket_try = 0; # wait for the socket to become ready
while ($unix_socket_try <= 100) {
$unix_socket_try++;
my $available = 0;
-   foreach my $sock (@$sock_addr) {
+   foreach my $sock (@$unix_sockets) {
if (-S $sock) {
$available++;
}
}
 
-   if ($available == @$sock_addr) {
+   if ($available == @$unix_sockets) {
last;
}
 
@@ -824,17 +828,18 @@ sub phase2 {
$self->finish_tunnel($self->{tunnel});
die "Timeout, migration socket $ruri did not get ready";
}
+   $self->{tunnel}->{unix_sockets} = $unix_sockets if (@$unix_sockets);
 
} elsif ($ruri =~ /^tcp:/) {
-   my $tunnel_addr;
+   my $ssh_forward_info = [];
if ($raddr eq "localhost") {
# for backwards compatibility with older qemu-server versions
my $pfamily = PVE::Tools::get_host_address_family($nodename);
my $lport = PVE::Tools::next_migrate_port($pfamily);
-   $tunnel_addr = "$lport:localhost:$rport";
+   push @$ssh_forward_info, "$lport:localhost:$rport";
}
 
-   $self->{tunnel} = $self->fork_tunnel($tunnel_addr);
+   $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
 
} else {
die &q

[pve-devel] applied-series: [PATCH v2 container 01/10] LXC: drop unused imported locking functions

2020-05-05 Thread Fabian Grünbichler
with breaks/versioned-depends ;)

On May 5, 2020 10:27 am, Fabian Ebner wrote:
> From: Fabian Grünbichler 
> 
> Signed-off-by: Fabian Grünbichler 
> Tested-by: Fabian Ebner 
> ---
> 
> Changes from v1:
> * Add patch for container create_vm issue
> * Add patch for snapshot_rollback issue
> * Dropped the two already applied patches for qemu-server
> 
>  src/PVE/LXC.pm | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
> index efbd1d9..4eb1a14 100644
> --- a/src/PVE/LXC.pm
> +++ b/src/PVE/LXC.pm
> @@ -20,8 +20,8 @@ use PVE::SafeSyslog;
>  use PVE::INotify;
>  use PVE::JSONSchema qw(get_standard_option);
>  use PVE::Tools qw(
> -dir_glob_foreach file_get_contents file_set_contents lock_file
> -lock_file_full AT_FDCWD O_PATH $IPV4RE $IPV6RE
> +dir_glob_foreach file_get_contents file_set_contents
> +AT_FDCWD O_PATH $IPV4RE $IPV6RE
>  );
>  use PVE::CpuSet;
>  use PVE::Network;
> -- 
> 2.20.1
> 
> 



[pve-devel] applied-series: [PATCH storage v5 00/17] Fix: #2124 zstd

2020-05-04 Thread Fabian Grünbichler
with an additional commit adding '--rsyncable'.

thanks!

On April 28, 2020 3:58 pm, Alwin Antreich wrote:
> Zstandard (zstd) [0] is a data compression algorithm, in addition to
> gzip, lzo for our backup/restore. It can utilize multiple core CPUs. But
> by default it has one compression and one writer thread.
> 
> 
> Here some quick tests I made on my workstation. The files where placed
> on a ram disk. And with dd filled from /dev/urandom and /dev/zero.
> 
> __Compression__
> file size: 1073741824 bytes
>             = urandom =           = zero =
>   995ms  1073766414     328ms      98192  zstd -k
>   732ms  1073766414     295ms      98192  zstd -k -T4
>   906ms  1073791036     562ms    4894779  lzop -k
> 31992ms  1073915558    5594ms    1042087  gzip -k
> 30832ms  1074069541    5776ms    1171491  pigz -k -p 1
>  7814ms  1074069541    1567ms    1171491  pigz -k -p 4
> 
> __Decompression__
> file size: 1073741824 bytes
> = urandom =   = zero =
>712ms  869ms  zstd -d
>685ms  872ms  zstd -k -d -T4
>841ms 2462ms  lzop -d
>   5417ms 4754ms  gzip -k -d
>   1248ms 3118ms  pigz -k -d -p 1
>   1236ms 2379ms  pigz -k -d -p 4
> 
> 
> And I used the same ramdisk to move a VM onto it and run a quick
> backup/restore.
> 
> __vzdump backup__
> INFO: transferred 34359 MB in 69 seconds (497 MB/s) zstd -T1
> INFO: transferred 34359 MB in 37 seconds (928 MB/s) zstd -T4
> INFO: transferred 34359 MB in 51 seconds (673 MB/s) lzo
> INFO: transferred 34359 MB in 1083 seconds (31 MB/s) gzip
> INFO: transferred 34359 MB in 241 seconds (142 MB/s) pigz -n 4
> 
> __qmrestore__
> progress 100% (read 34359738368 bytes, duration 36 sec)
> total bytes read 34359738368, sparse bytes 8005484544 (23.3%) zstd -d -T4
> 
> progress 100% (read 34359738368 bytes, duration 38 sec)
> total bytes read 34359738368, sparse bytes 8005484544 (23.3%) lzo
> 
> progress 100% (read 34359738368 bytes, duration 175 sec)
> total bytes read 34359738368, sparse bytes 8005484544 (23.3%) pigz -n 4
> 
> 
> v4 -> v5:
> * fixup, use File::stat directly without overwriting CORE::stat,
>   thanks Dietmar for pointing this out
>   https://pve.proxmox.com/pipermail/pve-devel/2020-April/043134.html
> * rebase to current master
> 
> v3 -> v4:
> * fixed styling issues discovered by f.ebner (thanks)
> * incorporated tests of d.jaeger into patches
> * added fixes discovered by tests (f.ebner thanks)
> 
> v2 -> v3:
> * split archive_info into decompressor_info and archive_info
> * "compact" regex pattern is now a constant and used in
>   multiple modules
> * added tests for regex matching
> * bug fix for ctime of backup files
> 
> v1 -> v2:
> * factored out the decompressor info first, as Thomas suggested
> * made the regex pattern of backup files more compact, easier to
>   read (hopefully)
> * less code changes for container restores
> 
> Thanks for any comment or suggestion in advance.
> 
> [0] https://facebook.github.io/zstd/
> 
> Alwin Antreich (17):
> __pve-storage__
>   storage: test: split archive format/compressor
>   storage: replace build-in stat with File::stat
>   test: parse_volname
>   test: list_volumes
>   Fix: backup: ctime was from stat not file name
>   test: path_to_volume_id
>   Fix: path_to_volume_id returned wrong content
>   Fix: add missing snippets subdir
>   backup: compact regex for backup file filter
>   Fix: #2124 storage: add zstd support
>   test: get_subdir
>   test: filesystem_path
> 
>  test/Makefile  |   5 +-
>  PVE/Diskmanage.pm  |   9 +-
>  PVE/Storage.pm |  91 --
>  PVE/Storage/Plugin.pm  |  47 ++-
>  test/archive_info_test.pm  | 127 
>  test/filesystem_path_test.pm   |  91 ++
>  test/get_subdir_test.pm|  44 +++
>  test/list_volumes_test.pm  | 537 +
>  test/parse_volname_test.pm | 253 
>  test/path_to_volume_id_test.pm | 274 +
>  test/run_plugin_tests.pl   |  18 ++
>  11 files changed, 1440 insertions(+), 56 deletions(-)
>  create mode 100644 test/archive_info_test.pm
>  create mode 100644 test/filesystem_path_test.pm
>  create mode 100644 test/get_subdir_test.pm
>  create mode 100644 test/list_volumes_test.pm
>  create mode 100644 test/parse_volname_test.pm
>  create mode 100644 test/path_to_volume_id_test.pm
>  create mode 100755 test/run_plugin_tests.pl
> 
> 
> __guest_common__
>   Fix: #2124 add zstd support
> 
>  PVE/VZDump/Common.pm | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> 
> __qemu-server__
>   restore: replace archive format/compression
>   Fix #2124: Add support for zstd
> 
>  PVE/QemuServer.pm | 38 +++---
>  1 file changed, 7 insertions(+), 31 deletions(-)
> 
> 
> __pve-container__
>   Fix: #2124 add zstd
> 
>  src/PVE/LXC/Create.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> 
> __pve-manager__
>   Fix #2124: Add support for zstd
> 
>  PVE/VZDump.pm  

Re: [pve-devel] [PATCH manager 2/3] Allow setting targetstorage for offline migration

2020-05-04 Thread Fabian Grünbichler
On May 4, 2020 11:20 am, Tim Marx wrote:
> 
>> Fabian Ebner  hat am 4. Mai 2020 09:26 geschrieben:
>> 
>>  
>> On 5/2/20 11:40 AM, Alexandre DERUMIER wrote:
>> >>> The problem is that offline migration with target storage might not
>> >>> always work depending on supported export/import formats. Then users
>> >>> might start an offline migration, which then fails after a few disks
>> >>> have already been copied.
>> > 
>> > Hi,
>> > 
>> > Technically, it could be possible to support any kind of src/dst format 
>> > for offline migration,
>> > with creating an nbd server on target (with qemu-nbd for example).
>> > 
>> > Maybe it could be implemented as fallback, if any other method exist ?
>> > 
>> > 
>> 
>> Yes, that would be nice to have. Now I think that making the target 
>> storage option for offline migration available via GUI should wait until 
>> we either:
>> 1. have a way to error out early
>> or even better:
>> 2. can support all possible exports/imports with such a general fallback 
>> method.
>> 
 
> Couldn't we check this in the precondition api call or do I miss something?

precondition is not run in a worker, so we can't query ALL storages for 
ALL volumes, like the migration does.



[pve-devel] [PATCH manager] NodeConfig: ensure locked context has current view

2020-04-30 Thread Fabian Grünbichler
similar to the recent changes for pve-guest-common - we start each API
call with a cfs_update, but while we were waiting for the flock another
R-M-W cycle might have happened, so we need to refresh after obtaining
the lock.

Signed-off-by: Fabian Grünbichler 
---

Notes:
there's only a single API call using this, so it's pretty straight-forward.

patch generated on-top of ACME patch set

 PVE/NodeConfig.pm | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/PVE/NodeConfig.pm b/PVE/NodeConfig.pm
index a0921558..6a95d7aa 100644
--- a/PVE/NodeConfig.pm
+++ b/PVE/NodeConfig.pm
@@ -50,7 +50,13 @@ sub write_config {
 }
 
 sub lock_config {
-my ($node, $code, @param) = @_;
+my ($node, $realcode, @param) = @_;
+
+# make sure configuration file is up-to-date
+my $code = sub {
+   PVE::Cluster::cfs_update();
+   $realcode->(@_);
+};
 
 my $res = lock_file($node_config_lock, 10, $code, @param);
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server] migrate: don't accidentally take NBD code paths

2020-04-30 Thread Fabian Grünbichler
by avoiding auto-vivification of $self->{online_local_volumes} via
iteration. most code paths don't care whether it's undef or a reference
to an empty list, but this caused the (already) fixed bug of calling
nbd_stop without having started an NBD server in the first place.

Signed-off-by: Fabian Grünbichler 
---
This technically makes the previous NBD stop related patches no longer
needed - but let's keep them until we clean up the whole volume handling
properly to avoid falling into this trap again.
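
the underlying Perl gotcha, distilled (runnable sketch):

    my $self = {};
    foreach my $volid (@{$self->{online_local_volumes}}) {
        # never runs ...
    }
    # ... but dereferencing inside foreach is lvalue context, so the key
    # now exists as a reference to an empty array:
    print exists $self->{online_local_volumes} ? "vivified\n" : "clean\n";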

 PVE/QemuMigrate.pm | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 7644922..d9b104c 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -713,10 +713,14 @@ sub phase2 {
 $input .= "nbd_protocol_version: $nbd_protocol_version\n";
 
 my $number_of_online_replicated_volumes = 0;
-foreach my $volid (@{$self->{online_local_volumes}}) {
-   next if !$self->{replicated_volumes}->{$volid};
-   $number_of_online_replicated_volumes++;
-   $input .= "replicated_volume: $volid\n";
+
+# prevent auto-vivification
+if ($self->{online_local_volumes}) {
+   foreach my $volid (@{$self->{online_local_volumes}}) {
+   next if !$self->{replicated_volumes}->{$volid};
+   $number_of_online_replicated_volumes++;
+   $input .= "replicated_volume: $volid\n";
+   }
 }
 
 my $target_replicated_volumes = {};
-- 
2.20.1




Re: [pve-devel] [PATCH container] create_vm: fix order of config creation/reading/locking

2020-04-30 Thread Fabian Grünbichler
On April 29, 2020 11:58 am, Fabian Ebner wrote:
> The update_pct_config call leads to a write_config call and so the
> configuration file was created before it was intended to be created.
> 
> When the CFS is updated in between the write_config call and the
> PVE::Cluster::check_vmid_unused call in create_and_lock_config,
> the container file would already exist and so creation would
> fail after writing out a basically empty config.
> (Meaning this is necessary for the proposed "update CFS after
> obtaining a configuration file lock" changes)
> 
> Even worse, a race was possible for two containers created with the
> same ID at the same time:
> Assuming the initial PVE::Cluster::check_vmid_unused check in the
> parameter verification passes for both create_vm calls, the later one
> would potentially overwrite the earlier configuration file with its
> update_pct_config call.
> 
> Additionally, the file read for $old_config was always the one written
> by update_pct_config. Meaning that for a create_vm call with force=1,
> already existing old volumes were not removed.
> 
> Signed-off-by: Fabian Ebner 

hmm, update_pct_config there was not intended to write out the config, 
but just to fill the $conf hash.

three alternative approaches that would return to the original semantics:
1 skip hotplug/apply pending if pending section is empty (should always 
  be the case in this code path)
2 add explicit parameter to skip
3 drop the hotplug/apply pending calls in update_pct_config, add them to 
the update_vm API endpoint instead.

I'd prefer 3 - 1 - 2 in that order ;)

3 would be called with the same semantics in both call sites (create_vm 
and update_vm): here are add/delete/revert parameters, here's a conf 
hash; merge those operating solely on the conf hash.
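
option 1, for instance, would roughly be a one-line guard in front of the
hotplug/apply-pending step (sketch - the exact internals may differ):

    my $pending = $conf->{pending} // {};
    return if !%$pending; # nothing to apply in the create_vm path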

> ---
> 
> Note that this should be applied before [0] to avoid temporary
> (further ;)) breakage of container creation.
> 
> [0]: https://pve.proxmox.com/pipermail/pve-devel/2020-April/043155.html
> 
>  src/PVE/API2/LXC.pm | 9 -
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
> index 148ba6a..46d2fd7 100644
> --- a/src/PVE/API2/LXC.pm
> +++ b/src/PVE/API2/LXC.pm
> @@ -334,10 +334,6 @@ __PACKAGE__->register_method({
>   # check/activate default storage
>   &$check_and_activate_storage($storage) if !defined($mp_param->{rootfs});
>  
> - PVE::LXC::Config->update_pct_config($vmid, $conf, 0, $no_disk_param);
> -
> - $conf->{unprivileged} = 1 if $unprivileged;
> -
>   my $emsg = $restore ? "unable to restore CT $vmid -" : "unable to 
> create CT $vmid -";
>  
>   eval { PVE::LXC::Config->create_and_lock_config($vmid, $force) };
> @@ -346,8 +342,11 @@ __PACKAGE__->register_method({
>   my $code = sub {
>   my $old_conf = PVE::LXC::Config->load_config($vmid);
>   my $was_template;
> -
>   my $vollist = [];
> +
> + PVE::LXC::Config->update_pct_config($vmid, $conf, 0, 
> $no_disk_param);
> + $conf->{unprivileged} = 1 if $unprivileged;
> +
>   eval {
>   my $orig_mp_param; # only used if $restore
>   if ($restore) {
> -- 
> 2.20.1
> 
> 
> 
> 



Re: [pve-devel] applied: [PATCH qemu-server 2/2] qm nbdstop: cope graceful with errors

2020-04-30 Thread Fabian Grünbichler
On April 29, 2020 4:24 pm, Thomas Lamprecht wrote:
> as the nbd server could have been stopped by something else.
> Further, it makes no sense to die and mark the migration thus as
> failed, just because of a NBD server stop issue.
> 
> At this point the migration hand off to the target was done already,
> so normally we're good, if it fails we have other (followup) problems
> anyway.

the second paragraph is not really true, nbd_stop is part of the 
hand off, not afterwards - the sequence is

- migration converges (source VM suspends)
- block mirror converges
- block mirror gets completed via cancel (NBD client connection closed)
- nbd stop on target (to release write blocker)
- resume on target (needs to obtain write blocker)

so if NBD was stopped earlier already it is very likely that something 
fishy is going on. we've passed the point of no return and can't go 
back to the source VM (especially not when we don't know in which error 
state we are ;)), so we might as well try to go ahead and attempt to 
resume, but I'd assume that the most likely reason for nbd_stop to fail 
is that the qemu process has crashed already (or a re-introduction of 
the bug from your first patch where we mistakenly assume an NBD server is 
running but haven't actually started one ;)).

so I guess dying here would actually be the safer choice (it's 
unlikely we'll add much stuff on the source side between the nbd_stop 
and the resume calls since it's THE critical section of the whole 
migration process, but if we do, that might operate under the assumption 
of "every remote operation has worked or has died"). alternatively, move 
the eval to the other side to at least make it explicit that we ignore 
failure of nbd_stop and proceed anyway?

(this would all be nicer if we had a stateful mtunnel on the other end 
that would just know whether it started an NBD server or not :-P)
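
for illustration, the "move the eval to the other side" variant would look
roughly like this in phase3_cleanup (sketch, untested):

    my $cmd = [@{$self->{rem_ssh}}, 'qm', 'nbdstop', $vmid];
    eval { PVE::Tools::run_command($cmd) }; # explicitly ignore nbd_stop failure
    $self->log('warn', "stopping NBD server failed - $@") if $@;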

> 
> Signed-off-by: Thomas Lamprecht 
> ---
>  PVE/CLI/qm.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index f99d401..282fa86 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -274,7 +274,8 @@ __PACKAGE__->register_method ({
>  
>   my $vmid = $param->{vmid};
>  
> - PVE::QemuServer::nbd_stop($vmid);
> + eval { PVE::QemuServer::nbd_stop($vmid) };
> + warn $@ if $@;
>  
>   return undef;
>  }});
> -- 
> 2.20.1
> 
> 
> 
> 



Re: [pve-devel] applied: [PATCH qemu-server 1/2] migrate: only stop NBD if we got a NBD url from the target

2020-04-30 Thread Fabian Grünbichler
On April 29, 2020 4:24 pm, Thomas Lamprecht wrote:
> Signed-off-by: Thomas Lamprecht 
> ---
> 
> This was rather quickly assembled to fix an obvious issue, some in depth look
> at this would be nice, @Fabi or @Fabian :)

LGTM!

> 
>  PVE/QemuMigrate.pm | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 7472e9d..7644922 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -750,6 +750,7 @@ sub phase2 {
>   my $targetdrive = $3;
>   $targetdrive =~ s/drive-//g;
>  
> + $self->{stopnbd} = 1;
>   $self->{target_drive}->{$targetdrive}->{drivestr} = $drivestr;
>   $self->{target_drive}->{$targetdrive}->{nbd_uri} = $nbd_uri;
>   } elsif ($line =~ m!^storage migration listens on nbd:unix:(/run/qemu-server/(\d+)_nbd\.migrate):exportname=(\S+) volume:(\S+)$!) {
> @@ -760,6 +761,7 @@ sub phase2 {
>   my $targetdrive = $3;
>   $targetdrive =~ s/drive-//g;
>  
> + $self->{stopnbd} = 1;
>   $self->{target_drive}->{$targetdrive}->{drivestr} = $drivestr;
>   $self->{target_drive}->{$targetdrive}->{nbd_uri} = $nbd_uri;
>   push @$tunnel_addr, "$nbd_unix_addr:$nbd_unix_addr";
> @@ -1177,7 +1179,8 @@ sub phase3_cleanup {
>  $self->switch_replication_job_target() if $self->{replicated_volumes};
>  
>  if ($self->{livemigration}) {
> - if ($self->{storage_migration}) {
> + if ($self->{stopnbd}) {
> + $self->log('info', "stopping NBD storage migration server on 
> target.");
>   # stop nbd server on remote vm - requirement for resume since 2.9
>   my $cmd = [@{$self->{rem_ssh}}, 'qm', 'nbdstop', $vmid];
>  
> -- 
> 2.20.1
> 
> 
> 
> 



[pve-devel] [PATCH firewall 3/7] api: lock configs

2020-04-29 Thread Fabian Grünbichler
wherever we have a r-m-w cycle.

Signed-off-by: Fabian Grünbichler 
---

Notes:
best viewed with -w

 src/PVE/API2/Firewall/Aliases.pm |  80 +---
 src/PVE/API2/Firewall/Cluster.pm |  36 
 src/PVE/API2/Firewall/Groups.pm  |  52 ++-
 src/PVE/API2/Firewall/Host.pm|  42 +
 src/PVE/API2/Firewall/IPSet.pm   | 152 ++-
 src/PVE/API2/Firewall/Rules.pm   |  94 +++
 src/PVE/API2/Firewall/VM.pm  |  43 -
 7 files changed, 280 insertions(+), 219 deletions(-)

diff --git a/src/PVE/API2/Firewall/Aliases.pm b/src/PVE/API2/Firewall/Aliases.pm
index 9ea6f70..33ac669 100644
--- a/src/PVE/API2/Firewall/Aliases.pm
+++ b/src/PVE/API2/Firewall/Aliases.pm
@@ -143,19 +143,23 @@ sub register_create_alias {
code => sub {
my ($param) = @_;
 
-   my ($fw_conf, $aliases) = $class->load_config($param);
+   $class->lock_config($param, sub {
+   my ($param) = @_;
 
-   my $name = lc($param->{name});
+   my ($fw_conf, $aliases) = $class->load_config($param);
 
-   raise_param_exc({ name => "alias '$param->{name}' already exists" })
-   if defined($aliases->{$name});
+   my $name = lc($param->{name});
 
-   my $data = { name => $param->{name}, cidr => $param->{cidr} };
-   $data->{comment} = $param->{comment} if $param->{comment};
+   raise_param_exc({ name => "alias '$param->{name}' already 
exists" })
+   if defined($aliases->{$name});
 
-   $aliases->{$name} = $data;
+   my $data = { name => $param->{name}, cidr => $param->{cidr} };
+   $data->{comment} = $param->{comment} if $param->{comment};
 
-   $class->save_aliases($param, $fw_conf, $aliases);
+   $aliases->{$name} = $data;
+
+   $class->save_aliases($param, $fw_conf, $aliases);
+   });
 
return undef;
}});
@@ -219,35 +223,39 @@ sub register_update_alias {
code => sub {
my ($param) = @_;
 
-   my ($fw_conf, $aliases) = $class->load_config($param);
+   $class->lock_config($param, sub {
+   my ($param) = @_;
 
-   my $list = &$aliases_to_list($aliases);
+   my ($fw_conf, $aliases) = $class->load_config($param);
 
-   my (undef, $digest) = PVE::Firewall::copy_list_with_digest($list);
+   my $list = &$aliases_to_list($aliases);
 
-   PVE::Tools::assert_if_modified($digest, $param->{digest});
+   my (undef, $digest) = PVE::Firewall::copy_list_with_digest($list);
 
-   my $name = lc($param->{name});
+   PVE::Tools::assert_if_modified($digest, $param->{digest});
 
-   raise_param_exc({ name => "no such alias" }) if !$aliases->{$name};
+   my $name = lc($param->{name});
 
-   my $data = { name => $param->{name}, cidr => $param->{cidr} };
-   $data->{comment} = $param->{comment} if $param->{comment};
+   raise_param_exc({ name => "no such alias" }) if 
!$aliases->{$name};
 
-   $aliases->{$name} = $data;
+   my $data = { name => $param->{name}, cidr => $param->{cidr} };
+   $data->{comment} = $param->{comment} if $param->{comment};
 
-   my $rename = $param->{rename};
-   $rename = lc($rename) if $rename;
+   $aliases->{$name} = $data;
 
-   if ($rename && ($name ne $rename)) {
-   raise_param_exc({ name => "alias '$param->{rename}' already 
exists" })
-   if defined($aliases->{$rename});
-   $aliases->{$name}->{name} = $param->{rename};
-   $aliases->{$rename} = $aliases->{$name};
-   delete $aliases->{$name};
-   }
+   my $rename = $param->{rename};
+   $rename = lc($rename) if $rename;
 
-   $class->save_aliases($param, $fw_conf, $aliases);
+   if ($rename && ($name ne $rename)) {
+   raise_param_exc({ name => "alias '$param->{rename}' already 
exists" })
+   if defined($aliases->{$rename});
+   $aliases->{$name}->{name} = $param->{rename};
+   $aliases->{$rename} = $aliases->{$name};
+   delete $aliases->{$name};
+   }
+
+   $class->save_aliases($param, $fw_conf, $aliases);
+   });
 
return undef;
}});
@@ -276,16 +284,20 @@ sub register_delete_alias {
code => sub {
my ($param) = @_;
 
-   my ($fw_conf, $aliases) = $cla

[pve-devel] [PATCH firewall 5/7] api/ipsets: parse_cidr before checking for duplicates

2020-04-29 Thread Fabian Grünbichler
for example, the config parser drops a trailing /32 for IPv4, so we
should do the same here.  otherwise we can have one entry for $IP and
one for $IP/32 with different properties until the next R-M-W cycle
drops one of them again.
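
in miniature (sketch):

    PVE::Firewall::parse_ip_or_cidr('192.168.0.1/32'); # -> '192.168.0.1'
    # so an add of '192.168.0.1/32' now collides with an existing
    # '192.168.0.1' entry instead of slipping in as a duplicate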

Signed-off-by: Fabian Grünbichler 
---
 src/PVE/API2/Firewall/IPSet.pm | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/src/PVE/API2/Firewall/IPSet.pm b/src/PVE/API2/Firewall/IPSet.pm
index 913dd86..ec9326f 100644
--- a/src/PVE/API2/Firewall/IPSet.pm
+++ b/src/PVE/API2/Firewall/IPSet.pm
@@ -195,6 +195,13 @@ sub register_create_ip {
my ($cluster_conf, $fw_conf, $ipset) = $class->load_config($param);
 
my $cidr = $param->{cidr};
+   if ($cidr =~ m/^${PVE::Firewall::ip_alias_pattern}$/) {
+   # make sure alias exists (if $cidr is an alias)
PVE::Firewall::resolve_alias($cluster_conf, $fw_conf, $cidr);
+   } else {
+   # normalize like config parser, otherwise duplicates might slip through
+   $cidr = PVE::Firewall::parse_ip_or_cidr($cidr);
+   }
 
foreach my $entry (@$ipset) {
raise_param_exc({ cidr => "address '$cidr' already exists" 
})
@@ -204,9 +211,6 @@ sub register_create_ip {
raise_param_exc({ cidr => "a zero prefix is not allowed in 
ipset entries" })
if $cidr =~ m!/0+$!;
 
-   # make sure alias exists (if $cidr is an alias)
-   PVE::Firewall::resolve_alias($cluster_conf, $fw_conf, $cidr)
-   if $cidr =~ m/^${PVE::Firewall::ip_alias_pattern}$/;
 
my $data = { cidr => $cidr };
 
-- 
2.20.1




[pve-devel] [PATCH firewall 2/7] api: add locking helpers

2020-04-29 Thread Fabian Grünbichler
for ipset, rules and alias API generation modules.

Signed-off-by: Fabian Grünbichler 
---

Notes:
separated from using them for easier reviewing

 src/PVE/API2/Firewall/Aliases.pm | 24 
 src/PVE/API2/Firewall/IPSet.pm   | 48 
 src/PVE/API2/Firewall/Rules.pm   | 36 
 3 files changed, 108 insertions(+)

diff --git a/src/PVE/API2/Firewall/Aliases.pm b/src/PVE/API2/Firewall/Aliases.pm
index 2a66abd..9ea6f70 100644
--- a/src/PVE/API2/Firewall/Aliases.pm
+++ b/src/PVE/API2/Firewall/Aliases.pm
@@ -25,6 +25,12 @@ my $api_properties = {
 },
 };
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+die "implement this in subclass";
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -308,6 +314,12 @@ sub rule_env {
 return 'cluster';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_clusterfw_conf(10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -345,6 +357,12 @@ __PACKAGE__->additional_parameters({
 vmid => get_standard_option('pve-vmid'),
 });
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_vmfw_conf($param->{vmid}, 10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -383,6 +401,12 @@ __PACKAGE__->additional_parameters({
 vmid => get_standard_option('pve-vmid'),
 });
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_vmfw_conf($param->{vmid}, 10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
diff --git a/src/PVE/API2/Firewall/IPSet.pm b/src/PVE/API2/Firewall/IPSet.pm
index e59a6f2..72e7524 100644
--- a/src/PVE/API2/Firewall/IPSet.pm
+++ b/src/PVE/API2/Firewall/IPSet.pm
@@ -25,6 +25,12 @@ my $api_properties = {
 },
 };
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+die "implement this in subclass";
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -353,6 +359,12 @@ sub rule_env {
 return 'cluster';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_clusterfw_conf(10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -390,6 +402,12 @@ __PACKAGE__->additional_parameters({
 vmid => get_standard_option('pve-vmid'),
 });
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_vmfw_conf($param->{vmid}, 10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -428,6 +446,12 @@ __PACKAGE__->additional_parameters({
 vmid => get_standard_option('pve-vmid'),
 });
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_vmfw_conf($param->{vmid}, 10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -457,6 +481,12 @@ use PVE::Firewall;
 
 use base qw(PVE::RESTHandler);
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+die "implement this in subclass";
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -638,6 +668,12 @@ sub rule_env {
 return 'cluster';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_clusterfw_conf(10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -680,6 +716,12 @@ sub rule_env {
 return 'vm';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_vmfw_conf($param->{vmid}, 10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -723,6 +765,12 @@ sub rule_env {
 return 'ct';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_vmfw_conf($param->{vmid}, 10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
diff --git a/src/PVE/API2/Firewall/Rules.pm b/src/PVE/API2/Firewall/Rules.pm
index 0e93a4a..1fde596 100644
--- a/src/PVE/API2/Firewall/Rules.pm
+++ b/src/PVE/API2/Firewall/Rules.pm
@@ -17,6 +17,12 @@ my $api_properties = {
 },
 };
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+die "implement this in subclass";
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -380,6 +386,12 @@ sub rule_env {
 return 'group';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_clusterfw_conf(10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -446,6 +458,12 @@ sub rule_env {
 return 'cluster';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_clusterfw_conf(10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ -480,6 +498,12 @@ sub rule_env {
 return 'host';
 }
 
+sub lock_config {
+my ($class, $param, $code) = @_;
+
+PVE::Firewall::lock_hostfw_conf(10, $code, $param);
+}
+
 sub load_config {
 my ($class, $param) = @_;
 
@@ 

[pve-devel] [PATCH firewall 6/7] configs: warn about duplicate ipset entries

2020-04-29 Thread Fabian Grünbichler
instead of silently dropping them when writing the config out.

Signed-off-by: Fabian Grünbichler 
---
 src/PVE/Firewall.pm | 15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 7b26ac5..4d86032 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -2897,6 +2897,8 @@ sub generic_fw_config_parser {
 }
 return {} if !$raw;
 
+my $curr_group_keys = {};
+
 my $linenr = 0;
 while ($raw =~ /^\h*(.*?)\h*$/gm) {
my $line = $1;
@@ -2957,6 +2959,8 @@ sub generic_fw_config_parser {
}
 
$res->{$section}->{$group} = [];
+   $curr_group_keys = {};
+
$res->{ipset_comments}->{$group} = decode('utf8', $comment)
if $comment;
next;
@@ -3021,6 +3025,8 @@ sub generic_fw_config_parser {
} else {
$cidr = parse_ip_or_cidr($cidr);
}
+   die "duplicate ipset entry for '$cidr'\n"
+   if defined($curr_group_keys->{$cidr});
};
if (my $err = $@) {
chomp $err;
@@ -3044,6 +3050,7 @@ sub generic_fw_config_parser {
}
 
push @{$res->{$section}->{$group}}, $entry;
+   $curr_group_keys->{$cidr} = 1;
} else {
warn "$prefix: skip line - unknown section\n";
next;
@@ -3221,7 +3228,13 @@ my $format_ipsets = sub {
 
my $nethash = {};
foreach my $entry (@$options) {
-   $nethash->{$entry->{cidr}} = $entry;
+   my $cidr = $entry->{cidr};
+   if (defined($nethash->{$cidr})) {
+   warn "ignoring duplicate ipset entry '$cidr'\n";
+   next;
+   }
+
+   $nethash->{$cidr} = $entry;
}
 
foreach my $cidr (sort keys %$nethash) {
-- 
2.20.1




[pve-devel] [PATCH firewall 7/7] rules: verify referenced security group exists

2020-04-29 Thread Fabian Grünbichler
while this was already handled properly (as empty rules), adding this as
an error makes it much more visible (in the GUI as well).

Signed-off-by: Fabian Grünbichler 
---
 src/PVE/Firewall.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 4d86032..40468e4 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -1623,6 +1623,8 @@ sub verify_rule {
if !$allow_groups;
&$add_error('action', "invalid characters in security group name")
if $action && ($action !~ m/^${security_group_name_pattern}$/);
+   &$add_error('action', "security group '$action' does not exist")
+   if $action && !defined($cluster_conf->{groups}->{$action});
} else {
&$add_error('type', "unknown rule type '$type'");
}
-- 
2.20.1




[pve-devel] [RFC cluster 2/2] cfs_lock: re-raise exceptions

2020-04-29 Thread Fabian Grünbichler
so that API paths that raise an exception while holding a CFS lock
properly propagate that exception to the client, instead of a
stringified version with lock-related noise prepended.

Signed-off-by: Fabian Grünbichler 
---

Notes:
there seems to be nothing that matches on the prefix added by cfs_lock.

some API calls in pve-access-control, pve-storage and Replication would now
benefit from raising (parameter) exceptions instead of a plain death while
having a config file locked (mainly checking for duplicates when adding
entries).
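
to illustrate (sketch, assuming the lock helpers from this series):

    PVE::Firewall::lock_clusterfw_conf(10, sub {
        raise_param_exc({ name => "alias 'foo' already exists" });
    });
    # before: $@ is the plain string "error with cfs lock 'firewall-cluster': ..."
    # after:  $@ is the blessed PVE::Exception, so the client gets a proper 400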

 data/PVE/Cluster.pm | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index b4de989..cce681b 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@ -609,7 +609,13 @@ my $cfs_lock = sub {
 alarm($prev_alarm);
 
 if ($err) {
-$@ = "error with cfs lock '$lockid': $err";
+   if (ref($err) eq 'PVE::Exception') {
+   # re-raise defined exceptions
+   $@ = $err;
+   } else {
+   # add lock info for plain errors 
+   $@ = "error with cfs lock '$lockid': $err";
+   }
 return undef;
 }
 
-- 
2.20.1




[pve-devel] [PATCH firewall 4/7] clone_vmfw_conf: lock new config

2020-04-29 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---

Notes:
best viewed with -w

 src/PVE/Firewall.pm | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index eda39eb..7b26ac5 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -3282,13 +3282,15 @@ sub clone_vmfw_conf {
 my $sourcevm_conffile = "$pvefw_conf_dir/$vmid.fw";
 my $clonevm_conffile = "$pvefw_conf_dir/$newid.fw";
 
-if (-f $clonevm_conffile) {
-   unlink $clonevm_conffile;
-}
-if (-f $sourcevm_conffile) {
-   my $data = PVE::Tools::file_get_contents($sourcevm_conffile);
-   PVE::Tools::file_set_contents($clonevm_conffile, $data);
-}
+lock_vmfw_conf($newid, 10, sub {
+   if (-f $clonevm_conffile) {
+   unlink $clonevm_conffile;
+   }
+   if (-f $sourcevm_conffile) {
+   my $data = PVE::Tools::file_get_contents($sourcevm_conffile);
+   PVE::Tools::file_set_contents($clonevm_conffile, $data);
+   }
+});
 }
 
 sub read_vm_firewall_configs {
-- 
2.20.1




[pve-devel] [PATCH firewall 1/7] configs: add locking helpers

2020-04-29 Thread Fabian Grünbichler
to allow some level of safe concurrent config modification, instead of
the current free-for-all.

Signed-off-by: Fabian Grünbichler 
---

Notes:
requires a pve-cluster that provides cfs_lock_firewall, or switching to
cfs_lock_domain as mentioned in pve-cluster#1

lock_hostfw_conf could also use a node-local lock, but IMHO it's easier to
have the same locking semantics/interface across all three levels (especially
if we do the second patch in pve-cluster).

it's easy enough to switch out though.
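
usage then looks the same on every level that does a r-m-w cycle (sketch):

    PVE::Firewall::lock_vmfw_conf($vmid, 10, sub {
        my $vmfw_conf = PVE::Firewall::load_vmfw_conf($cluster_conf, 'vm', $vmid);
        # ... modify $vmfw_conf under the cfs lock ...
        PVE::Firewall::save_vmfw_conf($vmid, $vmfw_conf);
    });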

 src/PVE/Firewall.pm | 32 
 1 file changed, 32 insertions(+)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index d22b15a..eda39eb 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -3053,6 +3053,8 @@ sub generic_fw_config_parser {
 return $res;
 }
 
+# this is only used to prevent concurrent runs of rule compilation/application
+# see lock_*_conf for cfs locks protecting config modification
 sub run_locked {
 my ($code, @param) = @_;
 
@@ -3101,6 +3103,18 @@ sub read_local_vm_config {
 return $vmdata;
 };
 
+sub lock_vmfw_conf {
+my ($vmid, $timeout, $code, @param) = @_;
+
+die "can't lock VM firewall config for undefined VMID\n"
+   if !defined($vmid);
+
+my $res = PVE::Cluster::cfs_lock_firewall("vm-$vmid", $timeout, $code, 
@param);
+die $@ if $@;
+
+return $res;
+}
+
 sub load_vmfw_conf {
 my ($cluster_conf, $rule_env, $vmid, $dir) = @_;
 
@@ -3448,6 +3462,15 @@ my $set_global_log_ratelimit = sub {
 }
 };
 
+sub lock_clusterfw_conf {
+my ($timeout, $code, @param) = @_;
+
+my $res = PVE::Cluster::cfs_lock_firewall("cluster", $timeout, $code, 
@param);
+die $@ if $@;
+
+return $res;
+}
+
 sub load_clusterfw_conf {
 my ($filename) = @_;
 
@@ -3511,6 +3534,15 @@ sub save_clusterfw_conf {
 }
 }
 
+sub lock_hostfw_conf {
+my ($timeout, $code, @param) = @_;
+
+my $res = PVE::Cluster::cfs_lock_firewall("host-$nodename", $timeout, 
$code, @param);
+die $@ if $@;
+
+return $res;
+}
+
 sub load_hostfw_conf {
 my ($cluster_conf, $filename) = @_;
 
-- 
2.20.1




[pve-devel] [PATCH firewall/cluster 0/9] add locking to firewall config changes

2020-04-29 Thread Fabian Grünbichler
the second cluster patch is optional, but improves usability of
non-worker API calls that do

cfs_lock_foo(..., sub {

  raise_foo

});

the last three firewall patches are unrelated bug fixes that I found
while testing.

pve-cluster:

Fabian Grünbichler (2):
  cfs_lock: add firewall lock helper
  cfs_lock: re-raise exceptions

 data/PVE/Cluster.pm | 16 +++-
 1 file changed, 15 insertions(+), 1 deletion(-)

pve-firewall:

Fabian Grünbichler (7):
  configs: add locking helpers
  api: add locking helpers
  api: lock configs
  clone_vmfw_conf: lock new config
  api/ipsets: parse_cidr before checking for duplicates
  configs: warn about duplicate ipset entries
  rules: verify referenced security group exists

 src/PVE/API2/Firewall/Aliases.pm | 104 ++--
 src/PVE/API2/Firewall/Cluster.pm |  36 +++---
 src/PVE/API2/Firewall/Groups.pm  |  52 
 src/PVE/API2/Firewall/Host.pm|  42 ---
 src/PVE/API2/Firewall/IPSet.pm   | 204 +--
 src/PVE/API2/Firewall/Rules.pm   | 130 ++--
 src/PVE/API2/Firewall/VM.pm  |  43 +++
 src/PVE/Firewall.pm  |  65 --
 8 files changed, 449 insertions(+), 227 deletions(-)

-- 
2.20.1




[pve-devel] [PATCH cluster 1/2] cfs_lock: add firewall lock helper

2020-04-29 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
alternatively we could re-use 'cfs_lock_domain', which is currently
only used by HA and was intended as a general-purpose cfs_lock wrapper.
I'd shorten the firewall- prefix to fw- in that case though.

domain-fw-host-$foo might be more confusing to end users though, as it's
pretty visible in error messages.

 data/PVE/Cluster.pm | 8 
 1 file changed, 8 insertions(+)

diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index 068d626..b4de989 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@ -659,6 +659,14 @@ sub cfs_lock_authkey {
 $cfs_lock->('authkey', $timeout, $code, @param);
 }
 
+sub cfs_lock_firewall {
+my ($scope, $timeout, $code, @param) = @_;
+
+my $lockid = "firewall-$scope";
+
+$cfs_lock->($lockid, $timeout, $code, @param);
+}
+
 my $log_levels = {
 "emerg" => 0,
 "alert" => 1,
-- 
2.20.1




Re: [pve-devel] [PATCH guest-common 1/3] snapshot_rollback: flock earlier

2020-04-27 Thread Fabian Grünbichler
On April 27, 2020 1:08 pm, Fabian Ebner wrote:
> One not-patch-related observation inline.
> 
> On 27.04.20 10:24, Fabian Grünbichler wrote:
>> to protect checks against concurrent modifications
>> 
>> Signed-off-by: Fabian Grünbichler 
>> ---
>> 
>> Notes:
>>  bested viewed with --patience -w
>> 
>>   PVE/AbstractConfig.pm | 45 +--
>>   1 file changed, 22 insertions(+), 23 deletions(-)
>> 
>> diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
>> index 15368de..70311df 100644
>> --- a/PVE/AbstractConfig.pm
>> +++ b/PVE/AbstractConfig.pm
>> @@ -919,9 +919,10 @@ sub snapshot_rollback {
>>   
>>   my $storecfg = PVE::Storage::config();
>>   
>> -my $conf = $class->load_config($vmid);
>> +my $data = {};
>>   
>>   my $get_snapshot_config = sub {
>> +my ($conf) = @_;
>>   
>>  die "you can't rollback if vm is a template\n" if 
>> $class->is_template($conf);
>>   
>> @@ -932,32 +933,30 @@ sub snapshot_rollback {
>>  return $res;
>>   };
>>   
>> -my $repl_conf = PVE::ReplicationConfig->new();
>> -if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
>> -# remove all replication snapshots
>> -my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $conf, 
>> 1);
>> -my $sorted_volids = [ sort keys %$volumes ];
>> -
>> -# remove all local replication snapshots (jobid => undef)
>> -my $logfunc = sub { my $line = shift; chomp $line; print "$line\n"; };
>> -PVE::Replication::prepare($storecfg, $sorted_volids, undef, 0, undef, 
>> $logfunc);
>> -}
>> -
>> -my $snap = &$get_snapshot_config();
>> -
>> -$class->foreach_volume($snap, sub {
>> -my ($vs, $volume) = @_;
>> -
>> -$class->__snapshot_rollback_vol_possible($volume, $snapname);
>> -});
>> -
>> -my $data = {};
>> +my $snap;
>>   
>>   my $updatefn = sub {
>> +my $conf = $class->load_config($vmid);
>> +$snap = $get_snapshot_config->($conf);
>>   
>> -$conf = $class->load_config($vmid);
>> +if ($prepare) {
>> +my $repl_conf = PVE::ReplicationConfig->new();
>> +if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
>> +# remove all replication snapshots
>> +my $volumes = $class->get_replicatable_volumes($storecfg, 
>> $vmid, $conf, 1);
>> +my $sorted_volids = [ sort keys %$volumes ];
>> +
>> +# remove all local replication snapshots (jobid => undef)
>> +my $logfunc = sub { my $line = shift; chomp $line; print 
>> "$line\n"; };
>> +PVE::Replication::prepare($storecfg, $sorted_volids, undef, 0, 
>> undef, $logfunc);
> 
> This has nothing to do with the locking/this patch, but here we'd need 
> to call prepare with last_sync=1 instead of last_sync=0, no? This is 
> because of commit a1dfeff3a8502544123245ea61ad62cbe97803b7 which made 
> last_sync=0 a special case and changed the behavior. I can send a patch 
> once this is applied.

thanks for noticing - yes, that seems to be correct.
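
for reference, the suggested follow-up would be a single-argument change in
snapshot_rollback (sketch, untested):

    # last_sync = 1 instead of 0, so prepare() again removes *all* local
    # replication snapshots instead of hitting the new last_sync=0 special case:
    PVE::Replication::prepare($storecfg, $sorted_volids, undef, 1, undef, $logfunc);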

> 
>> +}
>> +
>> +$class->foreach_volume($snap, sub {
>> +   my ($vs, $volume) = @_;
>>   
>> -$snap = &$get_snapshot_config();
>> +   $class->__snapshot_rollback_vol_possible($volume, $snapname);
>> +});
>> +}
>>   
>>  die "unable to rollback to incomplete snapshot (snapstate = 
>> $snap->{snapstate})\n"
>>  if $snap->{snapstate};
>> 
> 
> 



[pve-devel] [PATCH guest-common 3/3] lock_config: rename lock_config_mode -> lock_config_shared

2020-04-27 Thread Fabian Grünbichler
and pull the actual lock_file_full handling into a helper, to make the
public interface clearer:

lock_config -> standard exclusive lock with 10s timeout
lock_config_full -> exclusive lock with configurable timeout
lock_config_shared -> shared lock with configurable timeout

the latter only has a single user (qemu-server's clone API call)
currently.
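
callers then read like this (sketch; PVE::QemuConfig/PVE::LXC::Config both
inherit these from AbstractConfig):

    PVE::QemuConfig->lock_config($vmid, sub { ... });            # exclusive, 10s
    PVE::QemuConfig->lock_config_full($vmid, 30, sub { ... });   # exclusive, 30s
    PVE::QemuConfig->lock_config_shared($vmid, 10, sub { ... }); # shared ("read")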

Signed-off-by: Fabian Grünbichler 
---

Notes:
requires breaks on qemu-server that uses lock_config_mode
possibly better to postpone since it does not fix an actual issue
but just improves the readability/design

 PVE/AbstractConfig.pm | 39 ++-
 1 file changed, 22 insertions(+), 17 deletions(-)

diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 8ec27a6..1abe41c 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -253,15 +253,6 @@ sub load_current_config {
 return $conf;
 }
 
-
-# Lock config file using flock, run $code with @param, unlock config file.
-# $timeout is the maximum time to acquire the flock
-sub lock_config_full {
-my ($class, $vmid, $timeout, $code, @param) = @_;
-
-return $class->lock_config_mode($vmid, $timeout, 0, $code, @param);
-}
-
 sub create_and_lock_config {
 my ($class, $vmid, $allow_existing, $lock) = @_;
 
@@ -284,27 +275,41 @@ sub destroy_config {
 unlink $config_fn or die "failed to remove config file: $!\n";
 }
 
-# Lock config file using flock, run $code with @param, unlock config file.
-# $timeout is the maximum time to acquire the flock
-# $shared eq 1 creates a non-exclusive ("read") flock
-sub lock_config_mode {
-my ($class, $vmid, $timeout, $shared, $code, @param) = @_;
+my $lock_file_full_wrapper = sub {
+my ($class, $vmid, $timeout, $shared, $realcode, @param) = @_;
 
 my $filename = $class->config_file_lock($vmid);
 
 # make sure configuration file is up-to-date
-my $realcode = sub {
+my $code = sub {
PVE::Cluster::cfs_update();
-   $code->(@param);
+   $realcode->(@_);
 };
 
-my $res = lock_file_full($filename, $timeout, $shared, $realcode, @param);
+my $res = lock_file_full($filename, $timeout, $shared, $code, @param);
 
 die $@ if $@;
 
 return $res;
+};
+
+# Lock config file using non-exclusive ("read") flock, run $code with @param, 
unlock config file.
+# $timeout is the maximum time to acquire the flock
+sub lock_config_shared {
+my ($class, $vmid, $timeout, $code, @param) = @_;
+
+return $lock_file_full_wrapper->($class, $vmid, $timeout, 1, $code, 
@param);
 }
 
+# Lock config file using flock, run $code with @param, unlock config file.
+# $timeout is the maximum time to acquire the flock
+sub lock_config_full {
+my ($class, $vmid, $timeout, $code, @param) = @_;
+
+return $lock_file_full_wrapper->($class, $vmid, $timeout, 0, $code, 
@param);
+}
+
+
 # Lock config file using flock, run $code with @param, unlock config file.
 sub lock_config {
 my ($class, $vmid, $code, @param) = @_;
-- 
2.20.1




[pve-devel] [PATCH container 2/2] api/destroy: repeat early checks after locking

2020-04-27 Thread Fabian Grünbichler
and check_lock before forking as well

Signed-off-by: Fabian Grünbichler 
---

Notes:
best viewed with -w --patience

 src/PVE/API2/LXC.pm | 31 ---
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 148ba6a..21899d0 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -646,20 +646,28 @@ __PACKAGE__->register_method({
my $vmid = $param->{vmid};
 
# test if container exists
+
my $conf = PVE::LXC::Config->load_config($vmid);
-   my $storage_cfg = cfs_read_file("storage.cfg");
-   PVE::LXC::Config->check_protection($conf, "can't remove CT $vmid");
+   my $early_checks = sub {
+   my ($conf) = @_;
+   PVE::LXC::Config->check_protection($conf, "can't remove CT $vmid");
+   PVE::LXC::Config->check_lock($conf);
 
-   my $ha_managed = PVE::HA::Config::service_is_configured("ct:$vmid");
+   my $ha_managed = PVE::HA::Config::service_is_configured("ct:$vmid");
 
-   if (!$param->{purge}) {
-   die "unable to remove CT $vmid - used in HA resources and purge parameter not set.\n"
-   if $ha_managed;
+   if (!$param->{purge}) {
+   die "unable to remove CT $vmid - used in HA resources and purge parameter not set.\n"
+   if $ha_managed;
 
-   # do not allow destroy if there are replication jobs without purge
-   my $repl_conf = PVE::ReplicationConfig->new();
-   $repl_conf->check_for_existing_jobs($vmid);
-   }
+   # do not allow destroy if there are replication jobs without purge
+   my $repl_conf = PVE::ReplicationConfig->new();
+   $repl_conf->check_for_existing_jobs($vmid);
+   }
+
+   return $ha_managed;
+   };
+
+   $early_checks->($conf);
 
my $running_error_msg = "unable to destroy CT $vmid - container is running\n";
die $running_error_msg if !$param->{force} && PVE::LXC::check_running($vmid); # check early
@@ -667,7 +675,7 @@ __PACKAGE__->register_method({
my $code = sub {
# reload config after lock
$conf = PVE::LXC::Config->load_config($vmid);
-   PVE::LXC::Config->check_lock($conf);
+   my $ha_managed = $early_checks->($conf);
 
if (PVE::LXC::check_running($vmid)) {
die $running_error_msg if !$param->{force};
@@ -679,6 +687,7 @@ __PACKAGE__->register_method({
}
}
 
+   my $storage_cfg = cfs_read_file("storage.cfg");
PVE::LXC::destroy_lxc_container($storage_cfg, $vmid, $conf, { lock => 'destroyed' });
 
PVE::AccessControl::remove_vm_access($vmid);
-- 
2.20.1




[pve-devel] [PATCH qemu-server 1/3] QemuServer: drop unused imported locking functions

2020-04-27 Thread Fabian Grünbichler
lock_file is used by PVE::QemuServer::Memory, but that module properly does
'use PVE::Tools ...' itself, so we can drop these imports from the main module.

Signed-off-by: Fabian Grünbichler 
---
 PVE/QemuServer.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 37c7320..6c339ca 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -26,7 +26,7 @@ use Time::HiRes qw(gettimeofday);
 use URI::Escape;
 use UUID;
 
-use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file cfs_lock_file);
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file);
 use PVE::DataCenterConfig;
 use PVE::Exception qw(raise raise_param_exc);
 use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
@@ -37,7 +37,7 @@ use PVE::RPCEnvironment;
 use PVE::Storage;
 use PVE::SysFSTools;
 use PVE::Systemd;
-use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
+use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
 
 use PVE::QMPClient;
 use PVE::QemuConfig;
-- 
2.20.1




[pve-devel] [PATCH qemu-server 2/3] api/destroy: repeat early checks after lock

2020-04-27 Thread Fabian Grünbichler
to protect against concurrent changes
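
schematically, the resulting pattern looks like this (names mirror the
patch below; the actual destroy step is elided):

    my $early_checks = sub {
        my $conf = PVE::QemuConfig->load_config($vmid);
        # protection, HA and replication checks ...
        return PVE::HA::Config::service_is_configured("vm:$vmid");
    };

    $early_checks->();    # fail fast, before forking the worker

    PVE::QemuConfig->lock_config($vmid, sub {
        # repeat - the config might have changed while we did not hold the lock
        my $ha_managed = $early_checks->();
        # ... actually destroy the VM ...
    });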

Signed-off-by: Fabian Grünbichler 
---
best viewed with --patience -w

 PVE/API2/Qemu.pm | 40 +++-
 1 file changed, 23 insertions(+), 17 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ec4c18c..f6a98f0 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1468,32 +1468,38 @@ __PACKAGE__->register_method({
raise_param_exc({ skiplock => "Only root may use this option." })
if $skiplock && $authuser ne 'root@pam';
 
-   # test if VM exists
-   my $conf = PVE::QemuConfig->load_config($vmid);
-   my $storecfg = PVE::Storage::config();
-   PVE::QemuConfig->check_protection($conf, "can't remove VM $vmid");
+   my $early_checks = sub {
+   # test if VM exists
+   my $conf = PVE::QemuConfig->load_config($vmid);
+   PVE::QemuConfig->check_protection($conf, "can't remove VM $vmid");
 
-   my $ha_managed = PVE::HA::Config::service_is_configured("vm:$vmid");
+   my $ha_managed = PVE::HA::Config::service_is_configured("vm:$vmid");
 
-   if (!$param->{purge}) {
-   die "unable to remove VM $vmid - used in HA resources and purge parameter not set.\n"
-   if $ha_managed;
-   # don't allow destroy if with replication jobs but no purge param
-   my $repl_conf = PVE::ReplicationConfig->new();
-   $repl_conf->check_for_existing_jobs($vmid);
-   }
+   if (!$param->{purge}) {
+   die "unable to remove VM $vmid - used in HA resources and purge parameter not set.\n"
+   if $ha_managed;
+   # don't allow destroy if with replication jobs but no purge param
+   my $repl_conf = PVE::ReplicationConfig->new();
+   $repl_conf->check_for_existing_jobs($vmid);
+   }
 
-   # early tests (repeat after locking)
-   die "VM $vmid is running - destroy failed\n"
-   if PVE::QemuServer::check_running($vmid);
+   die "VM $vmid is running - destroy failed\n"
+   if PVE::QemuServer::check_running($vmid);
+
+   return $ha_managed;
+   };
+
+   $early_checks->();
 
my $realcmd = sub {
my $upid = shift;
 
+   my $storecfg = PVE::Storage::config();
+
syslog('info', "destroy VM $vmid: $upid\n");
PVE::QemuConfig->lock_config($vmid, sub {
-   die "VM $vmid is running - destroy failed\n"
-   if (PVE::QemuServer::check_running($vmid));
+   # repeat, config might have changed
+   my $ha_managed = $early_checks->();
 
PVE::QemuServer::destroy_vm($storecfg, $vmid, $skiplock, { lock => 'destroyed' });
 
-- 
2.20.1




[pve-devel] [PATCH guest-common 2/3] snapshot_delete: check for concurrent modifications at each step

2020-04-27 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 PVE/AbstractConfig.pm | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 70311df..8ec27a6 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -823,6 +823,15 @@ sub snapshot_delete {
 $class->set_lock($vmid, 'snapshot-delete')
if (!$drivehash); # doesn't already have a 'snapshot' lock
 
+my $expected_lock = $drivehash ? 'snapshot' : 'snapshot-delete';
+
+my $ensure_correct_lock = sub {
+   my ($conf) = @_;
+
+   die "encountered invalid lock, expected '$expected_lock'\n"
+   if !$class->has_lock($conf, $expected_lock);
+};
+
 my $unlink_parent = sub {
my ($confref, $new_parent) = @_;
 
@@ -839,6 +848,8 @@ sub snapshot_delete {
my ($drive) = @_;
 
my $conf = $class->load_config($vmid);
+   $ensure_correct_lock->($conf);
+
$snap = $conf->{snapshots}->{$snapname};
die "snapshot '$snapname' does not exist\n" if !defined($snap);
 
@@ -850,6 +861,7 @@ sub snapshot_delete {
 #prepare
 $class->lock_config($vmid, sub {
my $conf = $class->load_config($vmid);
+   $ensure_correct_lock->($conf);
 
die "you can't delete a snapshot if vm is a template\n"
if $class->is_template($conf);
@@ -890,6 +902,8 @@ sub snapshot_delete {
 # now cleanup config
 $class->lock_config($vmid, sub {
my $conf = $class->load_config($vmid);
+   $ensure_correct_lock->($conf);
+
$snap = $conf->{snapshots}->{$snapname};
die "snapshot '$snapname' does not exist\n" if !defined($snap);
 
-- 
2.20.1




[pve-devel] [PATCH container 1/2] LXC: drop unused imported locking functions

2020-04-27 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 src/PVE/LXC.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index fbe736e..651ec08 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -20,8 +20,8 @@ use PVE::SafeSyslog;
 use PVE::INotify;
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::Tools qw(
-dir_glob_foreach file_get_contents file_set_contents lock_file
-lock_file_full AT_FDCWD O_PATH $IPV4RE $IPV6RE
+dir_glob_foreach file_get_contents file_set_contents
+AT_FDCWD O_PATH $IPV4RE $IPV6RE
 );
 use PVE::CpuSet;
 use PVE::Network;
-- 
2.20.1




[pve-devel] [PATCH qemu-server 3/3] clone: use new lock_config_shared

2020-04-27 Thread Fabian Grünbichler
and move the lock call and decision logic closer together

Signed-off-by: Fabian Grünbichler 
---

Notes:
needs libpve-guest-common-perl with new lock_config_shared
possibly better to postpone since it does not fix an actual issue
but just improves the readability

 PVE/API2/Qemu.pm | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f6a98f0..9de8f9f 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2856,9 +2856,6 @@ __PACKAGE__->register_method({
 
my $running = PVE::QemuServer::check_running($vmid) || 0;
 
-   # exclusive lock if VM is running - else shared lock is enough;
-   my $shared_lock = $running ? 0 : 1;
-
my $clonefn = sub {
# do all tests after lock but before forking worker - if possible
 
@@ -3046,11 +3043,17 @@ __PACKAGE__->register_method({
return $rpcenv->fork_worker('qmclone', $vmid, $authuser, $realcmd);
};
 
-   return PVE::QemuConfig->lock_config_mode($vmid, 1, $shared_lock, sub {
-   # Acquire exclusive lock for $newid
+   # Acquire exclusive lock for $newid
+   my $lock_target_vm = sub {
return PVE::QemuConfig->lock_config_full($newid, 1, $clonefn);
-   });
+   };
 
+   # exclusive lock if VM is running - else shared lock is enough;
+   if ($running) {
+   return PVE::QemuConfig->lock_config_full($vmid, 1, $lock_target_vm);
+   } else {
+   return PVE::QemuConfig->lock_config_shared($vmid, 1, $lock_target_vm);
+   }
 }});
 
 __PACKAGE__->register_method({
-- 
2.20.1




[pve-devel] [PATCH guest-common 1/3] snapshot_rollback: flock earlier

2020-04-27 Thread Fabian Grünbichler
to protect checks against concurrent modifications

Signed-off-by: Fabian Grünbichler 
---

Notes:
best viewed with --patience -w

 PVE/AbstractConfig.pm | 45 +--
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 15368de..70311df 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -919,9 +919,10 @@ sub snapshot_rollback {
 
 my $storecfg = PVE::Storage::config();
 
-my $conf = $class->load_config($vmid);
+my $data = {};
 
 my $get_snapshot_config = sub {
+   my ($conf) = @_;
 
die "you can't rollback if vm is a template\n" if $class->is_template($conf);
 
@@ -932,32 +933,30 @@ sub snapshot_rollback {
return $res;
 };
 
-my $repl_conf = PVE::ReplicationConfig->new();
-if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
-   # remove all replication snapshots
-   my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $conf, 1);
-   my $sorted_volids = [ sort keys %$volumes ];
-
-   # remove all local replication snapshots (jobid => undef)
-   my $logfunc = sub { my $line = shift; chomp $line; print "$line\n"; };
-   PVE::Replication::prepare($storecfg, $sorted_volids, undef, 0, undef, $logfunc);
-}
-
-my $snap = &$get_snapshot_config();
-
-$class->foreach_volume($snap, sub {
-   my ($vs, $volume) = @_;
-
-   $class->__snapshot_rollback_vol_possible($volume, $snapname);
-});
-
-my $data = {};
+my $snap;
 
 my $updatefn = sub {
+   my $conf = $class->load_config($vmid);
+   $snap = $get_snapshot_config->($conf);
 
-   $conf = $class->load_config($vmid);
+   if ($prepare) {
+   my $repl_conf = PVE::ReplicationConfig->new();
+   if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
+   # remove all replication snapshots
+   my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $conf, 1);
+   my $sorted_volids = [ sort keys %$volumes ];
+
+   # remove all local replication snapshots (jobid => undef)
+   my $logfunc = sub { my $line = shift; chomp $line; print "$line\n"; };
+   PVE::Replication::prepare($storecfg, $sorted_volids, undef, 0, undef, $logfunc);
+   }
+
+   $class->foreach_volume($snap, sub {
+  my ($vs, $volume) = @_;
 
-   $snap = &$get_snapshot_config();
+  $class->__snapshot_rollback_vol_possible($volume, $snapname);
+   });
+}
 
die "unable to rollback to incomplete snapshot (snapstate = $snap->{snapstate})\n"
if $snap->{snapstate};
-- 
2.20.1




[pve-devel] [PATCH qemu-server] api/resume: make nocheck root-only

2020-04-27 Thread Fabian Grünbichler
this is only used for migration via 'qm mtunnel', regular users should
never need to resume a VM that does not logically belong to the node it
is running on

Signed-off-by: Fabian Grünbichler 
---
 PVE/API2/Qemu.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b3683ae..9658a36 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2593,6 +2593,8 @@ __PACKAGE__->register_method({
if $skiplock && $authuser ne 'root@pam';
 
my $nocheck = extract_param($param, 'nocheck');
+   raise_param_exc({ nocheck => "Only root may use this option." })
+   if $nocheck && $authuser ne 'root@pam';
 
my $to_disk_suspended;
eval {
-- 
2.20.1




Re: [pve-devel] [PATCH manager] certs: early renew long-lived certificates

2020-04-24 Thread Fabian Grünbichler
On April 23, 2020 9:42 pm, Thomas Lamprecht wrote:
> On 4/23/20 1:59 PM, Fabian Grünbichler wrote:
>> On April 23, 2020 1:07 pm, Dominik Csapak wrote:
>>> LGTM
>>>
>>> maybe we should shorten the lifespan to 1 year already?
>>> according to [0], safari on macos will reject certs
>>> that are longer valid than 398 days, when issued on/after
>>> 2020-09-01
>>>
>>> 0: https://support.apple.com/en-us/HT211025
>>>
>> 
>> forgot to include this tidbit: that change was actually the reason for 
>> looking at it, but it only affects certificates issued by CAs shipped in 
>> the Apple Trust Stores, not those issued by CAs manually trusted by a 
>> user. so our self-signed CA and its certificates are not affected (for 
>> now).
> 
> This all makes me think... Wouldn't we need to have PMG also adapt
> to this? A test VM installed very recently from a (new test) ISO gets
> me a 10 year certificate lifespan.. I mean, more people there may use a
> "trusted" one, but still..

Apple's 825-day limit affects self-signed certificates as well, AFAIU. so yes,
we should probably port the renewal + shortened lifetime changes to PMG as
well.
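
for manually checking a node's current certificate lifetime, something
like this works (path as on a standard PVE node; the dates shown are
made up):

    root@proxmox:~# openssl x509 -noout -dates -in /etc/pve/local/pve-ssl.pem
    notBefore=Nov 12 10:00:00 2019 GMT
    notAfter=Nov 11 10:00:00 2021 GMT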



Re: [pve-devel] [PATCH manager] certs: early renew long-lived certificates

2020-04-23 Thread Fabian Grünbichler
On April 23, 2020 1:07 pm, Dominik Csapak wrote:
> LGTM
> 
> maybe we should shorten the lifespan to 1 year already?
> according to [0], safari on macos will reject certs
> that are longer valid than 398 days, when issued on/after
> 2020-09-01
> 
> 0: https://support.apple.com/en-us/HT211025
> 

forgot to include this tidbit: that change was actually the reason for 
looking at it, but it only affects certificates issued by CAs shipped in 
the Apple Trust Stores, not those issued by CAs manually trusted by a 
user. so our self-signed CA and its certificates are not affected (for 
now).

I don't have any objections to shortening both the issuance and the 
check here to 1 year though.

> On 4/23/20 12:20 PM, Fabian Grünbichler wrote:
>> if our self-signed certificate expires in more than 825 days, but was
>> created after July 2019 it won't be accepted by modern Apple devices. we
>> fixed the issuance to generate shorter-lived certificates in November
>> 2019, this cleans up the existing ones to fix this and similar future
>> issues.
>> 
>> two years / 730 days as cut-off was chosen since it's our new maximum
>> self-signed certificate lifetime, and should thus catch all old-style
>> certificates.
>> 
>> another positive side-effect is that we can now phase out support for
>> older certificates faster, e.g. if we want to move to bigger keys,
>> different signature algorithms, or anything else in that direction.
>> 
>> Signed-off-by: Fabian Grünbichler 
>> ---
>> I'd also be fine with reducing both even more, e.g. to 1 year ;)
>> 
>>   bin/pveupdate | 15 ---
>>   1 file changed, 12 insertions(+), 3 deletions(-)
>> 
>> diff --git a/bin/pveupdate b/bin/pveupdate
>> index 15a2accc..36ac6814 100755
>> --- a/bin/pveupdate
>> +++ b/bin/pveupdate
>> @@ -79,8 +79,9 @@ eval {
>>   my $certpath = PVE::CertHelpers::default_cert_path_prefix($nodename).".pem";
>>   my $capath = "/etc/pve/pve-root-ca.pem";
>>   
>> -# check if expiry is < 2W
>> -if (PVE::Certificate::check_expiry($certpath, time() + 14*24*60*60)) {
>> +my $renew = sub {
>> +my ($msg) = @_;
>> +
>>  # get CA info
>>  my $cainfo = PVE::Certificate::get_certificate_info($capath);
>>   
>> @@ -94,13 +95,21 @@ eval {
>>  # TODO: replace by low level ssleay interface if version 1.86 is available
>>  PVE::Tools::run_command(['/usr/bin/openssl', 'verify', '-CAfile', $capath, $certpath]);
>>   
>> -print "PVE certificate expires soon, renewing...\n";
>> +print "PVE certificate $msg\n";
>>  # create new certificate
>>  my $ip = PVE::Cluster::remote_node_ip($nodename);
>>  PVE::Cluster::Setup::gen_pve_ssl_cert(1, $nodename, $ip);
>>   
>>  print "Restarting pveproxy after renewing certificate\n";
>>  PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'pveproxy']);
>> +};
>> +
>> +if (PVE::Certificate::check_expiry($certpath, time() + 14*24*60*60)) {
>> +# expires in next 2 weeks
>> +$renew->("expires soon, renewing...");
>> +} elsif (!PVE::Certificate::check_expiry($certpath, time() + 2*365*24*60*60)) {
>> +# expires in more than 2 years
>> +$renew->("expires in more than 2 years, renewing to reduce certificate life-span...");
>>   }
>>   };
>>   syslog ('err', "Checking/Renewing SSL certificate failed: $@") if $@;
>> 
> 



[pve-devel] [PATCH manager] certs: early renew long-lived certificates

2020-04-23 Thread Fabian Grünbichler
if our self-signed certificate expires in more than 825 days, but was
created after July 2019 it won't be accepted by modern Apple devices. we
fixed the issuance to generate shorter-lived certificates in November
2019, this cleans up the existing ones to fix this and similar future
issues.

two years / 730 days as cut-off was chosen since it's our new maximum
self-signed certificate lifetime, and should thus catch all old-style
certificates.

another positive side-effect is that we can now phase out support for
older certificates faster, e.g. if we want to move to bigger keys,
different signature algorithms, or anything else in that direction.

Signed-off-by: Fabian Grünbichler 
---
I'd also be fine with reducing both even more, e.g. to 1 year ;)

 bin/pveupdate | 15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/bin/pveupdate b/bin/pveupdate
index 15a2accc..36ac6814 100755
--- a/bin/pveupdate
+++ b/bin/pveupdate
@@ -79,8 +79,9 @@ eval {
 my $certpath = PVE::CertHelpers::default_cert_path_prefix($nodename).".pem";
 my $capath = "/etc/pve/pve-root-ca.pem";
 
-# check if expiry is < 2W
-if (PVE::Certificate::check_expiry($certpath, time() + 14*24*60*60)) {
+my $renew = sub {
+   my ($msg) = @_;
+
# get CA info
my $cainfo = PVE::Certificate::get_certificate_info($capath);
 
@@ -94,13 +95,21 @@ eval {
	# TODO: replace by low level ssleay interface if version 1.86 is available
	PVE::Tools::run_command(['/usr/bin/openssl', 'verify', '-CAfile', $capath, $certpath]);
 
-   print "PVE certificate expires soon, renewing...\n";
+   print "PVE certificate $msg\n";
# create new certificate
my $ip = PVE::Cluster::remote_node_ip($nodename);
PVE::Cluster::Setup::gen_pve_ssl_cert(1, $nodename, $ip);
 
print "Restarting pveproxy after renewing certificate\n";
PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'pveproxy']);
+};
+
+if (PVE::Certificate::check_expiry($certpath, time() + 14*24*60*60)) {
+   # expires in next 2 weeks
+   $renew->("expires soon, renewing...");
+} elsif (!PVE::Certificate::check_expiry($certpath, time() + 2*365*24*60*60)) {
+   # expires in more than 2 years
+   $renew->("expires in more than 2 years, renewing to reduce certificate life-span...");
 }
 };
 syslog ('err', "Checking/Renewing SSL certificate failed: $@") if $@;
-- 
2.20.1




Re: [pve-devel] [PATCH common 1/1] JSONSchema: extend pve-configid regex by '-'

2020-04-23 Thread Fabian Grünbichler
On April 23, 2020 7:56 am, Thomas Lamprecht wrote:
> On 4/9/20 4:10 PM, Dominik Csapak wrote:
>> we use this format for all 'delete' options but we have some options
>> that have a '-' in the name (e.g. 'sync-defaults-options') that cannot
>> be deleted if it is not included
>> 
>> Signed-off-by: Dominik Csapak 
>> ---
>>  src/PVE/JSONSchema.pm | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
>> index 01a3cce..1d28f36 100644
>> --- a/src/PVE/JSONSchema.pm
>> +++ b/src/PVE/JSONSchema.pm
>> @@ -169,7 +169,7 @@ register_format('pve-configid', \_verify_configid);
>>  sub pve_verify_configid {
>>  my ($id, $noerr) = @_;
>>  
>> -if ($id !~ m/^[a-z][a-z0-9_]+$/i) {
>> +if ($id !~ m/^[a-z][a-z0-9_-]+$/i) {
>>  return undef if $noerr;
>>  die "invalid configuration ID '$id'\n";
>>  }
> 
> Fabian, Wolfgang: Any objection here? I think it should be OK, but we may
> want to adapt the pve-container and qemu-server config parsers to accept
> also the minus in keys, as it will be for sure sooner or later used there
> then too. Quick-checking other parsers wouldn't hurt either. :)

had some off-list discussions with Dominik about this already; it seems 
like he checked all the relevant call sites. the only thing I'd still 
like to see as a follow-up is re-using this verifier for the guest config 
parsers when parsing the snapshot section header (by chance, they 
already contain the '-', but the next time we add a character that might 
not be the case, and we might miss it, which would cause some fun bugs ;)).
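
for reference, the behavioural difference as a self-contained snippet
(the config ID is taken from the commit message):

    my $old = qr/^[a-z][a-z0-9_]+$/i;
    my $new = qr/^[a-z][a-z0-9_-]+$/i;
    print 'sync-defaults-options' =~ $old ? "match\n" : "no match\n"; # no match
    print 'sync-defaults-options' =~ $new ? "match\n" : "no match\n"; # match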

Reviewed-By: Fabian Grünbichler 



[pve-devel] [PATCH docs] certs: extend ACME section with DNS/plugin info

2020-04-22 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 certificate-management.adoc | 131 ++--
 1 file changed, 125 insertions(+), 6 deletions(-)

diff --git a/certificate-management.adoc b/certificate-management.adoc
index db76062..dc2834b 100644
--- a/certificate-management.adoc
+++ b/certificate-management.adoc
@@ -54,18 +54,31 @@ interface with Let's Encrypt for easy setup of trusted TLS certificates which
 are accepted out of the box on most modern operating systems and browsers.
 
 Currently the two ACME endpoints implemented are Let's Encrypt (LE) and its
-staging environment (see https://letsencrypt.org), both using the standalone
-HTTP challenge.
+staging environment (see https://letsencrypt.org). Our ACME client supports
+validation of `http-01` challenges using a built-in webserver and validation of
+`dns-01` challenges using a DNS plugin.
 
Because of https://letsencrypt.org/docs/rate-limits/[rate-limits] you should use
 LE `staging` for experiments.
 
+ACME Plugin configuration is stored in `/etc/pve/priv/acme/plugins.cfg`. There
+is always an implicitly configured `standalone` plugin for validating `http-01`
+challenges via the built-in webserver spawned on port 80.
+
 There are a few prerequisites to use Let's Encrypt:
 
-1. **Port 80** of the node needs to be reachable from the internet.
-2. There **must** be no other listener on port 80.
-3. The requested (sub)domain needs to resolve to a public IP of the Node.
-4. You have to accept the ToS of Let's Encrypt.
+* You have to accept the ToS of Let's Encrypt to register an account.
+
+For `http-01` challenges:
+
+* **Port 80** of the node needs to be reachable from the internet.
+* There **must** be no other listener on port 80.
+* The requested (sub)domain needs to resolve to a public IP of the Node.
+
+For `dns-01` challenges:
+
+* DNS needs to be handled by a server that allows automatic provisioning of
+TXT records
 
 At the moment the GUI uses only the default ACME account.
 
@@ -173,3 +186,109 @@ If a node has been successfully configured with an ACME-provided certificate
 (either via pvenode or via the GUI), the certificate will be automatically
 renewed by the pve-daily-update.service. Currently, renewal will be attempted
 if the certificate has expired already, or will expire in the next 30 days.
+
+Configuring DNS APIs for validation
+^^^
+
+On systems where external access for validation via the `http-01` method is
+not possible or desired, it is possible to use the `dns-01` validation method.
+This validation method requires a DNS server that allows provisioning of `TXT`
+records via an API.
+
+{PVE} re-uses the DNS plugins developed for the `acme.sh`
+footnote:[acme.sh https://github.com/acmesh-official/acme.sh]
+project, please refer to its documentation for details on configuration of
+specific APIs.
+
+Combining `http-01` and `dns-01` validation is possible in case your node is
+reachable via multiple domains with different requirements / DNS provisioning
+capabilities. Mixing DNS APIs from multiple providers or instances is also
+possible by specifying different plugin instances per domain.
+
+A special `alias` mode can be used to handle the validation on a different
+domain/DNS server, in case your primary/real DNS does not support provisioning
+via an API. Manually set up a permanent `CNAME` record for
+`_acme-challenge.domain1.example` pointing to `_acme-challenge.domain2.example`
+and set the `alias` property in the {PVE} node configuration file to
+`domain2.example` to allow the DNS server of `domain2.example` to validate all
+challenges for `domain1.example`.
+
+.Example: Setting up the OVH API for validating a domain
+
Note:: the account registration steps are the same no matter which plugins are used, and are not repeated here.
Note:: `OVH_AK` and `OVH_AS` need to be obtained from OVH according to the OVH API documentation
+
+
+root@proxmox:~# cat /path/to/api-token
+OVH_AK=
+OVH_AS=
+root@proxmox:~# source /path/to/api-token
+root@proxmox:~# curl -XPOST -H"X-Ovh-Application: $OVH_AK" -H "Content-type: application/json" \
+https://eu.api.ovh.com/1.0/auth/credential  -d '{
+  "accessRules": [
+{"method": "GET","path": "/auth/time"},
+{"method": "GET","path": "/domain"},
+{"method": "GET","path": "/domain/zone/*"},
+{"method": "GET","path": "/domain/zone/*/record"},
+{"method": "POST","path": "/domain/zone/*/record"},
+{"method": "POST","path": "/domain/zone/*/refresh"},
+{"method": "PUT","path": "/domain/zone/*/record/"},
+{"method": "DELETE","
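
as a concrete illustration of the `alias` mode described above, the
permanent CNAME record would look like this in zone-file syntax (domains
as in the example prose):

    _acme-challenge.domain1.example. IN CNAME _acme-challenge.domain2.example.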

Re: [pve-devel] [PATCH qemu-server] fix #2697: map netdev_add options to correct json types

2020-04-21 Thread Fabian Grünbichler
there is some issue with hot-plugging queues (with and without your 
patch):

- virtio NIC, no queues set, start VM OR
- virtio NIC, queues X set, start VM
- change NIC to have a queues value of Y

400 Parameter verification failed.
net0: hotplug problem - error on hot-unplugging device 'net0'

- no NIC, start VM
- add NIC with queues value

no problem

- change NIC to have a different or no queues value

no problem

but besides this pre-existing problem, this seems to work for older Qemu 
binaries as well.
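
schematically, what changes on the wire for a hypothetical netdev (the
values are made up; the coercion mirrors the patch below):

    use JSON;
    use PVE::JSONSchema;

    # parsed options are all strings after the split on [=,]
    my %options = (id => 'net0', type => 'tap', vhost => '1', queues => '4');

    # the stricter QMP schema wants proper JSON types
    $options{vhost} = JSON::boolean(PVE::JSONSchema::parse_boolean($options{vhost}));
    $options{queues} = $options{queues} + 0;   # numify: "4" -> 4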

On April 21, 2020 4:01 pm, Dominik Csapak wrote:
> netdev_add is now a proper qmp command, which means that it verifies
> the parameter types properly
> 
> instead of sending strings, we now have to choose the correct
> types for the parameters
> 
> bool for vhost
> and uint64 for queues
> 
> Signed-off-by: Dominik Csapak 
> ---
> i checked and i found no other parameter or qmp call that needs
> changing, but maybe someone else can also check this, just to be sure
> 
>  PVE/QemuServer.pm | 8 
>  1 file changed, 8 insertions(+)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 37c7320..030e04b 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -4047,6 +4047,14 @@ sub qemu_netdevadd {
>  my $netdev = print_netdev_full($vmid, $conf, $arch, $device, $deviceid, 1);
>  my %options =  split(/[=,]/, $netdev);
>  
> +if (defined(my $vhost = $options{vhost})) {
> + $options{vhost} = JSON::boolean(PVE::JSONSchema::parse_boolean($vhost));
> +}
> +
> +if (defined(my $queues = $options{queues})) {
> + $options{queues} = $queues + 0;
> +}
> +
>  mon_cmd($vmid, "netdev_add",  %options);
>  return 1;
>  }
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH common 1/2] JSONSchema: add format validator support and cleanup check_format

2020-04-21 Thread Fabian Grünbichler
sorry, took a bit longer than I wanted, but here it is. thanks for 
looking into this! :) at first glance it seemed like it's more 
straight-forward than I expected - but as always with JSONSchema.pm, 
some rabbit-hole chasing later I am not quite so sure anymore. I hope my 
walls of text below are not too confusing ;)

one sentence summary:
to make this workable with the current approach, we need to switch all 
calls of check_format and parse_property_string that use a registered 
format to use the format's ID instead of the format description hash.

On April 16, 2020 4:39 pm, Stefan Reiter wrote:
> Adds a third, optional parameter to register_format that allows specifying
> a function that will be called after parsing and can validate the parsed
> data. A validator should die on failed validation, and can also change the
> parsed object by returning a modified version of it.
> 
> This is useful so one can register a format with its hash, thus allowing
> documentation to be generated automatically, while still enforcing certain
> validation rules.
> 
> And since I found it impossible to extend the current check_format code,
> clean that function up as well (which pretty much amounts to a rewrite).
> Also use get_format consistently to avoid future breakages.
> 
> No existing functionality is intentionally changed.
> 
> Signed-off-by: Stefan Reiter 
> ---
> 
> @Fabian G.: Since we discussed this off-list, is this good? At least there
> shouldn't be a possibility for a parser/hash format drift the way I've
> implemented it.
> 
>  src/PVE/JSONSchema.pm | 60 +++
>  1 file changed, 38 insertions(+), 22 deletions(-)
> 
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index 01a3cce..106a812 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PVE/JSONSchema.pm
> @@ -120,19 +120,26 @@ register_standard_option('pve-snapshot-name', {
>  });
>  
>  my $format_list = {};
> +my $format_validators = {};
>  
>  sub register_format {
> -my ($format, $code) = @_;
> +my ($name, $format, $validator) = @_;
>  
> -die "JSON schema format '$format' already registered\n"
> - if $format_list->{$format};
> +die "JSON schema format '$name' already registered\n"
> + if $format_list->{$name};
>  
> -$format_list->{$format} = $code;
> +if ($validator) {
> + die "A \$validator function can only be specified for hash-based formats\n"
> + if ref($format) ne 'HASH';
> + $format_validators->{$name} = $validator;
> +}
> +
> +$format_list->{$name} = $format;
>  }
>  
>  sub get_format {
> -my ($format) = @_;
> -return $format_list->{$format};
> +my ($name) = @_;
> +return $format_list->{$name};
>  }
>  
>  my $renderer_hash = {};
> @@ -646,13 +653,16 @@ sub pve_verify_tfa_secret {
>  sub check_format {
>  my ($format, $value, $path) = @_;
>  
> -return parse_property_string($format, $value, $path) if ref($format) eq 'HASH';

>  return if $format eq 'regex';
>  
> -if ($format =~ m/^(.*)-a?list$/) {
> +my $parsed;


>  
> - my $code = $format_list->{$1};
> +if (ref($format) eq 'HASH') {
> + # in case it's a hash we can't have a validator registered, so return

it took me several tries to fully get this comment and the 
implications. if we pass a format hash to check_format, then 
check_format can't know whether there is an associated validator 
since it doesn't know the name. so in effect this means we should 
avoid calling check_format with a hash directly, when that hash is 
also registered as format with an ID. this should probably be 
documented somewhere, as it also extends to parse_property_string 
(see comments there), and needs careful analysis of the code paths 
ending up in check_format so that no caller already resolves format ID 
to HASH prematurely.

I think we could switch this around and start check_format with:

if (ref($format) eq 'HASH') {
  # don't know whether a validator exists
  return parse_property_string($format, ...);
}

if (ref($format) eq 'CODE') {
  # we are the (sole, old-style) validator
  return $format->(...);
}

return if $format eq 'regex';
# AFAICT regex is not registered as actual format, so this should not be able to clash with the rest

my ($name, $registered_format, $parsed);

big existing if, but adapted to set those three variables as 
appropriate

if (defined($name) && ref($registered_format) eq 'HASH') {
  my $validator = $format_validators->{$name};
  $parsed = $validator->($parsed) if $validator;
  return $parsed;
}

if we want to support -opt and -a?list with hash based formats that's 
fine as well (and I guess it's not unlikely we'll run into that if 
we start converting some existing parse_/verify_foo into hash + 
validator). those branches then just need to check whether 
$registered_format is a CODE-ref (existing behaviour) or not (it's a 
HASH, so parse it accordingly). at that point we could probably combine 
the -opt and else 

[pve-devel] applied: [PATCH container] Fix move_volume by using result from parse_volume

2020-04-21 Thread Fabian Grünbichler
thanks!

On April 20, 2020 1:12 pm, Fabian Ebner wrote:
> This was changed by accident by my refactoring in
> commit e4034859fd0e3491fd1aefb4f9ef44ee585aa404
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> AFAICS the other call sites affected by the refactoring
> still use the results, but it might not hurt if somebody
> else also re-checks that commit.
> 
>  src/PVE/API2/LXC.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
> index 3a8694a..79ca025 100644
> --- a/src/PVE/API2/LXC.pm
> +++ b/src/PVE/API2/LXC.pm
> @@ -1783,7 +1783,7 @@ __PACKAGE__->register_method({
>  
>   die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
>  
> - PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
> + $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
>   $old_volid = $mpdata->{volume};
>  
>   die "you can't move a volume with snapshots and delete the source\n"
> -- 
> 2.20.1
> 
> 



[pve-devel] applied: [PATCH qemu-server] Fix live migration with replicated unused volumes

2020-04-20 Thread Fabian Grünbichler
with fixup to improve readability:

 my $number_of_online_replicated_volumes = 0;
-if ($self->{online_local_volumes}) {
-   foreach my $volid (keys %{$self->{replicated_volumes}}) {
-   next if !(grep { $volid eq $_ } @{$self->{online_local_volumes}});
+foreach my $volid (@{$self->{online_local_volumes}}) {
+   next if !$self->{replicated_volumes}->{$volid};
$number_of_online_replicated_volumes++;
$input .= "replicated_volume: $volid\n";
 }
-}

thanks!

On April 20, 2020 10:31 am, Fabian Ebner wrote:
> by counting only local volumes that will be live-migrated via 
> qemu_drive_mirror,
> i.e. those listed in $self->{online_local_volumes}.
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> Also (correctly) would not count vmstate volumes anymore,
> but live migration with snapshots is currently not allowed anyways.
> 
>  PVE/QemuMigrate.pm | 12 +---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 97fd994..b596a92 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -708,8 +708,14 @@ sub phase2 {
>  # TODO change to 'spice_ticket: \n' in 7.0
>  my $input = $spice_ticket ? "$spice_ticket\n" : "\n";
>  $input .= "nbd_protocol_version: $nbd_protocol_version\n";
> -foreach my $volid (keys %{$self->{replicated_volumes}}) {
> - $input .= "replicated_volume: $volid\n";
> +
> +my $number_of_online_replicated_volumes = 0;
> +if ($self->{online_local_volumes}) {
> + foreach my $volid (keys %{$self->{replicated_volumes}}) {
> + next if !(grep { $volid eq $_ } @{$self->{online_local_volumes}});
> + $number_of_online_replicated_volumes++;
> + $input .= "replicated_volume: $volid\n";
> + }
>  }
>  
>  my $target_replicated_volumes = {};
> @@ -773,7 +779,7 @@ sub phase2 {
>  
>  die "unable to detect remote migration address\n" if !$raddr;
>  
> -if (scalar(keys %$target_replicated_volumes) != scalar(keys %{$self->{replicated_volumes}})) {
> +if (scalar(keys %$target_replicated_volumes) != $number_of_online_replicated_volumes) {
>   die "number of replicated disks on source and target node do not match - target node too old?\n"
>  }
>  
> -- 
> 2.20.1
> 
> 



Re: [pve-devel] [PATCH v3 storage 2/3] Incorporate wipe_disks from PVE::Ceph::Tools

2020-04-16 Thread Fabian Grünbichler
On April 16, 2020 8:57 am, Dominik Csapak wrote:
> 
>> 
>> 
>> Thanks for clearing that up, I mostly poked at it as Dominic came to me
>> asking what to do as you told him to use [0-9] and Fabian told him to go
>> back again, not knowing about your suggestion.
>> 
>> The question is, does the matching of other digit really matters at all?
>> 
> I mean it naturally depends on what happens with the parsed data, but most
>> of the time it should not really matter at all, or?
>> 

> bottom line: i guess both are ok when used right, but with
> [0-9] there is no ambiguity as to what it matches
> 

fine for me as well (I prefer \d for readability/shortness, but I don't 
care that strongly either way :-P).

let's put it into the style guide? 



[pve-devel] applied: [PATCH storage 2/2] fix nvme wearout parsing

2020-04-15 Thread Fabian Grünbichler
with switch to \s* instead of .*?, as discussed.
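
the failure mode as a self-contained snippet (the input line is made up
to match the smartctl output format):

    my $line = 'Percentage Used:                    31%';
    my ($greedy) = $line =~ m/Percentage Used:.*(\d+(?:\.\d+)?)\%/i;  # '1' - the greedy .* ate the '3'
    my ($fixed)  = $line =~ m/Percentage Used:\s*(\d+(?:\.\d+)?)\%/i; # '31'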

On April 14, 2020 4:17 pm, Dominik Csapak wrote:
> the '.*' was greedy, also consuming all but one digits of the real percentage
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/Diskmanage.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
> index abb90a7..e9036fe 100644
> --- a/PVE/Diskmanage.pm
> +++ b/PVE/Diskmanage.pm
> @@ -134,7 +134,7 @@ sub get_smart_data {
>   $smartdata->{text} = '' if !defined $smartdata->{text};
>   $smartdata->{text} .= "$line\n";
>   # extract wearout from nvme text, allow for decimal values
> - if ($line =~ m/Percentage Used:.*(\d+(?:\.\d+)?)\%/i) {
> + if ($line =~ m/Percentage Used:.*?(\d+(?:\.\d+)?)\%/i) {
>   $smartdata->{wearout} = 100 - $1;
>   }
>   } elsif ($line =~ m/SMART Disabled/) {
> -- 
> 2.20.1
> 
> 



[pve-devel] applied: [PATCH storage 1/2] disk_tests: improve nvme smart test

2020-04-15 Thread Fabian Grünbichler
On April 14, 2020 4:17 pm, Dominik Csapak wrote:
> by using an actual percentage for 'Percentage Used' instead of 0%
> 
> Signed-off-by: Dominik Csapak 
> ---
> this test now fails, but is repaired by the next patch of the series
> 
>  test/disk_tests/nvme_smart/disklist_expected.json  | 2 +-
>  test/disk_tests/nvme_smart/nvme0_smart | 2 +-
>  test/disk_tests/nvme_smart/nvme0n1_smart_expected.json | 4 ++--
>  3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/test/disk_tests/nvme_smart/disklist_expected.json b/test/disk_tests/nvme_smart/disklist_expected.json
> index 0b043b1..1bcdb7d 100644
> --- a/test/disk_tests/nvme_smart/disklist_expected.json
> +++ b/test/disk_tests/nvme_smart/disklist_expected.json
> @@ -1,6 +1,6 @@
>  {
>  "nvme0n1" : {
> - "wearout" : 100,
> + "wearout" : 69,
>   "vendor" : "unknown",
>   "size" : 512000,
>   "health" : "PASSED",
> diff --git a/test/disk_tests/nvme_smart/nvme0_smart b/test/disk_tests/nvme_smart/nvme0_smart
> index 2e6aaea..f371d5b 100644
> --- a/test/disk_tests/nvme_smart/nvme0_smart
> +++ b/test/disk_tests/nvme_smart/nvme0_smart
> @@ -9,7 +9,7 @@ Critical Warning:   0x00
>  Temperature:32 Celsius
>  Available Spare:100%
>  Available Spare Threshold:  10%
> -Percentage Used:0%
> +Percentage Used:31%
>  Data Units Read:1,299,288 [665 GB]
>  Data Units Written: 5,592,478 [2.86 TB]
>  Host Read Commands: 30,360,807
> diff --git a/test/disk_tests/nvme_smart/nvme0n1_smart_expected.json b/test/disk_tests/nvme_smart/nvme0n1_smart_expected.json
> index 6ea2d67..af8eade 100644
> --- a/test/disk_tests/nvme_smart/nvme0n1_smart_expected.json
> +++ b/test/disk_tests/nvme_smart/nvme0n1_smart_expected.json
> @@ -1,6 +1,6 @@
>  {
> -"text" : "\nSMART/Health Information (NVMe Log 0x02, NSID 
> 0x)\nCritical Warning:   0x00\nTemperature:   
>  32 Celsius\nAvailable Spare:100%\nAvailable 
> Spare Threshold:  10%\nPercentage Used:0%\nData 
> Units Read:1,299,288 [665 GB]\nData Units Written:
>  5,592,478 [2.86 TB]\nHost Read Commands: 
> 30,360,807\nHost Write Commands:470,356,196\nController Busy 
> Time:   12\nPower Cycles:   98\nPower On 
> Hours: 687\nUnsafe Shutdowns:   21\nMedia 
> and Data Integrity Errors:0\nError Information Log Entries:  0\n",
> +"text" : "\nSMART/Health Information (NVMe Log 0x02, NSID 
> 0x)\nCritical Warning:   0x00\nTemperature:   
>  32 Celsius\nAvailable Spare:100%\nAvailable 
> Spare Threshold:  10%\nPercentage Used:31%\nData 
> Units Read:1,299,288 [665 GB]\nData Units Written:
>  5,592,478 [2.86 TB]\nHost Read Commands: 
> 30,360,807\nHost Write Commands:470,356,196\nController Busy 
> Time:   12\nPower Cycles:   98\nPower On 
> Hours: 687\nUnsafe Shutdowns:   21\nMedia 
> and Data Integrity Errors:0\nError Information Log Entries:  0\n",
>  "health" : "PASSED",
>  "type" : "text",
> -"wearout": 100
> +"wearout": 69
>  }
> -- 
> 2.20.1
> 
> 



Re: [pve-devel] [PATCH v5 qemu-server 14/19] Update volume IDs in one go

2020-04-15 Thread Fabian Grünbichler
On April 15, 2020 10:04 am, Fabian Ebner wrote:
> On 14.04.20 12:41, Fabian Ebner wrote:
>> On 09.04.20 09:53, Fabian Grünbichler wrote:
>>> small nit inline
>>>
>>> On April 8, 2020 11:25 am, Fabian Ebner wrote:
>>>> Use 'update_volume_ids' for the live-migrated disks as well.
>>>>
>>>> Signed-off-by: Fabian Ebner 
>>>> ---
>>>>   PVE/QemuMigrate.pm | 26 ++
>>>>   1 file changed, 10 insertions(+), 16 deletions(-)
>>>>
>>>> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>>>> index 14f3fcb..eb78537 100644
>>>> --- a/PVE/QemuMigrate.pm
>>>> +++ b/PVE/QemuMigrate.pm
>>>> @@ -838,14 +838,20 @@ sub phase2 {
>>>>   my $source_drive = PVE::QemuServer::parse_drive($drive, $conf->{$drive});
>>>>   my $target_drive = PVE::QemuServer::parse_drive($drive, $target->{drivestr});
>>>> -    my $source_sid = PVE::Storage::Plugin::parse_volume_id($source_drive->{file});
>>>> -    my $target_sid = PVE::Storage::Plugin::parse_volume_id($target_drive->{file});
>>>> +    my $source_volid = $source_drive->{file};
>>>> +    my $target_volid = $target_drive->{file};
>>>> +
>>>> +    my $source_sid = PVE::Storage::Plugin::parse_volume_id($source_volid);
>>>> +    my $target_sid = PVE::Storage::Plugin::parse_volume_id($target_volid);
>>>>   my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', [$source_sid, $target_sid], $opt_bwlimit);
>>>>   my $bitmap = $target->{bitmap};
>>>>   $self->log('info', "$drive: start migration to $nbd_uri");
>>>>   PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
>>>> +
>>>> +    $self->{volume_map}->{$source_volid} = $target_volid;
>>>> +    $self->log('info', "volume '$source_volid' is '$target_volid' on the target\n");
>>>
>>> potential followup: if we do that before attempting to start the 
>>> mirror which might
>>> die, it would make the cleanup handling easier.
>> 
>> You're right. The reason I decided to do it here, was to avoid code 
>> duplication, because of the distinction between of unix socket/no unix 
>> socket above. And the drivestrings need to be parsed just for this 
>> otherwise.
>> 
>> I guess instead of saving the drivestring in the target_drive hash, we 
>> only need to save the volume ID. The other information that is in 
>> drivestring is not used anymore, because the old settings from the 
>> config are copied over, just the volume ID is updated.
>> 
>> This makes me realize a potential bug with only updating the volume_ids. 
>> If the newly allocated volume would have a different format than the 
>> original, but 'format' was explicitly set for the volume in the old 
>> config, the config will be wrong after migration. AFAICT this currently 
>> does not happen, but might change in the future. Also the size might be 
>> slightly off (e.g. image was a raw file, but newly allocated one is 
>> bigger because it's an LVM and uses extents).
>> 
> 
> Just tested around a bit. For online migration this does happen, because 
> the format can change when e.g. going from a directory storage with a 
> qcow2 image to an LVM storage.

yes, sorry for missing it during review.

> 
>> For offline migration we (currently) don't have a way to send back the 
>> drivestring and in general it's desireable to preserve options for a 
>> drive. Does it make sense to add a method to be executed after 
>> migration, which would check/fix such errors, or is doing a rescan enough?
>> 
> 
> Now I think it might be enough to extend rescan to also check+fix the 
> format (the needed information is already present) and call it after the 
> migration.
> Another way would be to just delete eventual 'format' entries from the 
> volume hashes in update_volids or is there any case where the format 
> isn't encoded in the volid (of course taking into account the storage 
> type) and the 'format=...' part of the property string is actually needed?

I *think* the format=... part is only needed for the shorthand

qm set/create XXX -scsi0 STORAGE:SIZE

syntax, where you can use 

Re: [pve-devel] [PATCH v3 storage 2/3] Incorporate wipe_disks from PVE::Ceph::Tools

2020-04-15 Thread Fabian Grünbichler
some style/design nits below

On March 11, 2020 2:05 pm, Dominic Jäger wrote:
> Move wipe_disks from PVE::Ceph::Tools to PVE::Diskmanage and improve it by
>  - Handling invalid parameters
>  - Adding options for wiping
>  - Making names clearer
>  - Adding tests
> 
> Relies on the corresponding patch in pve-manager.
> 
> Signed-off-by: Dominic Jäger 
> ---
> v2->v3:
>  - Relative instead of full paths
>  - Use run_command for system calls
>  - Die when a device is invalid instead of skipping
>  - More concise syntax
>  - More tests (with fake blockdevice path)
> v1->v2:
>  - Fix typo
>  - Add improvements
> 
> @Dominik: Thank you for the feedback!
> 
>  PVE/Diskmanage.pm |  58 
>  test/disklist_test.pm | 125 ++
>  2 files changed, 183 insertions(+)
> 
> diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
> index abb90a7..448f753 100644
> --- a/PVE/Diskmanage.pm
> +++ b/PVE/Diskmanage.pm
> @@ -761,4 +761,62 @@ sub append_partition {
>  return $partition;
>  }
>  
> +# Return undef on error, the size of the blockdev otherwise

not so sure about this - why not make it 'die' for now? if we need it 
otherwise, we can add $noerr with the usual semantics later. we die at 
the only call site anyway if this returns undef ;)

> +sub get_blockdev_size {
> +my ($dev) = @_;
> +if (!defined($dev)) {
> + warn "Undefined parameter";

die "undefined blockdev\n" if !defined...

> + return undef;
> +}
> +if (!assert_blockdev($dev, 1)) {
> + warn "$dev is not a blockdevice";
> + return undef;

same here. just call assert_blockdev without $noerr.

> +}
> +my $size;
> +my $exitcode = eval { run_command(

$exitcode is unused

> + ['blockdev', '--getsize64', $dev],
> + (outfunc => sub { $size = shift })
> +)};
> +warn "run_command for \'blockdev\' syscall failed: $@" if $@;

nit: this is not a syscall ;) also, don't eval, just let it die.

> +return $size;
> +}
> +
> +# Wipe the first part of a device
> +#
> +# Calls dd and optionally wipefs. This is useful to clear off leftovers from
> +# previous use as it avoids failing OSD and directory creation.
> +#
> +# $devices array of blockdevices (including partitions), e.g. ['/dev/sda3']
> +# $options hash with
> +#   - {amount} number of bytes that will be deleted. Will wipe at most the
> +#device size. Default: 209715200.

200MB?

> +#   - {wipefs} additionally erase all available signatures from all devices.
> +#Default: false
> +sub wipe_blockdevices {
> +my ($devices, $options) = @_;
> +my @dd_cmd = qw(dd if=/dev/zero iflag=count_bytes conv=fdatasync);
> +my $max_amount = $options->{amount} // 209715200;

// 200 * 1024 * 1024;

makes it readable at a glance.

> +die "$max_amount is not a natural number and thus an invalid value for max_amount."

die "invalid max_amount '$max_amount', must be a positive integer\n"

> + if ($max_amount !~ /^[0-9]+$/);

\d instead of [0-9]

> +
> +my %toDelete = ();

we could simple call this dev_sizes, and just store the retrieved sizes 
inside.

> +foreach my $dev (@$devices) {
> + my $device_size = get_blockdev_size($dev);

add an eval here, and set $dev_sizes->{$dev} directly

> + die "Not wiping anything because could not get size of $dev." if !defined($device_size);

and add ' - $@' to the message here. but note - you probably want to 
check $dev for definedness at the start of the loop, else this triggers 
a warning for printing undef ;)

or, if you want to make it shorter with something like:

my $dev;
my $dev_sizes = { eval { map { $dev = $_ // ''; $_ => get_blockdev_size($_) } @$devices } };
die "... $dev ... - $@\n" if $@;

(could be made shorter if we don't care about including $dev in the 
error message, or make get_blockdev_size include it).
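
or spelled out as a plain loop (a sketch, assuming get_blockdev_size dies 
on error as suggested above):

    my $dev_sizes = {};
    for my $dev (@$devices) {
        die "undefined blockdev\n" if !defined($dev);
        $dev_sizes->{$dev} = eval { get_blockdev_size($dev) };
        die "not wiping anything - could not get size of $dev - $@" if $@;
    }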

> + my $amount = ($device_size < $max_amount) ? $device_size : $max_amount;

move this to the actual wiping below where it gets used

> + $toDelete{$dev} = $amount;

can be dropped

> +}
> +
> +foreach my $dev (keys %toDelete) {
> + print "Wiping device: $dev\n";
> + if ($options->{wipefs}) {
> + eval { run_command(['wipefs', '-a', $dev]) };
> + warn $@ if $@;
> + }

limit amount by $dev_sizes->{$dev} here

> + # 1M seems reasonable for this case
> + eval { run_command([@dd_cmd, "count=$toDelete{$dev}", "bs=1M", "of=$dev"]) };
> + warn $@ if $@;
> +}
> +}
> +
>  1;
> diff --git a/test/disklist_test.pm b/test/disklist_test.pm
> index 9cb6763..e5d339b 100644
> --- a/test/disklist_test.pm
> +++ b/test/disklist_test.pm
> @@ -10,6 +10,7 @@ use PVE::Tools;
>  
>  use Test::MockModule;
>  use Test::More;
> +use Capture::Tiny;
>  use JSON;
>  use Data::Dumper;
>  
> @@ -248,4 +249,128 @@ while (readdir $dh) {
>  test_disk_list($dir);
>  }
>  
> +subtest 'get_blockdev_size' => sub {
> +plan tests => 8;
> +
> +my $blockdevice = '/fake/block/device';
> +my 

[pve-devel] applied: [PATCH manager 1/3] ceph: remove unused variable assignment

2020-04-15 Thread Fabian Grünbichler
On March 11, 2020 4:22 pm, Alwin Antreich wrote:
> Signed-off-by: Alwin Antreich 
> ---
>  PVE/Ceph/Services.pm | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
> index c17008cf..7015cafe 100644
> --- a/PVE/Ceph/Services.pm
> +++ b/PVE/Ceph/Services.pm
> @@ -63,7 +63,6 @@ sub get_cluster_service {
>  sub ceph_service_cmd {
>  my ($action, $service) = @_;
>  
> -my $pve_ceph_cfgpath = PVE::Ceph::Tools::get_config('pve_ceph_cfgpath');
>  if ($service && $service =~ m/^(mon|osd|mds|mgr|radosgw)(\.(${\SERVICE_REGEX}))?$/) {
>   $service = defined($3) ? "ceph-$1\@$3" : "ceph-$1.target";
>  } else {
> -- 
> 2.20.1
> 
> 



[pve-devel] applied: [PATCH manager 2/3] Fix: ceph: mon_address not considered by new MON

2020-04-15 Thread Fabian Grünbichler
On March 11, 2020 4:22 pm, Alwin Antreich wrote:
> The public_addr option for creating a new MON is only valid for manual
> startup (since Ceph Jewel) and is just ignored by ceph-mon during setup.
> As the MON is started after the creation through systemd without an IP
> specified. It is trying to auto-select an IP.
> 
> Before this patch the public_addr was only explicitly written to the
> ceph.conf if no public_network was set. The mon_address is only needed
> in the config on the first start of the MON.
> 
> The ceph-mon itself tries to select an IP on the following conditions.
> - no public_network or public_addr is in the ceph.conf
> * startup fails
> 
> - public_network is in the ceph.conf
> * with a single network, take the first available IP
> * on multiple networks, walk through the list orderly and start on
>   the first network where an IP is found
> 
> Signed-off-by: Alwin Antreich 
> ---
>  PVE/API2/Ceph/MON.pm | 9 +++--
>  1 file changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
> index 18b563c9..3baeac52 100644
> --- a/PVE/API2/Ceph/MON.pm
> +++ b/PVE/API2/Ceph/MON.pm
> @@ -255,7 +255,7 @@ __PACKAGE__->register_method ({
>   run_command("monmaptool --create --clobber --addv $monid '[v2:$monaddr:3300,v1:$monaddr:6789]' --print $monmap");
>   }
>  
> - run_command("ceph-mon --mkfs -i $monid --monmap $monmap --keyring $mon_keyring --public-addr $ip");
> + run_command("ceph-mon --mkfs -i $monid --monmap $monmap --keyring $mon_keyring");
>   run_command("chown ceph:ceph -R $mondir");
>   };
>   my $err = $@;
> @@ -275,11 +275,8 @@ __PACKAGE__->register_method ({
>   }
>   $monhost .= " $ip";
>   $cfg->{global}->{mon_host} = $monhost;
> - if (!defined($cfg->{global}->{public_network})) {
> - # if there is no info about the public_network
> - # we have to set it explicitly for the monitor
> - $cfg->{$monsection}->{public_addr} = $ip;
> - }
> + # The IP is needed in the ceph.conf for the first boot
> + $cfg->{$monsection}->{public_addr} = $ip;
>  
>   cfs_write_file('ceph.conf', $cfg);
>  
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager 3/3] Fix #2422: allow multiple Ceph public networks

2020-04-15 Thread Fabian Grünbichler
On March 11, 2020 4:22 pm, Alwin Antreich wrote:
> Multiple public networks can be defined in the ceph.conf. The networks
> need to be routed to each other.
> 
> On first service start the Ceph MON will register itself with one of the
> IPs configured locally, matching one of the public networks defined in
> the ceph.conf.
> 
> Signed-off-by: Alwin Antreich 
> ---
>  PVE/API2/Ceph/MON.pm | 10 +-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
> index 3baeac52..5128fea2 100644
> --- a/PVE/API2/Ceph/MON.pm
> +++ b/PVE/API2/Ceph/MON.pm
> @@ -33,11 +33,19 @@ my $find_mon_ip = sub {
>  }
>  $pubnet //= $cfg->{global}->{public_network};
>  
> +my $public_nets = [ PVE::Tools::split_list($pubnet) ];
> +warn "Multiple ceph public networks detected on $node: $pubnet\n".
> +"Networks must be capable of routing to each other.\n" if scalar(@$public_nets) > 1;
> +

style nit: indentation/alignment, also - postfix if for multiline 
statement is confusing.

>  if (!$pubnet) {
>   return $overwrite_ip // PVE::Cluster::remote_node_ip($node);
>  }

so here we check whether $pubnet is true, but in the new hunk above we 
already split and print it? works in practice (PVE::Tools::split_list 
handles undef fine, and then $public_nets does not have entries, so the 
warn does not trigger), but is the wrong way round ;)

>  
> -my $allowed_ips = PVE::Network::get_local_ip_from_cidr($pubnet);
> +my $allowed_ips;

could possibly be converted to a hash, and make the code using it below 
more readable by getting rid of the 'check for duplicate/set membership via grep' pattern
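
i.e. something like this (a sketch, keeping the die from the hunk below):

    my $allowed_ips = {};
    foreach my $net (@$public_nets) {
        $allowed_ips->{$_} = 1 for @{ PVE::Network::get_local_ip_from_cidr($net) };
    }

    die "No active IP found for the requested ceph public network '$pubnet' on node '$node'\n"
        if !scalar(keys %$allowed_ips);

    # and later, set membership without grep:
    # die "..." if $overwrite_ip && !$allowed_ips->{$overwrite_ip};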

> +foreach my $net (@$public_nets) {
> +push @$allowed_ips, @{ PVE::Network::get_local_ip_from_cidr($net) };

would it make sense to catch errors from $net not being a valid CIDR 
string here? so that we get a nice error message, instead of whatever 
'ip' does with the invalid input/command line..

> +}
> +
>  die "No active IP found for the requested ceph public network '$pubnet' on node '$node'\n"
>   if scalar(@$allowed_ips) < 1;

further below there is still a code path that treats $pubnet as 
containing a single CIDR string:

62  die "Monitor IP '$overwrite_ip' not in ceph public network '$pubnet'\n"
63  if !PVE::Network::is_ip_in_cidr($overwrite_ip, $pubnet);

>  
> -- 
> 2.20.1
> 
> 


