Re: [pve-devel] [PATCH v3 storage 13/22] Introduce allow_rename parameter for pvesm import and storage_migrate

2020-03-16 Thread Fabian Grünbichler
two nits in-line

On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> and also return the ID of the allocated volume. This option
> allows plugins to choose a new name if there is a collision.
> 
> In storage_migrate, the API version for the receiving side is checked.
> 
> In Storage.pm's volume_import, when a plugin returns 'undef',
> it can be assumed that the import with the requested volid was
> successful (it should've died otherwise) and so volid is returned.
> This is done for backwards compatibility with foreign plugins.
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/CLI/pvesm.pm | 22 ++
>  PVE/Storage.pm   | 56 +++-
>  PVE/Storage/LVMPlugin.pm | 17 +++
>  PVE/Storage/Plugin.pm| 16 +++
>  PVE/Storage/ZFSPoolPlugin.pm | 13 +
>  5 files changed, 89 insertions(+), 35 deletions(-)
> 
> diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
> index 7c0e259..bc7e5cc 100755
> --- a/PVE/CLI/pvesm.pm
> +++ b/PVE/CLI/pvesm.pm
> @@ -321,9 +321,16 @@ __PACKAGE__->register_method ({
>   maxLength => 80,
>   optional => 1,
>   },
> + 'allow-rename' => {
> + description => "Choose a new volume ID if the requested " .
> +   "volume ID already exists, instead of throwing an error.",
> + type => 'boolean',
> + optional => 1,
> + default => 0,
> + },
>   },
>  },
> -returns => { type => 'null' },
> +returns => { type => 'string' },
>  code => sub {
>   my ($param) = @_;
>  
> @@ -376,11 +383,11 @@ __PACKAGE__->register_method ({
>   my $cfg = PVE::Storage::config();
>   my $volume = $param->{volume};
>   my $delete = $param->{'delete-snapshot'};
> - PVE::Storage::volume_import($cfg, $infh, $volume, $param->{format},
> - $param->{base}, $param->{'with-snapshots'});
> - PVE::Storage::volume_snapshot_delete($cfg, $volume, $delete)
> + my $imported_volid = PVE::Storage::volume_import($cfg, $infh, $volume, $param->{format},
> + $param->{base}, $param->{'with-snapshots'}, $param->{'allow-rename'});
> + PVE::Storage::volume_snapshot_delete($cfg, $imported_volid, $delete)
>   if defined($delete);
> - return;
> + return $imported_volid;
>  }
>  });
>  
> @@ -801,7 +808,10 @@ our $cmddef = {
>  path => [ __PACKAGE__, 'path', ['volume']],
>  extractconfig => [__PACKAGE__, 'extractconfig', ['volume']],
>  export => [ __PACKAGE__, 'export', ['volume', 'format', 'filename']],
> -import => [ __PACKAGE__, 'import', ['volume', 'format', 'filename']],
> +import => [ __PACKAGE__, 'import', ['volume', 'format', 'filename'], {}, 
> sub  {
> + my $volid = shift;
> + print PVE::Storage::volume_imported_message($volid);
> +}],
>  apiinfo => [ __PACKAGE__, 'apiinfo', [], {}, sub {
>   my $res = shift;
>  
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index ab6a543..cac3ba7 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -39,11 +39,11 @@ use PVE::Storage::DRBDPlugin;
>  use PVE::Storage::PBSPlugin;
>  
>  # Storage API version. Icrement it on changes in storage API interface.
> -use constant APIVER => 3;
> +use constant APIVER => 4;
>  # Age is the number of versions we're backward compatible with.
>  # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>  # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 2;
> +use constant APIAGE => 3;
>  
>  # load standard plugins
>  PVE::Storage::DirPlugin->register();
> @@ -578,7 +578,7 @@ sub storage_migrate {
>  my $scfg = storage_config($cfg, $storeid);
>  
>  # no need to migrate shared content
> -return if $storeid eq $target_storeid && $scfg->{shared};
> +return $volid if $storeid eq $target_storeid && $scfg->{shared};
>  
>  my $tcfg = storage_config($cfg, $target_storeid);
>  
> @@ -611,6 +611,12 @@ sub storage_migrate {
>   $import_fn = "tcp://$net";
>  }
>  
> +my $target_apiver = 1; # if there is no apiinfo call, assume 1
> +my $get_api_version = [@$ssh, 'pvesm', 'apiinfo'];
> +my $match_api_version = sub { $target_apiver = $1 if $_[0] =~ m!^APIVER (\d+)$!; };
> +eval { run_command($get_api_version, logfunc => $match_api_version); };
> +
> +$with_snapshots = $with_snapshots ? 1 : 0; # sanitize for passing as cli parameter

this slipped through? already happens at the start of this sub

>  my $send = ['pvesm', 'export', $volid, $format, '-', '-with-snapshots', $with_snapshots];
>  my $recv = [@$ssh, '--', 'pvesm', 'import', $target_volid, $format, $import_fn, '-with-snapshots', $with_snapshots];
>  if (defined($snapshot)) {
> @@ -619,6 +625,7 @@ sub storage_migrate {
>  if ($migration_snapshot) {
>   push @$recv, '-delete-snapshot', $snapshot;
>  }
> +push @$recv, '-allow-rename', 

Re: [pve-devel] [PATCH v3 storage 14/22] storage_migrate: add volname_for_storage helper

2020-03-16 Thread Fabian Grünbichler
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> to guess a valid volname for a targetstorage of a different type.
> This makes it possible to migrate raw volumes between 'dir' and 'lvm'
> storages.
> 
> It is only used when the storage type for the source storage X
> and target storage Y differ and should work as long as Y uses
> the standard naming scheme (VMID/vm-VMID-name.fmt or vm-VMID-name, respectively).
> If it doesn't, we get an invalid name and fail, which is the old
> behavior (except if X and Y have different types but the same
> non-standard naming-scheme, where the old behavior did work).
> 
> The original name is preserved, except for a possible extension
> and it's also checked whether the format is valid for the target storage.
> Example: mylvm:vm-123-disk-4 <-> mydir:123/vm-123-disk-4.raw
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/Storage.pm | 30 --
>  1 file changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index cac3ba7..91b1ec8 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -562,6 +562,25 @@ sub abs_filesystem_path {
>  return $path;
>  }
>  

might benefit from a short comment here, something like:

used by storage_migrate to convert raw volume names between path and
non-path based storages (e.g., LVM).

at least AFAIU it ;) if the above is true, it might also make sense to limit
it to format 'raw' for now?
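For illustration, the suggested comment plus a 'raw'-only guard could look
roughly like this on top of the helper quoted below (just a sketch - whether
to actually restrict it to 'raw' is the open question above):

# used by storage_migrate to convert volume names between path based and
# non-path based storages (e.g., LVM)
my $volname_for_storage = sub {
    my ($cfg, $volid, $target_storeid) = @_;

    my (undef, $name, $vmid, undef, undef, undef, $format) = parse_volname($cfg, $volid);
    die "converting volume names is only supported for 'raw' volumes\n"
        if $format ne 'raw';

    # ... rest as in the patch below
};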

> +my $volname_for_storage = sub {
> +my ($cfg, $volid, $target_storeid) = @_;
> +
> +my (undef, $name, $vmid, undef, undef, undef, $format) = parse_volname($cfg, $volid);
> +my $target_scfg = storage_config($cfg, $target_storeid);
> +
> +my (undef, $valid_formats) = PVE::Storage::Plugin::default_format($target_scfg);
> +my $format_is_valid = grep { $_ eq $format } @$valid_formats;
> +die "unsupported format '$format' for storage type $target_scfg->{type}\n" if !$format_is_valid;
> +
> +(my $name_without_extension = $name) =~ s/\.$format$//;
> +
> +if ($target_scfg->{path}) {
> +   return "$vmid/$name_without_extension.$format";
> +} else {
> +   return "$name_without_extension";
> +}
> +};
> +
>  sub storage_migrate {
>  my ($cfg, $volid, $target_sshinfo, $target_storeid, $opts, $logfunc) = @_;
>  
> @@ -573,14 +592,21 @@ sub storage_migrate {
>  my $allow_rename = $opts->{allow_rename} ? 1 : 0;
>  
>  my ($storeid, $volname) = parse_volume_id($volid);
> -my $target_volname = $opts->{target_volname} || $volname;
>  
>  my $scfg = storage_config($cfg, $storeid);
> +my $tcfg = storage_config($cfg, $target_storeid);
>  
>  # no need to migrate shared content
>  return $volid if $storeid eq $target_storeid && $scfg->{shared};
>  
> -my $tcfg = storage_config($cfg, $target_storeid);
> +my $target_volname;
> +if ($opts->{target_volname}) {
> + $target_volname = $opts->{target_volname};
> +} elsif ($scfg->{type} eq $tcfg->{type}) {

  || ($scfg->{path} && $tcfg->{path}) || (!$scfg->{path} && !$tcfg->{path}))

?
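i.e. the assembled condition would then read roughly:

} elsif ($scfg->{type} eq $tcfg->{type}
    || ($scfg->{path} && $tcfg->{path})
    || (!$scfg->{path} && !$tcfg->{path})) {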

> + $target_volname = $volname;
> +} else {
> + $target_volname = $volname_for_storage->($cfg, $volid, $target_storeid);
> +}
>  
>  my $target_volid = "${target_storeid}:${target_volname}";
>  
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v3 qemu-server 11/22] Use new storage_migrate interface

2020-03-16 Thread Fabian Grünbichler
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/QemuMigrate.pm | 12 
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 44e4c57..464abc6 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -478,15 +478,19 @@ sub sync_disks {
>   next if $self->{replicated_volumes}->{$volid};
>   push @{$self->{volumes}}, $volid;
>   my $opts = $self->{opts};
> - my $insecure = $opts->{migration_type} eq 'insecure';
> - my $with_snapshots = $local_volumes->{$volid}->{snapshots};
>   # use 'migrate' limit for transfer to other node
>   my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', [$targetsid, $sid], $opts->{bwlimit});
>   # JSONSchema and get_bandwidth_limit use kbps - storage_migrate bps
>   $bwlimit = $bwlimit * 1024 if defined($bwlimit);
>  
> - PVE::Storage::storage_migrate($self->{storecfg}, $volid, $self->{ssh_info}, $targetsid,
> -   undef, undef, undef, $bwlimit, $insecure, $with_snapshots);
> + my $storage_migrate_opts = {
> + 'bwlimit' => $bwlimit,
> + 'insecure' => $opts->{migration_type} eq 'insecure',
> + 'with_snapshots' => $local_volumes->{$volid}->{snapshots},
> + };
> +
> + PVE::Storage::storage_migrate($self->{storecfg}, $volid,
> + $self->{ssh_info}, $targetsid, $storage_migrate_opts);

nit: wrong alignment

>   }
>   }
>  };
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v3 storage 08/23] Add apiinfo helper to pvesm

2020-03-16 Thread Fabian Grünbichler
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/CLI/pvesm.pm | 30 ++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
> index 510faba..7c0e259 100755
> --- a/PVE/CLI/pvesm.pm
> +++ b/PVE/CLI/pvesm.pm
> @@ -47,6 +47,30 @@ sub setup_environment {
>  PVE::RPCEnvironment->setup_default_cli_env();
>  }
>  
> +__PACKAGE__->register_method ({
> +name => 'apiinfo',
> +path => 'apiinfo',
> +method => 'GET',
> +description => "For internal use. Returns APIVER and APIAGE.",

IMHO this is just as much for internal as for third-party use. E.g.,
some config management tool might use this to check which version of a
third-party plugin to deploy, or the install script of a third-party
plugin might use this to select a compatible version.
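For example, a (hypothetical) install or deployment script for such a plugin
could check compatibility roughly like this, following the libtool-style
APIVER/APIAGE semantics documented in Storage.pm (example values only):

# storage API version the third-party plugin was written against (example)
my $plugin_apiver = 3;

my ($apiver, $apiage) = (undef, 0);
for my $line (split /\n/, qx(pvesm apiinfo)) {
    $apiver = $1 if $line =~ m/^APIVER (\d+)$/;
    $apiage = $1 if $line =~ m/^APIAGE (\d+)$/;
}

die "cannot determine the node's storage API version\n" if !defined($apiver);
die "plugin (APIVER $plugin_apiver) not compatible with node (APIVER $apiver, APIAGE $apiage)\n"
    if $plugin_apiver > $apiver || $plugin_apiver < $apiver - $apiage;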

> +parameters => {
> + additionalProperties => 0,
> + properties => {},
> +},
> +returns => {
> + type => 'object',
> + properties => {
> + apiver => { type => 'integer' },
> + apiage => { type => 'integer' },
> + },
> +},
> +code => sub {
> + return {
> + apiver => PVE::Storage::APIVER,
> + apiage => PVE::Storage::APIAGE,
> + };
> +}
> +});
> +
>  __PACKAGE__->register_method ({
>  name => 'path',
>  path => 'path',
> @@ -778,6 +802,12 @@ our $cmddef = {
>  extractconfig => [__PACKAGE__, 'extractconfig', ['volume']],
>  export => [ __PACKAGE__, 'export', ['volume', 'format', 'filename']],
>  import => [ __PACKAGE__, 'import', ['volume', 'format', 'filename']],
> +apiinfo => [ __PACKAGE__, 'apiinfo', [], {}, sub {
> + my $res = shift;
> +
> + print "APIVER $res->{apiver}\n";
> + print "APIAGE $res->{apiage}\n";
> +}],
>  };
>  
>  1;
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v3 guest-common 04/22] Add update_volume_ids

2020-03-16 Thread Fabian Grünbichler
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> This function is intended to be used after doing a migration where some
> of the volume IDs changed.
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/AbstractConfig.pm | 30 ++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
> index f2e130c..dc788c2 100644
> --- a/PVE/AbstractConfig.pm
> +++ b/PVE/AbstractConfig.pm
> @@ -457,6 +457,36 @@ sub foreach_volume {
>  $class->foreach_unused_volume($conf, $func, @param) if 
> $opts->{include_unused};
>  }
>  
> +# $volume_map is a hash of 'old_volid' => 'new_volid' pairs.
> +# This method replaces 'old_volid' by 'new_volid' throughout
> +# the config including snapshots and unused and vmstate volumes
> +sub update_volume_ids {
> +my ($class, $conf, $volume_map) = @_;

$conf here

> +
> +my $volid_key = $class->volid_key();
> +
> +my $do_replace = sub {
> + my ($key, $volume, $conf) = @_;

and $conf here

are easy to confuse. probably makes sense to rename either of them ;)
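e.g. (sketch, renaming only the inner parameter):

my $do_replace = sub {
    my ($key, $volume, $section_conf) = @_; # the top-level or snapshot config being updated

    my $old_volid = $volume->{$volid_key};
    if (my $new_volid = $volume_map->{$old_volid}) {
        $volume->{$volid_key} = $new_volid;
        $section_conf->{$key} = $class->print_volume($key, $volume);
    }
};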

> +
> + my $old_volid = $volume->{$volid_key};
> + if (my $new_volid = $volume_map->{$old_volid}) {
> + $volume->{$volid_key} = $new_volid;
> + $conf->{$key} = $class->print_volume($key, $volume);
> + }
> +};
> +
> +my $opts = {
> + 'include_unused' => 1,
> + 'extra_keys' => ['vmstate'],
> +};
> +
> +$class->foreach_volume($conf, $opts, $do_replace, $conf);
> +foreach my $snap (keys %{$conf->{snapshots}}) {
> + my $snap_conf = $conf->{snapshots}->{$snap};
> + $class->foreach_volume($snap_conf, $opts, $do_replace, $snap_conf);
> +}
> +}
> +
>  # Returns whether the template parameter is set in $conf.
>  sub is_template {
>  my ($class, $conf) = @_;
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v3 guest-common 03/22] foreach_volume: generalize and implement function

2020-03-16 Thread Fabian Grünbichler
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> Introduce a parameter $opts to allow for better control of which
> keys/volumes to use for the iteration and ability to reverse the order.
> Also, allow extra parameters for the function.
> 
> Removes the '__snapshot'-prefix for future use from outside the module.
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/AbstractConfig.pm | 40 +---
>  1 file changed, 29 insertions(+), 11 deletions(-)
> 
> diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
> index 5b1683b..f2e130c 100644
> --- a/PVE/AbstractConfig.pm
> +++ b/PVE/AbstractConfig.pm
> @@ -432,6 +432,31 @@ sub foreach_unused_volume {
>  }
>  }
>  
> +# Iterate over all configured volumes, calling $func for each key/value pair
> +# with additional parameters @param.
> +# By default, unused volumes and specials like vmstate are excluded.
> +# Options: reverse - reverses the order for the iteration
> +#   include_unused  - also iterate over unused volumes
> +#   extra_keys  - an array of extra keys to use for the iteration
> +sub foreach_volume {
> +my ($class, $conf, $opts, $func, @param) = @_;

would it make sense to have foreach_volume with empty $opts as a wrapper 
around foreach_volume_full which is your foreach_volume? I expect many 
calls will just use the default/empty options..

it would allow us to drop foreach_mountpoint as wrapper around 
foreach_volume with empty config as well. all but one(!) call in 
pve-container are plain foreach_mountpoint..

many calls in qemu-server now use PVE::QemuServer::Drive::foreach_drive, 
but could actually use PVE::QemuConfig->foreach_volume (in fact, I think 
if we move back foreach_volid to QemuServer.pm we could even drop 
foreach_drive altogether? as always, hindsight is 20/20 ;))

I'd really like to avoid adding an abstract, general iterator over 
volumes for pve-container and qemu-server, but then only using it for 
half of the call sites. it's okay to have some specialized helpers like 
foreach_volid if they are used more than once, but the end result should 
not be that we now have

- foreach_volume (abstract/general)
- foreach_mountpoint(_reverse) (pve-container, as wrapper around 
  foreach-volume)
- foreach_drive (qemu-server, which basically does the same as 
  foreach_volume, but in a file where we can't call foreach_volume)
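As a sketch, the wrapper variant suggested above would look something like
this (with foreach_volume_full as the proposed name for what this patch
currently calls foreach_volume):

sub foreach_volume_full {
    my ($class, $conf, $opts, $func, @param) = @_;
    # ... implementation as introduced by this patch ...
}

sub foreach_volume {
    my ($class, $conf, $func, @param) = @_;
    return $class->foreach_volume_full($conf, undef, $func, @param);
}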

> +
> +my @keys = $class->valid_volume_keys($opts->{reverse});
> +push @keys, @{$opts->{extra_keys}} if $opts->{extra_keys};

I am not sure what the semantics with extra_keys and reverse should be 
(currently reverse is just used for mounting LXC mountpoints IIRC, where 
we probably never have any extra_keys and don't set unused volumes 
either.

but the whole purpose for the reverse option is to do

foreach_volume(..., do_something())

...

foreach_volume(..., { reverse => 1 }, undo_something())

where ordering is important. so we should either

die "'reverse' iteration only supported for default keys\n"
if reverse && (extra_keys || include_unused)

or make sure that extra_keys and include_unused also follow reverse 
semantics.
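In terms of this patch's $opts, the first (stricter) option would be roughly:

die "'reverse' iteration only supported for default keys\n"
    if $opts->{reverse} && ($opts->{extra_keys} || $opts->{include_unused});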

> +
> +foreach my $key (@keys) {
> + my $volume_string = $conf->{$key};
> + next if !defined($volume_string);
> +
> + my $volume = $class->parse_volume($key, $volume_string, 1);
> + next if !defined($volume);
> +
> + $func->($key, $volume, @param);
> +}
> +
> +$class->foreach_unused_volume($conf, $func, @param) if 
> $opts->{include_unused};
> +}
> +
>  # Returns whether the template parameter is set in $conf.
>  sub is_template {
>  my ($class, $conf) = @_;
> @@ -583,13 +608,6 @@ sub __snapshot_rollback_get_unused {
>  die "abstract method - implement me\n";
>  }
>  
> -# Iterate over all configured volumes, calling $func for each key/value pair.
> -sub __snapshot_foreach_volume {
> -my ($class, $conf, $func) = @_;
> -
> -die "abstract method - implement me\n";
> -}
> -
>  # Copy the current config $source to the snapshot config $dest
>  sub __snapshot_copy_config {
>  my ($class, $source, $dest) = @_;
> @@ -722,7 +740,7 @@ sub snapshot_create {
>  
>   $class->__snapshot_create_vol_snapshots_hook($vmid, $snap, $running, 
> "before");
>  
> - $class->__snapshot_foreach_volume($snap, sub {
> + $class->foreach_volume($snap, undef, sub {
>   my ($vs, $volume) = @_;
>  
>   $class->__snapshot_create_vol_snapshot($vmid, $vs, $volume, 
> $snapname);
> @@ -814,7 +832,7 @@ sub snapshot_delete {
>  };
>  
>  # now remove all volume snapshots
> -$class->__snapshot_foreach_volume($snap, sub {
> +$class->foreach_volume($snap, undef, sub {
>   my ($vs, $volume) = @_;
>  
>   return if $snapname eq 'vzdump' && $vs ne 'rootfs' && 
> !$volume->{backup};
> @@ -888,7 +906,7 @@ sub snapshot_rollback {
>  
>  my $snap = &$get_snapshot_config();
>  
> -$class->__snapshot_foreach_volume($snap, sub {
> +

[pve-devel] [PATCH pve-network] fix total removal of zones|vnets|controllers

2020-03-16 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/Network/SDN/Controllers.pm | 3 ++-
 PVE/Network/SDN/Vnets.pm   | 2 +-
 PVE/Network/SDN/Zones.pm   | 3 ++-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/PVE/Network/SDN/Controllers.pm b/PVE/Network/SDN/Controllers.pm
index 16c664d..1741c95 100644
--- a/PVE/Network/SDN/Controllers.pm
+++ b/PVE/Network/SDN/Controllers.pm
@@ -33,7 +33,8 @@ sub sdn_controllers_config {
 
 sub config {
 my $config = cfs_read_file("sdn/controllers.cfg.new");
-$config = cfs_read_file("sdn/controllers.cfg") if !keys %{$config->{ids}};
+$config = cfs_read_file("sdn/controllers.cfg") if !-e 
"/etc/pve/sdn/controllers.cfg.new";
+
 return $config;
 }
 
diff --git a/PVE/Network/SDN/Vnets.pm b/PVE/Network/SDN/Vnets.pm
index 725605b..ca6e1d0 100644
--- a/PVE/Network/SDN/Vnets.pm
+++ b/PVE/Network/SDN/Vnets.pm
@@ -23,7 +23,7 @@ sub sdn_vnets_config {
 
 sub config {
 my $config = cfs_read_file("sdn/vnets.cfg.new");
-$config = cfs_read_file("sdn/vnets.cfg") if !keys %{$config->{ids}};
+$config = cfs_read_file("sdn/vnets.cfg") if !-e 
"/etc/pve/sdn/vnets.cfg.new";
 return $config;
 }
 
diff --git a/PVE/Network/SDN/Zones.pm b/PVE/Network/SDN/Zones.pm
index 17ef507..ae31fea 100644
--- a/PVE/Network/SDN/Zones.pm
+++ b/PVE/Network/SDN/Zones.pm
@@ -39,7 +39,8 @@ sub sdn_zones_config {
 
 sub config {
 my $config = cfs_read_file("sdn/zones.cfg.new");
-$config = cfs_read_file("sdn/zones.cfg") if !keys %{$config->{ids}};
+$config = cfs_read_file("sdn/zones.cfg") if !-e 
"/etc/pve/sdn/zones.cfg.new";
+
 return $config;
 }
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH-SERIES v3] Make storage migration more flexible

2020-03-16 Thread Fabian Grünbichler
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> This is the second half for the previous series [0].
> 
> This series aims to allow offline migration with '--targetstorage'
> and improve handling unsued/orphaned disks when migrating.
> It also makes it possible to migrate volumes between storages with
> a 'path' and storages without if the target storage uses the standard
> naming scheme and the source format is supported (e.g. migrating raw
> volumes between storages with a path and lvm storages).
> 
> It also adds an apiinfo call to pvesm that can be used to determine
> APIVER and APIAGE of the remote node and does some general refactoring
> regarding volume iterators.
> 
> The series is organised as follows:
> #1-#7 introduce and implement volume related helpers
> Mutual dependencies for both qemu-server and container with guest-common,
> to be precise #1-#3 <-> #5 and #1-#3 <-> #7.
> #8 adds the apiinfo helper
> #9-#12 changes storage_migrate interface
> Another round of mutual dependencies, this time for storage with each
> of guest-common, qemu-server, container.
> #13-#18 implement the goals for this series, and the last inter-package
> dependency is here, because #15 (qemu-server) depends on #13 (storage)
> #19-#22 improve logging and refactor some code with foreach_volume
> 
> Changes from v2:
> * add apiinfo call and check if remote side supports allow_rename
> * add valid_volume_keys and implement (and make use of) foreach_volume 
> iterator
> * make volname_for_storage a local function and avoid
>   requiring a new print_volname for plugins
> * parse vm_state in parse_volume instead of parse_drive
> 
> [0]: https://pve.proxmox.com/pipermail/pve-devel/2020-February/041910.html

some individual comments (mainly about the refactoring where some areas 
could still be improved) on individual patches.

gave this a limited test run as well, seems to work as expected :) only 
issue that I noticed (possibly also affects the live migration code 
path?) is that --targetstorage does not respect a storage's content type 
limit. we are also still lacking cleanup for failed sync_disks (e.g., 
after M out of N disks have been synced), or failed migration after 
successful sync_disks.
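A rough sketch of the missing check, as it could look in sync_disks before
calling storage_migrate (assuming the parsed storage config exposes the
allowed content types as a hash and that 'images' is the relevant type here -
both assumptions would need verifying):

my $tcfg = PVE::Storage::storage_config($self->{storecfg}, $targetsid);
die "target storage '$targetsid' does not allow content type 'images'\n"
    if !$tcfg->{content}->{images};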

> 
> 
> guest-common:
> 
> Fabian Ebner (5):
>   Add interfaces for volume-related helpers
>   Add foreach_unused_volume
>   foreach_volume: generalize and implement function
>   Use new storage_migrate interface
>   Add update_volume_ids
> 
>  PVE/AbstractConfig.pm | 113 ++
>  PVE/Replication.pm|  12 -
>  2 files changed, 112 insertions(+), 13 deletions(-)
> 
> 
> container:
> 
> Fabian Ebner (3):
>   Implement volume-related helpers and use new foreach_volume
>   Use new storage_migrate interface
>   Use foreach_volume instead of foreach_mountpoint_full to avoid
> duplication
> 
>  src/PVE/API2/LXC.pm|  4 +--
>  src/PVE/API2/LXC/Config.pm |  2 +-
>  src/PVE/CLI/pct.pm |  2 +-
>  src/PVE/LXC/Config.pm  | 53 +-
>  src/PVE/LXC/Migrate.pm | 10 ---
>  5 files changed, 40 insertions(+), 31 deletions(-)
> 
> 
> qemu-server
> 
> Fabian Ebner (10):
>   Implement volume-related helpers and use new foreach_volume
>   Allow parsing vmstate entries
>   Use new storage_migrate interface
>   Take note of changes to the volume IDs when migrating and update the
> config
>   Allow specifying targetstorage for offline migration
>   Update volume IDs in one go
>   sync_disks: use allow_rename to avoid collisions on the target storage
>   sync_disks: be more verbose if storage_migrate fails
>   sync_disks: log output of storage_migrate
>   update_disk_config: use config volume iterators instead of loops
> 
>  PVE/API2/Qemu.pm|  3 ---
>  PVE/QemuConfig.pm   | 51 +++--
>  PVE/QemuMigrate.pm  | 44 +++
>  PVE/QemuServer.pm   | 50 +---
>  PVE/QemuServer/Drive.pm |  1 +
>  5 files changed, 105 insertions(+), 44 deletions(-)
> 
> 
> storage:
> 
> Fabian Ebner (4):
>   Add apiinfo helper to pvesm
>   Collect optional parameters for storage_migrate into $opts
>   Introduce allow_rename parameter for pvesm import and storage_migrate
>   storage_migrate: add volname_for_storage helper
> 
>  PVE/API2/Storage/Content.pm  |  2 +-
>  PVE/CLI/pvesm.pm | 52 +---
>  PVE/Storage.pm   | 94 ++--
>  PVE/Storage/LVMPlugin.pm | 17 ---
>  PVE/Storage/Plugin.pm| 16 --
>  PVE/Storage/ZFSPoolPlugin.pm | 13 +++--
>  6 files changed, 155 insertions(+), 39 deletions(-)
> 
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing 

[pve-devel] [PATCH v2 qemu-server] cfg2cmd: Add the audiodev property to audio devs

2020-03-16 Thread Aaron Lauterer
With Qemu 4.2 a new `audiodev` property was introduced [0] to explicitly
specify the backend to be used for the audio device. This is accompanied
with a warning that the fallback to the default audio backend is
deprecated.

[0] https://wiki.qemu.org/ChangeLog/4.2#Audio

Signed-off-by: Aaron Lauterer 
---
v1 [1]->v2:
the audiodev property is always set, omitting the machine version check.

tested live migration with all 3 audio devs in both directions.


[1] https://pve.proxmox.com/pipermail/pve-devel/2020-March/042381.html

 PVE/QemuServer.pm | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b2ff515..0905e04 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3221,11 +3221,11 @@ sub config_to_command {
 
my $id = $audio->{dev_id};
if ($audio->{dev} eq 'AC97') {
-   push @$devices, '-device', "AC97,id=${id}${audiopciaddr}";
+   push @$devices, '-device', "AC97,id=${id}${audiopciaddr},audiodev=$audio->{backend_id}";
} elsif ($audio->{dev} =~ /intel\-hda$/) {
push @$devices, '-device', "$audio->{dev},id=${id}${audiopciaddr}";
-   push @$devices, '-device', "hda-micro,id=${id}-codec0,bus=${id}.0,cad=0";
-   push @$devices, '-device', "hda-duplex,id=${id}-codec1,bus=${id}.0,cad=1";
+   push @$devices, '-device', "hda-micro,id=${id}-codec0,bus=${id}.0,cad=0,audiodev=$audio->{backend_id}";
+   push @$devices, '-device', "hda-duplex,id=${id}-codec1,bus=${id}.0,cad=1,audiodev=$audio->{backend_id}";
} else {
die "unkown audio device '$audio->{dev}', implement me!";
}
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v4 qemu-server] Change format for unused drives

2020-03-16 Thread Fabian Grünbichler
On March 12, 2020 11:19 am, Fabian Ebner wrote:
> and make it match with what parse_drive does. Even though the 'real' format
> was pve-volume-id, callers already expected that parse_drive returns a hash
> with a valid 'file' key (e.g. PVE/API2/Qemu.pm:1147ff).
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> This is the last patch left over from the series
> refactoring the drive code [0], although not directly related.
> 
> [0]: https://pve.proxmox.com/pipermail/pve-devel/2020-March/042007.html
> 
> Changes from v3:
> * make sure the format describes a hash with a 'file' key
>   as that's what callers expect
> 
>  PVE/QemuServer/Drive.pm | 15 +--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index e927abf..d412147 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -355,9 +355,20 @@ for (my $i = 0; $i < $MAX_VIRTIO_DISKS; $i++)  {
>  
>  $drivedesc_hash->{efidisk0} = $efidisk_desc;
>  
> +my $unused_fmt = {
> +volume => { alias => 'file' },
> +file => {
> + type => 'string',
> + format => 'pve-volume-id',
> + default_key => 1,
> + format_description => 'volume',
> + description => "The drive's backing volume.",
> +},
> +};
> +
>  our $unuseddesc = {
>  optional => 1,
> -type => 'string', format => 'pve-volume-id',
> +type => 'string', format => $unused_fmt,
>  description => "Reference to unused volumes. This is used internally, 
> and should not be modified manually.",
>  };
>  
> @@ -418,7 +429,7 @@ sub parse_drive {
>   return undef;
>  }
>  
> -my $desc = $key =~ /^unused\d+$/ ? $alldrive_fmt
> +my $desc = $key =~ /^unused\d+$/ ? $unuseddesc->{format}
>   : $drivedesc_hash->{$key}->{format};

couldn't we just put the unused schema into $drivedesc_hash as well?

is_valid_drivename would need to skip them[1], but we'd have all the disk 
schema in one place.

that being said - the patch as is LGTM and the above can be done as 
follow-up just as well:

Reviewed-By: Fabian Grünbichler 

1: in general? or just by default? maybe there are call sites that would 
benefit/get simpler by including unused as well..
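A possible shape for that follow-up (sketch only; $MAX_UNUSED_DISKS is
assumed to be an existing limit constant analogous to $MAX_VIRTIO_DISKS):

$drivedesc_hash->{"unused$_"} = $unuseddesc for (0 .. $MAX_UNUSED_DISKS - 1);

sub is_valid_drivename {
    my ($dev) = @_;
    # unused entries stay excluded by default; callers that want them
    # could get an extra flag or a separate helper
    return defined($drivedesc_hash->{$dev}) && $dev !~ m/^unused\d+$/;
}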


>  if (!$desc) {
>   warn "invalid drive key: $key\n";
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server] cfg2cmd: Fix audio device backend config

2020-03-16 Thread Aaron Lauterer



On 3/16/20 8:04 AM, Fabian Grünbichler wrote:

On March 13, 2020 4:21 pm, Aaron Lauterer wrote:

Add the `,audiodev=` property to audio devices when machine version is
4.2 or higher.

With Qemu 4.2 a new `audiodev` property was introduced [0] to explicitly
specify the backend to be used for the audio device. This is accompanied
with a warning that the fallback to the default audio backend is
deprecated.

[0] https://wiki.qemu.org/ChangeLog/4.2#Audio

Signed-off-by: Aaron Lauterer 
---

tested this with all 3 audio devices available in a local VM and one
live migration from a qemu 4.1 node to a 4.2 node with the default
ich9-intel-hda dev.

  PVE/QemuServer.pm | 12 +---
  1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b2ff515..6808623 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3220,12 +3220,18 @@ sub config_to_command {
my $audiopciaddr = print_pci_addr("audio0", $bridges, $arch, 
$machine_type);
  
  	my $id = $audio->{dev_id};

+
+   my $audiodevsuffix = '';


this is not a very good variable name


yep, naming is hard ;)



+   if (min_version($machine_version, 4, 2)) {
+   $audiodevsuffix = ",audiodev=$audio->{backend_id}";
+   }
+
if ($audio->{dev} eq 'AC97') {
-   push @$devices, '-device', "AC97,id=${id}${audiopciaddr}";
+   push @$devices, '-device', 
"AC97,id=${id}${audiopciaddr}${audiodevsuffix}";


because now it looks like this is some suffix of the 'id' option

how about $audio_backend ? would it work to just add
',audiodev=$audio->{backend_id}' unconditionally? the -audiodev gets
added unconditionally, so it should always be available?


I will have to test if this will break live migration. Otherwise naming 
it $audio_backend does seem like a decent name.





} elsif ($audio->{dev} =~ /intel\-hda$/) {
push @$devices, '-device', "$audio->{dev},id=${id}${audiopciaddr}";
-   push @$devices, '-device', 
"hda-micro,id=${id}-codec0,bus=${id}.0,cad=0";
-   push @$devices, '-device', 
"hda-duplex,id=${id}-codec1,bus=${id}.0,cad=1";
+   push @$devices, '-device', 
"hda-micro,id=${id}-codec0,bus=${id}.0,cad=0${audiodevsuffix}";
+   push @$devices, '-device', 
"hda-duplex,id=${id}-codec1,bus=${id}.0,cad=1${audiodevsuffix}";
} else {
die "unkown audio device '$audio->{dev}', implement me!";
}
--
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH container] allow restoring non-volume backups again

2020-03-16 Thread Fabian Grünbichler
this got broken with PBS integration patches

Signed-off-by: Fabian Grünbichler 
---
https://forum.proxmox.com/threads/proxmox-6-1-7-pct-restore-unable-to-parse-volume-id.67076/

 src/PVE/LXC/Create.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 22a6e76..9faec63 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -253,7 +253,7 @@ sub recover_config_from_tar {
 sub restore_configuration {
 my ($vmid, $storage_cfg, $archive, $rootdir, $conf, $restricted, $unique, 
$skip_fw) = @_;
 
-my ($storeid, $volname) = PVE::Storage::parse_volume_id($archive);
+my ($storeid, $volname) = PVE::Storage::parse_volume_id($archive, 1);
 if (defined($storeid)) {
my $scfg = PVE::Storage::storage_config($storage_cfg, $storeid);
if ($scfg->{type} eq 'pbs') {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server] cfg2cmd: Fix audio device backend config

2020-03-16 Thread Fabian Grünbichler
On March 13, 2020 4:21 pm, Aaron Lauterer wrote:
> Add the `,audiodev=` property to audio devices when machine version is
> 4.2 or higher.
> 
> With Qemu 4.2 a new `audiodev` property was introduced [0] to explicitly
> specify the backend to be used for the audio device. This is accompanied
> with a warning that the fallback to the default audio backend is
> deprecated.
> 
> [0] https://wiki.qemu.org/ChangeLog/4.2#Audio
> 
> Signed-off-by: Aaron Lauterer 
> ---
> 
> tested this with all 3 audio devices available in a local VM and one
> live migration from a qemu 4.1 node to a 4.2 node with the default
> ich9-intel-hda dev.
> 
>  PVE/QemuServer.pm | 12 +---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index b2ff515..6808623 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -3220,12 +3220,18 @@ sub config_to_command {
>   my $audiopciaddr = print_pci_addr("audio0", $bridges, $arch, 
> $machine_type);
>  
>   my $id = $audio->{dev_id};
> +
> + my $audiodevsuffix = '';

this is not a very good variable name

> + if (min_version($machine_version, 4, 2)) {
> + $audiodevsuffix = ",audiodev=$audio->{backend_id}";
> + }
> +
>   if ($audio->{dev} eq 'AC97') {
> - push @$devices, '-device', "AC97,id=${id}${audiopciaddr}";
> + push @$devices, '-device', 
> "AC97,id=${id}${audiopciaddr}${audiodevsuffix}";

because now it looks like this is some suffix of the 'id' option

how about $audio_backend ? would it work to just add 
',audiodev=$audio->{backend_id}' unconditionally? the -audiodev gets 
added unconditionally, so it should always be available?
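i.e. something like the following (sketch of the renamed, unconditional
variant - essentially what the v2 patch earlier in this digest ends up doing,
just with the value inlined there):

my $audio_backend = ",audiodev=$audio->{backend_id}";
push @$devices, '-device', "AC97,id=${id}${audiopciaddr}${audio_backend}";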

>   } elsif ($audio->{dev} =~ /intel\-hda$/) {
>   push @$devices, '-device', "$audio->{dev},id=${id}${audiopciaddr}";
> - push @$devices, '-device', 
> "hda-micro,id=${id}-codec0,bus=${id}.0,cad=0";
> - push @$devices, '-device', 
> "hda-duplex,id=${id}-codec1,bus=${id}.0,cad=1";
> + push @$devices, '-device', 
> "hda-micro,id=${id}-codec0,bus=${id}.0,cad=0${audiodevsuffix}";
> + push @$devices, '-device', 
> "hda-duplex,id=${id}-codec1,bus=${id}.0,cad=1${audiodevsuffix}";
>   } else {
>   die "unkown audio device '$audio->{dev}', implement me!";
>   }
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 2/5] ceph: add 'scope' parameter to metadata api call

2020-03-16 Thread Dominik Csapak
so that we can choose to only include the versions and not all metadata.
This is done to avoid having a separate 'versions' API call.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Cluster/Ceph.pm | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Cluster/Ceph.pm b/PVE/API2/Cluster/Ceph.pm
index 71d5cde6..e18d421e 100644
--- a/PVE/API2/Cluster/Ceph.pm
+++ b/PVE/API2/Cluster/Ceph.pm
@@ -59,7 +59,14 @@ __PACKAGE__->register_method ({
 },
 parameters => {
additionalProperties => 0,
-   properties => {},
+   properties => {
+   scope => {
+   type => 'string',
+   optional => 1,
+   default => 'all',
+   enum => ['all', 'versions', ],
+   },
+   },
 },
 returns => { type => 'object' },
 code => sub {
@@ -68,6 +75,7 @@ __PACKAGE__->register_method ({
PVE::Ceph::Tools::check_ceph_inited();
 
my $rados = PVE::RADOS->new();
+   my $scope = $param->{scope} // 'all';
 
my $res = {
# FIXME: remove with 7.0 depreacated by structured 'versions'
@@ -78,6 +86,8 @@ __PACKAGE__->register_method ({
$res->{node} = $vers;
}
 
+   return $res if ($scope eq 'versions');
+
for my $type ( qw(mon mgr mds) ) {
my $typedata = PVE::Ceph::Services::get_cluster_service($type);
my $data = {};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 4/5] gui: ceph: icons for versions in ServiceList

2020-03-16 Thread Dominik Csapak
this makes it closer to the OSD list, using the new 'version-only'
metadata api call

this partially fixes #2468

Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/ServiceList.js | 58 +++-
 1 file changed, 57 insertions(+), 1 deletion(-)

diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index 9da42cd4..77254acf 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -65,6 +65,48 @@ Ext.define('PVE.node.CephServiceList', {
 controller: {
xclass: 'Ext.app.ViewController',
 
+   render_version: function(value, metadata, rec) {
+   let me = this.getView();
+   let host = rec.data.host;
+   let icon = "";
+   let v = value;
+   let nodev = [0];
+   if (me.nodeversions[host] !== undefined) {
+   nodev = me.nodeversions[host].version.parts;
+   }
+   let maxv = me.maxversion;
+
+   if (PVE.Utils.compare_ceph_versions(maxv, nodev) > 0) {
+   icon = PVE.Utils.get_ceph_icon_html('HEALTH_UPGRADE');
+   } else if (PVE.Utils.compare_ceph_versions(nodev, v) > 0) {
+   icon = PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
+   } else if (me.mixedversions) {
+   icon = PVE.Utils.get_ceph_icon_html('HEALTH_OK');
+   }
+
+   return icon + v;
+   },
+
+   getMaxVersions: function(store, records, success) {
+   if (!success || records.length < 1) {
+   return;
+   }
+   let me = this;
+   let view = me.getView();
+
+   view.nodeversions = records[0].data.node;
+   view.maxversion = [];
+   view.mixedversions = false;
+   for (const [nodename, data] of Object.entries(view.nodeversions)) {
+   if (PVE.Utils.compare_ceph_versions(data.version.parts, 
view.maxversion) > 0) {
+   if (view.maxversion.length > 0) {
+   view.mixedversions = true;
+   }
+   view.maxversion = data.version.parts;
+   }
+   }
+   },
+
init: function(view) {
if (view.pveSelNode) {
view.nodename = view.pveSelNode.data.node;
@@ -77,6 +119,19 @@ Ext.define('PVE.node.CephServiceList', {
throw "no type specified";
}
 
+   view.versionsstore = Ext.create('Proxmox.data.UpdateStore', {
+   autoLoad: true,
+   autoStart: true,
+   interval: 1,
+   storeid: 'ceph-versions-' + view.type + '-list' + view.nodename,
+   proxy: {
+   type: 'proxmox',
+   url: "/api2/json/cluster/ceph/metadata?scope=versions"
+   }
+   });
+
+   view.versionsstore.on('load', this.getMaxVersions, this);
+
view.rstore = Ext.create('Proxmox.data.UpdateStore', {
autoLoad: true,
autoStart: true,
@@ -296,7 +351,8 @@ Ext.define('PVE.node.CephServiceList', {
header: gettext('Version'),
flex: 3,
sortable: true,
-   dataIndex: 'version'
+   dataIndex: 'version',
+   renderer: 'render_version',
}
 ],
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 1/5] ceph: factor out get/broadcast ceph versions to ceph::services

2020-03-16 Thread Dominik Csapak
which also removes some dead code
(the my $local_last_version variable was never used)

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Cluster/Ceph.pm |  6 ++
 PVE/Ceph/Services.pm | 30 ++
 PVE/Service/pvestatd.pm  | 19 +--
 3 files changed, 33 insertions(+), 22 deletions(-)

diff --git a/PVE/API2/Cluster/Ceph.pm b/PVE/API2/Cluster/Ceph.pm
index 9e3cce50..71d5cde6 100644
--- a/PVE/API2/Cluster/Ceph.pm
+++ b/PVE/API2/Cluster/Ceph.pm
@@ -74,10 +74,8 @@ __PACKAGE__->register_method ({
version => PVE::Cluster::get_node_kv("ceph-version"),
};
 
-   if (defined(my $vers = PVE::Cluster::get_node_kv("ceph-versions"))) {
-   $res->{node} = {
-   map { eval { $_ => decode_json($vers->{$_}) } } keys %$vers
-   };
+   if (defined(my $vers = PVE::Ceph::Services::get_ceph_versions())) {
+   $res->{node} = $vers;
}
 
for my $type ( qw(mon mgr mds) ) {
diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
index c17008cf..ef46b9ea 100644
--- a/PVE/Ceph/Services.pm
+++ b/PVE/Ceph/Services.pm
@@ -47,6 +47,36 @@ sub broadcast_ceph_services {
 }
 }
 
+sub broadcast_ceph_versions {
+my ($version, $buildcommit, $vers_parts) = 
PVE::Ceph::Tools::get_local_version(1);
+
+if ($version) {
+   # FIXME: remove with 7.0 - for backward compat only
+   PVE::Cluster::broadcast_node_kv("ceph-version", $version);
+
+   my $node_versions = {
+   version => {
+   str => $version,
+   parts => $vers_parts,
+   },
+   buildcommit => $buildcommit,
+   };
+   PVE::Cluster::broadcast_node_kv("ceph-versions", 
encode_json($node_versions));
+}
+}
+
+sub get_ceph_versions {
+my $res;
+
+if (defined(my $vers = PVE::Cluster::get_node_kv("ceph-versions"))) {
+   $res = {
+   map { eval { $_ => decode_json($vers->{$_}) } } keys %$vers
+   };
+}
+
+return $res;
+}
+
 sub get_cluster_service {
 my ($type) = @_;
 
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index 88fb31ac..7899427d 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -465,24 +465,7 @@ sub update_ceph_metadata {
 
 PVE::Ceph::Services::broadcast_ceph_services();
 
-my ($version, $buildcommit, $vers_parts) = 
PVE::Ceph::Tools::get_local_version(1);
-
-
-my $local_last_version = PVE::Cluster::get_node_kv('ceph-versions');
-
-if ($version) {
-   # FIXME: remove with 7.0 - for backward compat only
-   PVE::Cluster::broadcast_node_kv("ceph-version", $version);
-
-   my $node_versions = {
-   version => {
-   str => $version,
-   parts => $vers_parts,
-   },
-   buildcommit => $buildcommit,
-   };
-   PVE::Cluster::broadcast_node_kv("ceph-versions", 
encode_json($node_versions));
-}
+PVE::Ceph::Services::broadcast_ceph_versions();
 }
 
 sub update_sdn_status {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 0/5] improve ux for ceph versions

2020-03-16 Thread Dominik Csapak
adds more visual hints about the ceph versions (needs upgrade/restart)
and a 'highest cluster version' on the ceph dashboard
to let the admin know what the expected version is

Dominik Csapak (5):
  ceph: factor out get/broadcast ceph versions to ceph::services
  ceph: add 'scope' parameter to metadata api call
  gui: ceph: let compare_ceph_versions accept arrays directly
  gui: ceph: icons for versions in ServiceList
  gui: ceph: add highest cluster version in summary

 PVE/API2/Cluster/Ceph.pm | 18 +++---
 PVE/Ceph/Services.pm | 30 +
 PVE/Service/pvestatd.pm  | 19 +--
 www/manager6/Utils.js| 17 --
 www/manager6/ceph/ServiceList.js | 58 +++-
 www/manager6/ceph/Status.js  | 51 ++--
 6 files changed, 164 insertions(+), 29 deletions(-)

-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v4 qemu-server] Change format for unused drives

2020-03-16 Thread Fabian Ebner



On 16.03.20 10:10, Fabian Grünbichler wrote:

On March 12, 2020 11:19 am, Fabian Ebner wrote:

and make it match with what parse_drive does. Even though the 'real' format
was pve-volume-id, callers already expected that parse_drive returns a hash
with a valid 'file' key (e.g. PVE/API2/Qemu.pm:1147ff).

Signed-off-by: Fabian Ebner 
---

This is the last patch left over from the series
refactoring the drive code [0], although not directly related.

[0]: https://pve.proxmox.com/pipermail/pve-devel/2020-March/042007.html

Changes from v3:
 * make sure the format describes a hash with a 'file' key
   as that's what callers expect

  PVE/QemuServer/Drive.pm | 15 +--
  1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e927abf..d412147 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -355,9 +355,20 @@ for (my $i = 0; $i < $MAX_VIRTIO_DISKS; $i++)  {
  
  $drivedesc_hash->{efidisk0} = $efidisk_desc;
  
+my $unused_fmt = {

+volume => { alias => 'file' },
+file => {
+   type => 'string',
+   format => 'pve-volume-id',
+   default_key => 1,
+   format_description => 'volume',
+   description => "The drive's backing volume.",
+},
+};
+
  our $unuseddesc = {
  optional => 1,
-type => 'string', format => 'pve-volume-id',
+type => 'string', format => $unused_fmt,
  description => "Reference to unused volumes. This is used internally, and 
should not be modified manually.",
  };
  
@@ -418,7 +429,7 @@ sub parse_drive {

return undef;
  }
  
-my $desc = $key =~ /^unused\d+$/ ? $alldrive_fmt

+my $desc = $key =~ /^unused\d+$/ ? $unuseddesc->{format}
   : $drivedesc_hash->{$key}->{format};


couldn't we just put the unused schema into $drivedesc_hash as well?

is_valid_drivename would need to skip them[1], but we'd have all the disk
schema in one place.



Ok, I'll check the call sites and do a follow-up for this.


that being said - the patch as is LGTM and the above can be done as
follow-up just as well:

Reviewed-By: Fabian Grünbichler 

1: in general? or just by default? maybe there are call sites that would
benefit/get simpler by including unused as well..
 >

  if (!$desc) {
warn "invalid drive key: $key\n";
--
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v3 guest-common 03/22] foreach_volume: generalize and implement function

2020-03-16 Thread Fabian Ebner

On 16.03.20 12:05, Fabian Grünbichler wrote:

On March 12, 2020 1:08 pm, Fabian Ebner wrote:

Introduce a parameter $opts to allow for better control of which
keys/volumes to use for the iteration and ability to reverse the order.
Also, allow extra parameters for the function.

Removes the '__snapshot'-prefix for future use from outside the module.

Signed-off-by: Fabian Ebner 
---
  PVE/AbstractConfig.pm | 40 +---
  1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 5b1683b..f2e130c 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -432,6 +432,31 @@ sub foreach_unused_volume {
  }
  }
  
+# Iterate over all configured volumes, calling $func for each key/value pair

+# with additional parameters @param.
+# By default, unused volumes and specials like vmstate are excluded.
+# Options: reverse - reverses the order for the iteration
+# include_unused  - also iterate over unused volumes
+# extra_keys  - an array of extra keys to use for the iteration
+sub foreach_volume {
+my ($class, $conf, $opts, $func, @param) = @_;


would it make sense to have foreach_volume with empty $opts as a wrapper
around foreach_volume_full which is your foreach_volume? I expect many
calls will just use the default/empty options..

it would allow us to drop foreach_mountpoint as wrapper around
foreach_volume with empty config as well. all but one(!) call in
pve-container are plain foreach_mountpoint..

many calls in qemu-server now use PVE::QemuServer::Drive::foreach_drive,
but could actually use PVE::QemuConfig->foreach_volume (in fact, I think
if we move back foreach_volid to QemuServer.pm we could even drop
foreach_drive altogether? as always, hindsight is 20/20 ;))

I'd really like to avoid adding an abstract, general iterator over
volumes for pve-container and qemu-server, but then only using it for
half of the call sites. it's okay to have some specialized helpers like
foreach_volid if they are used more than once, but the end result should
not be that we now have

- foreach_volume (abstract/general)
- foreach_mountpoint(_reverse) (pve-container, as wrapper around
   foreach-volume)
- foreach_drive (qemu-server, which basically does the same as
   foreach_volume, but in a file where we can't call foreach_volume)



Ok, I didn't think about getting rid of foreach_drive altogether, but it 
sounds like the right approach.



+
+my @keys = $class->valid_volume_keys($opts->{reverse});
+push @keys, @{$opts->{extra_keys}} if $opts->{extra_keys};


I am not sure what the semantics with extra_keys and reverse should be
(currently reverse is just used for mounting LXC mountpoints IIRC, where
we probably never have any extra_keys and don't set unused volumes
either.

but the whole purpose for the reverse option is to do

foreach_volume(..., do_something())

...

foreach_volume(..., { reverse => 1 }, undo_something())

where ordering is important. so we should either

die "'reverse' iteration only supported for default keys\n"
 if reverse && (extra_keys || include_unused)

or make sure that extra_keys and include_unused also follow reverse
semantics.



I feel like dying is the best approach here, as it keeps the function 
simple and can always be changed later if reversing with non-default 
keys is ever needed.



+
+foreach my $key (@keys) {
+   my $volume_string = $conf->{$key};
+   next if !defined($volume_string);
+
+   my $volume = $class->parse_volume($key, $volume_string, 1);
+   next if !defined($volume);
+
+   $func->($key, $volume, @param);
+}
+
+$class->foreach_unused_volume($conf, $func, @param) if 
$opts->{include_unused};
+}
+
  # Returns whether the template parameter is set in $conf.
  sub is_template {
  my ($class, $conf) = @_;
@@ -583,13 +608,6 @@ sub __snapshot_rollback_get_unused {
  die "abstract method - implement me\n";
  }
  
-# Iterate over all configured volumes, calling $func for each key/value pair.

-sub __snapshot_foreach_volume {
-my ($class, $conf, $func) = @_;
-
-die "abstract method - implement me\n";
-}
-
  # Copy the current config $source to the snapshot config $dest
  sub __snapshot_copy_config {
  my ($class, $source, $dest) = @_;
@@ -722,7 +740,7 @@ sub snapshot_create {
  
  	$class->__snapshot_create_vol_snapshots_hook($vmid, $snap, $running, "before");
  
-	$class->__snapshot_foreach_volume($snap, sub {

+   $class->foreach_volume($snap, undef, sub {
my ($vs, $volume) = @_;
  
  	$class->__snapshot_create_vol_snapshot($vmid, $vs, $volume, $snapname);

@@ -814,7 +832,7 @@ sub snapshot_delete {
  };
  
  # now remove all volume snapshots

-$class->__snapshot_foreach_volume($snap, sub {
+$class->foreach_volume($snap, undef, sub {
my ($vs, $volume) = @_;
  
  	return if $snapname eq 'vzdump' && $vs ne 'rootfs' && !$volume->{backup};

@@ 

Re: [pve-devel] [PATCH v2 container] vzdump: move include logic for mountpoints to method

2020-03-16 Thread Fabian Grünbichler
needs a rebase (please always rebase before sending - the last commit on 
src/PVE/VZDump/LXC.pm was done before you sent this patch out!)

On February 27, 2020 11:01 am, Aaron Lauterer wrote:
> Move the logic which mountpoints are included in the backup job to its
> own method and adapt the VZDump code accordingly. This makes it possible
> to develop other features around backup jobs.
> 
> Signed-off-by: Aaron Lauterer 
> ---
> 
> v1 -> v2: implemented the suggestions from Fabian [0]
> 
> [0] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041359.html
> 
>  src/PVE/LXC/Config.pm | 24 
>  src/PVE/VZDump/LXC.pm | 31 ---
>  2 files changed, 40 insertions(+), 15 deletions(-)
> 
> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> index 00f4884..af728a8 100644
> --- a/src/PVE/LXC/Config.pm
> +++ b/src/PVE/LXC/Config.pm
> @@ -1576,4 +1576,28 @@ sub get_replicatable_volumes {
>  return $volhash;
>  }
>  
> +sub get_backup_volumes {
> +my ($class, $conf) = @_;
> +
> +my $ret_volumes = [];
> +
> +my $test_mountpoint = sub {
> + my ($key, $volume) = @_;
> +
> + my $data = {};
> + my ($included, $reason) = $class->mountpoint_backup_enabled($key, 
> $volume);
> +
> + $data->{key} = $key;
> + $data->{included} = $included;
> + $data->{reason} = $reason;
> + $data->{data} = $volume;
> +
> + push @$ret_volumes, $data;
> +};
> +
> +PVE::LXC::Config->foreach_mountpoint($conf, $test_mountpoint);
> +
> +return $ret_volumes;
> +}
> +
>  1;
> diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
> index 7d6aefd..f36e6d8 100644
> --- a/src/PVE/VZDump/LXC.pm
> +++ b/src/PVE/VZDump/LXC.pm
> @@ -120,24 +120,25 @@ sub prepare {
>  $task->{rootgid} = $rootgid;
>  
>  my $volids = $task->{volids} = [];
> -PVE::LXC::Config->foreach_mountpoint($conf, sub {
> - my ($name, $data) = @_;
> - my $volid = $data->{volume};
> - my $mount = $data->{mp};
> - my $type = $data->{type};
> -
> - return if !$volid || !$mount;
> -
> - if (!PVE::LXC::Config->mountpoint_backup_enabled($name, $data)) {
> - push @$exclude_dirs, $mount;
> - $self->loginfo("excluding $type mount point $name ('$mount') from 
> backup");
> - return;
> +
> +my $mountpoint_info = PVE::LXC::Config->get_backup_volumes($conf);
> +
> +foreach my $mp (@{$mountpoint_info}) {
> + my $name = $mp->{key};
> + my $data = $mp->{data};
> +
> + next if !$data->{volume} || !$data->{mp};
> +
> + if (!$mp->{included}) {
> + push @$exclude_dirs, $data->{mp};
> + $self->loginfo("excluding $data->{type} mount point $name 
> ('$data->{mp}') from backup");
> + next;
>   }
>  
> + $self->loginfo("including mount point $name ('$data->{mp}') in backup");
>   push @$disks, $data;
> - push @$volids, $volid
> - if $type eq 'volume';
> -});
> + push @$volids, $data->{volume} if $mp->{included};
> +}
>  
>  if ($mode eq 'snapshot') {
>   if (!PVE::LXC::Config->has_feature('snapshot', $conf, $storage_cfg, 
> undef, undef, 1)) {
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH v4 qemu-server] Change format for unused drives

2020-03-16 Thread Thomas Lamprecht
On 3/16/20 10:10 AM, Fabian Grünbichler wrote:
> On March 12, 2020 11:19 am, Fabian Ebner wrote:
>> and make it match with what parse_drive does. Even though the 'real' format
>> was pve-volume-id, callers already expected that parse_drive returns a hash
>> with a valid 'file' key (e.g. PVE/API2/Qemu.pm:1147ff).
>>
>> Signed-off-by: Fabian Ebner 
>> ---
>>
>> This is the last patch left over from the series
>> refactoring the drive code [0], although not directly related.
>>
>> [0]: https://pve.proxmox.com/pipermail/pve-devel/2020-March/042007.html
>>
>> Changes from v3:
>> * make sure the format describes a hash with a 'file' key
>>   as that's what callers expect
>>
>>  PVE/QemuServer/Drive.pm | 15 +--
>>  1 file changed, 13 insertions(+), 2 deletions(-)
>>

> [...]
> 
> that being said - the patch as is LGTM and the above can be done as 
> follow-up just as well:
> 
> Reviewed-By: Fabian Grünbichler 
> 


with that, applied, thanks!


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 container] vzdump: move include logic for mountpoints to method

2020-03-16 Thread Aaron Lauterer



On 3/16/20 1:16 PM, Fabian Grünbichler wrote:

needs a rebase (please always rebase before sending - the last commit on
src/PVE/VZDump/LXC.pm was done before you sent this patch out!)



Sorry. I'll send a rebased series later today.



Re: [pve-devel] [PATCH manager 4/5] gui: ceph: icons for versions in ServiceList

2020-03-16 Thread Dominik Csapak

On 3/16/20 1:56 PM, Dominik Csapak wrote:

this makes it closer to the OSD list, using the new 'version-only'
metadata api call

this partially fixes #2468

Signed-off-by: Dominik Csapak 
---
  www/manager6/ceph/ServiceList.js | 58 +++-
  1 file changed, 57 insertions(+), 1 deletion(-)

diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index 9da42cd4..77254acf 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -65,6 +65,48 @@ Ext.define('PVE.node.CephServiceList', {
  controller: {
xclass: 'Ext.app.ViewController',
  
+	render_version: function(value, metadata, rec) {

+   let me = this.getView();
+   let host = rec.data.host;
+   let icon = "";
+   let v = value;
+   let nodev = [0];
+   if (me.nodeversions[host] !== undefined) {
+   nodev = me.nodeversions[host].version.parts;
+   }
+   let maxv = me.maxversion;
+
+   if (PVE.Utils.compare_ceph_versions(maxv, nodev) > 0) {
+   icon = PVE.Utils.get_ceph_icon_html('HEALTH_UPGRADE');
+   } else if (PVE.Utils.compare_ceph_versions(nodev, v) > 0) {
+   icon = PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
+   } else if (me.mixedversions) {
+   icon = PVE.Utils.get_ceph_icon_html('HEALTH_OK');
+   }
+
+   return icon + v;
+   },
+
+   getMaxVersions: function(store, records, success) {
+   if (!success || records.length < 1) {
+   return;
+   }
+   let me = this;
+   let view = me.getView();
+
+   view.nodeversions = records[0].data.node;
+   view.maxversion = [];
+   view.mixedversions = false;
+   for (const [nodename, data] of Object.entries(view.nodeversions)) {
+   if (PVE.Utils.compare_ceph_versions(data.version.parts, 
view.maxversion) > 0) {
+   if (view.maxversion.length > 0) {
+   view.mixedversions = true;
+   }
+   view.maxversion = data.version.parts;
+   }


here is a small bug: mixedversions only gets set if the maximum version is
not the first one encountered. I will either send a v2 or a follow-up,
pending review



+   }
+   },
+
init: function(view) {
if (view.pveSelNode) {
view.nodename = view.pveSelNode.data.node;
@@ -77,6 +119,19 @@ Ext.define('PVE.node.CephServiceList', {
throw "no type specified";
}
  
+	view.versionsstore = Ext.create('Proxmox.data.UpdateStore', {

+   autoLoad: true,
+   autoStart: true,
+   interval: 1,
+   storeid: 'ceph-versions-' + view.type + '-list' + view.nodename,
+   proxy: {
+   type: 'proxmox',
+   url: "/api2/json/cluster/ceph/metadata?scope=versions"
+   }
+   });
+
+   view.versionsstore.on('load', this.getMaxVersions, this);
+
view.rstore = Ext.create('Proxmox.data.UpdateStore', {
autoLoad: true,
autoStart: true,
@@ -296,7 +351,8 @@ Ext.define('PVE.node.CephServiceList', {
header: gettext('Version'),
flex: 3,
sortable: true,
-   dataIndex: 'version'
+   dataIndex: 'version',
+   renderer: 'render_version',
}
  ],
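
Regarding the mixedversions issue noted inline above: a minimal, untested
sketch of one possible fix (not the announced v2; the surrounding method is
otherwise the same as in the patch):

    getMaxVersions: function(store, records, success) {
	if (!success || records.length < 1) {
	    return;
	}
	let view = this.getView();

	view.nodeversions = records[0].data.node;
	view.maxversion = [];
	view.mixedversions = false;
	for (const [nodename, data] of Object.entries(view.nodeversions)) {
	    let res = PVE.Utils.compare_ceph_versions(data.version.parts, view.maxversion);
	    // any difference from the current maximum marks the cluster as mixed,
	    // regardless of the order in which the nodes are iterated
	    if (res !== 0 && view.maxversion.length > 0) {
		view.mixedversions = true;
	    }
	    if (res > 0) {
		view.maxversion = data.version.parts;
	    }
	}
    },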
  






[pve-devel] [PATCH manager 3/5] gui: ceph: let compare_ceph_versions accept arrays directly

2020-03-16 Thread Dominik Csapak
Instead of always expecting a '.'-separated version string, also accept an
array directly. We will use this for the 'structured' version data.

Signed-off-by: Dominik Csapak 
---
 www/manager6/Utils.js | 17 +++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index a42dbf91..1bc6ef03 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -121,11 +121,24 @@ Ext.define('PVE.Utils', { utilities: {
 },
 
 compare_ceph_versions: function(a, b) {
+   let avers = [];
+   let bvers = [];
+
if (a === b) {
return 0;
}
-   let avers = a.toString().split('.');
-   let bvers = b.toString().split('.');
+
+   if (Ext.isArray(a)) {
+   avers = a.slice(); // copy array
+   } else {
+   avers = a.toString().split('.');
+   }
+
+   if (Ext.isArray(b)) {
+   bvers = b.slice(); // copy array
+   } else {
+   bvers = b.toString().split('.');
+   }
 
while (true) {
let av = avers.shift();
-- 
2.20.1
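
For illustration, both call styles would then be accepted (example values
only, not taken from the patch); as before, the result is positive if the
first argument is the newer version:

    // '.'-separated strings, as before
    PVE.Utils.compare_ceph_versions("14.2.8", "14.2.6");     // > 0
    // structured version data, e.g. data.version.parts from the metadata call
    PVE.Utils.compare_ceph_versions([14, 2, 8], [14, 2, 6]); // > 0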




[pve-devel] [PATCH manager 5/5] gui: ceph: add highest cluster version in summary

2020-03-16 Thread Dominik Csapak
so that the admin knows what the 'expected' ceph version is.
This partially fixes #2468.

Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/Status.js | 51 ++---
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
index f1fcda00..e85df558 100644
--- a/www/manager6/ceph/Status.js
+++ b/www/manager6/ceph/Status.js
@@ -38,10 +38,39 @@ Ext.define('PVE.node.CephStatus', {
},
items: [
{
+   xtype: 'container',
+   layout: {
+   type: 'vbox',
+   align: 'stretch',
+   },
flex: 1,
-   itemId: 'overallhealth',
-   xtype: 'pveHealthWidget',
-   title: gettext('Status')
+   items: [
+   {
+   flex: 1,
+   itemId: 'overallhealth',
+   xtype: 'pveHealthWidget',
+   title: gettext('Status')
+   },
+   {
+   itemId: 'versioninfo',
+   xtype: 'component',
+   data: {
+   version: "",
+   git: "",
+   },
+   padding: '10 0 0 0',
+   style: {
+   'text-align': 'center',
+   },
+   tpl: [
+   '',
+   '', gettext('Highest Version in Cluster'), 
'',
+   '',
+   '{version} (git: {git})',
+   ''
+   ],
+   }
+   ]
},
{
flex: 2,
@@ -363,6 +392,22 @@ Ext.define('PVE.node.CephStatus', {
// update detailstatus panel
me.getComponent('statusdetail').updateAll(rec.data, me.status || 
{});
 
+   let maxversion = [];
+   let maxversiontext = "";
+   let git = "";
+   for (const [nodename, data] of Object.entries(me.metadata.node)) {
+   let version = data.version.parts;
+   if (PVE.Utils.compare_ceph_versions(version, maxversion) > 0) {
+   maxversion = version;
+   git = data.buildcommit.substr(0,8);
+   maxversiontext = data.version.str;
+   }
+   }
+
+   me.down('#versioninfo').update({
+   version: maxversiontext,
+   git: git,
+   });
}, me);
 
me.on('destroy', me.store.stopUpdate);
-- 
2.20.1




[pve-devel] lvm2: global_filter not ',' separated

2020-03-16 Thread Roland Kammerer
Hi all,

Commit [1] extended the filter, but if you look closely, there is no ','
between the second and third rule. Quite frankly, I don't know what LVM
makes of that; most likely it still considers them two separate rules, or
it concatenates them and then parses them as two anyway. LVM is mysterious
on so many levels, not only the parser :)

I would have sent a patch, but was not sure whether you consider this
something to be fixed and, if so, whether you want a patch on top of "0002"
or how a fixed "0002" should otherwise be handled.

Regards, rck

[1] 
https://git.proxmox.com/?p=lvm.git;a=blob;f=patchdir/0002-fix-2184-filter-lvs-from-guests.patch;h=cf72538905e783808dadf90bc23daf8b65235c1f;hb=b442e040bc7952ad8de550db18d0e9f0f8691d03
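
For illustration, with the rule contents abbreviated as A/B/C (the real
rules are in the referenced patch), the committed filter looks roughly like
the first line, while presumably the second was intended:

    global_filter = [ "r|A|", "r|B|" "r|C|" ]     # as committed: no ',' between the 2nd and 3rd rule
    global_filter = [ "r|A|", "r|B|", "r|C|" ]    # presumably intended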


Re: [pve-devel] [PATCH-SERIES v3] Make storage migration more flexible

2020-03-16 Thread Fabian Ebner


On 16.03.20 12:07, Fabian Grünbichler wrote:

On March 12, 2020 1:08 pm, Fabian Ebner wrote:

This is the second half for the previous series [0].

This series aims to allow offline migration with '--targetstorage'
and improve handling of unused/orphaned disks when migrating.
It also makes it possible to migrate volumes between storages with
a 'path' and storages without if the target storage uses the standard
naming scheme and the source format is supported (e.g. migrating raw
volumes between storages with a path and lvm storages).

It also adds an apiinfo call to pvesm that can be used to determine
APIVER and APIAGE of the remote node and does some general refactoring
regarding volume iterators.

The series is organised as follows:
#1-#7 introduce and implement volume related helpers
Mutual dependencies for both qemu-server and container with guest-common,
to be precise #1-#3 <-> #5 and #1-#3 <-> #7.
#8 adds the apiinfo helper
#9-#12 changes storage_migrate interface
Another round of mutual dependencies, this time for storage with each
of guest-common, qemu-server, container.
#13-#18 implement the goals for this series, and the last inter-package
dependency is here, because #15 (qemu-server) depends on #13 (storage)
#19-#22 improve logging and refactor some code with foreach_volume

Changes from v2:
 * add apiinfo call and check if remote side supports allow_rename
 * add valid_volume_keys and implement (and make use of) foreach_volume 
iterator
 * make volname_for_storage a local function and avoid
   requiring a new print_volname for plugins
 * parse vm_state in parse_volume instead of parse_drive

[0]: https://pve.proxmox.com/pipermail/pve-devel/2020-February/041910.html


some individual comments (mainly about the refactoring where some areas
could still be improved) on individual patches.



Ack-ed the comments and nits. Thanks for the review and test!


gave this a limited test run as well, seems to work as expected :) only
issue that I noticed (possibly also affects the live migration code
path?) is that --targetstorage does not respect a storage's content type
limit. we are also still lacking cleanup for failed sync_disks (e.g.,
after M out of N disks have been synced), or failed migration after
successful sync_disks.


Just tested this. It's also present with (current) online migration with 
--targetstorage. The GUI filters target storages for content type, but 
it seems that the backend has no check. I'll add one for the next 
version and also add the disk cleanup.
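
A minimal sketch of what such a backend check could look like (names and
placement are assumptions, not the actual follow-up; it assumes the parsed
storage config exposes the allowed content types as a hash, as elsewhere in
PVE::Storage):

    my $scfg = PVE::Storage::storage_config($storecfg, $targetsid);
    die "storage '$targetsid' does not support content type 'images'\n"
	if !$scfg->{content}->{images};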





guest-common:

Fabian Ebner (5):
   Add interfaces for volume-related helpers
   Add foreach_unused_volume
   foreach_volume: generalize and implement function
   Use new storage_migrate interface
   Add update_volume_ids

  PVE/AbstractConfig.pm | 113 ++
  PVE/Replication.pm|  12 -
  2 files changed, 112 insertions(+), 13 deletions(-)


container:

Fabian Ebner (3):
   Implement volume-related helpers and use new foreach_volume
   Use new storage_migrate interface
   Use foreach_volume instead of foreach_mountpoint_full to avoid
 duplication

  src/PVE/API2/LXC.pm|  4 +--
  src/PVE/API2/LXC/Config.pm |  2 +-
  src/PVE/CLI/pct.pm |  2 +-
  src/PVE/LXC/Config.pm  | 53 +-
  src/PVE/LXC/Migrate.pm | 10 ---
  5 files changed, 40 insertions(+), 31 deletions(-)


qemu-server

Fabian Ebner (10):
   Implement volume-related helpers and use new foreach_volume
   Allow parsing vmstate entries
   Use new storage_migrate interface
   Take note of changes to the volume IDs when migrating and update the
 config
   Allow specifying targetstorage for offline migration
   Update volume IDs in one go
   sync_disks: use allow_rename to avoid collisions on the target storage
   sync_disks: be more verbose if storage_migrate fails
   sync_disks: log output of storage_migrate
   update_disk_config: use config volume iterators instead of loops

  PVE/API2/Qemu.pm|  3 ---
  PVE/QemuConfig.pm   | 51 +++--
  PVE/QemuMigrate.pm  | 44 +++
  PVE/QemuServer.pm   | 50 +---
  PVE/QemuServer/Drive.pm |  1 +
  5 files changed, 105 insertions(+), 44 deletions(-)


storage:

Fabian Ebner (4):
   Add apiinfo helper to pvesm
   Collect optional parameters for storage_migrate into $opts
   Introduce allow_rename parameter for pvesm import and storage_migrate
   storage_migrate: add volname_for_storage helper

  PVE/API2/Storage/Content.pm  |  2 +-
  PVE/CLI/pvesm.pm | 52 +---
  PVE/Storage.pm   | 94 ++--
  PVE/Storage/LVMPlugin.pm | 17 ---
  PVE/Storage/Plugin.pm| 16 --
  PVE/Storage/ZFSPoolPlugin.pm | 13 +++--
  6 files changed, 155 insertions(+), 39 deletions(-)

--
2.20.1



Re: [pve-devel] [PATCH-SERIES v3] Make storage migration more flexible

2020-03-16 Thread Fabian Grünbichler
On March 16, 2020 2:53 pm, Fabian Ebner wrote:
> 
> On 16.03.20 12:07, Fabian Grünbichler wrote:
>> On March 12, 2020 1:08 pm, Fabian Ebner wrote:
>>> This is the second half for the previous series [0].
>>>
>>> This series aims to allow offline migration with '--targetstorage'
>>> and improve handling of unused/orphaned disks when migrating.
>>> It also makes it possible to migrate volumes between storages with
>>> a 'path' and storages without if the target storage uses the standard
>>> naming scheme and the source format is supported (e.g. migrating raw
>>> volumes between storages with a path and lvm storages).
>>>
>>> It also adds an apiinfo call to pvesm that can be used to determine
>>> APIVER and APIAGE of the remote node and does some general refactoring
>>> regarding volume iterators.
>>>
>>> The series is organised as follows:
>>> #1-#7 introduce and implement volume related helpers
>>> Mutual dependencies for both qemu-server and container with guest-common,
>>> to be precise #1-#3 <-> #5 and #1-#3 <-> #7.
>>> #8 adds the apiinfo helper
>>> #9-#12 changes storage_migrate interface
>>> Another round of mutual dependencies, this time for storage with each
>>> of guest-common, qemu-server, container.
>>> #13-#18 implement the goals for this series, and the last inter-package
>>> dependency is here, because #15 (qemu-server) depends on #13 (storage)
>>> #19-#22 improve logging and refactor some code with foreach_volume
>>>
>>> Changes from v2:
>>>  * add apiinfo call and check if remote side supports allow_rename
>>>  * add valid_volume_keys and implement (and make use of) foreach_volume 
>>> iterator
>>>  * make volname_for_storage a local function and avoid
>>>requiring a new print_volname for plugins
>>>  * parse vm_state in parse_volume instead of parse_drive
>>>
>>> [0]: https://pve.proxmox.com/pipermail/pve-devel/2020-February/041910.html
>> 
>> some individual comments (mainly about the refactoring where some areas
>> could still be improved) on individual patches.
>> 
> 
> Ack-ed the comments and nits. Thanks for the review and test!
> 
>> gave this a limited test run as well, seems to work as expected :) only
>> issue that I noticed (possibly also affects the live migration code
>> path?) is that --targetstorage does not respect a storage's content type
>> limit. we are also still lacking cleanup for failed sync_disks (e.g.,
>> after M out of N disks have been synced), or failed migration after
>> successful sync_disks.
> 
> Just tested this. It's also present with (current) online migration with 
> --targetstorage. The GUI filters target storages for content type, but 
> it seems that the backend has no check. I'll add one for the next 
> version and also add the disk cleanup.
> 

that would be great! for the disk cleanup, we need to be really careful 
to only delete those disks that we actually allocated (so 
differentiation between 'attempted to storage migrate but failed because 
of naming conflict' and 'attempted to storage migrate but failed after 
allocation' is key, and it's better to err on the side of caution and 
skip cleanup than to delete some pre-existing disk).
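
One rough way to keep that distinction (a sketch only -- variable names,
the storage_migrate signature/return value and the error handling are all
assumptions, not the actual implementation): record a volid only after
storage_migrate has returned it, and restrict any cleanup to exactly that
list.

    my $new_volids = [];
    eval {
	foreach my $volid (sort keys %$local_volumes) {
	    # assumed to die on a naming conflict before allocating anything,
	    # so nothing gets recorded for that volume
	    my $new_volid = PVE::Storage::storage_migrate(
		$storecfg, $volid, $ssh_info, $targetsid, $opts, $logfunc);
	    push @$new_volids, $new_volid;
	}
    };
    if (my $err = $@) {
	# cleanup would run on the *target* node (e.g. 'pvesm free' via SSH)
	# and must only ever touch @$new_volids, never pre-existing volumes
	die $err;
    }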
>>>
>>>
>>> guest-common:
>>>
>>> Fabian Ebner (5):
>>>Add interfaces for volume-related helpers
>>>Add foreach_unused_volume
>>>foreach_volume: generalize and implement function
>>>Use new storage_migrate interface
>>>Add update_volume_ids
>>>
>>>   PVE/AbstractConfig.pm | 113 ++
>>>   PVE/Replication.pm|  12 -
>>>   2 files changed, 112 insertions(+), 13 deletions(-)
>>>
>>>
>>> container:
>>>
>>> Fabian Ebner (3):
>>>Implement volume-related helpers and use new foreach_volume
>>>Use new storage_migrate interface
>>>Use foreach_volume instead of foreach_mountpoint_full to avoid
>>>  duplication
>>>
>>>   src/PVE/API2/LXC.pm|  4 +--
>>>   src/PVE/API2/LXC/Config.pm |  2 +-
>>>   src/PVE/CLI/pct.pm |  2 +-
>>>   src/PVE/LXC/Config.pm  | 53 +-
>>>   src/PVE/LXC/Migrate.pm | 10 ---
>>>   5 files changed, 40 insertions(+), 31 deletions(-)
>>>
>>>
>>> qemu-server
>>>
>>> Fabian Ebner (10):
>>>Implement volume-related helpers and use new foreach_volume
>>>Allow parsing vmstate entries
>>>Use new storage_migrate interface
>>>Take note of changes to the volume IDs when migrating and update the
>>>  config
>>>Allow specifying targetstorage for offline migration
>>>Update volume IDs in one go
>>>sync_disks: use allow_rename to avoid collisions on the target storage
>>>sync_disks: be more verbose if storage_migrate fails
>>>sync_disks: log output of storage_migrate
>>>update_disk_config: use config volume iterators instead of loops
>>>
>>>   PVE/API2/Qemu.pm|  3 ---
>>>   PVE/QemuConfig.pm   | 51 +++--
>>>   PVE/QemuMigrate.pm  | 44 

[pve-devel] [PATCH series v3 0/4] add needed changes for backup detail view

2020-03-16 Thread Aaron Lauterer
This is a follow-up to [0], but only includes the refactored changes to
make it possible to build other features around the VZDump functionality.

It moves the following pieces of logic into their own methods:
* which guests are included in a backup job
* which volumes / mountpoints of a guest are included in the backup

The behaviour of VZDump should still be the same.
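
For reference, the per-volume/per-mountpoint entries returned by the new
get_backup_volumes methods (see the patches below) all share roughly this
shape, where 'mp0' is just an example key:

    {
	key      => 'mp0',          # config key of the drive/mount point
	included => 0,              # whether it ends up in the backup
	reason   => 'disabled',     # e.g. 'rootfs', 'enabled', 'disabled', 'not a volume'
	data     => $mountpoint,    # the parsed drive/mount point hash itself
    }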

Changes from v2[1] to v3:
* patches to guest-common: abstractconfig were applied [2]
* rebased patches

[0] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041268.html
[1] https://pve.proxmox.com/pipermail/pve-devel/2020-February/041972.html
[2] https://pve.proxmox.com/pipermail/pve-devel/2020-March/042258.html

pve-manager: Aaron Lauterer (1):
  backup: move logic to include guests into method

 PVE/API2/VZDump.pm | 36 +++---
 PVE/VZDump.pm  | 74 --
 2 files changed, 55 insertions(+), 55 deletions(-)

pve-container: Aaron Lauterer (2):
  vzdump: add reason for mountpoint backup inclusion
  vzdump: move include logic for mountpoints to method

 src/PVE/LXC/Config.pm | 48 ---
 src/PVE/VZDump/LXC.pm | 31 ++--
 2 files changed, 57 insertions(+), 22 deletions(-)

qemu-server: Aaron Lauterer (1):
  vzdump: move include logic for volumes to method

 PVE/QemuConfig.pm| 31 +++
 PVE/VZDump/QemuServer.pm | 38 +++---
 2 files changed, 50 insertions(+), 19 deletions(-)
-- 
2.20.1




[pve-devel] [PATCH v3 qemu-server] vzdump: move include logic for volumes to method

2020-03-16 Thread Aaron Lauterer
Move the logic that decides which volumes are included in the backup job to
its own method and adapt the VZDump code accordingly. This makes it possible
to develop other features around backup jobs.

Signed-off-by: Aaron Lauterer 
---

v2 -> v3: rebased

v1 -> v2:
* implemented the suggestions from Fabian [0]
* incorporated changes that happened in the meantime, most notably the
  check for efidisk without omvf bios set

I decided to keep the check for IOThreaded VMs where it is, because it
will only trigger if the backup is run with an older QEMU version. By
the time this patch series gets applied and shipped, I think it is
unlikely that this will actually be triggered anymore.

There also seems to be a problem with the way the IFs are nested in that
section. I sent a separate small patch to fix it [1]. I wasn't sure if
I should have implemented that fix here as well. Depending on which
patch gets applied first, the other will need some rebasing.

[0] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041360.html
[1] https://pve.proxmox.com/pipermail/pve-devel/2020-February/041968.html

 PVE/QemuConfig.pm| 31 +++
 PVE/VZDump/QemuServer.pm | 38 +++---
 2 files changed, 50 insertions(+), 19 deletions(-)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 1ba728a..f5b737c 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -130,6 +130,37 @@ sub get_replicatable_volumes {
 return $volhash;
 }
 
+sub get_backup_volumes {
+my ($class, $conf) = @_;
+
+my $ret_volumes = [];
+
+my $test_volume = sub {
+   my ($key, $drive) = @_;
+
+   return if PVE::QemuServer::drive_is_cdrom($drive);
+
+   my $volume = {};
+   my $included = $drive->{backup} // 1;
+   my $reason = $included ? 'enabled' : 'disabled';
+
+   if ($key =~ m/^efidisk/ && (!defined($conf->{bios}) || $conf->{bios} ne 
'ovmf')) {
+   $included = 0;
+   $reason = "efidisk with non-ovmf bios";
+   }
+   $volume->{key} = $key;
+   $volume->{included} = $included;
+   $volume->{reason} = $reason;
+   $volume->{data} = $drive;
+
+   push @$ret_volumes, $volume;
+};
+
+PVE::QemuServer::foreach_drive($conf, $test_volume);
+
+return $ret_volumes;
+}
+
 sub __snapshot_save_vmstate {
 my ($class, $vmid, $conf, $snapname, $storecfg, $statestorage, $suspend) = 
@_;
 
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 7695ad6..38471de 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -69,37 +69,37 @@ sub prepare {
 
 my $vollist = [];
 my $drivehash = {};
-PVE::QemuServer::foreach_drive($conf, sub {
-   my ($ds, $drive) = @_;
+my $backup_included = PVE::QemuConfig->get_backup_volumes($conf);
 
-   return if PVE::QemuServer::drive_is_cdrom($drive);
+foreach my $volume(@{$backup_included}) {
+   my $volid = $volume->{data}->{file};
+   my $name = $volume->{key};
 
-   my $volid = $drive->{file};
-
-   if (defined($drive->{backup}) && !$drive->{backup}) {
-   $self->loginfo("exclude disk '$ds' '$volid' (backup=no)");
-   return;
-   } elsif ($self->{vm_was_running} && $drive->{iothread}) {
+   if (!$volume->{included}) {
+   if ($volume->{reason} eq 'efidisk with non-ovmf bios') {
+   $self->loginfo("excluding '$name' (efidisks can only be backed 
up when BIOS is set to 'ovmf')");
+   next;
+   }
+   $self->loginfo("exclude disk '$name' '$volid' (backup=no)");
+   next;
+   } elsif ($self->{vm_was_running} && $volume->{data}->{iothread}) {
if (!PVE::QemuServer::Machine::runs_at_least_qemu_version($vmid, 4, 
0, 1)) {
-   die "disk '$ds' '$volid' (iothread=on) can't use backup feature 
with running QEMU " .
+   die "disk '$name' '$volid' (iothread=on) can't use backup 
feature with running QEMU " .
"version < 4.0.1! Either set backup=no for this drive or 
upgrade QEMU and restart VM\n";
}
-   } elsif ($ds =~ m/^efidisk/ && (!defined($conf->{bios}) || 
$conf->{bios} ne 'ovmf')) {
-   $self->loginfo("excluding '$ds' (efidisks can only be backed up 
when BIOS is set to 'ovmf')");
-   return;
} else {
-   my $log = "include disk '$ds' '$volid'";
-  if (defined $drive->{size}) {
-   my $readable_size = 
PVE::JSONSchema::format_size($drive->{size});
+   my $log = "include disk '$name' '$volid'";
+   if (defined $volume->{data}->{size}) {
+   my $readable_size = 
PVE::JSONSchema::format_size($volume->{data}->{size});
$log .= " $readable_size";
-  }
+   }
$self->loginfo($log);
}
 
my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
push @$vollist, $volid if $storeid;
-   $drivehash->{$ds} = 

[pve-devel] [PATCH v3 container 2/2] vzdump: move include logic for mountpoints to method

2020-03-16 Thread Aaron Lauterer
Move the logic that decides which mountpoints are included in the backup job
to its own method and adapt the VZDump code accordingly. This makes it
possible to develop other features around backup jobs.

Signed-off-by: Aaron Lauterer 
---
v2 -> v3: rebased

v1 -> v2: implemented the suggestion from Fabian [0]

[0] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041359.html

 src/PVE/LXC/Config.pm | 24 
 src/PVE/VZDump/LXC.pm | 31 ---
 2 files changed, 40 insertions(+), 15 deletions(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 00f4884..af728a8 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1576,4 +1576,28 @@ sub get_replicatable_volumes {
 return $volhash;
 }
 
+sub get_backup_volumes {
+my ($class, $conf) = @_;
+
+my $ret_volumes = [];
+
+my $test_mountpoint = sub {
+   my ($key, $volume) = @_;
+
+   my $data = {};
+   my ($included, $reason) = $class->mountpoint_backup_enabled($key, 
$volume);
+
+   $data->{key} = $key;
+   $data->{included} = $included;
+   $data->{reason} = $reason;
+   $data->{data} = $volume;
+
+   push @$ret_volumes, $data;
+};
+
+PVE::LXC::Config->foreach_mountpoint($conf, $test_mountpoint);
+
+return $ret_volumes;
+}
+
 1;
diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 25a50d1..e3aa76c 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -120,25 +120,26 @@ sub prepare {
 $task->{rootgid} = $rootgid;
 
 my $volids = $task->{volids} = [];
-PVE::LXC::Config->foreach_mountpoint($conf, sub {
-   my ($name, $data) = @_;
-   my $volid = $data->{volume};
-   my $mount = $data->{mp};
-   my $type = $data->{type};
-
-   return if !$volid || !$mount;
-
-   if (!PVE::LXC::Config->mountpoint_backup_enabled($name, $data)) {
-   push @$exclude_dirs, $mount;
-   $self->loginfo("excluding $type mount point $name ('$mount') from 
backup");
-   return;
+
+my $mountpoint_info = PVE::LXC::Config->get_backup_volumes($conf);
+
+foreach my $mp (@{$mountpoint_info}) {
+   my $name = $mp->{key};
+   my $data = $mp->{data};
+
+   next if !$data->{volume} || !$data->{mp};
+
+   if (!$mp->{included}) {
+   push @$exclude_dirs, $data->{mp};
+   $self->loginfo("excluding $data->{type} mount point $name 
('$data->{mp}') from backup");
+   next;
}
 
$data->{name} = $name;
+   $self->loginfo("including mount point $name ('$data->{mp}') in backup");
push @$disks, $data;
-   push @$volids, $volid
-   if $type eq 'volume';
-});
+   push @$volids, $data->{volume} if $mp->{included};
+}
 
 if ($mode eq 'snapshot') {
if (!PVE::LXC::Config->has_feature('snapshot', $conf, $storage_cfg, 
undef, undef, 1)) {
-- 
2.20.1




[pve-devel] [PATCH v3 manager 1/1] backup: move logic to include guests into method

2020-03-16 Thread Aaron Lauterer
This extracts the logic that decides which guests are to be included in a
backup job into its own method, 'get_included_guests'. This makes it
possible to develop other features around backup jobs.

Logic that was spread out across the API2/VZDump.pm file and the
VZDump.pm file is refactored into the new method, most notably the
logic that dealt with excluded guests.

The new method returns the VMIDs for the whole cluster. Whether a VMID is
present on the local node, and thus part of the local backup job, is still
determined in the VZDump::exec_backup method.

Permission checks are kept where they are, to be able to handle missing
permissions according to the current context. The old behavior of dying on
a backup job when the user lacks permission for a guest and the job is not
an 'all' or 'exclude' job is kept.

The creation of the skiplist is moved to the VZDump::exec_backup method,
close to the place where it is used.

Signed-off-by: Aaron Lauterer 
---
changes: rebased

 PVE/API2/VZDump.pm | 36 +++---
 PVE/VZDump.pm  | 74 --
 2 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/PVE/API2/VZDump.pm b/PVE/API2/VZDump.pm
index f01e4de0..bc380099 100644
--- a/PVE/API2/VZDump.pm
+++ b/PVE/API2/VZDump.pm
@@ -69,43 +69,15 @@ __PACKAGE__->register_method ({
return 'OK' if $param->{node} && $param->{node} ne $nodename;
 
my $cmdline = PVE::VZDump::Common::command_line($param);
-   my @vmids;
-   # convert string lists to arrays
-   if ($param->{pool}) {
-   @vmids = 
@{PVE::API2Tools::get_resource_pool_guest_members($param->{pool})};
-   } else {
-   @vmids = PVE::Tools::split_list(extract_param($param, 'vmid'));
-   }
+
+   $param->{vmids} = PVE::VZDump->get_included_guests($param, $nodename);
+   my @vmids = @{$param->{vmids}};
 
if($param->{stop}){
PVE::VZDump::stop_running_backups();
return 'OK' if !scalar(@vmids);
}
 
-   my $skiplist = [];
-   if (!$param->{all}) {
-   if (!$param->{node} || $param->{node} eq $nodename) {
-   my $vmlist = PVE::Cluster::get_vmlist();
-   my @localvmids = ();
-   foreach my $vmid (@vmids) {
-   my $d = $vmlist->{ids}->{$vmid};
-   if ($d && ($d->{node} ne $nodename)) {
-   push @$skiplist, $vmid;
-   } else {
-   push @localvmids, $vmid;
-   }
-   }
-   @vmids = @localvmids;
-   # silent exit if specified VMs run on other nodes
-   return "OK" if !scalar(@vmids);
-   }
-
-   $param->{vmids} = PVE::VZDump::check_vmids(@vmids)
-   }
-
-   my @exclude = PVE::Tools::split_list(extract_param($param, 'exclude'));
-   $param->{exclude} = PVE::VZDump::check_vmids(@exclude);
-
# exclude-path list need to be 0 separated
if (defined($param->{'exclude-path'})) {
my @expaths = split(/\0/, $param->{'exclude-path'} || '');
@@ -130,7 +102,7 @@ __PACKAGE__->register_method ({
die "interrupted by signal\n";
};
 
-   my $vzdump = PVE::VZDump->new($cmdline, $param, $skiplist);
+   my $vzdump = PVE::VZDump->new($cmdline, $param);
 
eval {
$vzdump->getlock($upid); # only one process allowed
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index f3274196..9368d439 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -21,6 +21,7 @@ use PVE::RPCEnvironment;
 use PVE::Storage;
 use PVE::VZDump::Common;
 use PVE::VZDump::Plugin;
+use PVE::Tools qw(extract_param);
 
 my @posix_filesystems = qw(ext3 ext4 nfs nfs4 reiserfs xfs);
 
@@ -379,7 +380,7 @@ sub sendmail {
 };
 
 sub new {
-my ($class, $cmdline, $opts, $skiplist) = @_;
+my ($class, $cmdline, $opts) = @_;
 
 mkpath $logdir;
 
@@ -418,8 +419,7 @@ sub new {
 $opts->{dumpdir} =~ s|/+$|| if ($opts->{dumpdir});
 $opts->{tmpdir} =~ s|/+$|| if ($opts->{tmpdir});
 
-$skiplist = [] if !$skiplist;
-my $self = bless { cmdline => $cmdline, opts => $opts, skiplist => 
$skiplist };
+my $self = bless { cmdline => $cmdline, opts => $opts };
 
 my $findexcl = $self->{findexcl} = [];
 if ($defaults->{'exclude-path'}) {
@@ -1026,33 +1026,33 @@ sub exec_backup {
 
 my $opts = $self->{opts};
 
+my $nodename = PVE::INotify::nodename();
+my $skiplist = [];
+
+if ($nodename && (!$opts->{node} ||$opts->{node} eq $nodename)) {
+   my $vmlist = PVE::Cluster::get_vmlist();
+   foreach my $vmid (@{$opts->{vmids}}) {
+   my $d = $vmlist->{ids}->{$vmid};
+   if ($d && ($d->{node} ne $nodename)) {
+   push @$skiplist, $vmid;
+   }
+   }
+}
+
 debugmsg ('info', "starting new backup job: $self->{cmdline}", undef, 1);
-debugmsg ('info', "skip external VMs: " . join(', ', 

[pve-devel] [PATCH v3 container 1/2] vzdump: add reason for mountpoint backup inclusion

2020-03-16 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
v2->v3:
rebased

v1->v2:
moved the determination of the reason to the same method that decides if
a mountpoint is included in the backup.

src/PVE/LXC/Config.pm | 24 +---
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index e88ba0b..00f4884 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -49,13 +49,23 @@ sub cfs_config_path {
 sub mountpoint_backup_enabled {
 my ($class, $mp_key, $mountpoint) = @_;
 
-return 1 if $mp_key eq 'rootfs';
-
-return 0 if $mountpoint->{type} ne 'volume';
-
-return 1 if $mountpoint->{backup};
-
-return 0;
+my $enabled;
+my $reason;
+
+if ($mp_key eq 'rootfs') {
+   $enabled = 1;
+   $reason = 'rootfs';
+} elsif ($mountpoint->{type} ne 'volume') {
+   $enabled = 0;
+   $reason = 'not a volume';
+} elsif ($mountpoint->{backup}) {
+   $enabled = 1;
+   $reason = 'enabled';
+} else {
+   $enabled = 0;
+   $reason = 'disabled';
+}
+return wantarray ? ($enabled, $reason) : $enabled;
 }
 
 sub has_feature {
-- 
2.20.1
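
Both existing call styles keep working with the wantarray-based return
(illustrative only, not part of the patch):

    # scalar context, as before
    my $enabled = PVE::LXC::Config->mountpoint_backup_enabled($name, $mp);
    # list context, now also yielding the reason
    my ($included, $reason) = PVE::LXC::Config->mountpoint_backup_enabled($name, $mp);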




Re: [pve-devel] [PATCH v2 container] vzdump: move include logic for mountpoints to method

2020-03-16 Thread Aaron Lauterer
The new rebased series is out. I did it for each repo, to rather be on the
overdone side than risk missing something :)


On 3/16/20 1:33 PM, Aaron Lauterer wrote:



On 3/16/20 1:16 PM, Fabian Grünbichler wrote:

needs a rebase (please always rebase before sending - the last commit on
src/PVE/VZDump/LXC.pm was done before you sent this patch out!)



Sorry. I'll send a rebased series later today.


