Re: [pve-devel] [PATCH manager] ceph: extend pveceph purge

2020-05-04 Thread Thomas Lamprecht
On 5/3/20 5:53 PM, Alwin Antreich wrote:
> to clean service directories as well as disable and stop Ceph services.
> Additionally provide the option to remove crash and log information.
> 
> This patch is in addition to #2607, as the current cleanup doesn't allow
> re-configuring Ceph without manual steps during purge.
> 
> Signed-off-by: Alwin Antreich 
> ---
>  PVE/CLI/pveceph.pm | 47 +-
>  PVE/Ceph/Tools.pm  | 71 ++
>  2 files changed, 99 insertions(+), 19 deletions(-)
> 
> diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
> index 064ae545..e77cca2b 100755
> --- a/PVE/CLI/pveceph.pm
> +++ b/PVE/CLI/pveceph.pm
> @@ -18,6 +18,7 @@ use PVE::Storage;
>  use PVE::Tools qw(run_command);
>  use PVE::JSONSchema qw(get_standard_option);
>  use PVE::Ceph::Tools;
> +use PVE::Ceph::Services;
>  use PVE::API2::Ceph;
>  use PVE::API2::Ceph::FS;
>  use PVE::API2::Ceph::MDS;
> @@ -49,25 +50,57 @@ __PACKAGE__->register_method ({
>  parameters => {
>   additionalProperties => 0,
>   properties => {
> + logs => {
> + description => 'Additionally purge Ceph logs, /var/log/ceph.',
> + type => 'boolean',
> + optional => 1,
> + },
> + crash => {
> + description => 'Additionally purge Ceph crash logs, 
> /var/lib/ceph/crash.',
> + type => 'boolean',
> + optional => 1,
> + },
>   },
>  },
>  returns => { type => 'null' },
>  code => sub {
>   my ($param) = @_;
>  
> - my $monstat;
> + my $message;
> + my $pools = [];
> + my $monstat = {};
> + my $mdsstat = {};
> + my $osdstat = [];
>  
>   eval {
>   my $rados = PVE::RADOS->new();
> - my $monstat = $rados->mon_command({ prefix => 'mon_status' });
> + $pools = PVE::Ceph::Tools::ls_pools(undef, $rados);
> + $monstat = PVE::Ceph::Services::get_services_info('mon', undef, 
> $rados);
> + $mdsstat = PVE::Ceph::Services::get_services_info('mds', undef, 
> $rados);
> + $osdstat = $rados->mon_command({ prefix => 'osd metadata' });
>   };
> - my $err = $@;

Maybe still add a `warn $@ if $@` here?

Regardless of the above I get "unable to get monitor info from DNS SRV with
service name: ceph-mon", but with the warn I also see that the rados commands
failed, which gives some context.
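
Something like this right after the eval (untested sketch) would surface that:

    eval {
        my $rados = PVE::RADOS->new();
        # ... the pool/mon/mds/osd queries from above ...
    };
    warn $@ if $@; # log why gathering the service info failed, but continue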

>  
> - die "detected running ceph services- unable to purge data\n"
> - if !$err;
> + my $osd = map { $_->{hostname} eq $nodename ? 1 : () } @$osdstat;

Hmm, the map here is not really intuitive, IMO. Why not grep {}? You only check
whether a condition holds; you are not actually mapping anything to something else.

my $osd = grep { $_->{hostname} eq $nodename } @$osdstat;

is even shorter :)

> + my $mds = map { $mdsstat->{$_}->{host} eq $nodename ? 1 : () } keys 
> %$mdsstat;
> + my $mon = map { $monstat->{$_}->{host} eq $nodename ? 1 : () } keys 
> %$monstat;

Same as above for the two checks above.
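
For reference, the same grep-based check for those two (untested sketch; grep in
scalar context yields the match count, which is true as soon as one entry matches):

    my $mds = grep { $mdsstat->{$_}->{host} eq $nodename } keys %$mdsstat;
    my $mon = grep { $monstat->{$_}->{host} eq $nodename } keys %$monstat;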

> +
> + # no pools = no data
> + $message .= "- remove pools, this will !!DESTROY DATA!!\n" if @$pools;
> + $message .= "- remove active OSD on $nodename\n" if $osd;
> + $message .= "- remove active MDS on $nodename\n" if $mds;
> + $message .= "- remove other MONs, $nodename is not the last MON\n"
> + if scalar(keys %$monstat) > 1 && $mon;
> +
> + # display all steps at once
> + die "Unable to purge Ceph!\n\nTo continue:\n$message" if $message;
> +
> + my $services = PVE::Ceph::Services::get_local_services();
> + $services->{mon} = $monstat if $mon;
> + $services->{crash}->{$nodename} = { direxists => 1 } if $param->{crash};
> + $services->{logs}->{$nodename} = { direxists => 1 } if $param->{logs};
>  
> - # fixme: this is dangerous - should we really support this function?
> - PVE::Ceph::Tools::purge_all_ceph_files();
> + PVE::Ceph::Tools::purge_all_ceph_services($services);
> + PVE::Ceph::Tools::purge_all_ceph_files($services);
>  
>   return undef;
>  }});
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index e6225b78..09d22d36 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -11,6 +11,8 @@ use JSON;
>  use PVE::Tools qw(run_command dir_glob_foreach);
>  use PVE::Cluster qw(cfs_read_file);
>  use PVE::RADOS;
> +use PVE::Ceph::Services;
> +use PVE::CephConfig;
>  
>  my $ccname = 'ceph'; # ceph cluster name
>  my $ceph_cfgdir = "/etc/ceph";
> @@ -42,6 +44,7 @@ my $config_hash = {
>  ceph_bootstrap_mds_keyring => $ceph_bootstrap_mds_keyring,
>  ceph_mds_data_dir => $ceph_mds_data_dir,
>  long_rados_timeout => 60,
> +ceph_cfgpath => $ceph_cfgpath,
>  };
>  
>  sub get_local_version {
> @@ -89,20 +92,64 @@ sub get_config {
>  }
>  
>  sub purge_all_ceph_files {
> -# fixme: this is very dangerous - should we really support this function?
> -
> -unlink $ceph_cfgpath;
> -
> -unlink $pve_ceph_cfgpath;
> -unlink 

[pve-devel] applied: [PATCH v2 common] print_text_table: handle undefined values in comparison

2020-05-04 Thread Thomas Lamprecht
On 4/28/20 12:00 PM, Oğuz Bektaş wrote:
>> Fabian Ebner wrote on 28 April 2020 at 10:18:
>>
>>  
>> by introducing a safe_compare helper. Fixes warnings, e.g.
>> pvesh get /nodes//network
>> would print "use of uninitialized"-warnings if there are inactive
>> network interfaces, because for those, 'active' is undef.
>>
>> Signed-off-by: Fabian Ebner 
>> ---
>>
>> Changes from v1:
>> * don't change sort_key depending on data
>> * instead handle undefined values directly
>>   in comparison
>>
>>  src/PVE/CLIFormatter.pm | 11 +++
>>  src/PVE/Tools.pm|  9 +
>>  2 files changed, 16 insertions(+), 4 deletions(-)
>>
>
> Tested-by: Oguz Bektas 

 
with that: applied, thanks!
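
For context, an undef-safe comparison helper could look roughly like this
(illustrative sketch only; the actual patch body is not quoted above and may differ):

    sub safe_compare {
        my ($left, $right, $cmp) = @_;

        return 0 if !defined($left) && !defined($right);
        return -1 if !defined($left); # sort undefined values first
        return 1 if !defined($right);
        return $cmp->($left, $right);
    }

    # e.g. sorting rows on a possibly undefined 'active' field without warnings
    my @sorted = sort { safe_compare($a->{active}, $b->{active}, sub { $_[0] <=> $_[1] }) } @rows;

Here @rows is just an example name.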




[pve-devel] applied: [PATCH widget-toolkit] window/LanguageEdit: make window non-resizable

2020-05-04 Thread Thomas Lamprecht
On 4/27/20 4:40 PM, Dominik Csapak wrote:
> Signed-off-by: Dominik Csapak 
> ---
>  window/LanguageEdit.js | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/window/LanguageEdit.js b/window/LanguageEdit.js
> index dd7393c..9176cfd 100644
> --- a/window/LanguageEdit.js
> +++ b/window/LanguageEdit.js
> @@ -30,6 +30,7 @@ Ext.define('Proxmox.window.LanguageEditWindow', {
>  title: gettext('Language'),
>  modal: true,
>  bodyPadding: 10,
> +resizable: false,
>  items: [
>   {
>   xtype: 'proxmoxLanguageSelector',
> 

applied, thanks!



[pve-devel] applied: [PATCH common] fix #2696: avoid 'undefined value' warning in 'pvesh help'

2020-05-04 Thread Thomas Lamprecht
On 5/4/20 2:02 PM, Stefan Reiter wrote:
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/CLIHandler.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/PVE/CLIHandler.pm b/src/PVE/CLIHandler.pm
> index 763cd60..9955d77 100644
> --- a/src/PVE/CLIHandler.pm
> +++ b/src/PVE/CLIHandler.pm
> @@ -235,8 +235,8 @@ sub generate_usage_str {
>  
>   }
>   } else {
> + $abort->("unknown command '$cmd->[0]'") if !$def;
>   my ($class, $name, $arg_param, $fixed_param, undef, 
> $formatter_properties) = @$def;
> - $abort->("unknown command '$cmd'") if !$class;
>  
>   $str .= $indent;
>   $str .= $class->usage_str($name, $prefix, $arg_param, $fixed_param, 
> $format, $param_cb, $formatter_properties);
> 

applied, thanks!



Re: [pve-devel] [PATCH guest-common 1/2] Avoid duplication by using lock_config_mode

2020-05-04 Thread Thomas Lamprecht
On 4/23/20 1:51 PM, Fabian Ebner wrote:
> No functional change is intended.
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/AbstractConfig.pm | 8 +---
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
> index beb10c7..f1b395c 100644
> --- a/PVE/AbstractConfig.pm
> +++ b/PVE/AbstractConfig.pm
> @@ -259,13 +259,7 @@ sub load_current_config {
>  sub lock_config_full {
>  my ($class, $vmid, $timeout, $code, @param) = @_;
>  
> -my $filename = $class->config_file_lock($vmid);
> -
> -my $res = lock_file($filename, $timeout, $code, @param);
> -
> -die $@ if $@;
> -
> -return $res;
> +return $class->lock_config_mode($vmid, $timeout, 0, $code, @param);
>  }
>  
>  sub create_and_lock_config {
> 

So I lost track a bit of the remaining patches from both of you. There seem
to be some nits/unrelated hunks which got comments (e.g., patch guest-common 1/3).
I'd appreciate it if one of you could pick this up and send a nice v2 with
those addressed and R-b tags added where applicable.



[pve-devel] applied: [PATCH qemu-server 2/3] api/destroy: repeat early checks after lock

2020-05-04 Thread Thomas Lamprecht
On 4/27/20 10:24 AM, Fabian Grünbichler wrote:
> to protect against concurrent changes
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> best viewed with --patience -w
> 
>  PVE/API2/Qemu.pm | 40 +++-
>  1 file changed, 23 insertions(+), 17 deletions(-)
> 

applied, thanks!




[pve-devel] applied: [PATCH qemu-server 1/3] QemuServer: drop unused imported locking functions

2020-05-04 Thread Thomas Lamprecht
On 4/27/20 10:24 AM, Fabian Grünbichler wrote:
> lock_file is used by PVE::QemuServer::Memory, but it does properly 'use
> PVE::Tools ...' itself so we can drop them in the main module.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
>  PVE/QemuServer.pm | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 37c7320..6c339ca 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -26,7 +26,7 @@ use Time::HiRes qw(gettimeofday);
>  use URI::Escape;
>  use UUID;
>  
> -use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file 
> cfs_lock_file);
> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file);
>  use PVE::DataCenterConfig;
>  use PVE::Exception qw(raise raise_param_exc);
>  use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
> @@ -37,7 +37,7 @@ use PVE::RPCEnvironment;
>  use PVE::Storage;
>  use PVE::SysFSTools;
>  use PVE::Systemd;
> -use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline 
> file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
> +use PVE::Tools qw(run_command file_read_firstline file_get_contents 
> dir_glob_foreach get_host_arch $IPV6RE);
>  
>  use PVE::QMPClient;
>  use PVE::QemuConfig;
> 

applied, thanks!




[pve-devel] applied: [PATCH manager] NodeConfig: ensure locked context has current view

2020-05-04 Thread Thomas Lamprecht
On 4/30/20 10:37 AM, Fabian Grünbichler wrote:
> similar to the recent changes for pve-guest-common - we start each API
> call with a cfs_update, but while we were waiting for the flock another
> R-M-W cycle might have happened, so we need to refresh after obtaining
> the lock.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> 
> Notes:
> there's only a single API call using this, so it's pretty 
> straight-forward.
> 
> patch generated on-top of ACME patch set
> 
>  PVE/NodeConfig.pm | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 

applied, thanks!




[pve-devel] applied: [PATCH qemu-server] Revert "resize_vm: request new size from storage after resizing"

2020-05-04 Thread Thomas Lamprecht
On 3/4/20 10:51 AM, Fabian Ebner wrote:
> This reverts commit b5490d8a98e5e7328eb4cebb0ae0b60e6d406c38.
> 
> When resizing a volume of a running VM, a qmp block_resize command
> is issued. This is non-blocking, so the size on the storage immediately
> after issuing the command might still be the old one.
> 
> This is part of the issue reported in bug #2621.
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/API2/Qemu.pm | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index caca430..d0dd2dc 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -3586,8 +3586,7 @@ __PACKAGE__->register_method({
>  
>   PVE::QemuServer::qemu_block_resize($vmid, "drive-$disk", $storecfg, 
> $volid, $newsize);
>  
> - my $effective_size = eval { 
> PVE::Storage::volume_size_info($storecfg, $volid, 3); };
> - $drive->{size} = $effective_size // $newsize;
> + $drive->{size} = $newsize;
>   $conf->{$disk} = PVE::QemuServer::print_drive($drive);
>  
>   PVE::QemuConfig->write_config($vmid, $conf);
> 

applied, thanks!



[pve-devel] applied: [PATCH qemu-server] migrate: don't accidentally take NBD code paths

2020-05-04 Thread Thomas Lamprecht
On 4/30/20 9:35 AM, Fabian Grünbichler wrote:
> by avoiding auto-vivification of $self->{online_local_volumes} via
> iteration. most code paths don't care whether it's undef or a reference
> to an empty list, but this caused the (already) fixed bug of calling
> nbd_stop without having started an NBD server in the first place.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> This technically makes the previous NBD stop related patches no longer
> needed - but let's keep them until we clean up the whole volume handling
> properly to avoid falling into this trap again.
> 
>  PVE/QemuMigrate.pm | 12 
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 7644922..d9b104c 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -713,10 +713,14 @@ sub phase2 {
>  $input .= "nbd_protocol_version: $nbd_protocol_version\n";
>  
>  my $number_of_online_replicated_volumes = 0;
> -foreach my $volid (@{$self->{online_local_volumes}}) {
> - next if !$self->{replicated_volumes}->{$volid};
> - $number_of_online_replicated_volumes++;
> - $input .= "replicated_volume: $volid\n";
> +
> +# prevent auto-vivification
> +if ($self->{online_local_volumes}) {
> + foreach my $volid (@{$self->{online_local_volumes}}) {
> + next if !$self->{replicated_volumes}->{$volid};
> + $number_of_online_replicated_volumes++;
> + $input .= "replicated_volume: $volid\n";
> + }
>  }
>  
>  my $target_replicated_volumes = {};
> 

applied




[pve-devel] applied: [PATCH qemu-server] migrate: skip rescan for efidisk and shared volumes

2020-05-04 Thread Thomas Lamprecht
On 4/30/20 12:44 PM, Dominik Csapak wrote:
> We really only want to rescan the disk size of the disks we actually
> need, and those are only the local disks (for which we have to allocate
> the correct size on the target).
> 
> Also, we want to always skip the efidisk, since we get the wanted
> size after the loop, and this produced a confusing log line
> (for details on why we do not want the 'real' size,
> see commit 818ce80ec1a89c4abee61145c858b9323180e31b).
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/QemuMigrate.pm | 3 +++
>  1 file changed, 3 insertions(+)
> 

applied, thanks!



[pve-devel] [PATCH v2 manager 1/2] vzdump: make guest include logic testable

2020-05-04 Thread Aaron Lauterer
As a first step to make the whole guest include logic more testable, the
part from the API endpoint has been moved into its own method with as few
changes as possible.

Everything concerning `all` and `exclude` logic is still in the
PVE::VZDump->exec_backup() method.

Signed-off-by: Aaron Lauterer 
---

v1 -> v2:
* fixed return value: array refs inside an array lead to nested arrays,
  which does not work with `my ($foo, $bar) = method();`
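
  A minimal illustration of that pitfall (example names, not from the patch):

      sub nested  { return [ [ 100, 101 ], [ 200 ] ]; } # one outer array ref
      sub as_list { return ( [ 100, 101 ], [ 200 ] ); } # plain two-element list of refs

      my ($vmids, $skiplist) = nested();  # $vmids gets the outer ref, $skiplist is undef
      ($vmids, $skiplist) = as_list();    # $vmids = [100, 101], $skiplist = [200]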


As discussed with Thomas on [0] and off list, this patch series is meant to
give more confidence in the ongoing changes.

My other ongoing patch series [1] will move all the logic, even the part in
the `exec_backup()` method, into one single method.

[0] https://pve.proxmox.com/pipermail/pve-devel/2020-April/042795.html
[1] https://pve.proxmox.com/pipermail/pve-devel/2020-April/042753.html

 PVE/API2/VZDump.pm | 36 ++--
 PVE/VZDump.pm  | 36 
 2 files changed, 42 insertions(+), 30 deletions(-)

diff --git a/PVE/API2/VZDump.pm b/PVE/API2/VZDump.pm
index f01e4de0..68a3de89 100644
--- a/PVE/API2/VZDump.pm
+++ b/PVE/API2/VZDump.pm
@@ -69,39 +69,15 @@ __PACKAGE__->register_method ({
return 'OK' if $param->{node} && $param->{node} ne $nodename;
 
my $cmdline = PVE::VZDump::Common::command_line($param);
-   my @vmids;
-   # convert string lists to arrays
-   if ($param->{pool}) {
-   @vmids = 
@{PVE::API2Tools::get_resource_pool_guest_members($param->{pool})};
-   } else {
-   @vmids = PVE::Tools::split_list(extract_param($param, 'vmid'));
-   }
+   my ($vmids, $skiplist) = PVE::VZDump->get_included_guests($param);
 
if($param->{stop}){
PVE::VZDump::stop_running_backups();
-   return 'OK' if !scalar(@vmids);
+   return 'OK' if !scalar(@{$vmids});
}
 
-   my $skiplist = [];
-   if (!$param->{all}) {
-   if (!$param->{node} || $param->{node} eq $nodename) {
-   my $vmlist = PVE::Cluster::get_vmlist();
-   my @localvmids = ();
-   foreach my $vmid (@vmids) {
-   my $d = $vmlist->{ids}->{$vmid};
-   if ($d && ($d->{node} ne $nodename)) {
-   push @$skiplist, $vmid;
-   } else {
-   push @localvmids, $vmid;
-   }
-   }
-   @vmids = @localvmids;
-   # silent exit if specified VMs run on other nodes
-   return "OK" if !scalar(@vmids);
-   }
-
-   $param->{vmids} = PVE::VZDump::check_vmids(@vmids)
-   }
+   # silent exit if specified VMs run on other nodes
+   return "OK" if !scalar(@{$vmids});
 
my @exclude = PVE::Tools::split_list(extract_param($param, 'exclude'));
$param->{exclude} = PVE::VZDump::check_vmids(@exclude);
@@ -118,7 +94,7 @@ __PACKAGE__->register_method ({
}
 
die "you can only backup a single VM with option --stdout\n"
-   if $param->{stdout} && scalar(@vmids) != 1;
+   if $param->{stdout} && scalar(@{$vmids}) != 1;
 
$rpcenv->check($user, "/storage/$param->{storage}", [ 
'Datastore.AllocateSpace' ])
if $param->{storage};
@@ -167,7 +143,7 @@ __PACKAGE__->register_method ({
}
 
my $taskid;
-   $taskid = $vmids[0] if scalar(@vmids) == 1;
+   $taskid = ${$vmids}[0] if scalar(@{$vmids}) == 1;
 
return $rpcenv->fork_worker('vzdump', $taskid, $user, $worker);
}});
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index f3274196..73ad9088 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -21,6 +21,7 @@ use PVE::RPCEnvironment;
 use PVE::Storage;
 use PVE::VZDump::Common;
 use PVE::VZDump::Plugin;
+use PVE::Tools qw(extract_param);
 
 my @posix_filesystems = qw(ext3 ext4 nfs nfs4 reiserfs xfs);
 
@@ -1156,4 +1157,39 @@ sub stop_running_backups {
 }
 }
 
+sub get_included_guests {
+my ($self, $job) = @_;
+
+my $nodename = PVE::INotify::nodename();
+my $vmids = [];
+
+# convert string lists to arrays
+if ($job->{pool}) {
+   $vmids = PVE::API2Tools::get_resource_pool_guest_members($job->{pool});
+} else {
+   $vmids = [ PVE::Tools::split_list(extract_param($job, 'vmid')) ];
+}
+
+my $skiplist = [];
+if (!$job->{all}) {
+   if (!$job->{node} || $job->{node} eq $nodename) {
+   my $vmlist = PVE::Cluster::get_vmlist();
+   my $localvmids = [];
+   foreach my $vmid (@{$vmids}) {
+   my $d = $vmlist->{ids}->{$vmid};
+   if ($d && ($d->{node} ne $nodename)) {
+   push @{$skiplist}, $vmid;
+   } else {
+   push @{$localvmids}, $vmid;
+   }
+   }
+   $vmids = $localvmids;
+   }
+
+   $job->{vmids} = PVE::VZDump::check_vmids(@{$vmids})
+}
+
+return ($vmids, $skiplist);
+}
+
 1;
-- 
2.20.1



[pve-devel] [PATCH v2 manager 2/2] vzdump: test: add first tests to the guest include logic

2020-05-04 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
v1 -> v2: adapt handling of return values, closer to what is used in
production code.

 test/Makefile  |   5 +-
 test/vzdump_guest_included_test.pl | 191 +
 2 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100755 test/vzdump_guest_included_test.pl

diff --git a/test/Makefile b/test/Makefile
index c26e16b1..44f3b0d7 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -5,7 +5,7 @@ all:
 export PERLLIB=..
 
 .PHONY: check
-check: replication_test balloon_test mail_test
+check: replication_test balloon_test mail_test vzdump_test
 
 balloon_test:
./balloontest.pl
@@ -21,6 +21,9 @@ replication_test:
 mail_test:
./mail_test.pl
 
+vzdump_test:
+   ./vzdump_guest_included_test.pl
+
 .PHONY: install
 install:
 
diff --git a/test/vzdump_guest_included_test.pl 
b/test/vzdump_guest_included_test.pl
new file mode 100755
index ..378ad474
--- /dev/null
+++ b/test/vzdump_guest_included_test.pl
@@ -0,0 +1,191 @@
+#!/usr/bin/perl
+
+# Some of the tests can only be applied once the whole include logic is moved
+# into one single method. Right now parts of it (all, exclude) are in the
+# PVE::VZDump->exec_backup() method.
+
+use strict;
+use warnings;
+
+use lib '..';
+
+use Test::More tests => 7;
+use Test::MockModule;
+
+use PVE::VZDump;
+
+use Data::Dumper;
+
+my $vmlist = {
+'ids' => {
+   100 => {
+   'type' => 'qemu',
+   'node' => 'node1',
+   },
+   101 => {
+   'type' => 'qemu',
+   'node' => 'node1',
+   },
+   112 => {
+   'type' => 'lxc',
+   'node' => 'node1',
+   },
+   113 => {
+   'type' => 'lxc',
+   'node' => 'node1',
+   },
+   200 => {
+   'type' => 'qemu',
+   'node' => 'node2',
+   },
+   201 => {
+   'type' => 'qemu',
+   'node' => 'node2',
+   },
+   212 => {
+   'type' => 'lxc',
+   'node' => 'node2',
+   },
+   213 => {
+   'type' => 'lxc',
+   'node' => 'node2',
+   },
+}
+};
+
+my $pve_cluster_module = Test::MockModule->new('PVE::Cluster');
+$pve_cluster_module->mock(
+get_vmlist => sub {
+   return $vmlist;
+}
+);
+
+my $pve_inotify = Test::MockModule->new('PVE::INotify');
+$pve_inotify->mock(
+nodename => sub {
+   return 'node1';
+}
+);
+
+my $pve_api2tools = Test::MockModule->new('PVE::API2Tools');
+$pve_api2tools->mock(
+get_resource_pool_guest_members => sub {
+   return [100, 101, 200, 201];
+}
+);
+
+
+
+my $tests = {};
+
+# is handled in the PVE::VZDump->exec_backup() method for now
+# $tests->{'Test all guests'} = {
+# expected_vmids => [ 100, 101, 112, 113, 200, 201, 212, 213 ],
+# expected_skiplist => [ ],
+# param => {
+#  all => 1,
+# }
+# };
+
+# is handled in the PVE::VZDump->exec_backup() method for now
+# $tests->{'Test all guests with cluster node limit'} = {
+# expected_vmids => [ 100, 101, 112, 113, 200, 201, 212, 213 ],
+# expected_skiplist => [],
+# param => {
+#  all => 1,
+#  node => 'node2',
+# }
+# };
+
+# is handled in the PVE::VZDump->exec_backup() method for now
+# $tests->{'Test all guests with local node limit'} = {
+# expected_vmids => [ 100, 101, 112, 113 ],
+# expected_skiplist => [ 200, 201, 212, 213 ],
+# param => {
+#  all => 1,
+#  node => 'node1',
+# }
+# };
+#
+# TODO: all test variants with exclude
+
+$tests->{'Test pool members'} = {
+expected_vmids => [ 100, 101 ],
+expected_skiplist => [ 200, 201 ],
+param => {
+   pool => 'testpool',
+}
+};
+
+$tests->{'Test pool members with cluster node limit'} = {
+expected_vmids => [ 100, 101, 200, 201 ],
+expected_skiplist => [],
+param => {
+   pool => 'testpool',
+   node => 'node2',
+}
+};
+
+$tests->{'Test pool members with local node limit'} = {
+expected_vmids => [ 100, 101 ],
+expected_skiplist => [ 200, 201 ],
+param => {
+   pool => 'testpool',
+   node => 'node1',
+}
+};
+
+$tests->{'Test selected VMIDs'} = {
+expected_vmids => [ 100 ],
+expected_skiplist => [ 201, 212 ],
+param => {
+   vmid => '100, 201, 212',
+}
+};
+
+
+$tests->{'Test selected VMIDs with cluster node limit'} = {
+expected_vmids => [ 100, 201, 212 ],
+expected_skiplist => [],
+param => {
+   vmid => '100, 201, 212',
+   node => 'node2',
+}
+};
+
+$tests->{'Test selected VMIDs with local node limit'} = {
+expected_vmids => [ 100 ],
+expected_skiplist => [ 201, 212 ],
+param => {
+   vmid => '100, 201, 212',
+   node => 'node1',
+}
+};
+
+$tests->{'Test selected VMIDs on other nodes'} = {
+expected_vmids => [],
+expected_skiplist => [ 201, 212 ],
+param => {
+   vmid => '201, 212',
+   node => 'node1',
+}
+};
+
+
+
+foreach my $testname (sort keys %{$tests}) {

[pve-devel] applied-series: [PATCH v2 qemu 1/2] experimentally move savevm-async back into a coroutine

2020-05-04 Thread Thomas Lamprecht
On 5/4/20 2:35 PM, Wolfgang Bumiller wrote:
> Move qemu_savevm_state_{header,setup} into the main loop and
> the rest of the iteration into a coroutine. The former need
> to lock the iothread (and we can't unlock it in the
> coroutine), and the latter can't deal with being in a
> separate thread, so a coroutine it must be.
> 
> Signed-off-by: Wolfgang Bumiller 
> ---
> No change in this one
> 
>  ...e-savevm-async-back-into-a-coroutine.patch | 111 ++
>  debian/patches/series |   1 +
>  2 files changed, 112 insertions(+)
>  create mode 100644 
> debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
> 

applied-series, thanks!



[pve-devel] [PATCH docs] remove SLAAC reference from cloudinit docs

2020-05-04 Thread Mira Limbeck
As we don't currently support SLAAC in the nocloud network format code, remove
the reference from the docs.

Signed-off-by: Mira Limbeck 
---
We removed SLAAC from the GUI a while ago because cloud-init did not
support it back then, but missed the reference in the docs.

 qm-cloud-init-opts.adoc | 1 -
 1 file changed, 1 deletion(-)

diff --git a/qm-cloud-init-opts.adoc b/qm-cloud-init-opts.adoc
index 43e4415..2fa1735 100644
--- a/qm-cloud-init-opts.adoc
+++ b/qm-cloud-init-opts.adoc
@@ -33,7 +33,6 @@ Specify IP addresses and gateways for the corresponding 
interface.
 IP addresses use CIDR notation, gateways are optional but need an IP of the 
same type specified.
 +
 The special string 'dhcp' can be used for IP addresses to use DHCP, in which 
case no explicit gateway should be provided.
-For IPv6 the special string 'auto' can be used to use stateless 
autoconfiguration.
 +
 If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, 
it defaults to using dhcp on IPv4.
 
-- 
2.20.1




[pve-devel] [PATCH docs v3] add documentation for ldap syncing

2020-05-04 Thread Dominik Csapak
explaining the main Requirements and limitations, as well as the
most important sync options

Signed-off-by: Dominik Csapak 
---
changes from v2:
* incorporated suggestions from aaron
@aaron, regarding linking to the character limitations:
sadly no, this is a sub-based format, so even if we had a place
where we could link to for formats (we don't, AFAIK), there would be
no auto-generated information since it's sub-based

just for completeness, atm the limit is >2 and <60 characters, and no
'/', ':'

(maybe we can change it to a regex?)

 pveum.adoc | 48 
 1 file changed, 48 insertions(+)

diff --git a/pveum.adoc b/pveum.adoc
index c89d4b8..7f8bd67 100644
--- a/pveum.adoc
+++ b/pveum.adoc
@@ -170,6 +170,54 @@ A server and authentication domain need to be specified. 
Like with
 ldap an optional fallback server, optional port, and SSL
 encryption can be configured.
 
+[[pveum_ldap_sync]]
+Syncing LDAP-based realms
+~
+
+It is possible to sync users and groups for LDAP based realms using
+  pveum sync 
+or in the `Authentication` panel of the GUI. Users and groups are synced
+to `/etc/pve/user.cfg`.
+
+Requirements and limitations
+
+
+The `bind_dn` is used to query the users and groups. This account
+needs access to all desired entries.
+
+The fields which represent the names of the users and groups can be configured
+via the `user_attr` and `group_name_attr` respectively. Only entries which
+adhere to the usual character limitations of the user.cfg are synced.
+
+Groups are synced with `-$realm` attached to the name, to avoid naming
+conflicts. Please make sure that a sync does not overwrite manually created
+groups.
+
+Options
+^^^
+
+The main options for syncing are:
+
+* `dry-run`: No data is written to the config. This is useful if you want to
+  see which users and groups would get synced to the user.cfg. This is set
+  when you click `Preview` in the GUI.
+
+* `enable-new`: If set, the newly synced users are enabled and can login.
+  The default is `true`.
+
+* `full`: If set, the sync uses the LDAP Directory as a source of truth,
+  overwriting information set manually in the user.cfg and deletes users
+  and groups which are not present in the LDAP directory. If not set,
+  only new data is written to the config, and no stale users are deleted.
+
+* `purge`: If set, sync removes all corresponding ACLs when removing users
+  and groups. This is only useful with the option `full`.
+
+* `scope`: The scope of what to sync. It can be either `users`, `groups` or
+  `both`.
+
+These options are either set as parameters or as defaults, via the
+realm option `sync-defaults-options`.
 
 [[pveum_tfa_auth]]
 Two-factor authentication
-- 
2.20.1




[pve-devel] applied: [PATCH acme] plugin id: limit to 'pve-configid' format

2020-05-04 Thread Thomas Lamprecht
Else one can pass almost arbitrary data as ID and break editing or
deletion of a plugin.

Signed-off-by: Thomas Lamprecht 
---
 src/PVE/ACME/Challenge.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/ACME/Challenge.pm b/src/PVE/ACME/Challenge.pm
index 0af77a3..0c679fc 100644
--- a/src/PVE/ACME/Challenge.pm
+++ b/src/PVE/ACME/Challenge.pm
@@ -13,6 +13,7 @@ my $defaultData = {
id => {
description => "ACME Plugin ID name",
type => 'string',
+   format => 'pve-configid',
},
type => {
description => "ACME challenge type.",
-- 
2.20.1




[pve-devel] [PATCH v2 qemu 2/2] add optional buffer size to QEMUFile

2020-05-04 Thread Wolfgang Bumiller
and use 4M for our savevm-async buffer size

Signed-off-by: Wolfgang Bumiller 
---
Changes to v1: add missing call to free() in qemu_fclose.

 ...add-optional-buffer-size-to-QEMUFile.patch | 183 ++
 debian/patches/series |   1 +
 2 files changed, 184 insertions(+)
 create mode 100644 
debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch

diff --git a/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch 
b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
new file mode 100644
index 000..daad1f5
--- /dev/null
+++ b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
@@ -0,0 +1,183 @@
+From  Mon Sep 17 00:00:00 2001
+From: Wolfgang Bumiller 
+Date: Mon, 4 May 2020 11:05:08 +0200
+Subject: [PATCH] add optional buffer size to QEMUFile
+
+So we can use a 4M buffer for savevm-async which should
+increase performance storing the state onto ceph.
+
+Signed-off-by: Wolfgang Bumiller 
+---
+ migration/qemu-file.c | 36 
+ migration/qemu-file.h |  1 +
+ savevm-async.c|  4 ++--
+ 3 files changed, 27 insertions(+), 14 deletions(-)
+
+diff --git a/migration/qemu-file.c b/migration/qemu-file.c
+index 1c3a358a14..7362e51c71 100644
+--- a/migration/qemu-file.c
 b/migration/qemu-file.c
+@@ -30,7 +30,7 @@
+ #include "trace.h"
+ #include "qapi/error.h"
+ 
+-#define IO_BUF_SIZE 32768
++#define DEFAULT_IO_BUF_SIZE 32768
+ #define MAX_IOV_SIZE MIN(IOV_MAX, 64)
+ 
+ struct QEMUFile {
+@@ -45,7 +45,8 @@ struct QEMUFile {
+ when reading */
+ int buf_index;
+ int buf_size; /* 0 when writing */
+-uint8_t buf[IO_BUF_SIZE];
++size_t buf_allocated_size;
++uint8_t *buf;
+ 
+ DECLARE_BITMAP(may_free, MAX_IOV_SIZE);
+ struct iovec iov[MAX_IOV_SIZE];
+@@ -101,7 +102,7 @@ bool qemu_file_mode_is_not_valid(const char *mode)
+ return false;
+ }
+ 
+-QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++QEMUFile *qemu_fopen_ops_sized(void *opaque, const QEMUFileOps *ops, size_t 
buffer_size)
+ {
+ QEMUFile *f;
+ 
+@@ -109,9 +110,17 @@ QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps 
*ops)
+ 
+ f->opaque = opaque;
+ f->ops = ops;
++f->buf_allocated_size = buffer_size;
++f->buf = malloc(buffer_size);
++
+ return f;
+ }
+ 
++QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++{
++return qemu_fopen_ops_sized(opaque, ops, DEFAULT_IO_BUF_SIZE);
++}
++
+ 
+ void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks)
+ {
+@@ -346,7 +355,7 @@ static ssize_t qemu_fill_buffer(QEMUFile *f)
+ }
+ 
+ len = f->ops->get_buffer(f->opaque, f->buf + pending, f->pos,
+- IO_BUF_SIZE - pending, _error);
++ f->buf_allocated_size - pending, _error);
+ if (len > 0) {
+ f->buf_size += len;
+ f->pos += len;
+@@ -386,6 +395,9 @@ int qemu_fclose(QEMUFile *f)
+ ret = ret2;
+ }
+ }
++
++free(f->buf);
++
+ /* If any error was spotted before closing, we should report it
+  * instead of the close() return value.
+  */
+@@ -435,7 +447,7 @@ static void add_buf_to_iovec(QEMUFile *f, size_t len)
+ {
+ if (!add_to_iovec(f, f->buf + f->buf_index, len, false)) {
+ f->buf_index += len;
+-if (f->buf_index == IO_BUF_SIZE) {
++if (f->buf_index == f->buf_allocated_size) {
+ qemu_fflush(f);
+ }
+ }
+@@ -461,7 +473,7 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, 
size_t size)
+ }
+ 
+ while (size > 0) {
+-l = IO_BUF_SIZE - f->buf_index;
++l = f->buf_allocated_size - f->buf_index;
+ if (l > size) {
+ l = size;
+ }
+@@ -508,8 +520,8 @@ size_t qemu_peek_buffer(QEMUFile *f, uint8_t **buf, size_t 
size, size_t offset)
+ size_t index;
+ 
+ assert(!qemu_file_is_writable(f));
+-assert(offset < IO_BUF_SIZE);
+-assert(size <= IO_BUF_SIZE - offset);
++assert(offset < f->buf_allocated_size);
++assert(size <= f->buf_allocated_size - offset);
+ 
+ /* The 1st byte to read from */
+ index = f->buf_index + offset;
+@@ -559,7 +571,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+ size_t res;
+ uint8_t *src;
+ 
+-res = qemu_peek_buffer(f, , MIN(pending, IO_BUF_SIZE), 0);
++res = qemu_peek_buffer(f, , MIN(pending, f->buf_allocated_size), 
0);
+ if (res == 0) {
+ return done;
+ }
+@@ -593,7 +605,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+  */
+ size_t qemu_get_buffer_in_place(QEMUFile *f, uint8_t **buf, size_t size)
+ {
+-if (size < IO_BUF_SIZE) {
++if (size < f->buf_allocated_size) {
+ size_t res;
+ uint8_t *src;
+ 
+@@ -618,7 +630,7 @@ int qemu_peek_byte(QEMUFile *f, int offset)
+ int index = f->buf_index + 

[pve-devel] [PATCH v2 qemu 1/2] experimentally move savevm-async back into a coroutine

2020-05-04 Thread Wolfgang Bumiller
Move qemu_savevm_state_{header,setup} into the main loop and
the rest of the iteration into a coroutine. The former need
to lock the iothread (and we can't unlock it in the
coroutine), and the latter can't deal with being in a
separate thread, so a coroutine it must be.

Signed-off-by: Wolfgang Bumiller 
---
No change in this one

 ...e-savevm-async-back-into-a-coroutine.patch | 111 ++
 debian/patches/series |   1 +
 2 files changed, 112 insertions(+)
 create mode 100644 
debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch

diff --git 
a/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch 
b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
new file mode 100644
index 000..f4945db
--- /dev/null
+++ b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
@@ -0,0 +1,111 @@
+From  Mon Sep 17 00:00:00 2001
+From: Wolfgang Bumiller 
+Date: Thu, 30 Apr 2020 15:55:37 +0200
+Subject: [PATCH] move savevm-async back into a coroutine
+
+Move qemu_savevm_state_{header,setup} into the main loop and
+the rest of the iteration into a coroutine. The former need
+to lock the iothread (and we can't unlock it in the
+coroutine), and the latter can't deal with being in a
+separate thread, so a coroutine it must be.
+
+Signed-off-by: Wolfgang Bumiller 
+---
+ savevm-async.c | 28 +---
+ 1 file changed, 9 insertions(+), 19 deletions(-)
+
+diff --git a/savevm-async.c b/savevm-async.c
+index a38b15d652..af865b9a0a 100644
+--- a/savevm-async.c
 b/savevm-async.c
+@@ -51,7 +51,7 @@ static struct SnapshotState {
+ QEMUFile *file;
+ int64_t total_time;
+ QEMUBH *cleanup_bh;
+-QemuThread thread;
++Coroutine *co;
+ } snap_state;
+ 
+ SaveVMInfo *qmp_query_savevm(Error **errp)
+@@ -201,11 +201,9 @@ static void process_savevm_cleanup(void *opaque)
+ int ret;
+ qemu_bh_delete(snap_state.cleanup_bh);
+ snap_state.cleanup_bh = NULL;
++snap_state.co = NULL;
+ qemu_savevm_state_cleanup();
+ 
+-qemu_mutex_unlock_iothread();
+-qemu_thread_join(_state.thread);
+-qemu_mutex_lock_iothread();
+ ret = save_snapshot_cleanup();
+ if (ret < 0) {
+ save_snapshot_error("save_snapshot_cleanup error %d", ret);
+@@ -221,18 +219,13 @@ static void process_savevm_cleanup(void *opaque)
+ }
+ }
+ 
+-static void *process_savevm_thread(void *opaque)
++static void process_savevm_coro(void *opaque)
+ {
+ int ret;
+ int64_t maxlen;
+ MigrationState *ms = migrate_get_current();
+ 
+-rcu_register_thread();
+-
+-qemu_savevm_state_header(snap_state.file);
+-qemu_savevm_state_setup(snap_state.file);
+ ret = qemu_file_get_error(snap_state.file);
+-
+ if (ret < 0) {
+ save_snapshot_error("qemu_savevm_state_setup failed");
+ goto out;
+@@ -247,16 +240,13 @@ static void *process_savevm_thread(void *opaque)
+ maxlen = blk_getlength(snap_state.target) - 30*1024*1024;
+ 
+ if (pending_size > 40 && snap_state.bs_pos + pending_size < 
maxlen) {
+-qemu_mutex_lock_iothread();
+ ret = qemu_savevm_state_iterate(snap_state.file, false);
+ if (ret < 0) {
+ save_snapshot_error("qemu_savevm_state_iterate error %d", 
ret);
+ break;
+ }
+-qemu_mutex_unlock_iothread();
+ DPRINTF("savevm inerate pending size %lu ret %d\n", pending_size, 
ret);
+ } else {
+-qemu_mutex_lock_iothread();
+ qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
+ ret = global_state_store();
+ if (ret) {
+@@ -285,16 +275,12 @@ static void *process_savevm_thread(void *opaque)
+ }
+ 
+ qemu_bh_schedule(snap_state.cleanup_bh);
+-qemu_mutex_unlock_iothread();
+ 
+ out:
+ /* set migration state accordingly and clear soon-to-be stale file */
+ migrate_set_state(>state, MIGRATION_STATUS_SETUP,
+   ret ? MIGRATION_STATUS_FAILED : 
MIGRATION_STATUS_COMPLETED);
+ ms->to_dst_file = NULL;
+-
+-rcu_unregister_thread();
+-return NULL;
+ }
+ 
+ void qmp_savevm_start(bool has_statefile, const char *statefile, Error **errp)
+@@ -373,8 +359,12 @@ void qmp_savevm_start(bool has_statefile, const char 
*statefile, Error **errp)
+ 
+ snap_state.state = SAVE_STATE_ACTIVE;
+ snap_state.cleanup_bh = qemu_bh_new(process_savevm_cleanup, _state);
+-qemu_thread_create(_state.thread, "savevm-async", 
process_savevm_thread,
+-   NULL, QEMU_THREAD_JOINABLE);
++snap_state.co = qemu_coroutine_create(_savevm_coro, NULL);
++qemu_mutex_unlock_iothread();
++qemu_savevm_state_header(snap_state.file);
++qemu_savevm_state_setup(snap_state.file);
++qemu_mutex_lock_iothread();
++aio_co_schedule(iohandler_get_aio_context(), snap_state.co);
+ 
+ return;
+ 
diff --git 

[pve-devel] applied-series: [PATCH storage v5 00/17] Fix: #2124 zstd

2020-05-04 Thread Fabian Grünbichler
with an additional commit adding '--rsyncable'.

thanks!

On April 28, 2020 3:58 pm, Alwin Antreich wrote:
> Zstandard (zstd) [0] is a data compression algorithm, in addition to
> gzip, lzo for our backup/restore. It can utilize multiple core CPUs. But
> by default it has one compression and one writer thread.
> 
> 
> Here are some quick tests I made on my workstation. The files were placed
> on a RAM disk and filled with dd from /dev/urandom and /dev/zero.
> 
> __Compression__
> file size: 1073741824 bytes
>       = urandom =           = zero =
>     995ms 1073766414      328ms   98192    zstd -k
>     732ms 1073766414      295ms   98192    zstd -k -T4
>     906ms 1073791036      562ms 4894779    lzop -k
>   31992ms 1073915558     5594ms 1042087    gzip -k
>   30832ms 1074069541     5776ms 1171491    pigz -k -p 1
>    7814ms 1074069541     1567ms 1171491    pigz -k -p 4
> 
> __Decompression__
> file size: 1073741824 bytes
> = urandom =   = zero =
>712ms  869ms  zstd -d
>685ms  872ms  zstd -k -d -T4
>841ms 2462ms  lzop -d
>   5417ms 4754ms  gzip -k -d
>   1248ms 3118ms  pigz -k -d -p 1
>   1236ms 2379ms  pigz -k -d -p 4
> 
> 
> And I used the same ramdisk to move a VM onto it and run a quick
> backup/restore.
> 
> __vzdump backup__
> INFO: transferred 34359 MB in 69 seconds (497 MB/s) zstd -T1
> INFO: transferred 34359 MB in 37 seconds (928 MB/s) zstd -T4
> INFO: transferred 34359 MB in 51 seconds (673 MB/s) lzo
> INFO: transferred 34359 MB in 1083 seconds (31 MB/s) gzip
> INFO: transferred 34359 MB in 241 seconds (142 MB/s) pigz -n 4
> 
> __qmrestore__
> progress 100% (read 34359738368 bytes, duration 36 sec)
> total bytes read 34359738368, sparse bytes 8005484544 (23.3%) zstd -d -T4
> 
> progress 100% (read 34359738368 bytes, duration 38 sec)
> total bytes read 34359738368, sparse bytes 8005484544 (23.3%) lzo
> 
> progress 100% (read 34359738368 bytes, duration 175 sec)
> total bytes read 34359738368, sparse bytes 8005484544 (23.3%) pigz -n 4
> 
> 
> v4 -> v5:
> * fixup, use File::stat directly without overwriting CORE::stat,
>   thanks Dietmar for pointing this out
>   https://pve.proxmox.com/pipermail/pve-devel/2020-April/043134.html
> * rebase to current master
> 
> v3 -> v4:
> * fixed styling issues discovered by f.ebner (thanks)
> * incorporated tests of d.jaeger into patches
> * added fixes discovered by tests (f.ebner thanks)
> 
> v2 -> v3:
> * split archive_info into decompressor_info and archive_info
> * "compact" regex pattern is now a constant and used in
>   multiple modules
> * added tests for regex matching
> * bug fix for ctime of backup files
> 
> v1 -> v2:
> * factored out the decompressor info first, as Thomas suggested
> * made the regex pattern of backup files more compact, easier to
>   read (hopefully)
> * less code changes for container restores
> 
> Thanks for any comment or suggestion in advance.
> 
> [0] https://facebook.github.io/zstd/
> 
> Alwin Antreich (17):
> __pve-storage__
>   storage: test: split archive format/compressor
>   storage: replace build-in stat with File::stat
>   test: parse_volname
>   test: list_volumes
>   Fix: backup: ctime was from stat not file name
>   test: path_to_volume_id
>   Fix: path_to_volume_id returned wrong content
>   Fix: add missing snippets subdir
>   backup: compact regex for backup file filter
>   Fix: #2124 storage: add zstd support
>   test: get_subdir
>   test: filesystem_path
> 
>  test/Makefile  |   5 +-
>  PVE/Diskmanage.pm  |   9 +-
>  PVE/Storage.pm |  91 --
>  PVE/Storage/Plugin.pm  |  47 ++-
>  test/archive_info_test.pm  | 127 
>  test/filesystem_path_test.pm   |  91 ++
>  test/get_subdir_test.pm|  44 +++
>  test/list_volumes_test.pm  | 537 +
>  test/parse_volname_test.pm | 253 
>  test/path_to_volume_id_test.pm | 274 +
>  test/run_plugin_tests.pl   |  18 ++
>  11 files changed, 1440 insertions(+), 56 deletions(-)
>  create mode 100644 test/archive_info_test.pm
>  create mode 100644 test/filesystem_path_test.pm
>  create mode 100644 test/get_subdir_test.pm
>  create mode 100644 test/list_volumes_test.pm
>  create mode 100644 test/parse_volname_test.pm
>  create mode 100644 test/path_to_volume_id_test.pm
>  create mode 100755 test/run_plugin_tests.pl
> 
> 
> __guest_common__
>   Fix: #2124 add zstd support
> 
>  PVE/VZDump/Common.pm | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> 
> __qemu-server__
>   restore: replace archive format/compression
>   Fix #2124: Add support for zstd
> 
>  PVE/QemuServer.pm | 38 +++---
>  1 file changed, 7 insertions(+), 31 deletions(-)
> 
> 
> __pve-container__
>   Fix: #2124 add zstd
> 
>  src/PVE/LXC/Create.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> 
> __pve-manager__
>   Fix #2124: Add support for zstd
> 
>  PVE/VZDump.pm  

[pve-devel] applied: [PATCH firewall 2/3] fix wrong icmpv6 types

2020-05-04 Thread Thomas Lamprecht
On 4/29/20 3:45 PM, Mira Limbeck wrote:
> This removes icmpv6-type 'any' as it is not supported by ip6tables. It also
> introduces the new icmpv6 types 'beyond-scope', 'failed-policy' and
> 'reject-route'. These values were taken from 'ip6tables -p icmpv6 -h'.
> 
> Signed-off-by: Mira Limbeck 
> ---
>  src/PVE/Firewall.pm | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
> index 39f1bfc..0cae9d8 100644
> --- a/src/PVE/Firewall.pm
> +++ b/src/PVE/Firewall.pm
> @@ -785,12 +785,14 @@ my $icmp_type_names = {
>  # ip6tables -p icmpv6 -h
>  
>  my $icmpv6_type_names = {
> -'any' => 1,
>  'destination-unreachable' => 1,
>  'no-route' => 1,
>  'communication-prohibited' => 1,
> +'beyond-scope' => 1,
>  'address-unreachable' => 1,
>  'port-unreachable' => 1,
> +'failed-policy' => 1,
> +'reject-route' => 1,
>  'packet-too-big' => 1,
>  'time-exceeded' => 1,
>  'ttl-zero-during-transit' => 1,
> 

applied, thanks!



[pve-devel] applied: [PATCH firewall 4/4] add dport: factor out ICMP-type validity checking

2020-05-04 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht 
---
 src/PVE/Firewall.pm | 26 ++
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index a6157e3..eadfc6b 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -812,6 +812,17 @@ my $icmpv6_type_names = {
 'redirect' => 1,
 };
 
+my $is_valid_icmp_type = sub {
+my ($type, $valid_types) = @_;
+
+if ($type =~ m/^\d+$/) {
+   # values for icmp-type range between 0 and 255 (8 bit field)
+   die "invalid icmp-type '$type'\n" if $type > 255;
+} else {
+   die "unknown icmp-type '$type'\n" if !defined($valid_types->{$type});
+}
+};
+
 sub init_firewall_macros {
 
 $pve_fw_parsed_macros = {};
@@ -2041,21 +2052,12 @@ sub ipt_rule_to_cmds {
my $add_dport = sub {
return if !defined($rule->{dport});
 
+   # NOTE: we re-use dport to store --icmp-type for icmp* protocol
if ($proto eq 'icmp') {
-   # Note: we use dport to store --icmp-type
-   die "unknown icmp-type '$rule->{dport}'\n"
-   if $rule->{dport} !~ /^\d+$/ && 
!defined($icmp_type_names->{$rule->{dport}});
-   # values for icmp-type range between 0 and 255
-   # higher values and iptables-restore fails
-   die "invalid icmp-type '$rule->{dport}'\n" if 
($rule->{dport} =~ m/^(\d+)$/) && ($1 > 255);
+   $is_valid_icmp_type->($rule->{dport}, $icmp_type_names);
push @match, "-m icmp --icmp-type $rule->{dport}";
} elsif ($proto eq 'icmpv6') {
-   # Note: we use dport to store --icmpv6-type
-   die "unknown icmpv6-type '$rule->{dport}'\n"
-   if $rule->{dport} !~ /^\d+$/ && 
!defined($icmpv6_type_names->{$rule->{dport}});
-   # values for icmpv6-type range between 0 and 255
-   # higher values and iptables-restore fails
-   die "invalid icmpv6-type '$rule->{dport}'\n" if 
($rule->{dport} =~ m/^(\d+)$/) && ($1 > 255);
+   $is_valid_icmp_type->($rule->{dport}, $icmpv6_type_names);
push @match, "-m icmpv6 --icmpv6-type $rule->{dport}";
} elsif (!$PROTOCOLS_WITH_PORTS->{$proto}) {
die "protocol $proto does not have ports\n";
-- 
2.20.1




[pve-devel] applied: [PATCH firewall 2/4] fix typo: s/ICPM/ICMP/

2020-05-04 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht 
---
 src/PVE/Firewall.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 5d1a584..28dbb19 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -1082,7 +1082,7 @@ sub parse_port_name_number_or_range {
}
 }
 
-die "ICPM ports not allowed in port range\n" if $icmp_port && $count > 0;
+die "ICMP ports not allowed in port range\n" if $icmp_port && $count > 0;
 
 # I really don't like to use the word number here, but it's the only thing
 # that makes sense in a literal way. The range 1:100 counts as 2, not as
-- 
2.20.1




[pve-devel] applied: [PATCH firewall 3/4] icmp: allow to specify the echo-reply (0) type as integer

2020-05-04 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht 
---
 src/PVE/Firewall.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 28dbb19..a6157e3 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -2039,7 +2039,7 @@ sub ipt_rule_to_cmds {
my $multisport = defined($rule->{sport}) && 
parse_port_name_number_or_range($rule->{sport}, 0);
 
my $add_dport = sub {
-   return if !$rule->{dport};
+   return if !defined($rule->{dport});
 
if ($proto eq 'icmp') {
# Note: we use dport to store --icmp-type
@@ -2062,6 +2062,7 @@ sub ipt_rule_to_cmds {
} elsif ($multidport) {
push @match, "--match multiport", "--dports $rule->{dport}";
} else {
+   return if !$rule->{dport};
push @match, "--dport $rule->{dport}";
}
};
-- 
2.20.1




[pve-devel] applied: [PATCH firewall 1/4] test/simulator: add very basic ICMP type functionallity

2020-05-04 Thread Thomas Lamprecht
For now without integer to full-name, and vice versa, mapping of
ICMP types.

Signed-off-by: Thomas Lamprecht 
---
 src/PVE/FirewallSimulator.pm | 9 +++--
 test/test-basic1/100.fw  | 2 ++
 test/test-basic1/tests   | 4 
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/src/PVE/FirewallSimulator.pm b/src/PVE/FirewallSimulator.pm
index 4f46b74..140c46e 100644
--- a/src/PVE/FirewallSimulator.pm
+++ b/src/PVE/FirewallSimulator.pm
@@ -29,9 +29,7 @@ my $NUMBER_RE = qr/0x[0-9a-fA-F]+|\d+/;
 
 sub debug {
 my $new_value = shift;
-
 $debug = $new_value if defined($new_value);
-
 return $debug;
 }
 
@@ -140,6 +138,13 @@ sub rule_match {
return undef if $atype ne $pkg->{dsttype};
}
 
+   if ($rule =~ s/^-m icmp(v6)? --icmp-type (\S+)\s*//) {
+   my $icmpv6 = !!$1;
+   my $icmptype = $2;
+   die "missing destination address type (dsttype)\n" if 
!defined($pkg->{dport});
+   return undef if $icmptype ne $pkg->{dport};
+   }
+
if ($rule =~ s/^-i (\S+)\s*//) {
my $devre = $1;
die "missing interface (iface_in)\n" if !$pkg->{iface_in};
diff --git a/test/test-basic1/100.fw b/test/test-basic1/100.fw
index c9d675e..9dbe2f3 100644
--- a/test/test-basic1/100.fw
+++ b/test/test-basic1/100.fw
@@ -5,4 +5,6 @@ enable: 1
 [RULES]
 
 IN ACCEPT -p tcp -dport 443
+IN ACCEPT -p icmp -dport 0
+IN ACCEPT -p icmp -dport host-unreachable
 OUT REJECT -p tcp -dport 81
diff --git a/test/test-basic1/tests b/test/test-basic1/tests
index d575bbe..a993e5d 100644
--- a/test/test-basic1/tests
+++ b/test/test-basic1/tests
@@ -21,6 +21,10 @@
 { from => 'vm110', to => 'vm100', dport => 22, action => 'DROP' }
 { from => 'vm110', to => 'vm100', dport => 443, action => 'ACCEPT' }
 
+{ from => 'vm110', to => 'vm100', dport => 0, proto => 'icmp', action => 
'ACCEPT' }
+{ from => 'vm110', to => 'vm100', dport => 'host-unreachable', proto => 
'icmp', action => 'ACCEPT' }
+{ from => 'vm110', to => 'vm100', dport => 255, proto => 'icmpv6', action => 
'DROP' }
+
 { from => 'outside', to => 'ct200', dport => 22, action => 'ACCEPT' }
 { from => 'outside', to => 'ct200', dport => 23, action => 'DROP' }
 { from => 'outside', to => 'vm100', dport => 22, action => 'DROP' }
-- 
2.20.1




[pve-devel] applied: [PATCH firewall 1/3] fix iptables-restore failing if icmp-type value > 255

2020-05-04 Thread Thomas Lamprecht
On 4/29/20 3:45 PM, Mira Limbeck wrote:
> This has to be done in both icmp and icmpv6 cases. Currently if
> 'ipv6-icmp' is set via the GUI ('icmpv6' is not available there) there
> is no icmp-type handling. As this is meant to fix the iptables-restore
> failure if an icmp-type > 255 is specified, no ipv6-icmp handling is
> introduced.
> 
> These error messages are not logged, as warnings are ignored. To get
> them you have to run 'pve-firewall compile' and look at the
> output.
> 
> Signed-off-by: Mira Limbeck 
> ---
>  src/PVE/Firewall.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
> index d22b15a..39f1bfc 100644
> --- a/src/PVE/Firewall.pm
> +++ b/src/PVE/Firewall.pm
> @@ -2041,11 +2041,17 @@ sub ipt_rule_to_cmds {
>   # Note: we use dport to store --icmp-type
>   die "unknown icmp-type '$rule->{dport}'\n"
>   if $rule->{dport} !~ /^\d+$/ && 
> !defined($icmp_type_names->{$rule->{dport}});
> + # values for icmp-type range between 0 and 255
> + # higher values and iptables-restore fails
> + die "invalid icmp-type '$rule->{dport}'\n" if 
> ($rule->{dport} =~ m/^(\d+)$/) && ($1 > 255);
>   push @match, "-m icmp --icmp-type $rule->{dport}";
>   } elsif ($proto eq 'icmpv6') {
>   # Note: we use dport to store --icmpv6-type
>   die "unknown icmpv6-type '$rule->{dport}'\n"
>   if $rule->{dport} !~ /^\d+$/ && 
> !defined($icmpv6_type_names->{$rule->{dport}});
> + # values for icmpv6-type range between 0 and 255
> + # higher values and iptables-restore fails
> + die "invalid icmpv6-type '$rule->{dport}'\n" if 
> ($rule->{dport} =~ m/^(\d+)$/) && ($1 > 255);
>   push @match, "-m icmpv6 --icmpv6-type $rule->{dport}";
>   } elsif (!$PROTOCOLS_WITH_PORTS->{$proto}) {
>   die "protocol $proto does not have ports\n";
> 

applied, thanks!



Re: [pve-devel] [PATCH qemu 1/2] experimentally move savevm-async back into a coroutine

2020-05-04 Thread Wolfgang Bumiller


> On May 4, 2020 12:43 PM Stefan Reiter  wrote:
> 
>  
> Fixes the SIGSEGV issues on Ceph with snapshot and rollback for me, so:
> 
> Tested-by: Stefan Reiter 
> 
> Just for reference, I also bisected the bug this fixes to upstream 
> commit 8c6b0356b53 ("util/async: make bh_aio_poll() O(1)"), i.e. it only 
> breaks after this commit. Might be an upstream bug too somewhere? But I 
> don't see an issue with doing this in a coroutine either.
> 
> See also inline.
> 
> On 5/4/20 12:02 PM, Wolfgang Bumiller wrote:
> > Move qemu_savevm_state_{header,setup} into the main loop and
> > the rest of the iteration into a coroutine. The former need
> > to lock the iothread (and we can't unlock it in the
> > coroutine), and the latter can't deal with being in a
> > separate thread, so a coroutine it must be.
> > 
> > Signed-off-by: Wolfgang Bumiller 
> > ---
> >   ...e-savevm-async-back-into-a-coroutine.patch | 111 ++
> >   debian/patches/series |   1 +
> >   2 files changed, 112 insertions(+)
> >   create mode 100644 
> > debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
> > 
> > diff --git 
> > a/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch 
> > b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
> > new file mode 100644
> > index 000..f4945db
> > --- /dev/null
> > +++ b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
> > @@ -0,0 +1,111 @@
> > +From  Mon Sep 17 00:00:00 2001
> > +From: Wolfgang Bumiller 
> > +Date: Thu, 30 Apr 2020 15:55:37 +0200
> > +Subject: [PATCH] move savevm-async back into a coroutine
> > +
> > +Move qemu_savevm_state_{header,setup} into the main loop and
> > +the rest of the iteration into a coroutine. The former need
> > +to lock the iothread (and we can't unlock it in the
> > +coroutine), and the latter can't deal with being in a
> > +separate thread, so a coroutine it must be.
> > +
> > +Signed-off-by: Wolfgang Bumiller 
> > +---
> > + savevm-async.c | 28 +---
> > + 1 file changed, 9 insertions(+), 19 deletions(-)
> > +
> > +diff --git a/savevm-async.c b/savevm-async.c
> > +index a38b15d652..af865b9a0a 100644
> > +--- a/savevm-async.c
> >  b/savevm-async.c
> > +@@ -51,7 +51,7 @@ static struct SnapshotState {
> > + QEMUFile *file;
> > + int64_t total_time;
> > + QEMUBH *cleanup_bh;
> > +-QemuThread thread;
> > ++Coroutine *co;
> > + } snap_state;
> > +
> > + SaveVMInfo *qmp_query_savevm(Error **errp)
> > +@@ -201,11 +201,9 @@ static void process_savevm_cleanup(void *opaque)
> > + int ret;
> > + qemu_bh_delete(snap_state.cleanup_bh);
> > + snap_state.cleanup_bh = NULL;
> > ++snap_state.co = NULL;
> > + qemu_savevm_state_cleanup();
> > +
> > +-qemu_mutex_unlock_iothread();
> > +-qemu_thread_join(_state.thread);
> > +-qemu_mutex_lock_iothread();
> > + ret = save_snapshot_cleanup();
> > + if (ret < 0) {
> > + save_snapshot_error("save_snapshot_cleanup error %d", ret);
> > +@@ -221,18 +219,13 @@ static void process_savevm_cleanup(void *opaque)
> > + }
> > + }
> > +
> > +-static void *process_savevm_thread(void *opaque)
> > ++static void process_savevm_coro(void *opaque)
> > + {
> > + int ret;
> > + int64_t maxlen;
> > + MigrationState *ms = migrate_get_current();
> > +
> > +-rcu_register_thread();
> > +-
> > +-qemu_savevm_state_header(snap_state.file);
> > +-qemu_savevm_state_setup(snap_state.file);
> > + ret = qemu_file_get_error(snap_state.file);
> > +-
> > + if (ret < 0) {
> > + save_snapshot_error("qemu_savevm_state_setup failed");
> > + goto out;
> > +@@ -247,16 +240,13 @@ static void *process_savevm_thread(void *opaque)
> > + maxlen = blk_getlength(snap_state.target) - 30*1024*1024;
> > +
> > + if (pending_size > 40 && snap_state.bs_pos + pending_size < 
> > maxlen) {
> > +-qemu_mutex_lock_iothread();
> > + ret = qemu_savevm_state_iterate(snap_state.file, false);
> > + if (ret < 0) {
> > + save_snapshot_error("qemu_savevm_state_iterate error %d", 
> > ret);
> > + break;
> > + }
> > +-qemu_mutex_unlock_iothread();
> > + DPRINTF("savevm inerate pending size %lu ret %d\n", 
> > pending_size, ret);
> > + } else {
> > +-qemu_mutex_lock_iothread();
> > + qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
> > + ret = global_state_store();
> > + if (ret) {
> > +@@ -285,16 +275,12 @@ static void *process_savevm_thread(void *opaque)
> > + }
> > +
> > + qemu_bh_schedule(snap_state.cleanup_bh);
> > +-qemu_mutex_unlock_iothread();
> > +
> > + out:
> > + /* set migration state accordingly and clear soon-to-be stale file */
> > + migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP,
> > + 

[pve-devel] [PATCH common] fix #2696: avoid 'undefined value' warning in 'pvesh help'

2020-05-04 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 src/PVE/CLIHandler.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/CLIHandler.pm b/src/PVE/CLIHandler.pm
index 763cd60..9955d77 100644
--- a/src/PVE/CLIHandler.pm
+++ b/src/PVE/CLIHandler.pm
@@ -235,8 +235,8 @@ sub generate_usage_str {
 
}
} else {
+   $abort->("unknown command '$cmd->[0]'") if !$def;
my ($class, $name, $arg_param, $fixed_param, undef, 
$formatter_properties) = @$def;
-   $abort->("unknown command '$cmd'") if !$class;
 
$str .= $indent;
$str .= $class->usage_str($name, $prefix, $arg_param, $fixed_param, 
$format, $param_cb, $formatter_properties);
-- 
2.20.1




[pve-devel] [PATCH v2 manager 5/6] ui: CPUModelSelector: use API call for store

2020-05-04 Thread Stefan Reiter
CPU models are retrieved from the new /nodes/X/cpu call and ordered by
vendor to approximate the previous sort order (less change for accustomed
users).

With this, custom CPU models are now selectable via the GUI.

Signed-off-by: Stefan Reiter 
---

v2:
* Put vendor map and order map into PVE.Utils
* Add gettext for 'Custom'

I thought about the sorting method with the 'calculated' field Dominik
suggested, but I felt it just made the code a bit more confusing (since it
splits the ordering stuff across the file). Left it as is for now.

 www/manager6/Utils.js |  14 ++
 www/manager6/form/CPUModelSelector.js | 209 +-
 2 files changed, 83 insertions(+), 140 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 872b7c29..4301f00e 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1454,6 +1454,20 @@ Ext.define('PVE.Utils', { utilities: {
}
});
 },
+
+cpu_vendor_map: {
+   'default': 'QEMU',
+   'AuthenticAMD': 'AMD',
+   'GenuineIntel': 'Intel'
+},
+
+cpu_vendor_order: {
+   "AMD": 1,
+   "Intel": 2,
+   "QEMU": 3,
+   "Host": 4,
+   "_default_": 5, // includes custom models
+},
 },
 
 singleton: true,
diff --git a/www/manager6/form/CPUModelSelector.js 
b/www/manager6/form/CPUModelSelector.js
index 1d28ee88..deef23fb 100644
--- a/www/manager6/form/CPUModelSelector.js
+++ b/www/manager6/form/CPUModelSelector.js
@@ -1,9 +1,19 @@
+Ext.define('PVE.data.CPUModel', {
+extend: 'Ext.data.Model',
+fields: [
+   {name: 'name'},
+   {name: 'vendor'},
+   {name: 'custom'},
+   {name: 'displayname'}
+]
+});
+
 Ext.define('PVE.form.CPUModelSelector', {
 extend: 'Proxmox.form.ComboGrid',
 alias: ['widget.CPUModelSelector'],
 
-valueField: 'value',
-displayField: 'value',
+valueField: 'name',
+displayField: 'displayname',
 
 emptyText: Proxmox.Utils.defaultText + ' (kvm64)',
 allowBlank: true,
@@ -19,157 +29,76 @@ Ext.define('PVE.form.CPUModelSelector', {
columns: [
{
header: gettext('Model'),
-   dataIndex: 'value',
+   dataIndex: 'displayname',
hideable: false,
sortable: true,
-   flex: 2
+   flex: 3
},
{
header: gettext('Vendor'),
dataIndex: 'vendor',
hideable: false,
sortable: true,
-   flex: 1
+   flex: 2
}
],
-   width: 320
+   width: 360
 },
 
 store: {
-   fields: [ 'value', 'vendor' ],
-   data: [
-   {
-   value: 'athlon',
-   vendor: 'AMD'
-   },
-   {
-   value: 'phenom',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G1',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G2',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G3',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G4',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G5',
-   vendor: 'AMD'
-   },
-   {
-   value: 'EPYC',
-   vendor: 'AMD'
-   },
-   {
-   value: '486',
-   vendor: 'Intel'
-   },
-   {
-   value: 'core2duo',
-   vendor: 'Intel'
-   },
-   {
-   value: 'coreduo',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium2',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium3',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Conroe',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Penryn',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Nehalem',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Westmere',
-   vendor: 'Intel'
-   },
-   {
-   value: 'SandyBridge',
-   vendor: 'Intel'
-   },
-   {
-   value: 'IvyBridge',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Haswell',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Haswell-noTSX',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Broadwell',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Broadwell-noTSX',
-   vendor: 'Intel'
- 

[pve-devel] [PATCH v2 qemu-server 1/6] api: check Sys.Audit permissions when setting a custom CPU model

2020-05-04 Thread Stefan Reiter
Explicitly allows changing properties other than the cputype, even if
the currently set cputype is not accessible by the user. This way, an
administrator can assign a custom CPU type to a VM for a less privileged
user without breaking edit functionality for them.

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm | 25 +
 1 file changed, 25 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 7ffc538..4cf0a11 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,6 +21,7 @@ use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuServer::Drive;
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuMigrate;
 use PVE::RPCEnvironment;
@@ -236,6 +237,26 @@ my $create_disks = sub {
 return $vollist;
 };
 
+my $check_cpu_model_access = sub {
+my ($rpcenv, $authuser, $new, $existing) = @_;
+
+return if !defined($new->{cpu});
+
+my $cpu = PVE::JSONSchema::check_format('pve-vm-cpu-conf', $new->{cpu});
+return if !$cpu || !$cpu->{cputype}; # always allow default
+my $cputype = $cpu->{cputype};
+
+if ($existing && $existing->{cpu}) {
+   # changing only other settings doesn't require permissions for CPU model
+   my $existingCpu = PVE::JSONSchema::check_format('pve-vm-cpu-conf', 
$existing->{cpu});
+   return if $existingCpu->{cputype} eq $cputype;
+}
+
+if (PVE::QemuServer::CPUConfig::is_custom_model($cputype)) {
+   $rpcenv->check($authuser, "/nodes", ['Sys.Audit']);
+}
+};
+
 my $cpuoptions = {
 'cores' => 1,
 'cpu' => 1,
@@ -543,6 +564,8 @@ __PACKAGE__->register_method({
 
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ 
keys %$param]);
 
+   &$check_cpu_model_access($rpcenv, $authuser, $param);
+
foreach my $opt (keys %$param) {
if (PVE::QemuServer::is_valid_drivename($opt)) {
my $drive = PVE::QemuServer::parse_drive($opt, 
$param->{$opt});
@@ -1122,6 +1145,8 @@ my $update_vm_api  = sub {
die "checksum missmatch (file change by other user?)\n"
if $digest && $digest ne $conf->{digest};
 
+   &$check_cpu_model_access($rpcenv, $authuser, $param, $conf);
+
# FIXME: 'suspended' lock should probabyl be a state or "weak" lock?!
if (scalar(@delete) && grep { $_ eq 'vmstate'} @delete) {
if (defined($conf->{lock}) && $conf->{lock} eq 'suspended') {
-- 
2.20.1




[pve-devel] [PATCH v2 0/6] Custom CPU models API/GUI basics

2020-05-04 Thread Stefan Reiter
Permission handling, the beginnings of the API and getting the GUI to play nice
with custom models (no editor yet, but it'll behave as expected if a determined
user creates a custom model by editing the config).

First 3 patches are API stuff, 4 is an independent UI fix/cleanup, rest are new
GUI stuff.

v1 -> v2:
* fix things noted in Dominik's review (see notes on patches)


qemu-server: Stefan Reiter (2):
  api: check Sys.Audit permissions when setting a custom CPU model
  api: allow listing custom and default CPU models

 PVE/API2/Qemu.pm| 25 +++
 PVE/API2/Qemu/CPU.pm| 61 +
 PVE/API2/Qemu/Makefile  |  2 +-
 PVE/QemuServer/CPUConfig.pm | 30 ++
 4 files changed, 117 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/CPU.pm

manager: Stefan Reiter (4):
  api: register /nodes/X/cpu call for CPU models
  ui: ProcessorEdit: fix total core calculation and use view model
  ui: CPUModelSelector: use API call for store
  ui: ProcessorEdit: allow modifications with inaccessible CPU model

 PVE/API2/Nodes.pm |   7 +
 www/manager6/Utils.js |  14 ++
 www/manager6/form/CPUModelSelector.js | 209 +-
 www/manager6/qemu/ProcessorEdit.js| 104 +
 4 files changed, 168 insertions(+), 166 deletions(-)

-- 
2.20.1



[pve-devel] [PATCH v2 qemu-server 2/6] api: allow listing custom and default CPU models

2020-05-04 Thread Stefan Reiter
More API calls will follow for this path; for now, add the 'index' call to
list all custom and default CPU models.

Any user can list the default CPU models, as these are public anyway, but
custom models are restricted to users with Sys.Audit on /nodes.

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu/CPU.pm| 61 +
 PVE/API2/Qemu/Makefile  |  2 +-
 PVE/QemuServer/CPUConfig.pm | 30 ++
 3 files changed, 92 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/CPU.pm

diff --git a/PVE/API2/Qemu/CPU.pm b/PVE/API2/Qemu/CPU.pm
new file mode 100644
index 000..b0bb32d
--- /dev/null
+++ b/PVE/API2/Qemu/CPU.pm
@@ -0,0 +1,61 @@
+package PVE::API2::Qemu::CPU;
+
+use strict;
+use warnings;
+
+use PVE::RESTHandler;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::QemuServer::CPUConfig;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method({
+name => 'index',
+path => '',
+method => 'GET',
+description => 'List all custom and default CPU models.',
+permissions => {
+   user => 'all',
+   description => 'Only returns custom models when the current user has'
+. ' Sys.Audit on /nodes.',
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   },
+},
+returns => {
+   type => 'array',
+   items => {
+   type => 'object',
+   properties => {
+   name => {
+   type => 'string',
+   description => "Name of the CPU model. Identifies it for"
+. " subsequent API calls. Prefixed with"
+. " 'custom-' for custom models.",
+   },
+   custom => {
+   type => 'boolean',
+   description => "True if this is a custom CPU model.",
+   },
+   vendor => {
+   type => 'string',
+   description => "CPU vendor visible to the guest when this"
+. " model is selected. Vendor of"
+. " 'reported-model' in case of custom 
models.",
+   },
+   },
+   },
+   links => [ { rel => 'child', href => '{name}' } ],
+},
+code => sub {
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+   my $include_custom = $rpcenv->check($authuser, "/nodes", ['Sys.Audit'], 
1);
+
+   return PVE::QemuServer::CPUConfig::get_cpu_models($include_custom);
+}});
+
+1;
diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index 20c2a6c..f4b7be6 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm
+SOURCES=Agent.pm CPU.pm
 
 .PHONY: install
 install:
diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 61744dc..b884498 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -293,6 +293,36 @@ sub write_config {
 $class->SUPER::write_config($filename, $cfg);
 }
 
+sub get_cpu_models {
+my ($include_custom) = @_;
+
+my $models = [];
+
+for my $default_model (keys %{$cpu_vendor_list}) {
+   push @$models, {
+   name => $default_model,
+   custom => 0,
+   vendor => $cpu_vendor_list->{$default_model},
+   };
+}
+
+return $models if !$include_custom;
+
+my $conf = load_custom_model_conf();
+for my $custom_model (keys %{$conf->{ids}}) {
+   my $reported_model = $conf->{ids}->{$custom_model}->{'reported-model'};
+   $reported_model //= $cpu_fmt->{'reported-model'}->{default};
+   my $vendor = $cpu_vendor_list->{$reported_model};
+   push @$models, {
+   name => "custom-$custom_model",
+   custom => 1,
+   vendor => $vendor,
+   };
+}
+
+return $models;
+}
+
 sub is_custom_model {
 my ($cputype) = @_;
 return $cputype =~ m/^custom-/;
-- 
2.20.1




[pve-devel] [PATCH v2 manager 6/6] ui: ProcessorEdit: allow modifications with inaccessible CPU model

2020-05-04 Thread Stefan Reiter
An administrator can set a custom CPU model for a VM even when the general user
does not have permission to use that particular model. Prior to this change,
the ProcessorEdit component was broken in that situation, since the store of the
CPU type selector did not contain the configured CPU model.

Add it in manually if this situation occurs (with 'Unknown' vendor, since
we cannot retrieve it from the API), but warn the user that changing it
would be an irreversible action.

Signed-off-by: Stefan Reiter 
---

v2:
* Anchor displayname-regex with ^

 www/manager6/qemu/ProcessorEdit.js | 49 ++
 1 file changed, 49 insertions(+)

diff --git a/www/manager6/qemu/ProcessorEdit.js 
b/www/manager6/qemu/ProcessorEdit.js
index f437a82c..e65631e1 100644
--- a/www/manager6/qemu/ProcessorEdit.js
+++ b/www/manager6/qemu/ProcessorEdit.js
@@ -9,6 +9,7 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
data: {
socketCount: 1,
coreCount: 1,
+   showCustomModelPermWarning: false,
},
formulas: {
totalCoreCount: get => get('socketCount') * get('coreCount'),
@@ -66,6 +67,33 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
return values;
 },
 
+setValues: function(values) {
+   let me = this;
+
+   let type = values.cputype;
+   let typeSelector = me.lookupReference('cputype');
+   let typeStore = typeSelector.getStore();
+   typeStore.on('load', (store, records, success) => {
+   if (!success || !type || records.some(x => x.data.name === type)) {
+   return;
+   }
+
+   // if we get here, a custom CPU model is selected for the VM but we
+   // don't have permission to configure it - it will not be in the
+   // list retrieved from the API, so add it manually to allow changing
+   // other processor options
+   typeStore.add({
+   name: type,
+   displayname: type.replace(/^custom-/, ''),
+   custom: 1,
+   vendor: gettext("Unknown"),
+   });
+   typeSelector.select(type);
+   });
+
+   me.callParent([values]);
+},
+
 cpu: {},
 
 column1: [
@@ -99,6 +127,7 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
{
xtype: 'CPUModelSelector',
name: 'cputype',
+   reference: 'cputype',
fieldLabel: gettext('Type')
},
{
@@ -112,6 +141,18 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
},
 ],
 
+columnB: [
+   {
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   value: gettext('WARNING: You do not have permission to configure 
custom CPU types, if you change the type you will not be able to go back!'),
+   hidden: true,
+   bind: {
+   hidden: '{!showCustomModelPermWarning}',
+   },
+   },
+],
+
 advancedColumn1: [
{
xtype: 'proxmoxintegerfield',
@@ -199,6 +240,14 @@ Ext.define('PVE.qemu.ProcessorEdit', {
if (cpu.flags) {
data.flags = cpu.flags;
}
+
+   let caps = Ext.state.Manager.get('GuiCap');
+   if (data.cputype.indexOf('custom-') === 0 &&
+   !caps.nodes['Sys.Audit'])
+   {
+   let vm = ipanel.getViewModel();
+   vm.set("showCustomModelPermWarning", true);
+   }
}
me.setValues(data);
}
-- 
2.20.1




[pve-devel] [PATCH v2 manager 3/6] api: register /nodes/X/cpu call for CPU models

2020-05-04 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Depends on updated qemu-server.

 PVE/API2/Nodes.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 58497b2b..2ac838ea 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -34,6 +34,7 @@ use PVE::API2::Tasks;
 use PVE::API2::Scan;
 use PVE::API2::Storage::Status;
 use PVE::API2::Qemu;
+use PVE::API2::Qemu::CPU;
 use PVE::API2::LXC;
 use PVE::API2::LXC::Status;
 use PVE::API2::VZDump;
@@ -65,6 +66,11 @@ __PACKAGE__->register_method ({
 path => 'qemu',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Qemu::CPU",
+path => 'cpu',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::LXC",
 path => 'lxc',
@@ -241,6 +247,7 @@ __PACKAGE__->register_method ({
{ name => 'certificates' },
{ name => 'config' },
{ name => 'hosts' },
+   { name => 'cpu' },
];
 
push @$result, { name => 'sdn' } if $have_sdn;
-- 
2.20.1




[pve-devel] [PATCH v2 manager 4/6] ui: ProcessorEdit: fix total core calculation and use view model

2020-05-04 Thread Stefan Reiter
Clean up the code in ProcessorEdit with a view model and fix a bug while at
it: previously, pressing the 'Reset' button on the form would always reset
the total core count field to 1. Mark 'totalcores' with
'isFormField: false' to avoid that.

Signed-off-by: Stefan Reiter 
---

v2:
* Use 'isFormField: false' and drop setValues

 www/manager6/qemu/ProcessorEdit.js | 55 --
 1 file changed, 29 insertions(+), 26 deletions(-)

diff --git a/www/manager6/qemu/ProcessorEdit.js 
b/www/manager6/qemu/ProcessorEdit.js
index bc17e152..f437a82c 100644
--- a/www/manager6/qemu/ProcessorEdit.js
+++ b/www/manager6/qemu/ProcessorEdit.js
@@ -5,28 +5,18 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
 
 insideWizard: false,
 
-controller: {
-   xclass: 'Ext.app.ViewController',
-
-   updateCores: function() {
-   var me = this.getView();
-   var sockets = me.down('field[name=sockets]').getValue();
-   var cores = me.down('field[name=cores]').getValue();
-   me.down('field[name=totalcores]').setValue(sockets*cores);
-   var vcpus = me.down('field[name=vcpus]');
-   vcpus.setMaxValue(sockets*cores);
-   vcpus.setEmptyText(sockets*cores);
-   vcpus.validate();
+viewModel: {
+   data: {
+   socketCount: 1,
+   coreCount: 1,
},
+   formulas: {
+   totalCoreCount: get => get('socketCount') * get('coreCount'),
+   },
+},
 
-   control: {
-   'field[name=sockets]': {
-   change: 'updateCores'
-   },
-   'field[name=cores]': {
-   change: 'updateCores'
-   }
-   }
+controller: {
+   xclass: 'Ext.app.ViewController',
 },
 
 onGetValues: function(values) {
@@ -86,7 +76,10 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
maxValue: 4,
value: '1',
fieldLabel: gettext('Sockets'),
-   allowBlank: false
+   allowBlank: false,
+   bind: {
+   value: '{socketCount}',
+   },
},
{
xtype: 'proxmoxintegerfield',
@@ -95,8 +88,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
maxValue: 128,
value: '1',
fieldLabel: gettext('Cores'),
-   allowBlank: false
-   }
+   allowBlank: false,
+   bind: {
+   value: '{coreCount}',
+   },
+   },
 ],
 
 column2: [
@@ -109,8 +105,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
xtype: 'displayfield',
fieldLabel: gettext('Total cores'),
name: 'totalcores',
-   value: '1'
-   }
+   isFormField: false,
+   bind: {
+   value: '{totalCoreCount}',
+   },
+   },
 ],
 
 advancedColumn1: [
@@ -123,7 +122,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
fieldLabel: gettext('VCPUs'),
deleteEmpty: true,
allowBlank: true,
-   emptyText: '1'
+   emptyText: '1',
+   bind: {
+   emptyText: '{totalCoreCount}',
+   maxValue: '{totalCoreCount}',
+   },
},
{
xtype: 'numberfield',
-- 
2.20.1




Re: [pve-devel] [PATCH qemu 2/2] add optional buffer size to QEMUFile

2020-05-04 Thread Stefan Reiter

On 5/4/20 12:02 PM, Wolfgang Bumiller wrote:

and use 4M for our savevm-async buffer size

Signed-off-by: Wolfgang Bumiller 
---
  ...add-optional-buffer-size-to-QEMUFile.patch | 173 ++
  debian/patches/series |   1 +
  2 files changed, 174 insertions(+)
  create mode 100644 
debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch

diff --git a/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch 
b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
new file mode 100644
index 000..d990582
--- /dev/null
+++ b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
@@ -0,0 +1,173 @@
+From  Mon Sep 17 00:00:00 2001
+From: Wolfgang Bumiller 
+Date: Mon, 4 May 2020 11:05:08 +0200
+Subject: [PATCH] add optional buffer size to QEMUFile
+
+So we can use a 4M buffer for savevm-async which should
+increase performance storing the state onto ceph.
+
+Signed-off-by: Wolfgang Bumiller 
+---
+ migration/qemu-file.c | 33 +
+ migration/qemu-file.h |  1 +
+ savevm-async.c|  4 ++--
+ 3 files changed, 24 insertions(+), 14 deletions(-)
+
+diff --git a/migration/qemu-file.c b/migration/qemu-file.c
+index 1c3a358a14..b595d0ba34 100644
+--- a/migration/qemu-file.c
 b/migration/qemu-file.c
+@@ -30,7 +30,7 @@
+ #include "trace.h"
+ #include "qapi/error.h"
+
+-#define IO_BUF_SIZE 32768
++#define DEFAULT_IO_BUF_SIZE 32768
+ #define MAX_IOV_SIZE MIN(IOV_MAX, 64)
+
+ struct QEMUFile {
+@@ -45,7 +45,8 @@ struct QEMUFile {
+ when reading */
+ int buf_index;
+ int buf_size; /* 0 when writing */
+-uint8_t buf[IO_BUF_SIZE];
++size_t buf_allocated_size;
++uint8_t *buf;
+
+ DECLARE_BITMAP(may_free, MAX_IOV_SIZE);
+ struct iovec iov[MAX_IOV_SIZE];
+@@ -101,7 +102,7 @@ bool qemu_file_mode_is_not_valid(const char *mode)
+ return false;
+ }
+
+-QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++QEMUFile *qemu_fopen_ops_sized(void *opaque, const QEMUFileOps *ops, size_t 
buffer_size)
+ {
+ QEMUFile *f;
+
+@@ -109,9 +110,17 @@ QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps 
*ops)
+
+ f->opaque = opaque;
+ f->ops = ops;
++f->buf_allocated_size = buffer_size;
++f->buf = malloc(buffer_size);


Does this not require an explicit 'free' somewhere? E.g. qemu_fclose?
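One way this could look (untested sketch, assuming the buffer stays plain 
malloc()-allocated and qemu_fclose() remains the single teardown path), with 
the tail of qemu_fclose() gaining a matching free():

    int qemu_fclose(QEMUFile *f)
    {
        int ret;
        /* ... existing flush, ops->close() and error handling ... */
        free(f->buf);   /* matches the malloc() in qemu_fopen_ops_sized() */
        g_free(f);      /* existing: releases the QEMUFile itself */
        return ret;
    }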


++
+ return f;
+ }
+
++QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++{
++return qemu_fopen_ops_sized(opaque, ops, DEFAULT_IO_BUF_SIZE);
++}
++
+
+ void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks)
+ {
+@@ -346,7 +355,7 @@ static ssize_t qemu_fill_buffer(QEMUFile *f)
+ }
+
+ len = f->ops->get_buffer(f->opaque, f->buf + pending, f->pos,
+- IO_BUF_SIZE - pending, &local_error);
++ f->buf_allocated_size - pending, &local_error);
+ if (len > 0) {
+ f->buf_size += len;
+ f->pos += len;
+@@ -435,7 +444,7 @@ static void add_buf_to_iovec(QEMUFile *f, size_t len)
+ {
+ if (!add_to_iovec(f, f->buf + f->buf_index, len, false)) {
+ f->buf_index += len;
+-if (f->buf_index == IO_BUF_SIZE) {
++if (f->buf_index == f->buf_allocated_size) {
+ qemu_fflush(f);
+ }
+ }
+@@ -461,7 +470,7 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, 
size_t size)
+ }
+
+ while (size > 0) {
+-l = IO_BUF_SIZE - f->buf_index;
++l = f->buf_allocated_size - f->buf_index;
+ if (l > size) {
+ l = size;
+ }
+@@ -508,8 +517,8 @@ size_t qemu_peek_buffer(QEMUFile *f, uint8_t **buf, size_t 
size, size_t offset)
+ size_t index;
+
+ assert(!qemu_file_is_writable(f));
+-assert(offset < IO_BUF_SIZE);
+-assert(size <= IO_BUF_SIZE - offset);
++assert(offset < f->buf_allocated_size);
++assert(size <= f->buf_allocated_size - offset);
+
+ /* The 1st byte to read from */
+ index = f->buf_index + offset;
+@@ -559,7 +568,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+ size_t res;
+ uint8_t *src;
+
+-res = qemu_peek_buffer(f, &src, MIN(pending, IO_BUF_SIZE), 0);
++res = qemu_peek_buffer(f, &src, MIN(pending, f->buf_allocated_size), 
0);
+ if (res == 0) {
+ return done;
+ }
+@@ -593,7 +602,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+  */
+ size_t qemu_get_buffer_in_place(QEMUFile *f, uint8_t **buf, size_t size)
+ {
+-if (size < IO_BUF_SIZE) {
++if (size < f->buf_allocated_size) {
+ size_t res;
+ uint8_t *src;
+
+@@ -618,7 +627,7 @@ int qemu_peek_byte(QEMUFile *f, int offset)
+ int index = f->buf_index + offset;
+
+ assert(!qemu_file_is_writable(f));
+-assert(offset < IO_BUF_SIZE);
++assert(offset < f->buf_allocated_size);
+
+ if (index >= f->buf_size) {
+ qemu_fill_buffer(f);

Re: [pve-devel] [PATCH qemu 1/2] experimentally move savevm-async back into a coroutine

2020-05-04 Thread Stefan Reiter

Fixes the SIGSEGV issues on Ceph with snapshot and rollback for me, so:

Tested-by: Stefan Reiter 

Just for reference, I also bisected the bug this fixes to upstream 
commit 8c6b0356b53 ("util/async: make bh_aio_poll() O(1)"), i.e. it only 
breaks after this commit. Might be an upstream bug too somewhere? But I 
don't see an issue with doing this in a coroutine either.
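
For anyone comparing with the thread-based version, the qmp_savevm_start() 
side of this essentially swaps the thread creation for a coroutine. Roughly 
(sketch only; process_savevm_coro is the entry point from the patch, the 
scheduling call below is an assumption, not necessarily the exact hunk):

    /* old: run the save loop in a dedicated thread */
    qemu_thread_create(&snap_state.thread, "savevm-async",
                       process_savevm_thread, NULL, QEMU_THREAD_JOINABLE);

    /* new (sketch): run it as a coroutine in the main loop instead */
    snap_state.co = qemu_coroutine_create(process_savevm_coro, NULL);
    aio_co_schedule(qemu_get_aio_context(), snap_state.co);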


See also inline.

On 5/4/20 12:02 PM, Wolfgang Bumiller wrote:

Move qemu_savevm_state_{header,setup} into the main loop and
the rest of the iteration into a coroutine. The former need
to lock the iothread (and we can't unlock it in the
coroutine), and the latter can't deal with being in a
separate thread, so a coroutine it must be.

Signed-off-by: Wolfgang Bumiller 
---
  ...e-savevm-async-back-into-a-coroutine.patch | 111 ++
  debian/patches/series |   1 +
  2 files changed, 112 insertions(+)
  create mode 100644 
debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch

diff --git 
a/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch 
b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
new file mode 100644
index 000..f4945db
--- /dev/null
+++ b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
@@ -0,0 +1,111 @@
+From  Mon Sep 17 00:00:00 2001
+From: Wolfgang Bumiller 
+Date: Thu, 30 Apr 2020 15:55:37 +0200
+Subject: [PATCH] move savevm-async back into a coroutine
+
+Move qemu_savevm_state_{header,setup} into the main loop and
+the rest of the iteration into a coroutine. The former need
+to lock the iothread (and we can't unlock it in the
+coroutine), and the latter can't deal with being in a
+separate thread, so a coroutine it must be.
+
+Signed-off-by: Wolfgang Bumiller 
+---
+ savevm-async.c | 28 +---
+ 1 file changed, 9 insertions(+), 19 deletions(-)
+
+diff --git a/savevm-async.c b/savevm-async.c
+index a38b15d652..af865b9a0a 100644
+--- a/savevm-async.c
 b/savevm-async.c
+@@ -51,7 +51,7 @@ static struct SnapshotState {
+ QEMUFile *file;
+ int64_t total_time;
+ QEMUBH *cleanup_bh;
+-QemuThread thread;
++Coroutine *co;
+ } snap_state;
+
+ SaveVMInfo *qmp_query_savevm(Error **errp)
+@@ -201,11 +201,9 @@ static void process_savevm_cleanup(void *opaque)
+ int ret;
+ qemu_bh_delete(snap_state.cleanup_bh);
+ snap_state.cleanup_bh = NULL;
++snap_state.co = NULL;
+ qemu_savevm_state_cleanup();
+
+-qemu_mutex_unlock_iothread();
+-qemu_thread_join(&snap_state.thread);
+-qemu_mutex_lock_iothread();
+ ret = save_snapshot_cleanup();
+ if (ret < 0) {
+ save_snapshot_error("save_snapshot_cleanup error %d", ret);
+@@ -221,18 +219,13 @@ static void process_savevm_cleanup(void *opaque)
+ }
+ }
+
+-static void *process_savevm_thread(void *opaque)
++static void process_savevm_coro(void *opaque)
+ {
+ int ret;
+ int64_t maxlen;
+ MigrationState *ms = migrate_get_current();
+
+-rcu_register_thread();
+-
+-qemu_savevm_state_header(snap_state.file);
+-qemu_savevm_state_setup(snap_state.file);
+ ret = qemu_file_get_error(snap_state.file);
+-
+ if (ret < 0) {
+ save_snapshot_error("qemu_savevm_state_setup failed");
+ goto out;
+@@ -247,16 +240,13 @@ static void *process_savevm_thread(void *opaque)
+ maxlen = blk_getlength(snap_state.target) - 30*1024*1024;
+
+ if (pending_size > 40 && snap_state.bs_pos + pending_size < 
maxlen) {
+-qemu_mutex_lock_iothread();
+ ret = qemu_savevm_state_iterate(snap_state.file, false);
+ if (ret < 0) {
+ save_snapshot_error("qemu_savevm_state_iterate error %d", 
ret);
+ break;
+ }
+-qemu_mutex_unlock_iothread();
+ DPRINTF("savevm inerate pending size %lu ret %d\n", pending_size, 
ret);
+ } else {
+-qemu_mutex_lock_iothread();
+ qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
+ ret = global_state_store();
+ if (ret) {
+@@ -285,16 +275,12 @@ static void *process_savevm_thread(void *opaque)
+ }
+
+ qemu_bh_schedule(snap_state.cleanup_bh);
+-qemu_mutex_unlock_iothread();
+
+ out:
+ /* set migration state accordingly and clear soon-to-be stale file */
+ migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP,
+   ret ? MIGRATION_STATUS_FAILED : 
MIGRATION_STATUS_COMPLETED);
+ ms->to_dst_file = NULL;
+-
+-rcu_unregister_thread();
+-return NULL;
+ }
+
+ void qmp_savevm_start(bool has_statefile, const char *statefile, Error **errp)
+@@ -373,8 +359,12 @@ void qmp_savevm_start(bool has_statefile, const char 
*statefile, Error **errp)
+
+ snap_state.state = SAVE_STATE_ACTIVE;
+ snap_state.cleanup_bh = qemu_bh_new(process_savevm_cleanup, &snap_state);
+-qemu_thread_create(&snap_state.thread, "savevm-async", 

[pve-devel] [PATCH qemu 2/2] add optional buffer size to QEMUFile

2020-05-04 Thread Wolfgang Bumiller
and use 4M for our savevm-async buffer size

Signed-off-by: Wolfgang Bumiller 
---
 ...add-optional-buffer-size-to-QEMUFile.patch | 173 ++
 debian/patches/series |   1 +
 2 files changed, 174 insertions(+)
 create mode 100644 
debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
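
For context, the savevm-async.c part of the diffstat boils down to passing the
larger buffer where the QEMUFile is created, roughly like this (sketch; 'ops'
stands in for the QEMUFileOps the existing code already passes):

    /* sketch of the savevm-async.c call-site change */
    snap_state.file = qemu_fopen_ops_sized(&snap_state, ops,
                                           4 * 1024 * 1024 /* 4 MiB */);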

diff --git a/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch 
b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
new file mode 100644
index 000..d990582
--- /dev/null
+++ b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
@@ -0,0 +1,173 @@
+From  Mon Sep 17 00:00:00 2001
+From: Wolfgang Bumiller 
+Date: Mon, 4 May 2020 11:05:08 +0200
+Subject: [PATCH] add optional buffer size to QEMUFile
+
+So we can use a 4M buffer for savevm-async which should
+increase performance storing the state onto ceph.
+
+Signed-off-by: Wolfgang Bumiller 
+---
+ migration/qemu-file.c | 33 +
+ migration/qemu-file.h |  1 +
+ savevm-async.c|  4 ++--
+ 3 files changed, 24 insertions(+), 14 deletions(-)
+
+diff --git a/migration/qemu-file.c b/migration/qemu-file.c
+index 1c3a358a14..b595d0ba34 100644
+--- a/migration/qemu-file.c
 b/migration/qemu-file.c
+@@ -30,7 +30,7 @@
+ #include "trace.h"
+ #include "qapi/error.h"
+ 
+-#define IO_BUF_SIZE 32768
++#define DEFAULT_IO_BUF_SIZE 32768
+ #define MAX_IOV_SIZE MIN(IOV_MAX, 64)
+ 
+ struct QEMUFile {
+@@ -45,7 +45,8 @@ struct QEMUFile {
+ when reading */
+ int buf_index;
+ int buf_size; /* 0 when writing */
+-uint8_t buf[IO_BUF_SIZE];
++size_t buf_allocated_size;
++uint8_t *buf;
+ 
+ DECLARE_BITMAP(may_free, MAX_IOV_SIZE);
+ struct iovec iov[MAX_IOV_SIZE];
+@@ -101,7 +102,7 @@ bool qemu_file_mode_is_not_valid(const char *mode)
+ return false;
+ }
+ 
+-QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++QEMUFile *qemu_fopen_ops_sized(void *opaque, const QEMUFileOps *ops, size_t 
buffer_size)
+ {
+ QEMUFile *f;
+ 
+@@ -109,9 +110,17 @@ QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps 
*ops)
+ 
+ f->opaque = opaque;
+ f->ops = ops;
++f->buf_allocated_size = buffer_size;
++f->buf = malloc(buffer_size);
++
+ return f;
+ }
+ 
++QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++{
++return qemu_fopen_ops_sized(opaque, ops, DEFAULT_IO_BUF_SIZE);
++}
++
+ 
+ void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks)
+ {
+@@ -346,7 +355,7 @@ static ssize_t qemu_fill_buffer(QEMUFile *f)
+ }
+ 
+ len = f->ops->get_buffer(f->opaque, f->buf + pending, f->pos,
+- IO_BUF_SIZE - pending, &local_error);
++ f->buf_allocated_size - pending, &local_error);
+ if (len > 0) {
+ f->buf_size += len;
+ f->pos += len;
+@@ -435,7 +444,7 @@ static void add_buf_to_iovec(QEMUFile *f, size_t len)
+ {
+ if (!add_to_iovec(f, f->buf + f->buf_index, len, false)) {
+ f->buf_index += len;
+-if (f->buf_index == IO_BUF_SIZE) {
++if (f->buf_index == f->buf_allocated_size) {
+ qemu_fflush(f);
+ }
+ }
+@@ -461,7 +470,7 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, 
size_t size)
+ }
+ 
+ while (size > 0) {
+-l = IO_BUF_SIZE - f->buf_index;
++l = f->buf_allocated_size - f->buf_index;
+ if (l > size) {
+ l = size;
+ }
+@@ -508,8 +517,8 @@ size_t qemu_peek_buffer(QEMUFile *f, uint8_t **buf, size_t 
size, size_t offset)
+ size_t index;
+ 
+ assert(!qemu_file_is_writable(f));
+-assert(offset < IO_BUF_SIZE);
+-assert(size <= IO_BUF_SIZE - offset);
++assert(offset < f->buf_allocated_size);
++assert(size <= f->buf_allocated_size - offset);
+ 
+ /* The 1st byte to read from */
+ index = f->buf_index + offset;
+@@ -559,7 +568,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+ size_t res;
+ uint8_t *src;
+ 
+-res = qemu_peek_buffer(f, &src, MIN(pending, IO_BUF_SIZE), 0);
++res = qemu_peek_buffer(f, &src, MIN(pending, f->buf_allocated_size), 
0);
+ if (res == 0) {
+ return done;
+ }
+@@ -593,7 +602,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+  */
+ size_t qemu_get_buffer_in_place(QEMUFile *f, uint8_t **buf, size_t size)
+ {
+-if (size < IO_BUF_SIZE) {
++if (size < f->buf_allocated_size) {
+ size_t res;
+ uint8_t *src;
+ 
+@@ -618,7 +627,7 @@ int qemu_peek_byte(QEMUFile *f, int offset)
+ int index = f->buf_index + offset;
+ 
+ assert(!qemu_file_is_writable(f));
+-assert(offset < IO_BUF_SIZE);
++assert(offset < f->buf_allocated_size);
+ 
+ if (index >= f->buf_size) {
+ qemu_fill_buffer(f);
+@@ -770,7 +779,7 @@ static int qemu_compress_data(z_stream *stream, uint8_t 
*dest, size_t dest_len,
+ 

Re: [pve-devel] [PATCH manager 2/3] Allow setting targetstorage for offline migration

2020-05-04 Thread Fabian Grünbichler
On May 4, 2020 11:20 am, Tim Marx wrote:
> 
>> Fabian Ebner  hat am 4. Mai 2020 09:26 geschrieben:
>> 
>>  
>> On 5/2/20 11:40 AM, Alexandre DERUMIER wrote:
>> >>> The problem is that offline migration with target storage might not
>> >>> always work depending on supported export/import formats. Then users
>> >>> might start an offline migration, which then fails after a few disks
>> >>> have already been copied.
>> > 
>> > Hi,
>> > 
>> > Technically, it could be possible to support any kind of src/dst format 
>> > for offline migration,
>> > with creating an nbd server on target (with qemu-nbd for example).
>> > 
>> > Maybe it could be implemented as fallback, if any other method exist ?
>> > 
>> > 
>> 
>> Yes, that would be nice to have. Now I think that making the target 
>> storage option for offline migration available via GUI should wait until 
>> we either:
>> 1. have a way to error out early
>> or even better:
>> 2. can support all possible exports/imports with such a general fallback 
>> method.
>> 
 
> Couldn't we check this in the precondition api call or do I miss something?

precondition is not run in a worker, so we can't query ALL storages for 
ALL volumes, like the migration does.



Re: [pve-devel] [PATCH manager 2/3] Allow setting targetstorage for offline migration

2020-05-04 Thread Tim Marx

> Fabian Ebner  hat am 4. Mai 2020 09:26 geschrieben:
> 
>  
> On 5/2/20 11:40 AM, Alexandre DERUMIER wrote:
> >>> The problem is that offline migration with target storage might not
> >>> always work depending on supported export/import formats. Then users
> >>> might start an offline migration, which then fails after a few disks
> >>> have already been copied.
> > 
> > Hi,
> > 
> > Technically, it could be possible to support any kind of src/dst format for 
> > offline migration,
> > with creating an nbd server on target (with qemu-nbd for example).
> > 
> > Maybe it could be implemented as fallback, if any other method exist ?
> > 
> > 
> 
> Yes, that would be nice to have. Now I think that making the target 
> storage option for offline migration available via GUI should wait until 
> we either:
> 1. have a way to error out early
> or even better:
> 2. can support all possible exports/imports with such a general fallback 
> method.
> 
> > 
> > 
> > - Mail original -
> > De: "Fabian Ebner" 
> > À: "pve-devel" 
> > Envoyé: Jeudi 30 Avril 2020 13:15:59
> > Objet: Re: [pve-devel] [PATCH manager 2/3] Allow setting targetstorage for 
> > offline migration
> > 
> > Let's make this one an RFC ;)
> > The problem is that offline migration with target storage might not
> > always work depending on supported export/import formats. Then users
> > might start an offline migration, which then fails after a few disks
> > have already been copied.
> > If we had a check for the export/import formats before starting
> > migration, it would improve. But currently the erroring out happens on a
> > per disk basis inside storage_migrate.
> > 
> > So I'm not sure anymore if this is an improvement. If not, and if patch
> > #3 is fine, I'll send a v2 without this one.
> > 
> > On 30.04.20 12:59, Fabian Ebner wrote:
> >> Signed-off-by: Fabian Ebner 
> >> ---
> >> www/manager6/window/Migrate.js | 12 ++--
> >> 1 file changed, 2 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/www/manager6/window/Migrate.js 
> >> b/www/manager6/window/Migrate.js
> >> index 9fc66a9b..20e057ad 100644
> >> --- a/www/manager6/window/Migrate.js
> >> +++ b/www/manager6/window/Migrate.js
> >> @@ -44,13 +44,6 @@ Ext.define('PVE.window.Migrate', {
> >> return gettext('Offline');
> >> }
> >> },
> >> - setStorageselectorHidden: function(get) {
> >> - if (get('migration.with-local-disks') && get('running')) {
> >> - return false;
> >> - } else {
> >> - return true;
> >> - }
> >> - },
> >> setLocalResourceCheckboxHidden: function(get) {
> >> if (get('running') || !get('migration.hasLocalResources') ||
> >> Proxmox.UserName !== 'root@pam') {
> >> @@ -126,8 +119,7 @@ Ext.define('PVE.window.Migrate', {
> >> if (vm.get('migration.with-local-disks')) {
> >> params['with-local-disks'] = 1;
> >> }
> >> - //only submit targetstorage if vm is running, storage migration to 
> >> different storage is only possible online
> >> - if (vm.get('migration.with-local-disks') && vm.get('running')) {
> >> + if (vm.get('migration.with-local-disks')) {
> >> params.targetstorage = values.targetstorage;
> >> }
> >>
> >> @@ -352,7 +344,7 @@ Ext.define('PVE.window.Migrate', {
> >> fieldLabel: gettext('Target storage'),
> >> storageContent: 'images',
> >> bind: {
> >> - hidden: '{setStorageselectorHidden}'
> >> + hidden: '{!migration.with-local-disks}',
> >> }
> >> },
> >> {
> >>
> > 
> > 
> 

Couldn't we check this in the precondition api call or do I miss something?



[pve-devel] [PATCH v2 manager 2/2] Allow setting no target storage and make it default

2020-05-04 Thread Fabian Ebner
so the current disk locations can be preserved even if
there are multiple local disks. And users don't have to
manually select the current storage if there is only one
local disk.

Signed-off-by: Fabian Ebner 
---
 www/manager6/window/Migrate.js | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 42a54246..d97dd589 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -126,8 +126,9 @@ Ext.define('PVE.window.Migrate', {
if (vm.get('migration.with-local-disks')) {
params['with-local-disks'] = 1;
}
-   //only submit targetstorage if vm is running, storage migration to 
different storage is only possible online
-   if (vm.get('migration.with-local-disks') && vm.get('running')) {
+   //offline migration to a different storage currently might fail at 
a late stage
+   //(i.e. after some disks have been moved), so don't expose it yet 
in the GUI
+   if (vm.get('migration.with-local-disks') && vm.get('running') && 
values.targetstorage) {
params.targetstorage = values.targetstorage;
}
 
@@ -352,6 +353,9 @@ Ext.define('PVE.window.Migrate', {
name: 'targetstorage',
fieldLabel: gettext('Target storage'),
storageContent: 'images',
+   allowBlank: true,
+   autoSelect: false,
+   emptyText: 'use current layout',
bind: {
hidden: '{setStorageselectorHidden}'
}
-- 
2.20.1




[pve-devel] [PATCH v2 manager 1/2] Don't show empty parentheses when size is not known

2020-05-04 Thread Fabian Ebner
The size of VM state files and the size of unused disks not
referenced by any snapshot are not saved in the VM configuration,
so they're not available here either.

Signed-off-by: Fabian Ebner 
---

Changes from v1:
* use variable for size text and use format string
* drop patch exposing target storage for offline migration
* and adapt comment

 www/manager6/window/Migrate.js | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 61bc6a49..42a54246 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -266,10 +266,11 @@ Ext.define('PVE.window.Migrate', {
});
}
} else {
+   var size_string = disk.size ? '(' + 
PVE.Utils.render_size(disk.size) + ')' : '';
migration['with-local-disks'] = 1;
migration.preconditions.push({
-   text:'Migration with local disk might take 
long: ' + disk.volid
-   +' (' + 
PVE.Utils.render_size(disk.size) + ')',
+   text: Ext.String.format('Migration with 
local disk might take long: {0} {1}',
+ disk.volid, size_string),
severity: 'warning'
});
}
-- 
2.20.1




Re: [pve-devel] [PATCH manager 2/3] Allow setting targetstorage for offline migration

2020-05-04 Thread Fabian Ebner

On 5/2/20 11:40 AM, Alexandre DERUMIER wrote:

The problem is that offline migration with target storage might not
always work depending on supported export/import formats. Then users
might start an offline migration, which then fails after a few disks
have already been copied.


Hi,

Technically, it could be possible to support any kind of src/dst format for 
offline migration,
with creating an nbd server on target (with qemu-nbd for example).

Maybe it could be implemented as fallback, if any other method exist ?




Yes, that would be nice to have. Now I think that making the target 
storage option for offline migration available via GUI should wait until 
we either:

1. have a way to error out early
or even better:
2. can support all possible exports/imports with such a general fallback 
method.





- Mail original -
De: "Fabian Ebner" 
À: "pve-devel" 
Envoyé: Jeudi 30 Avril 2020 13:15:59
Objet: Re: [pve-devel] [PATCH manager 2/3] Allow setting targetstorage for 
offline migration

Let's make this one an RFC ;)
The problem is that offline migration with target storage might not
always work depending on supported export/import formats. Then users
might start an offline migration, which then fails after a few disks
have already been copied.
If we had a check for the export/import formats before starting
migration, it would improve. But currently the erroring out happens on a
per disk basis inside storage_migrate.

So I'm not sure anymore if this is an improvement. If not, and if patch
#3 is fine, I'll send a v2 without this one.

On 30.04.20 12:59, Fabian Ebner wrote:

Signed-off-by: Fabian Ebner 
---
www/manager6/window/Migrate.js | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 9fc66a9b..20e057ad 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -44,13 +44,6 @@ Ext.define('PVE.window.Migrate', {
return gettext('Offline');
}
},
- setStorageselectorHidden: function(get) {
- if (get('migration.with-local-disks') && get('running')) {
- return false;
- } else {
- return true;
- }
- },
setLocalResourceCheckboxHidden: function(get) {
if (get('running') || !get('migration.hasLocalResources') ||
Proxmox.UserName !== 'root@pam') {
@@ -126,8 +119,7 @@ Ext.define('PVE.window.Migrate', {
if (vm.get('migration.with-local-disks')) {
params['with-local-disks'] = 1;
}
- //only submit targetstorage if vm is running, storage migration to different 
storage is only possible online
- if (vm.get('migration.with-local-disks') && vm.get('running')) {
+ if (vm.get('migration.with-local-disks')) {
params.targetstorage = values.targetstorage;
}

@@ -352,7 +344,7 @@ Ext.define('PVE.window.Migrate', {
fieldLabel: gettext('Target storage'),
storageContent: 'images',
bind: {
- hidden: '{setStorageselectorHidden}'
+ hidden: '{!migration.with-local-disks}',
}
},
{





