Re: [pve-devel] applied-series: [PATCH v3 storage 1/5] Allow passing options to volume_has_feature

2020-04-07 Thread Fabian Grünbichler
On April 7, 2020 2:19 pm, Fabian Ebner wrote:
> On 06.04.20 11:52, Fabian Grünbichler wrote:
>> On April 6, 2020 10:46 am, Fabian Ebner wrote:
>>> On 27.03.20 10:09, Fabian Grünbichler wrote:
 with a small follow-up for vzdump (see separate mail), and comment below

 On March 23, 2020 12:18 pm, Fabian Ebner wrote:
> With the option valid_target_formats it's possible
> to let the caller specify possible formats for the target
> of an operation.
> [0]: If the option is not set, assume that every format is valid.
>
> In most cases the format of the target and the format
> of the source will agree (and therefore assumption [0] is
> not actually assuming very much and ensures backwards
> compatibility). But when cloning a volume on a storage
> using Plugin.pm's implementation (e.g. directory based
> storages), the result is always a qcow2 image.
>
> When cloning containers, the new option can be used to detect
> that qcow2 is not valid and hence the clone feature is not
> available.
>
> Signed-off-by: Fabian Ebner 
> ---

 it would be a good follow-up to think which other storage plugins should
 also implement this. e.g. plugins supporting subvol and raw/qcow2/vmdk -
 can they copy/clone from one to the other? future people might only look
 at Plugin.pm, see that such an option exists, and assume it works for all
 plugins/features ;)

>>>
>>> The copy operations are actually implemented as the non-full clone_disk
>>> for QEMU and as copy_volume in LXC. AFAIU they should always work.
>> 
>> right, those are actually not implemented in the storage layer
>> (anymore?), so checking there does not make much sense. I wonder whether
>> we even need the 'copy' feature then? export/import uses its own format
>> matching, and the full clone stuff is already in
>> pve-container/qemu-server with lots of special handling.
>> 
> 
> [2]: There is a single caller (except for the has_feature API calls) in 
> API2/Qemu.pm's clone_vm.
> 
> ("allowed" in the next paragraph means has_feature returns 1)
> Looking through the individual plugins, copy with "current" is always 
> allowed, and whenever the storage (and, for Plugin.pm, the format) supports 
> snapshots or base images, copy with "snap" or "base" respectively is 
> allowed. That is, except for ZFS, where copy with "snap" is not allowed.
> 
> But this is another difference between LXC and QEMU. For containers there is no 
> equivalent check like [2] and making a full clone from a snapshot 
> actually works. When I comment out the check for QEMU, I get:
> 
> qemu-img: Could not open '/dev/zvol/myzpool/vm-108-disk-0@adsf': Could 
> not open '/dev/zvol/myzpool/vm-108-disk-0@adsf': No such file or directory
> 
> So there is a purpose for checking the copy feature, although not the 
> best one, since copying from a ZFS snapshot isn't impossible in itself 
> as the LXC case shows.

we could implement copy from zvol snapshot as well - we'd need to query 
and potentially set the snapdev property (akin to a 'map the snapshot' 
action), and then reset it back to the original value ('unmap the 
snapshot').
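
A rough sketch of how such a map/unmap wrapper could look (the helper names
`run_cmd`/`run_first_line` and the overall shape are illustrative only, not
the actual ZFSPoolPlugin API):

```perl
# Illustrative only: wrap an action with 'map the snapshot' / 'unmap the
# snapshot' by toggling the snapdev property of the parent zvol.
# run_cmd/run_first_line are hypothetical stand-ins for PVE::Tools helpers.
sub with_mapped_snapshot {
    my ($zvol, $code) = @_;

    # remember the current snapdev value so it can be restored afterwards
    my $old = run_first_line('zfs', 'get', '-H', '-o', 'value', 'snapdev', $zvol);

    # expose /dev/zvol/<pool>/<vol>@<snap> device nodes
    run_cmd('zfs', 'set', 'snapdev=visible', $zvol);

    eval { $code->() };
    my $err = $@;

    # reset snapdev to its original value ('unmap the snapshot')
    run_cmd('zfs', 'set', "snapdev=$old", $zvol);

    die $err if $err;
}
```

The important part is restoring the previous value rather than
unconditionally setting `snapdev=hidden`, since an admin may have set it to
`visible` on purpose.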

> Also, wouldn't removing the copy feature technically also break the API?

yes. but we could remove it in the sense of 'we no longer use it for 
anything', and then drop it with 7.x for example. the question is 
whether it makes sense to keep it to encode the 'mapping/accessing a 
snapshot/base volume for a full clone is possible' feature, since the 
assumption that you can always copy the current one is probably true 
(and format conversion for cloning happens a level above the storage, or 
can fall back to storage_migrate with its mechanism).

> If we really want to remove it, we could replace the check in QEMU by an 
> explicit check for "full copy of ZFS from snapshot" or otherwise 
> implement that feature and drop the check entirely.

see above.

>> there's also still some issue/missing check for subvols of size 0 which
>> can't be moved to a sized volume, but that is unrelated.
>> 
>>>
>>> For clone operations, there is vdisk_clone in the storage module, but it
>>> doesn't take a format parameter. Except for GlusterfsPlugin.pm (which
>>> doesn't have its own volume_has_feature) and Plugin.pm the format of the
>>> target is the same as the format of the source. Shouldn't that be
>>> considered valid regardless of the valid_target_formats option?
>> 
>> you can't say it's valid per se - since for example raw->raw is not
>> possible, but raw->qcow2 is.
>> 
> 
> Yes, raw->raw is not possible, but raw would be a valid target format 
> from the caller's perspective. Since the caller apparently can handle 
> volumes with the format of the source, the storage backend can assume 
> it's fine if the target is in that format as well. That does not mean 
> that the operation will result in that format, just that it would be 
> something the caller could deal with.
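
So the intended semantics boil down to roughly this check on the storage side
(a sketch only; `clone_target_format_ok` is a hypothetical helper, not the
actual Plugin.pm code):

```perl
# Hypothetical helper illustrating the valid_target_formats semantics.
sub clone_target_format_ok {
    my ($produced_format, $opts) = @_;

    my $valid = $opts->{valid_target_formats};

    # [0]: no restriction given -> assume every format is valid
    return 1 if !defined($valid);

    # otherwise the format the operation produces must be acceptable
    return (grep { $_ eq $produced_format } @$valid) ? 1 : 0;
}

# Directory storages always produce qcow2 clones, so a container clone
# passing e.g. ['raw', 'subvol'] would make the clone feature unavailable.
```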
> 
> 

[pve-devel] applied: [PATCH pve-common] Inotify: read_interfaces : remove trailing whitespaces

2020-04-07 Thread Thomas Lamprecht
On 4/5/20 10:33 AM, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier 
> ---
>  src/PVE/INotify.pm | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/src/PVE/INotify.pm b/src/PVE/INotify.pm
> index 3fa5af6..871deda 100644
> --- a/src/PVE/INotify.pm
> +++ b/src/PVE/INotify.pm
> @@ -924,6 +924,8 @@ sub __read_etc_network_interfaces {
>  
>   while (defined ($line = <$fh>)) {
>   chomp $line;
> + $line =~ s/\s+$//;
> +
>   if ($line =~ m/^\s*#(.*?)\s*$/) {
>   $f->{comments} = '' if !$f->{comments};
>   my $comment = decode('UTF-8', $1);
> 

applied, but dropped the "chomp" line as a follow-up; it really shouldn't be
required anymore. Thanks!
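
For context, the reason chomp becomes redundant: `\s` matches the trailing
newline as well, so `s/\s+$//` strips both the whitespace and the line ending
in one go:

```perl
my $line = "iface eth0 inet static \t \n";
$line =~ s/\s+$//;    # removes trailing spaces, tabs *and* the newline
# no separate chomp needed afterwards
```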

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH storage] fix #2474: always show iscsi content

2020-04-07 Thread Thomas Lamprecht
On 4/7/20 9:25 AM, Dominik Csapak wrote:
> Instead of relying on list_volumes of Plugin.pm (which filters by
> the content types set in the config), use our own to always
> show the luns of an iscsi.
> 
> This makes sense here, since we need it to show the luns when using
> it as base storage for LVM (where we have content type 'none' set).
> 
> It does not interfere with the rest of the GUI, since on e.g. disk
> creation, we already filter the storages in the dropdown by content
> type, iow. an iscsi storage used this way still does not show up
> when trying to create a disk.
> 
> This also shows the luns now in the 'Content' tab, but this is also
> OK, since the user cannot actually do anything there with the luns.
> (Besides looking at them)
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/Storage/ISCSIPlugin.pm | 14 ++
>  1 file changed, 14 insertions(+)
> 

applied, thanks!



Re: [pve-devel] [RFC v2 manager 1/5] api: backup: add endpoint to list included guests and volumes

2020-04-07 Thread Thomas Lamprecht
On 4/6/20 4:24 PM, Aaron Lauterer wrote:
> This patch adds a new API endpoint that returns a list of included
> guests, their volumes and whether they are included in a backup.
> 
> The output is formatted to be used with the extJS tree panel.
> 
> Signed-off-by: Aaron Lauterer 
> ---
> v1 -> v2:
> * simplified the code
> * refactored according to feedback from fabian [1]
> 
> The call to PVE::VZDump->get_included_guests($job) will definitely
> change as that patch needs another version [0]
> 
> [0] https://pve.proxmox.com/pipermail/pve-devel/2020-April/042795.html
> [1] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041361.html
> 
>  PVE/API2/Backup.pm | 176 +
>  1 file changed, 176 insertions(+)
> 
> diff --git a/PVE/API2/Backup.pm b/PVE/API2/Backup.pm
> index 86377c0a..685a196c 100644
> --- a/PVE/API2/Backup.pm
> +++ b/PVE/API2/Backup.pm
> @@ -324,4 +324,180 @@ __PACKAGE__->register_method({
>   die "$@" if ($@);
>  }});
>  
> +__PACKAGE__->register_method({
> +name => 'get_volume_backup_included',
> +path => '{id}/included_volumes',
> +method => 'GET',
> +protected => 1,
> +description => "Returns included guests and the backup status of their 
> disks. Optimized to be used in ExtJS tree views.",
> +permissions => {
> + check => ['perm', '/', ['Sys.Audit']],
> +},
> +parameters => {
> + additionalProperties => 0,
> + properties => {
> + id => $vzdump_job_id_prop
> + },
> +},
> +returns => {
> + type => 'object',
> + description => 'Root node of the tree object. Children represent 
> guests, grandchildren represent volumes of that guest.',
> + properties => {
> + not_all_permissions => {
> + type => 'boolean',
> + optional => 1,
> + description => 'Wheter the user is missing permissions to view 
> all guests.',

s/Wheter/Whether/

> + },
> + children => {
> + type => 'array',
> + items => {
> + type => 'object',
> + properties => {
> + id => {
> + type => 'integer',
> + description => 'VMID of the guest.',
> + },
> + name => {
> + type => 'string',
> + description => 'Name of the guest',
> + optional => 1,
> + },
> + type => {
> + type => 'string',
> + description => 'Type of the guest, VM or CT.',
> + enum => ['qemu', 'lxc', 'unknown'],

We die in the unknown case, so that cannot be returned

> + },
> + children => {
> + type => 'array',
> + optional => 1,
> + description => 'The volumes of the guest with the 
> information if they will be included in backups.',
> + items => {
> + type => 'object',
> + properties => {
> + id => {
> + type => 'string',
> + description => 'Configuration key of 
> the volume.',
> + },
> + name => {
> + type => 'string',
> + description => 'Name of the volume.',
> + },
> + included => {
> + type => 'boolean',
> + description => 'Wheter the volume is 
> included in the backup or not.',
> + },
> + reason => {
> + type => 'string',
> + description => 'The reason why the 
> volume is included (or excluded).',
> + },
> + },
> + },
> + },
> + },
> + },
> + },
> + },
> +},
> +code => sub {
> + my ($param) = @_;
> +
> + my $rpcenv = PVE::RPCEnvironment::get();
> +
> + my $user = $rpcenv->get_user();
> +
> + my $vzconf = cfs_read_file('vzdump.cron');
> + my $all_jobs = $vzconf->{jobs} || [];
> + my $job;
> + my $rrd = PVE::Cluster::rrd_dump();
> +
> + foreach my $j (@$all_jobs) {
> + $job = $j if $j->{id} eq $param->{id};

This could let the reader think that this is a bug, as it looks like it gets
overwritten; plus, with a huge amount of jobs you would iterate over a lot of
them for nothing, if the $job with the requested id was found early. Rather do:

if ($j->{id} eq 
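
The archived message is cut off here, but the suggested shape is presumably
the usual find-and-stop loop (a sketch filling in the truncated part, not the
actual wording of the reply):

```perl
# presumable completion of the truncated suggestion: stop iterating as
# soon as the requested job id is found
foreach my $j (@$all_jobs) {
    if ($j->{id} eq $param->{id}) {
        $job = $j;
        last;
    }
}
```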

Re: [pve-devel] applied-series: [PATCH pve-common/qemu-server 0/9] refactoring and storagemap

2020-04-07 Thread Tim Marx
> Thomas Lamprecht  hat am 7. April 2020 17:19 
> geschrieben:
> 
>  
> On 3/30/20 1:41 PM, Fabian Grünbichler wrote:
> > another round of preparatory patches for remote migration.
> > 
> > qemu-server's #1 is an unrelated follow-up fix,
> > #2-4 are refactoring,
> > #5+6 are new features/checks,
> > #7+8 are refactorings as preparation for remote migration which will
> > call the nbd allocation and vm_start_nolock in separate steps, and use
> > the returned hash instead of parsing STDOUT (these can be skipped for
> > now, it's just a hint at where I am going ;))
> 
> 
> applied the remaining parts of the series, with a small followup (the `keys 
> %hash`
> vs. `values %hash` thing).
> 
> Tim, care to integrate this in the webinterface for migrations? Maybe with the
> advanced checkbox, as otherwise the migrate dialog could get really crowded.

yep will take a look



[pve-devel] applied-series: [PATCH v10 0/6] Add basics for custom CPU models

2020-04-07 Thread Thomas Lamprecht
On 4/7/20 3:56 PM, Stefan Reiter wrote:
> Based on the RFC and following on- and off-list discussion about custom CPU
> models [0].
> 
> In essence, this revised patch allows a user to specify custom CPU models in
> /etc/pve/cpu-models.conf (section-config style [1]), where VMs using that CPU
> model inherit details from the definition. This removes any fragile
> "auto-magical" CPU flag detection, while still giving the user a way to create
> VMs with the best possible subset of CPU features maintaining live-migration
> compatibility.
> 
> Includes the infrastructure for broadcasting supported CPU flags for each
> cluster-node via the key-value store - this is not necessary for the
> custom-cpu feature in particular, but I think could prove useful for
> implementing the GUI part (e.g. show the user which flags are supported on 
> which
> nodes).
> 
> v9 -> v10:
> * rebase on master
> * add helper for running QEMU CPU flag reading
> * improved 'phys-bits' format and according test case
> 
> v8 -> v9:
> * rebase remaining patches on master
> 
> v7 -> v8:
> * rebase on master, fix tests now using +pve0
> * fixed nits noted by Thomas
> * moved live-migration/snapshot patches forward to avoid temporary breakage
> * fix CPU hotplug with custom CPUs
> * guard mkdir with check_cfs_is_mounted and also run before write
> * fix rebase-error in cfg2cmd tests (getaddrinfo_all)
> * dropped applied patches
> 
> 
> < see [2] for older history >
> 
> 
> [0]: https://pve.proxmox.com/pipermail/pve-devel/2019-July/038268.html
> [1]: e.g.:
> cpu-model: custom-cpu-name
> host-phys-bits 1
> flags +aes;+avx;+avx2
> reported-model kvm64
> [2] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041278.html
> 
> 
> qemu-server: Stefan Reiter (6):
>   Include "-cpu" parameter with live-migration
>   Include "-cpu" parameter with snapshots/suspend
>   Add helpers to better structure CPU option handling
>   Rework get_cpu_options and allow custom CPU models
>   fix #2318: allow phys-bits CPU setting
>   cfg2cmd: add test cases for custom CPU models
> 
>  PVE/API2/Qemu.pm  |   8 +
>  PVE/QemuConfig.pm |  26 +-
>  PVE/QemuMigrate.pm|  15 +
>  PVE/QemuServer.pm |  42 ++-
>  PVE/QemuServer/CPUConfig.pm   | 310 +++---
>  test/cfg2cmd/custom-cpu-model-defaults.conf   |   8 +
>  .../custom-cpu-model-defaults.conf.cmd|  24 ++
>  .../custom-cpu-model-host-phys-bits.conf  |   8 +
>  .../custom-cpu-model-host-phys-bits.conf.cmd  |  27 ++
>  test/cfg2cmd/custom-cpu-model.conf|   8 +
>  test/cfg2cmd/custom-cpu-model.conf.cmd|  27 ++
>  test/cfg2cmd/efi-raw-old.conf.cmd |   2 +-
>  test/cfg2cmd/efi-raw.conf.cmd |   2 +-
>  test/cfg2cmd/i440fx-win10-hostpci.conf.cmd|   2 +-
>  test/cfg2cmd/minimal-defaults.conf.cmd|   2 +-
>  test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
>  .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
>  test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
>  test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
>  test/cfg2cmd/simple1.conf.cmd |   2 +-
>  test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
>  test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
>  test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
>  test/cfg2cmd/spice-win.conf.cmd   |   2 +-
>  test/run_config2command_tests.pl  |  23 ++
>  25 files changed, 471 insertions(+), 81 deletions(-)
>  create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
>  create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
>  create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
>  create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
>  create mode 100644 test/cfg2cmd/custom-cpu-model.conf
>  create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd
> 

applied series, with addressing the trivial merge conflict due to the new
targetstorage format in the API in the first patch. Much thanks.



[pve-devel] applied-series: [PATCH pve-common/qemu-server 0/9] refactoring and storagemap

2020-04-07 Thread Thomas Lamprecht
On 3/30/20 1:41 PM, Fabian Grünbichler wrote:
> another round of preparatory patches for remote migration.
> 
> qemu-server's #1 is an unrelated follow-up fix,
> #2-4 are refactoring,
> #5+6 are new features/checks,
> #7+8 are refactorings as preparation for remote migration which will
> call the nbd allocation and vm_start_nolock in separate steps, and use
> the returned hash instead of parsing STDOUT (these can be skipped for
> now, it's just a hint at where I am going ;))


applied the remaining parts of the series, with a small followup (the `keys 
%hash`
vs. `values %hash` thing).

Tim, care to integrate this in the webinterface for migrations? Maybe with the
advanced checkbox, as otherwise the migrate dialog could get really crowded.




Re: [pve-devel] [PATCH access-control v2] auth ldap/ad: make password a parameter for the api

2020-04-07 Thread Thomas Lamprecht
On 4/7/20 1:11 PM, Dominik Csapak wrote:
> Instead of simply requiring it to exist in /etc/pve.
> 
> Takes after the password handling of CIFS in pve-storage.
> 
> Signed-off-by: Dominik Csapak 
> ---
> changes from v1:
> * delete pw when given via 'delete' parameter
> * do not delete pw when updating without giving 'password' parameter



> diff --git a/PVE/Auth/LDAP.pm b/PVE/Auth/LDAP.pm
> index 905cc47..1b2c606 100755
> --- a/PVE/Auth/LDAP.pm
> +++ b/PVE/Auth/LDAP.pm
> @@ -37,6 +37,11 @@ sub properties {
>   optional => 1,
>   maxLength => 256,
>   },
> + password => {
> + description => "LDAP bind password. Will be stored in 
> '/etc/pve/priv/ldap/.pw'.",
> + type => 'string',
> + optional => 1,
> + },
>   verify => {
>   description => "Verify the server's SSL certificate",
>   type => 'boolean',

> @@ -185,7 +191,7 @@ sub connect_and_bind {
>  
>  if ($config->{bind_dn}) {
>   $bind_dn = $config->{bind_dn};
> - $bind_pass = 
> PVE::Tools::file_read_firstline("/etc/pve/priv/ldap/${realm}.pw");
> + $bind_pass = ldap_get_credentials($realm);
>   die "missing password for realm $realm\n" if !defined($bind_pass);
>  }
>  
> @@ -343,4 +349,69 @@ sub authenticate_user {
>  return 1;
>  }
>  
> +my $ldap_pw_dir = "/etc/pve/priv/ldap";
> +
> +sub ldap_cred_filename {
> +my ($realm) = @_;
> +return "${ldap_pw_dir}/${realm}.pw";
> +}
> +


looks mostly ok from a quick whiff, albeit I'd like to have the .pw
file in a priv/realm/ directory, ldap is "wrong" here, we also use
priv/storage/ as base directory for CIFS and PBS, not priv/cifs and
priv/pbs ..



[pve-devel] [PATCH v10 qemu-server 2/6] Include "-cpu" parameter with snapshots/suspend

2020-04-07 Thread Stefan Reiter
Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.

Save the "-cpu" argument as a new "runningcpu" option into the VM conf
akin to "runningmachine" and use as override during rollback/resume.

No functional change with non-custom CPU types intended.

Signed-off-by: Stefan Reiter 
---

Changes in v10:
* Use PVE::QemuServer::CPUConfig::get_cpu_from_running_vm helper

 PVE/API2/Qemu.pm  |  1 +
 PVE/QemuConfig.pm | 26 +++---
 PVE/QemuServer.pm | 31 ++-
 3 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9e453d2..ff61e4d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1129,6 +1129,7 @@ my $update_vm_api  = sub {
push @delete, 'lock'; # this is the real deal to write it out
}
push @delete, 'runningmachine' if $conf->{runningmachine};
+   push @delete, 'runningcpu' if $conf->{runningcpu};
}
 
PVE::QemuConfig->check_lock($conf) if !$skiplock;
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index f32618e..d29b88b 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::AbstractConfig;
 use PVE::INotify;
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -186,15 +187,20 @@ sub __snapshot_save_vmstate {
 my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 
'raw', $name, $size*1024);
 my $runningmachine = 
PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
-if ($suspend) {
-   $conf->{vmstate} = $statefile;
-   $conf->{runningmachine} = $runningmachine;
-} else {
-   my $snap = $conf->{snapshots}->{$snapname};
-   $snap->{vmstate} = $statefile;
-   $snap->{runningmachine} = $runningmachine;
+# get current QEMU -cpu argument to ensure consistency of custom CPU models
+my $runningcpu;
+if (my $pid = PVE::QemuServer::check_running($vmid)) {
+   $runningcpu = PVE::QemuServer::CPUConfig::get_cpu_from_running_vm($pid);
+}
+
+if (!$suspend) {
+   $conf = $conf->{snapshots}->{$snapname};
 }
 
+$conf->{vmstate} = $statefile;
+$conf->{runningmachine} = $runningmachine;
+$conf->{runningcpu} = $runningcpu;
+
 return $statefile;
 }
 
@@ -340,6 +346,11 @@ sub __snapshot_rollback_hook {
if (defined($conf->{runningmachine})) {
$data->{forcemachine} = $conf->{runningmachine};
delete $conf->{runningmachine};
+
+   # runningcpu is newer than runningmachine, so assume it only exists
+   # here, if at all
+   $data->{forcecpu} = delete $conf->{runningcpu}
+   if defined($conf->{runningcpu});
} else {
# Note: old code did not store 'machine', so we try to be smart
# and guess the snapshot was generated with kvm 1.4 (pc-i440fx-1.4).
@@ -395,6 +406,7 @@ sub __snapshot_rollback_vm_start {
 my $params = {
statefile => $vmstate,
forcemachine => $data->{forcemachine},
+   forcecpu => $data->{forcecpu},
 };
 PVE::QemuServer::vm_start($storecfg, $vmid, $params);
 }
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a0f8429..2d5ae7d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -567,8 +567,15 @@ EODESCR
optional => 1,
 }),
 runningmachine => get_standard_option('pve-qemu-machine', {
-   description => "Specifies the Qemu machine type of the running vm. This 
is used internally for snapshots.",
+   description => "Specifies the QEMU machine type of the running vm. This 
is used internally for snapshots.",
 }),
+runningcpu => {
+   description => "Specifies the QEMU '-cpu' parameter of the running vm. 
This is used internally for snapshots.",
+   optional => 1,
+   type => 'string',
+   pattern => $PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re,
+   format_description => 'QEMU -cpu parameter'
+},
 machine => get_standard_option('pve-qemu-machine'),
 arch => {
description => "Virtual processor architecture. Defaults to the host.",
@@ -1948,7 +1955,8 @@ sub json_config_properties {
 my $prop = shift;
 
 foreach my $opt (keys %$confdesc) {
-   next if $opt eq 'parent' || $opt eq 'snaptime' || $opt eq 'vmstate' || 
$opt eq 'runningmachine';
+   next if $opt eq 'parent' || $opt eq 'snaptime' || $opt eq 'vmstate' ||
+   $opt eq 'runningmachine' || $opt eq 'runningcpu';
$prop->{$opt} = $confdesc->{$opt};
 }
 
@@ -4845,14 +4853,16 @@ sub vm_start_nolock {
 PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'pre-start', 1);
 
 my $forcemachine = $params->{forcemachine};
+my $forcecpu = $params->{forcecpu};
 if ($resume) {
-   # enforce machine 

[pve-devel] [PATCH v10 qemu-server 4/6] Rework get_cpu_options and allow custom CPU models

2020-04-07 Thread Stefan Reiter
If a cputype is custom (check via prefix), try to load options from the
custom CPU model config, and set values accordingly.

While at it, extract the currently hardcoded values into a separate sub and add
reasoning.

Since the new flag resolving outputs flags in sorted order for
consistency, adapt the test cases to not break. Only the order is
changed, not which flags are present.

Signed-off-by: Stefan Reiter 
Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 
---
 PVE/QemuServer/CPUConfig.pm   | 191 +-
 test/cfg2cmd/efi-raw-old.conf.cmd |   2 +-
 test/cfg2cmd/efi-raw.conf.cmd |   2 +-
 test/cfg2cmd/i440fx-win10-hostpci.conf.cmd|   2 +-
 test/cfg2cmd/minimal-defaults.conf.cmd|   2 +-
 test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
 test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/simple1.conf.cmd |   2 +-
 test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
 test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
 test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
 test/cfg2cmd/spice-win.conf.cmd   |   2 +-
 14 files changed, 154 insertions(+), 63 deletions(-)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 9fa6af9..af31b2b 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -384,99 +384,190 @@ sub print_cpuflag_hash {
 return $formatted;
 }
 
+sub parse_cpuflag_list {
+my ($re, $reason, $flaglist) = @_;
+
+my $res = {};
+return $res if !$flaglist;
+
+foreach my $flag (split(";", $flaglist)) {
+   if ($flag =~ $re) {
+   $res->{$2} = { op => $1, reason => $reason };
+   }
+}
+
+return $res;
+}
+
 # Calculate QEMU's '-cpu' argument from a given VM configuration
 sub get_cpu_options {
 my ($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, 
$gpu_passthrough) = @_;
 
-my $cpuFlags = [];
-my $ostype = $conf->{ostype};
-
-my $cpu = $kvm ? "kvm64" : "qemu64";
+my $cputype = $kvm ? "kvm64" : "qemu64";
 if ($arch eq 'aarch64') {
-   $cpu = 'cortex-a57';
-}
-my $hv_vendor_id;
-if (my $cputype = $conf->{cpu}) {
-   my $cpuconf = PVE::JSONSchema::parse_property_string($cpu_fmt, $cputype)
-   or die "Cannot parse cpu description: $cputype\n";
-   $cpu = $cpuconf->{cputype};
-   $kvm_off = 1 if $cpuconf->{hidden};
-   $hv_vendor_id = $cpuconf->{'hv-vendor-id'};
-
-   if (defined(my $flags = $cpuconf->{flags})) {
-   push @$cpuFlags, split(";", $flags);
-   }
+   $cputype = 'cortex-a57';
 }
 
-push @$cpuFlags , '+lahf_lm' if $cpu eq 'kvm64' && $arch eq 'x86_64';
+my $cpu = {};
+my $custom_cpu;
+my $hv_vendor_id;
+if (my $cpu_prop_str = $conf->{cpu}) {
+   $cpu = parse_vm_cpu_conf($cpu_prop_str)
+   or die "Cannot parse cpu description: $cpu_prop_str\n";
 
-push @$cpuFlags , '-x2apic' if $ostype && $ostype eq 'solaris';
+   $cputype = $cpu->{cputype};
 
-push @$cpuFlags, '+sep' if $cpu eq 'kvm64' || $cpu eq 'kvm32';
+   if (is_custom_model($cputype)) {
+   $custom_cpu = get_custom_model($cputype);
 
-push @$cpuFlags, '-rdtscp' if $cpu =~ m/^Opteron/;
+   $cputype = $custom_cpu->{'reported-model'} //
+   $cpu_fmt->{'reported-model'}->{default};
+   $kvm_off = $custom_cpu->{hidden}
+   if defined($custom_cpu->{hidden});
+   $hv_vendor_id = $custom_cpu->{'hv-vendor-id'};
+   }
 
-if (min_version($machine_version, 2, 3) && $arch eq 'x86_64') {
+   # VM-specific settings override custom CPU config
+   $kvm_off = $cpu->{hidden}
+   if defined($cpu->{hidden});
+   $hv_vendor_id = $cpu->{'hv-vendor-id'}
+   if defined($cpu->{'hv-vendor-id'});
+}
 
-   push @$cpuFlags , '+kvm_pv_unhalt' if $kvm;
-   push @$cpuFlags , '+kvm_pv_eoi' if $kvm;
+my $pve_flags = get_pve_cpu_flags($conf, $kvm, $cputype, $arch,
+ $machine_version);
+
+my $hv_flags = get_hyperv_enlightenments($winversion, $machine_version,
+   $conf->{bios}, $gpu_passthrough, $hv_vendor_id) if $kvm;
+
+my $custom_cputype_flags = parse_cpuflag_list($cpu_flag_any_re,
+   "set by custom CPU model", $custom_cpu->{flags});
+
+my $vm_flags = parse_cpuflag_list($cpu_flag_supported_re,
+   "manually set for VM", $cpu->{flags});
+
+my $pve_forced_flags = {};
+$pve_forced_flags->{'enforce'} = {
+   reason => "error if requested CPU settings not available",
+} if $cputype ne 'host' && $kvm && $arch eq 'x86_64';
+$pve_forced_flags->{'kvm'} = {
+   value => "off",
+   reason => "hide KVM virtualization from guest",
+} if $kvm_off;
+
+# $cputype is the "reported-model" for custom types, so we can 

[pve-devel] [PATCH v10 qemu-server 5/6] fix #2318: allow phys-bits CPU setting

2020-04-07 Thread Stefan Reiter
Can be specified for a particular VM or via a custom CPU model (VM takes
precedence).

QEMU's default limit only allows up to 1TB of RAM per VM. Increasing the
physical address bits available to a VM can fix this.

Signed-off-by: Stefan Reiter 
---

Changes in v10:
* Change phys-bits format to 'host'/8-64 instead of separate 'host-phys-bits'
  flag

 PVE/QemuServer/CPUConfig.pm | 41 +
 1 file changed, 41 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index af31b2b..b08588e 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -149,8 +149,36 @@ my $cpu_fmt = {
pattern => qr/$cpu_flag_any_re(;$cpu_flag_any_re)*/,
optional => 1,
 },
+'phys-bits' => {
+   type => 'string',
+   format => 'pve-phys-bits',
+   description => "The physical memory address bits that are reported to"
+. " the guest OS. Should be smaller or equal to the 
host's."
+. " Set to 'host' to use value from host CPU, but note 
that"
+. " doing so will break live migration to CPUs with other 
values.",
+   optional => 1,
+},
 };
 
PVE::JSONSchema::register_format('pve-phys-bits', \&parse_phys_bits);
+sub parse_phys_bits {
+my ($str, $noerr) = @_;
+
+my $err_msg = "value must be an integer between 8 and 64 or 'host'\n";
+
+if ($str !~ m/^(host|\d{1,2})$/) {
+   die $err_msg if !$noerr;
+   return undef;
+}
+
+if ($str =~ m/^\d+$/ && (int($str) < 8 || int($str) > 64)) {
+   die $err_msg if !$noerr;
+   return undef;
+}
+
+return $str;
+}
+
 # $cpu_fmt describes both the CPU config passed as part of a VM config, as well
 # as the definition of a custom CPU model. There are some slight differences
 # though, which we catch in the custom verification function below.
@@ -472,6 +500,19 @@ sub get_cpu_options {
 $cpu_str .= resolve_cpu_flags($pve_flags, $hv_flags, $custom_cputype_flags,
  $vm_flags, $pve_forced_flags);
 
+my $phys_bits = '';
+foreach my $conf ($custom_cpu, $cpu) {
+   next if !defined($conf);
+   my $conf_val = $conf->{'phys-bits'};
+   next if !$conf_val;
+   if ($conf_val eq 'host') {
+   $phys_bits = ",host-phys-bits=true";
+   } else {
+   $phys_bits = ",phys-bits=$conf_val";
+   }
+}
+$cpu_str .= $phys_bits;
+
 return ('-cpu', $cpu_str);
 }
 
-- 
2.26.0




[pve-devel] [PATCH v10 qemu-server 3/6] Add helpers to better structure CPU option handling

2020-04-07 Thread Stefan Reiter
To avoid hardcoding even more CPU-flag related things for custom CPU
models, introduce a dynamic approach to resolving flags.

resolve_cpu_flags takes a list of hashes (as documented in the
comment) and resolves them to a valid "-cpu" argument without
duplicates. This also helps by providing a reason why specific CPU flags
have been added, and thus allows for useful warning messages should a
flag be overwritten by another.

Signed-off-by: Stefan Reiter 
Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 
---
 PVE/QemuServer/CPUConfig.pm | 64 +
 1 file changed, 64 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 70b96be..9fa6af9 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -320,6 +320,70 @@ sub print_cpu_device {
 return 
"$cpu-x86_64-cpu,id=cpu$id,socket-id=$current_socket,core-id=$current_core,thread-id=0";
 }
 
+# Resolves multiple arrays of hashes representing CPU flags with metadata to a
+# single string in QEMU "-cpu" compatible format. Later arrays have higher
+# priority.
+#
+# Hashes take the following format:
+# {
+# aes => {
+# op => "+", # defaults to "" if undefined
+# reason => "to support AES acceleration", # for override warnings
+# value => "" # needed for kvm=off (value: off) etc...
+# },
+# ...
+# }
+sub resolve_cpu_flags {
+my $flags = {};
+
+for my $hash (@_) {
+   for my $flag_name (keys %$hash) {
+   my $flag = $hash->{$flag_name};
+   my $old_flag = $flags->{$flag_name};
+
+   $flag->{op} //= "";
+   $flag->{reason} //= "unknown origin";
+
+   if ($old_flag) {
+   my $value_changed = (defined($flag->{value}) != 
defined($old_flag->{value})) ||
+   (defined($flag->{value}) && $flag->{value} 
ne $old_flag->{value});
+
+   if ($old_flag->{op} eq $flag->{op} && !$value_changed) {
+   $flags->{$flag_name}->{reason} .= " & $flag->{reason}";
+   next;
+   }
+
+   my $old = print_cpuflag_hash($flag_name, $flags->{$flag_name});
+   my $new = print_cpuflag_hash($flag_name, $flag);
+   warn "warning: CPU flag/setting $new overwrites $old\n";
+   }
+
+   $flags->{$flag_name} = $flag;
+   }
+}
+
+my $flag_str = '';
+# sort for command line stability
+for my $flag_name (sort keys %$flags) {
+   $flag_str .= ',';
+   $flag_str .= $flags->{$flag_name}->{op};
+   $flag_str .= $flag_name;
+   $flag_str .= "=$flags->{$flag_name}->{value}"
+   if $flags->{$flag_name}->{value};
+}
+
+return $flag_str;
+}
+
+sub print_cpuflag_hash {
+my ($flag_name, $flag) = @_;
+my $formatted = "'$flag->{op}$flag_name";
+$formatted .= "=$flag->{value}" if defined($flag->{value});
+$formatted .= "'";
+$formatted .= " ($flag->{reason})" if defined($flag->{reason});
+return $formatted;
+}
+
 # Calculate QEMU's '-cpu' argument from a given VM configuration
 sub get_cpu_options {
 my ($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, 
$gpu_passthrough) = @_;
-- 
2.26.0





[pve-devel] [PATCH v10 qemu-server 6/6] cfg2cmd: add test cases for custom CPU models

2020-04-07 Thread Stefan Reiter
Requires a mock CPU-model config, which is given as a raw string to also
test parsing capabilities. Also tests defaulting behaviour.

Signed-off-by: Stefan Reiter 
Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 
---

Changes in v10:
* Added test for new phys-bits format

(Since it's just a copy of the previous custom model test and nothing else
changed I left the R-by/T-by tags on)

 test/cfg2cmd/custom-cpu-model-defaults.conf   |  8 ++
 .../custom-cpu-model-defaults.conf.cmd| 24 +
 .../custom-cpu-model-host-phys-bits.conf  |  8 ++
 .../custom-cpu-model-host-phys-bits.conf.cmd  | 27 +++
 test/cfg2cmd/custom-cpu-model.conf|  8 ++
 test/cfg2cmd/custom-cpu-model.conf.cmd| 27 +++
 test/run_config2command_tests.pl  | 23 
 7 files changed, 125 insertions(+)
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd

diff --git a/test/cfg2cmd/custom-cpu-model-defaults.conf 
b/test/cfg2cmd/custom-cpu-model-defaults.conf
new file mode 100644
index 000..cdef285
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-defaults.conf
@@ -0,0 +1,8 @@
+# TEST: Check if custom CPU models are resolving defaults correctly
+cores: 3
+cpu: custom-alldefault
+name: customcpu-defaults
+numa: 0
+ostype: l26
+smbios1: uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fce
+sockets: 1
diff --git a/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd 
b/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
new file mode 100644
index 000..ca8fcb0
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
@@ -0,0 +1,24 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name customcpu-defaults \
+  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server,nowait' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fce' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
+  -boot 
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg'
 \
+  -vnc unix:/var/run/qemu-server/8006.vnc,password \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 512 \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
+  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+  -machine 'type=pc+pve0'
diff --git a/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf 
b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
new file mode 100644
index 000..a770d93
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
@@ -0,0 +1,8 @@
+# TEST: Check if custom CPU models are resolved correctly
+cores: 3
+cpu: custom-qemu64,phys-bits=host
+name: customcpu
+numa: 0
+ostype: win10
+smbios1: uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fcf
+sockets: 1
diff --git a/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd 
b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
new file mode 100644
index 000..d8fa254
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
@@ -0,0 +1,27 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name customcpu \
+  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server,nowait' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fcf' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
+  -boot 
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg'
 \
+  -vnc unix:/var/run/qemu-server/8006.vnc,password \
+  -no-hpet \
+  -cpu 
'athlon,+aes,+avx,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vendor_id=testvend,hv_vpindex,+kvm_pv_eoi,-kvm_pv_unhalt,vendor=AuthenticAMD,host-phys-bits=true'
 \
+  -m 512 \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 

[pve-devel] [PATCH v10 qemu-server 1/6] Include "-cpu" parameter with live-migration

2020-04-07 Thread Stefan Reiter
This is required to support custom CPU models, since the
"cpu-models.conf" file is not versioned, and can be changed while a VM
using a custom model is running. Changing the file in such a state can
lead to a different "-cpu" argument on the receiving side.

This patch fixes this by passing the entire "-cpu" option (extracted
from /proc/.../cmdline) as a "qm start" parameter. Note that this is
only done if the VM to migrate is using a custom model (which we can
check just fine, since the .conf *is* versioned with pending
changes), thus not breaking any live-migration directionality.
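
The extraction from /proc/.../cmdline could look roughly like the following sketch. This is an illustration only, not the actual PVE::QemuServer::CPUConfig::get_cpu_from_running_vm implementation (whose details may differ) — a process's cmdline is its NUL-separated argv, so the value following the '-cpu' token is the argument QEMU was started with:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustration: extract the "-cpu" value from the raw contents of
# /proc/<pid>/cmdline (argv entries separated by NUL bytes).
sub cpu_from_cmdline {
    my ($raw) = @_;
    my @argv = split /\0/, $raw;
    for my $i (0 .. $#argv - 1) {
        return $argv[$i + 1] if $argv[$i] eq '-cpu';
    }
    return undef; # no -cpu argument found
}

my $cmdline = join("\0", '/usr/bin/kvm', '-id', '100', '-cpu', 'kvm64,enforce');
print cpu_from_cmdline($cmdline), "\n"; # prints "kvm64,enforce"
```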

Signed-off-by: Stefan Reiter 
---

Changes in v10:
* Introduce and use PVE::QemuServer::CPUConfig::get_cpu_from_running_vm helper

 PVE/API2/Qemu.pm|  7 +++
 PVE/QemuMigrate.pm  | 15 +++
 PVE/QemuServer.pm   | 13 ++---
 PVE/QemuServer/CPUConfig.pm | 14 ++
 4 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6fad972..9e453d2 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2024,6 +2024,11 @@ __PACKAGE__->register_method({
optional => 1,
},
machine => get_standard_option('pve-qemu-machine'),
+   'force-cpu' => {
+   description => "Override QEMU's -cpu argument with the given 
string.",
+   type => 'string',
+   optional => 1,
+   },
targetstorage => {
description => "Target storage for the migration. (Can be '1' 
to use the same storage id as on the source node.)",
type => 'string',
@@ -2052,6 +2057,7 @@ __PACKAGE__->register_method({
my $timeout = extract_param($param, 'timeout');
 
my $machine = extract_param($param, 'machine');
+   my $force_cpu = extract_param($param, 'force-cpu');
 
my $get_root_param = sub {
my $value = extract_param($param, $_[0]);
@@ -2129,6 +2135,7 @@ __PACKAGE__->register_method({
skiplock => $skiplock,
forcemachine => $machine,
timeout => $timeout,
+   forcecpu => $force_cpu,
};
 
PVE::QemuServer::vm_start($storecfg, $vmid, $params, 
$migrate_opts);
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e954eca..c12b2b5 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -17,6 +17,7 @@ use PVE::ReplicationState;
 use PVE::Storage;
 use PVE::Tools;
 
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers qw(min_version);
 use PVE::QemuServer::Machine;
@@ -227,7 +228,17 @@ sub prepare {
 
$self->{forcemachine} = 
PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
 
+   # To support custom CPU types, we keep QEMU's "-cpu" parameter intact.
+   # Since the parameter itself contains no reference to a custom model,
+   # this makes migration independent of changes to "cpu-models.conf".
+   if ($conf->{cpu}) {
+   my $cpuconf = 
PVE::QemuServer::CPUConfig::parse_cpu_conf_basic($conf->{cpu});
+   if ($cpuconf && 
PVE::QemuServer::CPUConfig::is_custom_model($cpuconf->{cputype})) {
+   $self->{forcecpu} = 
PVE::QemuServer::CPUConfig::get_cpu_from_running_vm($pid);
+   }
+   }
 }
+
 my $loc_res = PVE::QemuServer::check_local_resources($conf, 1);
 if (scalar @$loc_res) {
if ($self->{running} || !$self->{opts}->{force}) {
@@ -664,6 +675,10 @@ sub phase2 {
push @$cmd, '--machine', $self->{forcemachine};
 }
 
+if ($self->{forcecpu}) {
+   push @$cmd, '--force-cpu', $self->{forcecpu};
+}
+
 if ($self->{online_local_volumes}) {
push @$cmd, '--targetstorage', ($self->{opts}->{targetstorage} // '1');
 }
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 510a995..a0f8429 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2911,7 +2911,7 @@ sub query_understood_cpu_flags {
 }
 
 sub config_to_command {
-my ($storecfg, $vmid, $conf, $defaults, $forcemachine) = @_;
+my ($storecfg, $vmid, $conf, $defaults, $forcemachine, $forcecpu) = @_;
 
 my $cmd = [];
 my $globalFlags = [];
@@ -3317,7 +3317,12 @@ sub config_to_command {
push @$rtcFlags, 'base=localtime';
 }
 
-push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, 
$machine_version, $winversion, $gpu_passthrough);
+if ($forcecpu) {
+   push @$cmd, '-cpu', $forcecpu;
+} else {
+   push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off,
+   $machine_version, $winversion, $gpu_passthrough);
+}
 
 PVE::QemuServer::Memory::config($conf, $vmid, $sockets, $cores, $defaults, 
$hotplug_features, $cmd);
 
@@ -4800,6 +4805,7 @@ sub vm_start {
 #   statefile => 'tcp', 'unix' for migration or path/volid for RAM state
 #   skiplock => 0/1, skip checking for config lock
 #   forcemachine => to force Qemu machine 

[pve-devel] [PATCH v10 0/6] Add basics for custom CPU models

2020-04-07 Thread Stefan Reiter
Based on the RFC and following on- and off-list discussion about custom CPU
models [0].

In essence, this revised patch allows a user to specify custom CPU models in
/etc/pve/cpu-models.conf (section-config style [1]), where VMs using that CPU
model inherit details from the definition. This removes any fragile
"auto-magical" CPU flag detection, while still giving the user a way to create
VMs with the best possible subset of CPU features maintaining live-migration
compatibility.

Includes the infrastructure for broadcasting supported CPU flags for each
cluster-node via the key-value store - this is not necessary for the
custom-cpu feature in particular, but I think could prove useful for
implementing the GUI part (e.g. show the user which flags are supported on which
nodes).

v9 -> v10:
* rebase on master
* add helper for running QEMU CPU flag reading
* improved 'phys-bits' format and according test case

v8 -> v9:
* rebase remaining patches on master

v7 -> v8:
* rebase on master, fix tests now using +pve0
* fixed nits noted by Thomas
* moved live-migration/snapshot patches forward to avoid temporary breakage
* fix CPU hotplug with custom CPUs
* guard mkdir with check_cfs_is_mounted and also run before write
* fix rebase-error in cfg2cmd tests (getaddrinfo_all)
* dropped applied patches


< see [2] for older history >


[0]: https://pve.proxmox.com/pipermail/pve-devel/2019-July/038268.html
[1]: e.g.:
cpu-model: custom-cpu-name
host-phys-bits 1
flags +aes;+avx;+avx2
reported-model kvm64
[2] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041278.html


qemu-server: Stefan Reiter (6):
  Include "-cpu" parameter with live-migration
  Include "-cpu" parameter with snapshots/suspend
  Add helpers to better structure CPU option handling
  Rework get_cpu_options and allow custom CPU models
  fix #2318: allow phys-bits CPU setting
  cfg2cmd: add test cases for custom CPU models

 PVE/API2/Qemu.pm  |   8 +
 PVE/QemuConfig.pm |  26 +-
 PVE/QemuMigrate.pm|  15 +
 PVE/QemuServer.pm |  42 ++-
 PVE/QemuServer/CPUConfig.pm   | 310 +++---
 test/cfg2cmd/custom-cpu-model-defaults.conf   |   8 +
 .../custom-cpu-model-defaults.conf.cmd|  24 ++
 .../custom-cpu-model-host-phys-bits.conf  |   8 +
 .../custom-cpu-model-host-phys-bits.conf.cmd  |  27 ++
 test/cfg2cmd/custom-cpu-model.conf|   8 +
 test/cfg2cmd/custom-cpu-model.conf.cmd|  27 ++
 test/cfg2cmd/efi-raw-old.conf.cmd |   2 +-
 test/cfg2cmd/efi-raw.conf.cmd |   2 +-
 test/cfg2cmd/i440fx-win10-hostpci.conf.cmd|   2 +-
 test/cfg2cmd/minimal-defaults.conf.cmd|   2 +-
 test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
 test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/simple1.conf.cmd |   2 +-
 test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
 test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
 test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
 test/cfg2cmd/spice-win.conf.cmd   |   2 +-
 test/run_config2command_tests.pl  |  23 ++
 25 files changed, 471 insertions(+), 81 deletions(-)
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd

-- 
2.26.0



[pve-devel] [PATCH storage 2/3] ZFS: use -p flag and remove zfs_parse_size

2020-04-07 Thread Aaron Lauterer
ZFS has supported the -p flag in the list command for a few years now.
Let us use the real byte values and avoid the error-prone conversion
from human-readable numbers, which can lead to incorrect results when
the reported human-readable value is rounded.
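
A small standalone illustration (not part of the patch) of why parsing the rounded human-readable value loses information, using a unit table like the removed zfs_parse_size helper's:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# An exact byte value vs. the value recovered from ZFS's rounded
# human-readable output: once rounded, the original bytes are gone.
my $exact = 34359738369;   # 32G + 1 byte, as `zfs list -p` would report it
my $human = '32G';         # what plain `zfs list` would print (rounded)
my %mult = (K => 1024, M => 1024**2, G => 1024**3, T => 1024**4);
my ($n, $unit) = $human =~ /^(\d+(?:\.\d+)?)([TGMK])?$/;
my $parsed = $n * ($unit ? $mult{$unit} : 1);
printf "exact: %d, parsed from '%s': %d, off by %d bytes\n",
    $exact, $human, $parsed, $exact - $parsed;
# exact: 34359738369, parsed from '32G': 34359738368, off by 1 bytes
```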

Signed-off-by: Aaron Lauterer 
---
 PVE/Storage.pm   |  6 +++---
 PVE/Storage/ZFSPoolPlugin.pm | 40 ++--
 2 files changed, 5 insertions(+), 41 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 60b8310..7271463 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1158,7 +1158,7 @@ sub scan_cifs {
 
 sub scan_zfs {
 
-my $cmd = ['zfs',  'list', '-t', 'filesystem', '-H', '-o', 
'name,avail,used'];
+my $cmd = ['zfs',  'list', '-t', 'filesystem', '-Hp', '-o', 
'name,avail,used'];
 
 my $res = [];
 run_command($cmd, outfunc => sub {
@@ -1166,8 +1166,8 @@ sub scan_zfs {
 
if ($line =~m/^(\S+)\s+(\S+)\s+(\S+)$/) {
my ($pool, $size_str, $used_str) = ($1, $2, $3);
-   my $size = PVE::Storage::ZFSPoolPlugin::zfs_parse_size($size_str);
-   my $used = PVE::Storage::ZFSPoolPlugin::zfs_parse_size($used_str);
+   my $size = $size_str + 0;
+   my $used = $used_str + 0;
# ignore subvolumes generated by our ZFSPoolPlugin
return if $pool =~ m!/subvol-\d+-[^/]+$!;
return if $pool =~ m!/basevol-\d+-[^/]+$!;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index cb3f2f0..ad2fe9b 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -54,42 +54,6 @@ sub options {
 
 # static zfs helper methods
 
-sub zfs_parse_size {
-my ($text) = @_;
-
-return 0 if !$text;
-
-if ($text =~ m/^(\d+(\.\d+)?)([TGMK])?$/) {
-
-   my ($size, $reminder, $unit) = ($1, $2, $3);
-
-   if ($unit) {
-   if ($unit eq 'K') {
-   $size *= 1024;
-   } elsif ($unit eq 'M') {
-   $size *= 1024*1024;
-   } elsif ($unit eq 'G') {
-   $size *= 1024*1024*1024;
-   } elsif ($unit eq 'T') {
-   $size *= 1024*1024*1024*1024;
-   } else {
-   die "got unknown zfs size unit '$unit'\n";
-   }
-   }
-
-   if ($reminder) {
-   $size = ceil($size);
-   }
-
-   return $size + 0;
-
-}
-
-warn "unable to parse zfs size '$text'\n";
-
-return 0;
-}
-
 sub zfs_parse_zvol_list {
 my ($text) = @_;
 
@@ -117,11 +81,11 @@ sub zfs_parse_zvol_list {
if ($refquota eq 'none') {
$zvol->{size} = 0;
} else {
-   $zvol->{size} = zfs_parse_size($refquota);
+   $zvol->{size} = $refquota + 0;
}
$zvol->{format} = 'subvol';
} else {
-   $zvol->{size} = zfs_parse_size($size);
+   $zvol->{size} = $size + 0;
$zvol->{format} = 'raw';
}
if ($origin !~ /^-$/) {
-- 
2.20.1




[pve-devel] [PATCH storage 3/3] ZFS: use -p flag where possible

2020-04-07 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---

this was the only other place I found where the use of the -p flag might
be useful

 PVE/Storage/ZFSPoolPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index ad2fe9b..57846bb 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -571,7 +571,7 @@ sub clone_image {
 my $name = $class->find_free_diskname($storeid, $scfg, $vmid, $format);
 
 if ($format eq 'subvol') {
-   my $size = $class->zfs_request($scfg, undef, 'list', '-H', '-o', 
'refquota', "$scfg->{pool}/$basename");
+   my $size = $class->zfs_request($scfg, undef, 'list', '-Hp', '-o', 
'refquota', "$scfg->{pool}/$basename");
chomp($size);
$class->zfs_request($scfg, undef, 'clone', 
"$scfg->{pool}/$basename\@$snap", "$scfg->{pool}/$name", '-o', 
"refquota=$size");
 } else {
-- 
2.20.1




[pve-devel] [PATCH v2 storage 1/3] ZFSPoolPlugin: fix #2662 get volume size correctly

2020-04-07 Thread Aaron Lauterer
Getting the volume sizes as byte values instead of values converted to
human-readable units helps to avoid rounding errors in the further
processing when the volume size is an odd number.

The `zfs list` command has supported the -p(arseable) flag for a few
years now.
When the size is returned in bytes, no calculation is performed, so we
need to explicitly cast the size to an integer before returning it.
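
The cast matters because the value read from the command output arrives as a Perl string; adding 0 forces numeric context (relevant e.g. when the value is later serialized, so it is emitted as a number rather than a quoted string — a hedged guess at the motivation, not stated in the patch). A minimal illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A value read from command output is a string; "+ 0" coerces it to a
# number, after which plain integer arithmetic behaves as expected.
my $size_str = "34359738368";
my $size = $size_str + 0;
print $size + 1, "\n"; # prints 34359738369
```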

Signed-off-by: Aaron Lauterer 
---

v1->v2: made the -p flag more obvious in the commit msg by adding a dash

Removal of zfs_parse_size and using the -p flag where appropriate is
done in individual patches, which I hope is okay.

 PVE/Storage/ZFSPoolPlugin.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index b538e3b..cb3f2f0 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -81,7 +81,7 @@ sub zfs_parse_size {
$size = ceil($size);
}
 
-   return $size;
+   return $size + 0;
 
 }
 
@@ -400,7 +400,7 @@ sub zfs_delete_zvol {
 sub zfs_list_zvol {
 my ($class, $scfg) = @_;
 
-my $text = $class->zfs_request($scfg, 10, 'list', '-o', 
'name,volsize,origin,type,refquota', '-t', 'volume,filesystem', '-Hr');
+my $text = $class->zfs_request($scfg, 10, 'list', '-o', 
'name,volsize,origin,type,refquota', '-t', 'volume,filesystem', '-Hrp');
 my $zvols = zfs_parse_zvol_list($text);
 return undef if !$zvols;
 
-- 
2.20.1




Re: [pve-devel] applied-series: [PATCH v3 storage 1/5] Allow passing options to volume_has_feature

2020-04-07 Thread Fabian Ebner

On 06.04.20 11:52, Fabian Grünbichler wrote:

On April 6, 2020 10:46 am, Fabian Ebner wrote:

On 27.03.20 10:09, Fabian Grünbichler wrote:

with a small follow-up for vzdump (see separate mail), and comment below

On March 23, 2020 12:18 pm, Fabian Ebner wrote:

With the option valid_target_formats it's possible
to let the caller specify possible formats for the target
of an operation.
[0]: If the option is not set, assume that every format is valid.

In most cases the format of the target and the format
of the source will agree (and therefore assumption [0] is
not actually assuming very much and ensures backwards
compatibility). But when cloning a volume on a storage
using Plugin.pm's implementation (e.g. directory based
storages), the result is always a qcow2 image.

When cloning containers, the new option can be used to detect
that qcow2 is not valid and hence the clone feature is not
available.

Signed-off-by: Fabian Ebner 
---


it would be a good follow-up to think which other storage plugins should
also implement this. e.g. plugins supporting subvol and raw/qcow2/vmdk -
can they copy/clone from one to the other? future people might only look
at Plugin.pm, see that such an option exists, and assume it works for all
plugins/features ;)



The copy operations are actually implemented as the non-full clone_disk
for QEMU and as copy_volume in LXC. AFAIU they should always work.


right, those are actually not implemented in the storage layer
(anymore?), so checking there does not make much sense. I wonder whether
we even need the 'copy' feature then? export/import uses its own format
matching, and the full clone stuff is already in
pve-container/qemu-server with lots of special handling.



[2]: There is a single caller (except for the has_feature API calls) in 
API2/Qemu.pm's clone_vm.


("allowed" in the next paragraph means has_feature returns 1)
Looking through the individual plugins, copy with "current" is always 
allowed, and whenever the storage (and, for Plugin.pm, the format) 
supports snapshots or base images, then copy with "snap" or "base" 
respectively is allowed. The exception is ZFS, where copy with "snap" 
is not allowed.


But this is another difference between LXC/QEMU. For containers there is no 
equivalent check like [2] and making a full clone from a snapshot 
actually works. When I comment out the check for QEMU, I get:


qemu-img: Could not open '/dev/zvol/myzpool/vm-108-disk-0@adsf': Could 
not open '/dev/zvol/myzpool/vm-108-disk-0@adsf': No such file or directory


So there is a purpose for checking the copy feature, although not the 
best one, since copying from a ZFS snapshot isn't impossible in itself 
as the LXC case shows.

Also, wouldn't removing the copy feature technically also break the API?

If we really want to remove it, we could replace the check in QEMU by an 
explicit check for "full copy of ZFS from snapshot" or otherwise 
implement that feature and drop the check entirely.



there's also still some issue/missing check for subvols of size 0 which
can't be moved to a sized volume, but that is unrelated.



For clone operations, there is vdisk_clone in the storage module, but it
doesn't take a format parameter. Except for GlusterfsPlugin.pm (which
doesn't have its own volume_has_feature) and Plugin.pm the format of the
target is the same as the format of the source. Shouldn't that be
considered valid regardless of the valid_target_formats option?


you can't say it's valid per se - since for example raw->raw is not
possible, but raw->qcow2 is.



Yes, raw->raw is not possible, but raw would be a valid target format 
from the caller's perspective. Since the caller apparently can handle 
volumes with the format of the source, the storage backend can assume 
it's fine if the target is in that format as well. That does not mean 
that the operation will result in that format, just that it would be 
something the caller could deal with.


But it's true that this is an additional assumption:
[0]: If the option is not set, assume that every format is valid.
[1]: Regardless of whether the option is set, the format of the source 
is valid.

(valid again means something the caller can deal with).

[0] and [1] together are sufficient to make the current 
implementation(s) of volume_has_feature correct. If we drop [1], we need 
additional checks for clone in volume_has_feature (for LvmThin/RBD/ZFS), 
i.e. check if the format of the source (which will be the format of the 
target) appears in the list the caller gave us.
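
Such a check could be sketched as follows. This is hypothetical code (names and shape are assumptions for illustration, not the actual Plugin.pm implementation), encoding assumption [0] — an unset option means every format is valid:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical: would the format a clone operation produces be acceptable
# to the caller, given an optional valid_target_formats list?
sub clone_target_format_ok {
    my ($result_format, $opts) = @_;
    my $valid = $opts->{valid_target_formats}
        or return 1; # assumption [0]: option not set => every format valid
    return grep { $_ eq $result_format } @$valid;
}

print clone_target_format_ok('qcow2', {}) ? "ok\n" : "not ok\n";
# prints "ok" - no list given, anything goes

# container clone on a directory storage: result is qcow2, but the
# caller can only handle raw/subvol
print clone_target_format_ok('qcow2',
    { valid_target_formats => ['raw', 'subvol'] }) ? "ok\n" : "not ok\n";
# prints "not ok" - this is how the clone feature gets rejected
```

Dropping assumption [1] would mean plugins like LvmThin/RBD/ZFS need exactly this kind of membership check for the source format as well.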


Should I add a comment with both assumptions or only the first one and 
add the additional checks?



Otherwise one needs to add a check whether valid_target_formats includes
the format of the source for other plugins that implement clone_image.


hm. it's all a bit of a mess with a fuzzy boundary between storage and
qemu-server/pve-container, and probably requires more thought to clean
up...


Changes from v2:
  * Use new options parameter instead 

[pve-devel] [PATCH access-control v2] auth ldap/ad: make password a parameter for the api

2020-04-07 Thread Dominik Csapak
Instead of simply requiring it to exist in /etc/pve.

Takes after the password handling of CIFS in pve-storage.

Signed-off-by: Dominik Csapak 
---
changes from v1:
* delete pw when given via 'delete' parameter
* do not delete pw when updating without giving 'password' parameter

 PVE/API2/Domains.pm | 26 
 PVE/Auth/AD.pm  |  1 +
 PVE/Auth/LDAP.pm| 73 -
 PVE/Auth/Plugin.pm  | 15 ++
 4 files changed, 114 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Domains.pm b/PVE/API2/Domains.pm
index 8ae1db0..b25a5fe 100644
--- a/PVE/API2/Domains.pm
+++ b/PVE/API2/Domains.pm
@@ -88,6 +88,9 @@ __PACKAGE__->register_method ({
 code => sub {
my ($param) = @_;
 
+   # always extract, add it with hook
+   my $password = extract_param($param, 'password');
+
PVE::Auth::Plugin::lock_domain_config(
sub {
 
@@ -117,6 +120,13 @@ __PACKAGE__->register_method ({
 
$ids->{$realm} = $config;
 
+   my $opts = $plugin->options();
+   if (defined($password) && !defined($opts->{password})) {
+   $password = undef;
+   warn "ignoring password parameter";
+   }
+   $plugin->on_add_hook($realm, $config, password => $password);
+
cfs_write_file($domainconfigfile, $cfg);
}, "add auth server failed");
 
@@ -137,6 +147,9 @@ __PACKAGE__->register_method ({
 code => sub {
my ($param) = @_;
 
+   # always extract, update in hook
+   my $password = extract_param($param, 'password');
+
PVE::Auth::Plugin::lock_domain_config(
sub {
 
@@ -154,8 +167,10 @@ __PACKAGE__->register_method ({
my $delete_str = extract_param($param, 'delete');
die "no options specified\n" if !$delete_str && !scalar(keys 
%$param);
 
+   my $delete_pw = 0;
foreach my $opt (PVE::Tools::split_list($delete_str)) {
delete $ids->{$realm}->{$opt};
+   $delete_pw = 1 if $opt eq 'password';
}
 
my $plugin = PVE::Auth::Plugin->lookup($ids->{$realm}->{type});
@@ -171,6 +186,14 @@ __PACKAGE__->register_method ({
$ids->{$realm}->{$p} = $config->{$p};
}
 
+   my $opts = $plugin->options();
+   if ($delete_pw || ($opts->{password} && defined($password))) {
+   $plugin->on_update_hook($realm, $config, password => 
$password);
+   } else {
+   warn "ignoring password parameter" if defined($password);
+   $plugin->on_update_hook($realm, $config);
+   }
+
cfs_write_file($domainconfigfile, $cfg);
}, "update auth server failed");
 
@@ -238,6 +261,9 @@ __PACKAGE__->register_method ({
 
die "domain '$realm' does not exist\n" if !$ids->{$realm};
 
+   my $plugin = PVE::Auth::Plugin->lookup($ids->{$realm}->{type});
+   $plugin->on_delete_hook($realm, $ids->{$realm});
+
delete $ids->{$realm};
 
cfs_write_file($domainconfigfile, $cfg);
diff --git a/PVE/Auth/AD.pm b/PVE/Auth/AD.pm
index 4f40be3..24b0e9f 100755
--- a/PVE/Auth/AD.pm
+++ b/PVE/Auth/AD.pm
@@ -83,6 +83,7 @@ sub options {
certkey => { optional => 1 },
base_dn => { optional => 1 },
bind_dn => { optional => 1 },
+   password => { optional => 1 },
user_attr => { optional => 1 },
filter => { optional => 1 },
sync_attributes => { optional => 1 },
diff --git a/PVE/Auth/LDAP.pm b/PVE/Auth/LDAP.pm
index 905cc47..1b2c606 100755
--- a/PVE/Auth/LDAP.pm
+++ b/PVE/Auth/LDAP.pm
@@ -37,6 +37,11 @@ sub properties {
optional => 1,
maxLength => 256,
},
+   password => {
+   description => "LDAP bind password. Will be stored in 
'/etc/pve/priv/ldap/.pw'.",
+   type => 'string',
+   optional => 1,
+   },
verify => {
description => "Verify the server's SSL certificate",
type => 'boolean',
@@ -126,6 +131,7 @@ sub options {
server2 => { optional => 1 },
base_dn => {},
bind_dn => { optional => 1 },
+   password => { optional => 1 },
user_attr => {},
port => { optional => 1 },
secure => { optional => 1 },
@@ -185,7 +191,7 @@ sub connect_and_bind {
 
 if ($config->{bind_dn}) {
$bind_dn = $config->{bind_dn};
-   $bind_pass = 
PVE::Tools::file_read_firstline("/etc/pve/priv/ldap/${realm}.pw");
+   $bind_pass = ldap_get_credentials($realm);
die "missing password for realm $realm\n" if !defined($bind_pass);
 }
 
@@ -343,4 +349,69 @@ sub authenticate_user {
 return 1;
 }
 
+my $ldap_pw_dir = "/etc/pve/priv/ldap";
+
+sub ldap_cred_filename {
+my ($realm) = @_;
+return "${ldap_pw_dir}/${realm}.pw";
+}
+

[pve-devel] [PATCH access-control] auth ldap/ad: make password a parameter for the api

2020-04-07 Thread Dominik Csapak
Instead of simply requiring it to exist in /etc/pve.

Takes after the password handling of CIFS in pve-storage.

Signed-off-by: Dominik Csapak 
---
I guess if we have to add this pattern again somewhere, it would be
useful to refactor it into SectionConfig, but for now we simply
duplicate it

 PVE/API2/Domains.pm | 24 +++
 PVE/Auth/AD.pm  |  1 +
 PVE/Auth/LDAP.pm| 73 -
 PVE/Auth/Plugin.pm  | 15 ++
 4 files changed, 112 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Domains.pm b/PVE/API2/Domains.pm
index 8ae1db0..c6c4d9d 100644
--- a/PVE/API2/Domains.pm
+++ b/PVE/API2/Domains.pm
@@ -88,6 +88,9 @@ __PACKAGE__->register_method ({
 code => sub {
my ($param) = @_;
 
+   # always extract, add it with hook
+   my $password = extract_param($param, 'password');
+
PVE::Auth::Plugin::lock_domain_config(
sub {
 
@@ -117,6 +120,13 @@ __PACKAGE__->register_method ({
 
$ids->{$realm} = $config;
 
+   my $opts = $plugin->options();
+   if (defined($password) && !defined($opts->{password})) {
+   $password = undef;
+   warn "ignoring password parameter";
+   }
+   $plugin->on_add_hook($realm, $config, password => $password);
+
cfs_write_file($domainconfigfile, $cfg);
}, "add auth server failed");
 
@@ -137,6 +147,9 @@ __PACKAGE__->register_method ({
 code => sub {
my ($param) = @_;
 
+   # always extract, update in hook
+   my $password = extract_param($param, 'password');
+
PVE::Auth::Plugin::lock_domain_config(
sub {
 
@@ -171,6 +184,14 @@ __PACKAGE__->register_method ({
$ids->{$realm}->{$p} = $config->{$p};
}
 
+   my $opts = $plugin->options();
+   if (defined($opts->{password})) {
+   $plugin->on_update_hook($realm, $config, password => 
$password);
+   } else  {
+   warn "ignoring password parameter" if defined($password);
+   $plugin->on_update_hook($realm, $config);
+   }
+
cfs_write_file($domainconfigfile, $cfg);
}, "update auth server failed");
 
@@ -238,6 +259,9 @@ __PACKAGE__->register_method ({
 
die "domain '$realm' does not exist\n" if !$ids->{$realm};
 
+   my $plugin = PVE::Auth::Plugin->lookup($ids->{$realm}->{type});
+   $plugin->on_delete_hook($realm, $ids->{$realm});
+
delete $ids->{$realm};
 
cfs_write_file($domainconfigfile, $cfg);
diff --git a/PVE/Auth/AD.pm b/PVE/Auth/AD.pm
index 4f40be3..24b0e9f 100755
--- a/PVE/Auth/AD.pm
+++ b/PVE/Auth/AD.pm
@@ -83,6 +83,7 @@ sub options {
certkey => { optional => 1 },
base_dn => { optional => 1 },
bind_dn => { optional => 1 },
+   password => { optional => 1 },
user_attr => { optional => 1 },
filter => { optional => 1 },
sync_attributes => { optional => 1 },
diff --git a/PVE/Auth/LDAP.pm b/PVE/Auth/LDAP.pm
index 905cc47..1b2c606 100755
--- a/PVE/Auth/LDAP.pm
+++ b/PVE/Auth/LDAP.pm
@@ -37,6 +37,11 @@ sub properties {
optional => 1,
maxLength => 256,
},
+   password => {
+	description => "LDAP bind password. Will be stored in '/etc/pve/priv/ldap/<realm>.pw'.",
+   type => 'string',
+   optional => 1,
+   },
verify => {
description => "Verify the server's SSL certificate",
type => 'boolean',
@@ -126,6 +131,7 @@ sub options {
server2 => { optional => 1 },
base_dn => {},
bind_dn => { optional => 1 },
+   password => { optional => 1 },
user_attr => {},
port => { optional => 1 },
secure => { optional => 1 },
@@ -185,7 +191,7 @@ sub connect_and_bind {
 
 if ($config->{bind_dn}) {
$bind_dn = $config->{bind_dn};
-	$bind_pass = PVE::Tools::file_read_firstline("/etc/pve/priv/ldap/${realm}.pw");
+   $bind_pass = ldap_get_credentials($realm);
die "missing password for realm $realm\n" if !defined($bind_pass);
 }
 
@@ -343,4 +349,69 @@ sub authenticate_user {
 return 1;
 }
 
+my $ldap_pw_dir = "/etc/pve/priv/ldap";
+
+sub ldap_cred_filename {
+my ($realm) = @_;
+return "${ldap_pw_dir}/${realm}.pw";
+}
+
+sub ldap_set_credentials {
+my ($password, $realm) = @_;
+
+my $file = ldap_cred_filename($realm);
+mkdir $ldap_pw_dir;
+
+PVE::Tools::file_set_contents($file, $password);
+
+return $file;
+}
+
+sub ldap_get_credentials {
+my ($realm) = @_;
+
+my $file = ldap_cred_filename($realm);
+
+if (-e $file) {
+   return PVE::Tools::file_read_firstline($file);
+}
+return undef;
+}
+
+sub ldap_delete_credentials {
+my ($realm) = @_;
+
+my $file = ldap_cred_filename($realm);
+
+unlink($file) 

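The hunk is cut off above, but the shape of the feature is clear from the helpers. As a rough, hypothetical sketch of how the hooks called from Domains.pm could use those helpers (not the committed code; an in-memory hash stands in for the file under /etc/pve/priv/ldap so the control flow is runnable):

```perl
use strict;
use warnings;

# Stand-ins for the file-backed ldap_*_credentials helpers from the patch;
# the real versions read/write "/etc/pve/priv/ldap/${realm}.pw".
my %cred_store;

sub ldap_set_credentials    { my ($pw, $realm) = @_; $cred_store{$realm} = $pw; }
sub ldap_get_credentials    { my ($realm) = @_; return $cred_store{$realm}; }
sub ldap_delete_credentials { my ($realm) = @_; delete $cred_store{$realm}; }

sub on_add_hook {
    my ($realm, $config, %param) = @_;
    ldap_set_credentials($param{password}, $realm) if defined($param{password});
}

sub on_update_hook {
    my ($realm, $config, %param) = @_;
    if (defined($param{password})) {
        ldap_set_credentials($param{password}, $realm);
    }
    # an undef password on update leaves the stored credential untouched
    # (the real hook may handle this differently)
}

sub on_delete_hook {
    my ($realm) = @_;
    ldap_delete_credentials($realm);
}

on_add_hook('myldap', {}, password => 's3cret');
print "stored: ", ldap_get_credentials('myldap') // '(none)', "\n";  # prints "stored: s3cret"
on_delete_hook('myldap');
print "after delete: ", ldap_get_credentials('myldap') // '(none)', "\n";  # prints "after delete: (none)"
```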
Re: [pve-devel] [PATCH storage] ZFSPoolPlugin: fix #2662 get volume size correctly

2020-04-07 Thread Fabian Grünbichler
On April 7, 2020 11:28 am, Aaron Lauterer wrote:
> 
> 
> On 4/3/20 5:07 PM, Fabian Grünbichler wrote:
>> there's another instance of 'zfs list ...' in PVE::Storage that could
>> also be switched to '-p'
> 
> Which one do you mean? Line 610?
> 
>     if ($format eq 'subvol') {
> 610      my $size = $class->zfs_request($scfg, undef, 'list', '-H', '-o', 'refquota', "$scfg->{pool}/$basename");
>          chomp($size);
> 
> I don't see much chance of a bug here, as the value is used right away
> in the next step, but it would definitely be more precise to use the -p flag.

the bug is that the parse_size sub is very broken, so we should get rid 
of it ;)
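
To illustrate why parsing human-readable sizes is inherently lossy (a standalone demonstration, not PVE code; the helper name is made up): `zfs list` without `-p` rounds to a few significant digits, so any round-trip through such parsing drops the low-order bytes for odd-sized volumes.

```perl
use strict;
use warnings;

# Hypothetical parser for human-readable sizes, as a broken-by-design
# counterpart to `zfs list -p`: the printed value was already rounded,
# so the exact byte count cannot be recovered.
my %unit = (K => 1024, M => 1024**2, G => 1024**3, T => 1024**4);

sub parse_human_size {
    my ($str) = @_;
    my ($num, $suffix) = $str =~ /^([\d.]+)([KMGT])?$/
        or die "cannot parse '$str'\n";
    return int($num * ($suffix ? $unit{$suffix} : 1));
}

my $exact  = 4 * 1024**3 + 12345;   # an "odd" volume size in bytes
my $human  = '4.00G';               # what a non-parseable listing would show
my $parsed = parse_human_size($human);

printf "exact=%d parsed=%d lost=%d bytes\n", $exact, $parsed, $exact - $parsed;
# prints: exact=4294979641 parsed=4294967296 lost=12345 bytes
```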

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH storage] ZFSPoolPlugin: fix #2662 get volume size correctly

2020-04-07 Thread Aaron Lauterer



On 4/3/20 5:07 PM, Fabian Grünbichler wrote:

there's another instance of 'zfs list ...' in PVE::Storage that could
also be switched to '-p'


Which one do you mean? Line 610?

    if ($format eq 'subvol') {
610     my $size = $class->zfs_request($scfg, undef, 'list', '-H', '-o', 'refquota', "$scfg->{pool}/$basename");
        chomp($size);

I don't see much chance of a bug here, as the value is used right away
in the next step, but it would definitely be more precise to use the -p flag.


On April 3, 2020 2:29 pm, Aaron Lauterer wrote:

Getting the volume sizes as byte values instead of converted to human
readable units helps to avoid rounding errors in further processing
if the volume size is on the odd side.

The `zfs list` command has supported the -p (parseable) flag for a few
years now.
When the size is returned in bytes, no unit conversion is performed, and
thus we need to explicitly cast the size to an integer before returning
it.

Signed-off-by: Aaron Lauterer 
---

I don't think we need to worry about other ZFS implementations regarding
ZFS over iSCSI. FreeBSD has supported it since 9.3 [0], released mid-2014.
Illumos added it in 2013 [1].

[0] 
https://www.freebsd.org/cgi/man.cgi?query=zfs=0=0=FreeBSD+9.3-RELEASE=default=html
[1] 
https://github.com/illumos/illumos-gate/commit/43d68d68c1ce08fb35026bebfb141af422e7082e#diff-b138320fc5f9d5c48bb4b03a5e4e4cbb

  PVE/Storage/ZFSPoolPlugin.pm | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index b538e3b..cb3f2f0 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -81,7 +81,7 @@ sub zfs_parse_size {
$size = ceil($size);
}


and then the whole zfs_parse_size sub, which is completely broken, can be
dropped :)

  
-	return $size;

+   return $size + 0;


but untainting/converting to int is still needed for those 4 current
call sites


Will do in V2
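
For reference, the untainting/int conversion mentioned above could look roughly like this at a call site (a hypothetical helper, not part of the actual patch):

```perl
use strict;
use warnings;

# Hypothetical helper: validate and untaint a byte value as returned by
# `zfs list -Hp`. Capturing through the regex is what actually clears
# Perl's taint flag; adding 0 forces numeric context.
sub untaint_size {
    my ($size) = @_;
    chomp $size;
    ($size) = $size =~ /^(\d+)$/
        or die "unexpected zfs list output: '$size'\n";
    return $size + 0;
}

print untaint_size("34359738368\n"), "\n";   # prints 34359738368 (32G in bytes)
```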


  
  }
  
@@ -400,7 +400,7 @@ sub zfs_delete_zvol {

  sub zfs_list_zvol {
  my ($class, $scfg) = @_;
  
-my $text = $class->zfs_request($scfg, 10, 'list', '-o', 'name,volsize,origin,type,refquota', '-t', 'volume,filesystem', '-Hr');

+    my $text = $class->zfs_request($scfg, 10, 'list', '-o', 'name,volsize,origin,type,refquota', '-t', 'volume,filesystem', '-Hrp');
  my $zvols = zfs_parse_zvol_list($text);
  return undef if !$zvols;
  
--

2.20.1




Re: [pve-devel] [PATCH v9 qemu-server 2/6] Include "-cpu" parameter with snapshots/suspend

2020-04-07 Thread Stefan Reiter

On 07/04/2020 09:04, Fabian Ebner wrote:

On 06.04.20 15:01, Stefan Reiter wrote:

On 06/04/2020 14:31, Fabian Ebner wrote:

On 26.03.20 16:13, Stefan Reiter wrote:

Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.

Save the "-cpu" argument as a new "runningcpu" option in the VM config,
akin to "runningmachine", and use it as an override during rollback/resume.

No functional change with non-custom CPU types intended.



The changes apply to all CPU types, but that's not a bad thing. If 
one takes a snapshot, restarts the VM with a different CPU type and 
does a rollback, it'll use the CPU at the time of the snapshot even 
if no custom models are involved.




Was that a thing before? It should have worked that way anyhow, since 
the 'cpu' setting was stored in the snapshot's section in the config.




Ah, you're right. As long as the defaults for the standard models don't 
change, vm_start will re-generate the same -cpu option when no custom 
model is involved.




The default models can also be versioned with 'runningmachine', so for 
non-custom cpus this was always safe.




[pve-devel] [PATCH storage] fix #2474: always show iscsi content

2020-04-07 Thread Dominik Csapak
Instead of relying on list_volumes of Plugin.pm (which filters by
the content types set in the config), use our own implementation to
always show the LUNs of an iSCSI storage.

This makes sense here, since we need it to show the luns when using
it as base storage for LVM (where we have content type 'none' set).

It does not interfere with the rest of the GUI, since on e.g. disk
creation we already filter the storages in the dropdown by content
type; in other words, an iSCSI storage used this way still does not
show up when trying to create a disk.

This also shows the LUNs in the 'Content' tab now, but that is fine,
since the user cannot actually do anything with the LUNs there
(besides looking at them).

Signed-off-by: Dominik Csapak 
---
 PVE/Storage/ISCSIPlugin.pm | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index 546349e..d5cb733 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/ISCSIPlugin.pm
@@ -315,6 +315,20 @@ sub free_image {
 die "can't free space in iscsi storage\n";
 }
 
+# list all luns regardless of set content_types, since we need it for
+# listing in the gui and we can only have images anyway
+sub list_volumes {
+my ($class, $storeid, $scfg, $vmid, $content_types) = @_;
+
+my $res = $class->list_images($storeid, $scfg, $vmid);
+
+for my $item (@$res) {
+   $item->{content} = 'images'; # we only have images
+}
+
+return $res;
+}
+
 sub list_images {
 my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
 
-- 
2.20.1




Re: [pve-devel] Fwd: proxmox-backup dev env

2020-04-07 Thread Dietmar Maurer
> Hi, I downloaded proxmox-backup and ran the cargo build, but when I try
> to build with `make dinstall` it fails because it can't find the crates
> in the registry. How did you solve this? Do you have an internal cargo
> registry? I tried a cargo local registry and also cargo vendor, but I
> can't get around the problem.

We have a Debian repository with all required Rust development packages:

deb http://download.proxmox.com/debian/devel buster main

We build everything using those Rust Debian packages. The registry for
Debian-packaged Rust is /usr/share/cargo/registry/ ...



Re: [pve-devel] [PATCH v9 qemu-server 5/6] fix #2318: allow phys-bits and host-phys-bits CPU settings

2020-04-07 Thread Fabian Ebner

On 06.04.20 15:01, Stefan Reiter wrote:

On 06/04/2020 14:34, Fabian Ebner wrote:

On 26.03.20 16:13, Stefan Reiter wrote:

Can be specified for a particular VM or via a custom CPU model (VM takes
precedence).

QEMU's default limit only allows up to 1TB of RAM per VM. Increasing the
physical address bits available to a VM can fix this.

Signed-off-by: Stefan Reiter 
---
  PVE/QemuServer/CPUConfig.pm | 24 
  1 file changed, 24 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index fa09f4b..2b2529d 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -149,6 +149,19 @@ my $cpu_fmt = {
  pattern => qr/$cpu_flag_any_re(;$cpu_flag_any_re)*/,
  optional => 1,
  },
+    'phys-bits' => {
+    type => 'integer',
+    minimum => 8,
+    maximum => 64,
+    description => "The physical memory address bits that are 
reported to the guest OS. Should be smaller or equal to the host's.",

+    optional => 1,
+    },
+    'host-phys-bits' => {
+    type => 'boolean',
+    default => 0,
+    description => "Whether to report the host's physical memory address bits. Overrides 'phys-bits' when set.",


Is it better to die when both are set, so that a user gets informed 
about the clashing options?




Actually, how do you feel about making this a single 'string' option
'phys-bits' and allowing /^(host|\d{1,2})$/ ? That would mean the min/max
integer values have to be checked manually, but I think it would make it
clearer for the user.




Sounds good to me, then the dichotomy is explicit. And for checking, a
'pve-phys-bits' format and verifier routine (which also does the min/max
check) could be introduced. Then get_cpu_options already knows it'll
either get "host" or a valid integer for phys-bits and doesn't have to
check itself.
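
A sketch of what such a verifier could look like (hypothetical, not the committed code; in PVE it would presumably be registered via PVE::JSONSchema::register_format('pve-phys-bits', \&parse_phys_bits)):

```perl
use strict;
use warnings;

# Accept either the literal 'host' or an integer in the 8..64 range
# discussed above; with $noerr set, return undef instead of dying,
# matching the usual PVE verifier convention.
sub parse_phys_bits {
    my ($value, $noerr) = @_;

    return 'host' if $value eq 'host';
    return int($value) if $value =~ /^\d{1,2}$/ && $value >= 8 && $value <= 64;

    return undef if $noerr;
    die "phys-bits must be 'host' or an integer between 8 and 64\n";
}

for my $v (qw(host 40 64 7 foo)) {
    my $parsed = parse_phys_bits($v, 1);
    printf "%-4s => %s\n", $v, defined($parsed) ? $parsed : 'invalid';
}
```

With this, get_cpu_options only ever sees "host" or a validated integer and needs no checks of its own.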



+    optional => 1,
+    },
  };
  # $cpu_fmt describes both the CPU config passed as part of a VM config, as well

@@ -472,6 +485,17 @@ sub get_cpu_options {
  $cpu_str .= resolve_cpu_flags($pve_flags, $hv_flags, $custom_cputype_flags,
    $vm_flags, $pve_forced_flags);
+    my $phys_bits = '';
+    foreach my $conf ($custom_cpu, $cpu) {
+    next if !defined($conf);
+    if ($conf->{'host-phys-bits'}) {
+    $phys_bits = ",host-phys-bits=true";
+    } elsif ($conf->{'phys-bits'}) {
+    $phys_bits = ",phys-bits=$conf->{'phys-bits'}";
+    }
+    }
+    $cpu_str .= $phys_bits;
+
  return ('-cpu', $cpu_str);
  }





Re: [pve-devel] [PATCH v9 qemu-server 2/6] Include "-cpu" parameter with snapshots/suspend

2020-04-07 Thread Fabian Ebner

On 06.04.20 15:01, Stefan Reiter wrote:

On 06/04/2020 14:31, Fabian Ebner wrote:

On 26.03.20 16:13, Stefan Reiter wrote:

Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.

Save the "-cpu" argument as a new "runningcpu" option in the VM config,
akin to "runningmachine", and use it as an override during rollback/resume.

No functional change with non-custom CPU types intended.



The changes apply to all CPU types, but that's not a bad thing. If one 
takes a snapshot, restarts the VM with a different CPU type and does a 
rollback, it'll use the CPU at the time of the snapshot even if no 
custom models are involved.




Was that a thing before? It should have worked that way anyhow, since 
the 'cpu' setting was stored in the snapshot's section in the config.




Ah, you're right. As long as the defaults for the standard models don't 
change, vm_start will re-generate the same -cpu option when no custom 
model is involved.



Signed-off-by: Stefan Reiter 
---
  PVE/API2/Qemu.pm  |  1 +
  PVE/QemuConfig.pm | 34 ++
  PVE/QemuServer.pm | 28 
  3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 80fd7ea..2c7e998 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1129,6 +1129,7 @@ my $update_vm_api  = sub {
  push @delete, 'lock'; # this is the real deal to write it out
  }
  push @delete, 'runningmachine' if $conf->{runningmachine};
+    push @delete, 'runningcpu' if $conf->{runningcpu};
  }
  PVE::QemuConfig->check_lock($conf) if !$skiplock;
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 8d03774..386223d 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
  use PVE::AbstractConfig;
  use PVE::INotify;
+use PVE::QemuServer::CPUConfig;
  use PVE::QemuServer::Drive;
  use PVE::QemuServer::Helpers;
  use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -155,15 +156,26 @@ sub __snapshot_save_vmstate {
  my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 'raw', $name, $size*1024);
  my $runningmachine = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);

-    if ($suspend) {
-    $conf->{vmstate} = $statefile;
-    $conf->{runningmachine} = $runningmachine;
-    } else {
-    my $snap = $conf->{snapshots}->{$snapname};
-    $snap->{vmstate} = $statefile;
-    $snap->{runningmachine} = $runningmachine;
+    # get current QEMU -cpu argument
+    my $runningcpu;
+    if (my $pid = PVE::QemuServer::check_running($vmid)) {
+    my $cmdline = PVE::QemuServer::Helpers::parse_cmdline($pid);
+    die "could not read commandline of running machine\n"
+    if !$cmdline->{cpu}->{value};
+
+    # sanitize and untaint value
+    $cmdline->{cpu}->{value} =~ 
$PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re;

+    $runningcpu = $1;
+    }
+
+    if (!$suspend) {
+    $conf = $conf->{snapshots}->{$snapname};
  }
+    $conf->{vmstate} = $statefile;
+    $conf->{runningmachine} = $runningmachine;
+    $conf->{runningcpu} = $runningcpu;
+
  return $statefile;
  }
@@ -309,6 +321,11 @@ sub __snapshot_rollback_hook {
  if (defined($conf->{runningmachine})) {
  $data->{forcemachine} = $conf->{runningmachine};
  delete $conf->{runningmachine};
+
+    # runningcpu is newer than runningmachine, so assume it only exists
+    # here, if at all
+    $data->{forcecpu} = delete $conf->{runningcpu}
+    if defined($conf->{runningcpu});
  } else {
  # Note: old code did not store 'machine', so we try to be smart
  # and guess the snapshot was generated with kvm 1.4 (pc-i440fx-1.4).

@@ -361,7 +378,8 @@ sub __snapshot_rollback_vm_start {
  my ($class, $vmid, $vmstate, $data) = @_;
  my $storecfg = PVE::Storage::config();
-    PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, undef, $data->{forcemachine});
+    PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, undef,
+    $data->{forcemachine}, undef, undef, undef, undef, undef, undef, $data->{forcecpu});

  }
  sub __snapshot_rollback_get_unused {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 98c2a9a..70a5234 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -567,8 +567,15 @@ EODESCR
  optional => 1,
  }),
  runningmachine => get_standard_option('pve-qemu-machine', {
-    description => "Specifies the Qemu machine type of the running vm. This is used internally for snapshots.",
+    description => "Specifies the QEMU machine type of the running vm. This is used internally for snapshots.",

  }),
+    runningcpu => {
+    description => "Specifies the QEMU '-cpu' parameter of the running vm. This is used internally for snapshots.",

+    optional => 1,
+    type => 'string',
+    pattern => 

[pve-devel] Fwd: proxmox-backup dev env

2020-04-07 Thread Daniele de Petris
Hi, I downloaded proxmox-backup and ran the cargo build, but when I try
to build with `make dinstall` it fails because it can't find the crates
in the registry. How did you solve this? Do you have an internal cargo
registry? I tried a cargo local registry and also cargo vendor, but I
can't get around the problem.
Sorry for the previous message sent from outside the list; I am now
subscribed to the mailing list.

Regards

Daniele de Petris

